
Surface modification method and surface-modified elastic body

The present invention aims to provide a surface modification method for a rubber vulcanizate or a thermoplastic elastomer, which can impart excellent sliding properties and excellent durability against repeated sliding motion and can allow the surface to maintain the sealing properties, without using expensive self-lubricating plastics. The present invention relates to a surface modification method for modifying a rubber vulcanizate or a thermoplastic elastomer as an object to be modified, the method including: step 1 of forming polymerization initiation points on the object to be modified; step 2 of radically polymerizing a monomer, starting from the polymerization initiation points, by irradiation with LED light at 300 nm to 400 nm to grow polymer chains on a surface of the object to be modified; and step 3 of esterifying, transesterifying or amidating side chains of the polymer chains.





UV-curable coating compositions with self-healing capabilities, coating films, and methods of producing coating films

The present invention is directed to a coating composition including a (meth)acrylate binder resin, a UV initiator, an organic solvent, and silica particles surface-treated with a (meth)acrylate compound, a coating film including a cured product of the coating composition, and a method of producing the coating film. The present invention makes it possible to provide a coating material having high transmittance and a low level of haze, and excellent scratch resistance and self-healing capabilities.





Pressure-sensitive adhesives with mixed photocrosslinking system

The present disclosure provides a method of providing an adhesive composition, comprising the steps of combining a crosslinkable composition including a) a (meth)acryloyl monomer mixture with b) a photocrosslinking agent mixture, and irradiating the combination with UVC radiation to polymerize and crosslink the composition.





Scalable network security with fast response protocol

This disclosure provides a network security architecture that permits installation of different software security products as virtual machines (VMs). By relying on a standardized data format and communication structure, a general architecture can be created and used to dynamically build and reconfigure interaction between both similar and dissimilar security products. Use of an integration scheme having defined message types and a specified query-response framework provides for real-time response and easy adaptation for cross-vendor communication. Examples are provided where an intrusion detection system (IDS) can be used to detect network threats based on distributed threat analytics, passing detected threats to other security products (e.g., products with different capabilities from different vendors) to trigger automatic, dynamically configured communication and reaction. A network security provider using this infrastructure can provide hosted or managed boundary security to a diverse set of clients, each on a customized basis.





Electronic device and method for displaying resources

An electronic device, including: one or more hardware interfaces each for connecting to a signal source to receive at least one type of application resources; a control chip electrically connected to the one or more hardware interfaces, the control chip being configured to classify and integrate one or more types of application resources received through the one or more hardware interfaces, and generate a display signal for a main interface including a number of areas arranged according to a predetermined layout, wherein each area is configured to display information regarding a same type of the classified and integrated application resources, and different areas are configured to display information regarding different types of the classified and integrated application resources; and a display screen electrically connected to the control chip to display the main interface according to the display signal.
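
A minimal sketch of the classify-and-integrate step described above, assuming resources arrive tagged with a type and the predetermined layout is simply a fixed ordering of type-specific areas; the type names and resource entries are illustrative, not from the disclosure.

```python
# Illustrative sketch: resource types and the fixed layout below are assumptions.
PREDETERMINED_LAYOUT = ["video", "audio", "notifications"]   # one area per resource type

def build_main_interface(received_resources):
    """Group resources from all hardware interfaces into per-type display areas."""
    areas = {area: [] for area in PREDETERMINED_LAYOUT}
    for resource in received_resources:
        if resource["type"] in areas:                            # classify by type
            areas[resource["type"]].append(resource["info"])     # integrate into its area
    return areas

resources = [
    {"type": "video", "info": "HDMI-1: 1080p feed"},
    {"type": "audio", "info": "USB mic level: -12 dB"},
    {"type": "video", "info": "HDMI-2: 720p feed"},
]
print(build_main_interface(resources))
# {'video': ['HDMI-1: 1080p feed', 'HDMI-2: 720p feed'],
#  'audio': ['USB mic level: -12 dB'], 'notifications': []}
```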





Workload migration between virtualization softwares

A virtual machine (VM) migration is performed from a source virtual machine monitor (VMM) to a destination VMM on a computer system. Each of the VMMs includes virtualization software, and one or more VMs are executed in each of the VMMs. The virtualization software allocates hardware resources in the form of virtual resources for the concurrent execution of one or more VMs and the virtualization software. A portion of a memory of the hardware resources includes hardware memory segments. A first portion of the memory segments is assigned to a source logical partition and a second portion is assigned to a destination logical partition. The source VMM operates in the source logical partition and the destination VMM operates in the destination logical partition. The first portion of the memory segments is mapped into a source VMM memory, and the second portion of the memory segments is mapped into a destination VMM memory.





Providing indirect data addressing in an input/output processing system where the indirect data address list is non-contiguous

A method includes configuring a processing circuit to perform: receiving a control word for an I/O operation, forwarding a transport command control block (TCCB) from the channel subsystem to a control unit, gathering data associated with the I/O operation, and transmitting the gathered data to the control unit in the I/O processing system. Gathering the data includes accessing entries of a list of storage addresses that collectively specify the data. Based on an entry of the list comprising a not-set first flag and a corresponding first storage address, data is gathered from a corresponding storage location; based on an entry of the list comprising a set first flag and a corresponding second storage address, a next entry of the list is obtained from a second storage location.
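
A minimal software sketch of walking such a non-contiguous address list, assuming each entry is a (flag, address) pair and a set flag chains to the storage location of the next list segment; the segment keys and in-memory dictionaries are illustrative, not the channel-subsystem data structures.

```python
def gather(memory, segments, first_segment):
    """Gather the data blocks named by a non-contiguous indirect address list.

    `segments` maps a segment key to a list of (chain_flag, address) entries
    (an assumed layout): a not-set chain_flag names a data block in `memory`;
    a set chain_flag names the segment holding the next entries of the list.
    """
    gathered = []
    seg, i = segments[first_segment], 0
    while i < len(seg):
        chain_flag, address = seg[i]
        if chain_flag:
            seg, i = segments[address], 0       # list continues at a second storage location
        else:
            gathered.append(memory[address])    # gather data from the addressed location
            i += 1
    return gathered

# Usage: two list segments chained by a flagged entry.
memory = {0x100: b"AB", 0x200: b"CD", 0x300: b"EF"}
segments = {
    "seg0": [(0, 0x100), (0, 0x200), (1, "seg1")],
    "seg1": [(0, 0x300)],
}
print(b"".join(gather(memory, segments, "seg0")))   # b'ABCDEF'
```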





Method and apparatus for obtaining equipment identification information

Embodiments of the present invention relate to a method and an apparatus for obtaining equipment identification information, where the method includes: detecting, by using a first GPIO port, a first discharging duration for a capacitor to discharge through a resistor to be tested; detecting, by using a second GPIO port, a second discharging duration for the capacitor to discharge through a fixed value resistor; and obtaining a resistance of the resistor to be tested according to the first discharging duration, the second discharging duration, and a resistance of the fixed value resistor. The embodiments of the present invention are capable of increasing identification efficiency of the GPIO port.
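
The relationship the abstract relies on can be sketched directly: for a shared capacitor and logic threshold, the RC discharge time is proportional to the resistance (t = RC·ln(V0/Vth)), so the unknown resistance follows from the ratio of the two measured durations. The function and ID-table names below are illustrative assumptions.

```python
def resistance_under_test(t_test_s, t_ref_s, r_ref_ohm):
    """Resistance of the resistor under test from the two measured discharge durations."""
    if t_ref_s <= 0:
        raise ValueError("reference discharge duration must be positive")
    return r_ref_ohm * (t_test_s / t_ref_s)     # t is proportional to R for a shared C

def identify_board(t_test_s, t_ref_s, r_ref_ohm, id_table, tolerance=0.05):
    """Map the measured resistance to an equipment ID using a nominal-value table
    (the table contents and tolerance are illustrative assumptions)."""
    r = resistance_under_test(t_test_s, t_ref_s, r_ref_ohm)
    for nominal, equipment_id in id_table:
        if abs(r - nominal) <= tolerance * nominal:
            return equipment_id
    return None

# Example: 10 kΩ fixed-value resistor; 2.1 ms vs. 1.0 ms discharge implies about 21 kΩ.
print(identify_board(t_test_s=2.1e-3, t_ref_s=1.0e-3, r_ref_ohm=10_000,
                     id_table=[(10_000, "rev-A"), (21_000, "rev-B")]))   # rev-B
```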





Bridge circuit

A bridge circuit of an embodiment includes: a command transfer portion which is configured by wired logic into which a host controller capable of sending a command that corresponds to each of a plurality of devices inputs the command, and which is configured to transfer the inputted command to the plurality of devices; a command analysis portion which is configured by wired logic, and which is configured to analyze the command from the host controller; and a response reply portion which is configured by wired logic, and which is capable of reading out a response based on an analysis result of the command analysis portion from a register that holds a response corresponding to the command and sending the response to the host controller.





Driver interface functions to interface client function drivers

In embodiments of driver interface functions to interface client function drivers, a set of serial communication protocol driver interfaces are exposed by a core driver stack, and the serial communication protocol driver interfaces include driver interface functions to interface with client function drivers that correspond to client devices configured for data communication in accordance with the serial communication protocol. A client function driver can check for the availability of a driver interface function before interfacing with the core driver stack via the serial communication protocol driver interfaces. A contract version identifier can also be received from the client function driver via an extension of the driver interface functions, where the contract version identifier indicates a set of operation rules by which the client function driver interfaces with the core driver stack.





Automatic pinning and unpinning of virtual pages for remote direct memory access

In one exemplary embodiment, a computer-implemented method includes receiving, at a remote direct memory access (RDMA) device, a plurality of RDMA requests referencing a plurality of virtual pages. Data transfers are scheduled for the plurality of virtual pages, wherein the scheduling occurs at the RDMA device. The number of the virtual pages that are currently pinned is limited for the RDMA requests based on a predetermined pinned page limit.
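
A minimal host-side sketch of the pinned-page limiting idea, assuming a simple FIFO of waiting pages and pin/unpin/transfer callbacks; the patent performs this scheduling on the RDMA device itself, and the class and callback names are illustrative.

```python
from collections import deque

class PinScheduler:
    """Illustrative host-side model; the names and callbacks are assumptions."""

    def __init__(self, pinned_page_limit, pin, unpin, transfer):
        self.limit = pinned_page_limit          # predetermined pinned page limit
        self.pin, self.unpin, self.transfer = pin, unpin, transfer
        self.pinned = set()
        self.waiting = deque()                  # virtual pages waiting for a pin slot

    def request(self, virtual_pages):
        """Queue an RDMA request that references the given virtual pages."""
        self.waiting.extend(virtual_pages)
        self._schedule()

    def _schedule(self):
        # Pin pages and start transfers only while under the pinned-page limit.
        while self.waiting and len(self.pinned) < self.limit:
            page = self.waiting.popleft()
            if page not in self.pinned:
                self.pin(page)
                self.pinned.add(page)
            self.transfer(page)

    def complete(self, page):
        """A transfer for `page` finished: unpin it and admit waiting pages."""
        if page in self.pinned:
            self.unpin(page)
            self.pinned.remove(page)
        self._schedule()
```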





Modifying a dispersed storage network memory data access response plan

A dispersed storage network memory includes a pool of storage nodes, where the pool of storage nodes stores a multitude of encoded data files. A storage node obtains and analyzes data access response performance data for each of the storage nodes to produce a modified data access response plan that includes the identity of an undesired performing storage node and an alternative data access response for the undesired performing storage node. The storage nodes receive corresponding portions of a data access request for at least a portion of one of the multitude of encoded data files. The undesired performing storage node or another storage node processes one of the corresponding portions of the data access request in accordance with the alternative data access response.





System and method for generating a virtual PCI-type configuration space for a device

An electronic data tablet has a controller and transition manager. The controller is to store in a memory of the tablet virtual configuration space information for a peripheral device of a computer, and the transition manager is to control the controller to operate in a first mode and a second mode. The virtual configuration space information is stored in the tablet memory when the first mode is to be switched to the second mode. When the second mode is switched to the first mode, the virtual configuration space information is accessed to control recognition of the peripheral device of the computer without performing a re-scanning operation.





Input/output monitoring mechanism

Machines, systems and methods for I/O monitoring in a plurality of compute nodes and a plurality of services nodes utilizing a Peripheral Component Interconnect Express (PCIe) interconnect are provided. In one embodiment, the method comprises assigning at least one virtual function to a services node and a plurality of compute nodes by the PCIe interconnect and a multi-root I/O virtualization (MR-IOV) adapter. The MR-IOV adapter enables bridging of a plurality of compute node virtual functions with corresponding services node virtual functions. A front-end driver on the compute node requests the services node virtual function to send data, and the data is transferred to the services node virtual function by the MR-IOV adapter. A back-end driver running in the services node receives and passes the data to a software service to modify/monitor the data. The back-end driver sends the data to another virtual function or an external entity.





Portable computing device as control mechanism

A portable or mobile computing device, such as a smart phone or portable media player, can be used to control one or more electronic devices over an appropriate wireless channel. In one example, a user can utilize a smart phone as a mouse for a notebook computer or Internet-capable television. The user can move the portable device on a surface and press appropriate selectable elements on the portable device, as if the user were using a wireless mouse. The portable device can send the commands over the wireless channel to the electronic device, providing inputs and/or control signals to that device. In some embodiments, the user can take advantage of the processing capability of the portable device to work directly with elements such as a wireless keyboard and wireless monitor, without the need for a notebook or other such computing element therebetween.





System and method of interacting with data at a wireless communication device

A method of interacting with data at a wireless communication device is provided. The wireless communication device has access to a first set of capabilities. Data is received at the wireless communication device via a wireless transmission. The data represents visual content that is viewable via a display device. A graphical user interface, including a delayed action selector, is provided via the display device. An input is received within a limited period of time after displaying the delayed action selector. The input is associated with a command to delay execution of an action with respect to the data until the wireless communication device has access to a second set of capabilities. The action is not supported by the first set of capabilities but is supported by the second set of capabilities. An indication of receipt of the input is provided at the wireless communication device.





Using host transfer rates to select a recording medium transfer rate for transferring data to a recording medium

Provided are a storage device, controller, and method for using host transfer rates to select a recording medium transfer rate for transferring data to a recording medium. A host transfer rate of data with respect to a buffer is measured. Provided are a plurality of recording medium transfer rates at which data is transferred between the buffer and the recording medium. A determination is made of an amount of decrease in the host transfer rate. The recording medium transfer rate is selected based on the amount of decrease in the host transfer rate. A transfer rate at which the storage device transfers data is set to the selected recording medium transfer rate.
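
A minimal sketch of the selection step, assuming a simple policy in which a larger measured drop in the host-to-buffer transfer rate selects a lower recording-medium rate; the rate table and the 25%-per-step threshold are illustrative, not taken from the disclosure.

```python
def select_medium_rate(previous_host_rate, current_host_rate, medium_rates):
    """Pick one of the supported recording-medium transfer rates (listed fastest first)."""
    decrease = max(0.0, previous_host_rate - current_host_rate)
    fraction = decrease / previous_host_rate if previous_host_rate else 0.0
    # Assumed policy: step down one supported rate for every 25% drop in the host rate.
    index = min(int(fraction / 0.25), len(medium_rates) - 1)
    return medium_rates[index]

rates = [160, 120, 80, 40]                       # supported medium rates, MB/s (illustrative)
print(select_medium_rate(150.0, 150.0, rates))   # 160 -- no decrease measured
print(select_medium_rate(150.0, 60.0, rates))    # 80  -- a 60% drop steps down two rates
```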





Hardware streaming unit

A processor having a streaming unit is disclosed. In one embodiment, a processor includes one or more execution units configured to execute instructions of a processor instruction set. The processor further includes a streaming unit configured to execute a first instruction of the processor instruction set, wherein executing the first instruction comprises the streaming unit loading a first data stream from a memory of a computer system. The first data stream comprises a plurality of data elements. The first instruction includes a first argument indicating a starting address of the first stream, a second argument indicating a stride between the data elements, and a third argument indicative of an ending address of the stream. The streaming unit is configured to output a second data stream corresponding to the first data stream.
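
A minimal software model of the strided stream load the first instruction describes (the actual streaming unit is hardware); the toy memory and element size are illustrative assumptions, while the arguments mirror the abstract: starting address, stride, and ending address.

```python
def load_stream(memory, start, stride, end):
    """Return the data elements at start, start+stride, ... up to (and not past) end."""
    if stride <= 0:
        raise ValueError("stride must be positive")
    return [memory[addr] for addr in range(start, end + 1, stride)]

# Toy memory holding one element per address (an illustrative assumption).
memory = {addr: addr * 10 for addr in range(0, 64)}
print(load_stream(memory, start=0, stride=8, end=32))
# [0, 80, 160, 240, 320] -- the elements at addresses 0, 8, 16, 24 and 32
```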





Semiconductor memory device and operation method thereof

A semiconductor memory device includes a selection signal generation unit configured to generate a plurality of selection signals that are sequentially activated, a path selection unit configured to select a transmission path of sequentially input information data in response to the plurality of selection signals, a plurality of first storage units, each configured to have a first storage completion time and store an output signal of the path selection unit, and a plurality of second storage units, each configured to have a second storage completion time, which is longer than the first storage completion time, and store a respective output signal of the plurality of first storage units.





Method for combining non-latency-sensitive and latency-sensitive input and output

Systems, mediums, and methods are provided for scheduling input/output requests to a storage system. The input/output requests may be received, categorized based on their priority, and scheduled for retrieval from the storage system. Lower priority requests may be divided into smaller sub-requests, and the sub-requests may be scheduled for retrieval only when there are no pending higher priority requests, and/or when higher priority requests are not predicted to arrive for a certain period of time. By servicing the small sub-requests rather than the entire lower priority request, the retrieval of the lower priority request may be paused in the event that a high priority request arrives while the lower priority request is being serviced.
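
A minimal sketch of the scheduling idea, assuming two priority classes, a fixed sub-request size, and a boolean hint for whether a higher-priority request is expected soon; the class and queue names are illustrative.

```python
from collections import deque

class IoScheduler:
    """Illustrative model: two priority classes and a fixed sub-request size."""

    def __init__(self, sub_request_bytes=64 * 1024):
        self.high = deque()            # latency-sensitive requests, serviced whole
        self.low = deque()             # non-latency-sensitive sub-requests
        self.sub_size = sub_request_bytes

    def submit(self, request_bytes, latency_sensitive):
        if latency_sensitive:
            self.high.append(request_bytes)
        else:
            # Divide the lower priority request into small sub-requests so its
            # retrieval can be paused between sub-requests if a high priority
            # request arrives while it is being serviced.
            self.low.extend(self._split(request_bytes))

    def _split(self, size):
        return [min(self.sub_size, size - off) for off in range(0, size, self.sub_size)]

    def next_io(self, high_priority_expected_soon=False):
        """Size of the next chunk to issue to the storage system, or None to idle."""
        if self.high:
            return self.high.popleft()
        if self.low and not high_priority_expected_soon:
            return self.low.popleft()
        return None
```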





Methods and systems for mapping a peripheral function onto a legacy memory interface

A memory system includes a CPU that communicates commands and addresses to a main-memory module. The module includes a buffer circuit that relays commands and data between the CPU and the main memory. The memory module additionally includes an embedded processor that shares access to main memory in support of peripheral functionality, such as graphics processing, for improved overall system performance. The buffer circuit facilitates the communication of instructions and data between the CPU and the peripheral processor in a manner that minimizes or eliminates the need to modify the CPU, and consequently reduces practical barriers to the adoption of main-memory modules with integrated processing power.





Data transfer device and method

A transfer control circuit stores data in a FIFO memory, outputs data from the FIFO memory in response to a data request signal, and outputs a state signal in accordance with an amount of stored data in the FIFO memory. An output data generating unit outputs image data having a horizontal image size in accordance with a horizontal count value and a horizontal synchronizing signal, and thereafter outputs blank data. When the state signal indicates that the FIFO memory is in an “EMPTY” or “MODERATE” storage state, a blank control unit outputs a blank addition signal until the FIFO memory changes to a “FULL” storage state.
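
A minimal behavioural sketch of the blank-control decision; the state names follow the abstract (“EMPTY”, “MODERATE”, “FULL”), while the function signature and the hold-previous-decision handling of any intermediate states are assumptions.

```python
def blank_addition_active(fifo_state, currently_blanking):
    """Assert the blank addition signal from EMPTY/MODERATE until the FIFO reports FULL."""
    if fifo_state in ("EMPTY", "MODERATE"):
        return True                    # start (or keep) outputting blank data
    if fifo_state == "FULL":
        return False                   # enough data buffered; resume image data
    return currently_blanking          # any intermediate state: hold the previous decision

# Usage over a sequence of state-signal samples ("NEARLY_FULL" is an assumed
# intermediate state handled by the hold-previous-decision branch).
blanking = False
for state in ["MODERATE", "NEARLY_FULL", "FULL", "NEARLY_FULL"]:
    blanking = blank_addition_active(state, blanking)
    print(state, blanking)   # True, True, False, False
```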





Vertex array access bounds checking

Aspects of the invention relate generally to validating array bounds in an API emulator. More specifically, an OpenGL (or OpenGL ES) emulator may examine each array accessed by a 3D graphic program. If the program requests information outside of an array, the emulator may return an error when the graphic is drawn. However, when the user (here, a programmer) queries the value of the array, the correct value (or the value provided by the programmer) may be returned. In another example, the emulator may examine index buffers which contain the indices of the elements on the other arrays to access. If the program requests a value which is not within the range, the emulator may return an error when the graphic is drawn. Again, when the programmer queries the value of the array, the correct value (or the value provided by the programmer) may be returned.
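
A minimal sketch of the emulator-side behaviour described above, not actual OpenGL ES: values supplied by the programmer are kept as-is and returned on query, while indices are validated only when the draw call is emulated. Class and method names are illustrative.

```python
class EmulatedArrays:
    """Illustrative emulator model; not the actual OpenGL ES API."""

    def __init__(self):
        self.arrays = {}                 # name -> list of vertex attributes

    def buffer_data(self, name, values):
        self.arrays[name] = list(values)

    def query(self, name):
        # Queries return exactly what the programmer provided, even if a draw call
        # using out-of-range indices has been rejected.
        return self.arrays[name]

    def draw_elements(self, name, indices):
        data = self.arrays[name]
        out_of_bounds = [i for i in indices if i < 0 or i >= len(data)]
        if out_of_bounds:
            return f"error: indices {out_of_bounds} outside array '{name}'"
        return [data[i] for i in indices]

gl = EmulatedArrays()
gl.buffer_data("positions", [0.0, 1.0, 2.0])
print(gl.draw_elements("positions", [0, 2, 5]))   # error: indices [5] outside array 'positions'
print(gl.query("positions"))                      # [0.0, 1.0, 2.0]
```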





Data storage device and operating method thereof

A data storage device includes a first memory device configured to store data having a first property, a second memory device configured to store data having a second property, and a controller. The controller selects data stored in the first memory device, and transfers the selected data to the second memory device or stores the selected data in another physical location of the first memory device selectively depending on an update count (UC) of an address at which the selected data is stored.
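
A minimal sketch of the placement decision. The abstract does not say which way the update count steers the data, so this assumes frequently updated (“hot”) data stays in the first memory device, rewritten at another physical location, while rarely updated data migrates to the second device; the threshold and allocator are toys.

```python
HOT_UPDATE_COUNT = 8          # illustrative threshold

def allocate(mem, span=1024):
    """Toy allocator: lowest free address in the device."""
    return next(a for a in range(span) if a not in mem)

def place(selected_address, update_counts, first_mem, second_mem):
    hot = update_counts.get(selected_address, 0) >= HOT_UPDATE_COUNT
    target_mem = first_mem if hot else second_mem        # assumed: hot data stays put
    new_address = allocate(target_mem)                   # chosen before the old slot frees,
    target_mem[new_address] = first_mem.pop(selected_address)   # so it is another location
    return new_address

first_mem, second_mem = {0: "blk-A", 1: "blk-B"}, {}
print(place(0, {0: 12}, first_mem, second_mem), first_mem)    # 2 {1: 'blk-B', 2: 'blk-A'}
print(place(1, {1: 1}, first_mem, second_mem), second_mem)    # 0 {0: 'blk-B'}
```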





Multipass programming in buffers implemented in non-volatile data storage systems

The various implementations described herein include systems, methods and/or devices used to enable multipass programming in buffers implemented in non-volatile data storage systems (e.g., using one or more flash memory devices). In one aspect, a portion of memory (e.g., a page in a block of a flash memory device) may be programmed many (e.g., 1000) times before an erase is required. Some embodiments include systems, methods and/or devices to integrate Bloom filter functionality in a non-volatile data storage system, where a portion of memory storing one or more bits of a Bloom filter array may be programmed many (e.g., 1000) times before the contents of the portion of memory need to be moved to an unused location in the memory.
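
A minimal Bloom-filter sketch illustrating why it suits multipass programming: adding an element only ever sets bits (0 → 1), which flash can accomplish with additional program operations on the same page, without an erase. The hash construction and sizes are illustrative assumptions.

```python
import hashlib

class BloomFilter:
    """Illustrative filter; hash construction and sizes are assumptions."""

    def __init__(self, num_bits=1024, num_hashes=3):
        self.num_bits, self.num_hashes = num_bits, num_hashes
        self.bits = bytearray(num_bits // 8)      # models one page of filter bits

    def _positions(self, item):
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.num_bits

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)     # 0 -> 1 only; a bit is never cleared

    def might_contain(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(item))

bf = BloomFilter()
bf.add("LBA-42")
print(bf.might_contain("LBA-42"), bf.might_contain("LBA-7"))   # True False (with high probability)
```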





Method and apparatus for calibrating a memory interface with a number of data patterns

Apparatuses and methods of calibrating a memory interface are described. Calibrating a memory interface can include loading and outputting units of a first data pattern into and from at least a portion of a register to generate a first read capture window. Units of a second data pattern can be loaded into and output from at least the portion of the register to generate a second read capture window. One of the first read capture window and the second read capture window can be selected and a data capture point for the memory interface can be calibrated according to the selected read capture window.
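
A minimal sketch of the final selection and calibration step, representing each read capture window as a (start, end) pair of passing delay taps; choosing the narrower window and centring the data capture point within it is an assumption, not necessarily the disclosed selection rule.

```python
def calibrate_capture_point(window_a, window_b):
    """Return the delay tap to use as the data capture point.

    Each window is a (first_passing_tap, last_passing_tap) pair; the narrower
    (worst-case) window is selected and the capture point centred in it.
    """
    narrower = min(window_a, window_b, key=lambda w: w[1] - w[0])
    start, end = narrower
    return (start + end) // 2

# Window from the first data pattern vs. window from the second data pattern.
print(calibrate_capture_point((10, 30), (14, 26)))   # 20: centre of the narrower window
```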





System and method to process event reporting in an adapter

A method and system for an adapter are provided. The adapter includes a plurality of function hierarchies, with each function hierarchy including a plurality of functions and each function being associated with an event. The adapter also includes a plurality of processors for processing one or more events generated by the plurality of functions. The adapter further includes a first set of arbitration modules, where each arbitration module is associated with a function hierarchy and receives interrupt signals from the functions within the associated function hierarchy and selects one of the interrupt signals. The adapter also includes a second set of arbitration modules, where each arbitration module receives processor-specific interrupt signals and selects one of the interrupt signals for processing an event associated with the selected interrupt signal.





Interrupt control method and multicore processor system

In an interrupt control method of a multicore processor system including cores, a cache coherency mechanism, and a device, a first core that detects an interrupt signal from the device writes first data, indicating detection of the interrupt signal, into an area prescribing an interrupt flag in the cache memory of the first core, and notifies the other cores of an execution request for interrupt processing corresponding to the interrupt signal, the cache coherency mechanism establishing coherency among at least the cache memories of the other cores when the first data is written. A second core, different from the first core, that maintains the first data written as the interrupt flag and is notified of the execution request executes the interrupt processing, and overwrites the area prescribing the interrupt flag in the cache memory of the second core with second data indicating no detection of the interrupt signal.





Technique for communicating interrupts in a computer system

A technique to enable efficient interrupt communication within a computer system. In one embodiment, an advanced programmable interrupt controller (APIC) is interfaced via a set of bits within an APIC interface register using various interface instructions or operations, without using memory-mapped input/output (MMIO).





Handling interrupts in a multi-processor system

A data processing apparatus has a plurality of processors and a plurality of interrupt interfaces each for handling interrupt requests from a corresponding processor. An interrupt distributor controls routing of interrupt requests to the interrupt interfaces. A shared interrupt request is serviceable by multiple processors. In response to the shared interrupt request, a target interrupt interface issues an interrupt ownership request to the interrupt distributor, without passing the shared interrupt request to the corresponding processor, if it estimates that the corresponding processor is available for servicing the shared interrupt request. The shared interrupt request is passed to the corresponding processor when an ownership confirmation is received from the interrupt distributor indicating that the processor has been selected for servicing the shared interrupt request.





Dongle device with video encoding and methods for use therewith

A universal serial bus (USB) dongle device includes a USB interface that receives selection data from a host device that indicates a selection of a first video format from a plurality of available formats. The USB interface also receives an input video signal from the host device in the first video format and a power signal from the host device. An encoding module generates a processed video signal in a second video format based on the input video signal, wherein the first video format differs from the second video format. The USB interface transfers the processed video signal to the host device.





Information processing apparatus, method thereof, and storage medium

An information processing apparatus includes a plurality of modules connected in a ring shape via a bus, and each module processes a packet flowing in a single direction around the ring in a predetermined order. Each module includes a communication unit for transmitting a packet received from a first direction in the ring via the bus to a second direction, a discrimination unit for discriminating a packet from among the packets received from the first direction as a processing packet to be processed by the module, and a processing unit connected to the communication unit in a one-to-one manner and configured to process the processing packet. The communication unit transmits the packet processed by the processing unit at an interval equal to or greater than the processing time of a processing packet processed by a module at a later stage in the predetermined order, among the packets transmitted by the communication unit to the second direction.





Optimizing a rate of transfer of data between an RF generator and a host system within a plasma tool

A bus interconnect interfaces a host system to a radio frequency (RF) generator that is coupled to a plasma chamber. The bus interconnect includes a first set of host ports, which are used to provide a power component setting and a frequency component setting to the RF generator. The ports of the first set of host ports are used to receive distinct variables that change over time. The bus interconnect further includes a second set of generator ports used to send a power read back value and a frequency read back value to the host system. The bus interconnect includes a sampler circuit integrated with the host system. The sampler circuit is configured to sample signals at the ports of the first set at selected clock edges to capture operating state data of the plasma chamber and the RF generator.





Versatile lane configuration using a PCIe PIe-8 interface

Each PCIe device may include a media access control (MAC) interface and a physical (PHY) interface that support a plurality of different lane configurations. These interfaces may include hardware modules that support 1×32, 2×16, 4×8, 8×4, 16×2, and 32×1 communication. Instead of physically connecting each of the hardware modules in the MAC interface to respective hardware modules in the PHY interface using dedicated traces, the device may include two bus controllers that arbitrate which hardware modules are connected to an internal bus coupling the two interfaces. When a different lane configuration is desired, the bus controller couples the corresponding hardware module to the internal bus. In this manner, the different lane configurations share the same lanes (and wires) of the bus as the other lane configurations. Accordingly, the shared bus only needs to include enough lanes (and wires) to accommodate the widest lane configuration.





PCI express channel implementation in intelligent platform management interface stack

Certain embodiments of the present disclosure are directed to a baseboard management controller (BMC) that includes a PCI Express (PCIe) interface controller configured to provide access to a PCIe channel over a PCIe link, and firmware. The firmware includes a PCIe module configured to access the PCIe channel through the PCIe interface controller and registered as a PCIe function. A software stack of the BMC communicates, through the PCIe module, with a PCIe device over the PCIe channel.





Bridge between a peripheral component interconnect express interface and a universal serial bus 3.0 device

A bridge includes a Peripheral Component Interconnect Express interface supporting at least two lanes, an Extensible Host Controller Interface, and a Universal Serial Bus 3.0 root hub. The Peripheral Component Interconnect Express interface is used for coupling to a host. Each lane of the at least two lanes provides a highest data transmission speed. The Extensible Host Controller Interface is coupled to the Peripheral Component Interconnect Express interface for storing data transmitted by the Peripheral Component Interconnect Express interface. The Universal Serial Bus 3.0 root hub includes a first controller and a second controller. The first controller and the second controller are used for controlling data transmission of four ports, and a highest data transmission speed provided by each port of the four ports is not more than the highest data transmission speed provided by the lane.





Method to facilitate fast context switching for partial and extended path extension to remote expanders

A method, apparatus, and system for switching from an existing target end device to a next target end device in a multi-expander storage topology by using Fast Context Switching. The method enhances Fast Context Switching by allowing Fast Context Switching to reuse or extend part of an existing connection path to an end device directly attached to a remote expander. The method can include reusing or extending at least a partial path of an established connection between an initiator and the existing target end device for a connection between the initiator and the next target end device, whereby the existing target end device and the next target end device are locally attached to different expanders.





System and method for detecting accidental peripheral device disconnection

A detection device for detecting the manner in which a peripheral device is removed from an electronic device is proposed. The detection device can be on the peripheral device or the electronic device and detects whether the peripheral device was removed in a manner that indicates the removal was intentional or unintentional.





Media file synchronization

The description generally relates to a system designed to synchronize the rendering of a media file between a master device and a sister device. The system is designed so that a media file is simultaneously rendered on a master device and a sister device beginning from identical temporal starting points.





Data transfer control apparatus, data transfer control method, and computer product

A data transfer control apparatus includes a transferring unit that transfers data from a transfer source memory to a transfer destination memory, according to an instruction from a first processor; and a first processor configured to detect a process executed by the first processor, determine whether transfer of the data is urgent, based on the type of the detected process, and control the transferring unit or the first processor to transfer the data, based on a determination result.





Apparatuses enabling concurrent communication between an interface die and a plurality of dice stacks, interleaved conductive paths in stacked devices, and methods for forming and operating the same

Various embodiments include apparatuses, stacked devices and methods of forming dice stacks on an interface die. In one such apparatus, a dice stack includes at least a first die and a second die, and conductive paths coupling the first die and the second die to the common control die. In some embodiments, the conductive paths may be arranged to connect with circuitry on alternating dice of the stack. In other embodiments, a plurality of dice stacks may be arranged on a single interface die, and some or all of the dice may have interleaving conductive paths.





Electronic devices and methods for sharing peripheral devices in dual operating systems

A method for sharing peripheral devices in dual operating systems for an electronic device having at least one peripheral device is provided. The method includes: receiving a setting value for the peripheral device under a first operating system from a user; activating a second operating system; transmitting the setting value to the second operating system; and switching from the first operating system to the second operating system, wherein the second operating system sets the peripheral device with the setting value and enables the electronic device to operate under the second operating system.





Determination of physical connectivity status of devices based on electrical measurement

Embodiments of the invention are generally directed to determination of physical connectivity status of devices based on electrical measurement. An embodiment of a method includes discovering a connection of a first device with a second device, and performing an electrical measurement of the second device by the first device via the connection between the first device and the second device, where performing the electrical measurement includes sensing by the first device of an element of the second device. The method further includes, if the sensing by the first device fails to detect the element of the second device and a predetermined condition for the electrical measurement is enabled, then determining by the first device that the connection with the second device has been lost.





System and method for a workload management and scheduling module to manage access to a compute environment according to local and non-local user identity information

A system, method and computer-readable media for managing a compute environment are disclosed. The method includes importing identity information from an identity manager into a module that performs workload management and scheduling for a compute environment and, unless a conflict exists, modifying the behavior of the workload management and scheduling module to incorporate the imported identity information such that access to and use of the compute environment occur according to the imported identity information. The compute environment may be a cluster or a grid wherein multiple compute environments communicate with multiple identity managers.





Reducing cross queue synchronization on systems with low memory latency across distributed processing nodes

A method is provided for efficient dispatch/completion of a work element within a multi-node data processing system. The method comprises: selecting specific processing units from among the processing nodes to complete execution of a work element that has multiple individual work items that may be independently executed by different ones of the processing units; generating an allocated processor unit (APU) bit mask that identifies at least one of the processing units that has been selected; placing the work element in a first entry of a global command queue (GCQ); associating the APU mask with the work element in the GCQ; and responsive to receipt at the GCQ of work requests from each of the multiple processing nodes or the processing units, enabling only the selected specific ones of the processing nodes or processing units to retrieve work from the work element in the GCQ.
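
A minimal sketch of the APU-mask check at dispatch time, assuming each processing unit is identified by a bit index and the mask is a plain integer; the GCQ entry layout and method names are illustrative.

```python
from collections import deque

class GlobalCommandQueue:
    """Illustrative GCQ model; the entry layout and method names are assumptions."""

    def __init__(self):
        self.entries = deque()

    def enqueue(self, work_items, selected_unit_ids):
        apu_mask = 0
        for unit in selected_unit_ids:            # build the APU bit mask for this element
            apu_mask |= 1 << unit
        self.entries.append({"work": deque(work_items), "apu_mask": apu_mask})

    def request_work(self, unit_id):
        """A processing unit asks for work; only units set in the APU mask receive any."""
        for entry in self.entries:
            if entry["apu_mask"] & (1 << unit_id) and entry["work"]:
                return entry["work"].popleft()
        return None

gcq = GlobalCommandQueue()
gcq.enqueue(["item-0", "item-1", "item-2"], selected_unit_ids=[1, 3])
print(gcq.request_work(3))   # item-0
print(gcq.request_work(0))   # None -- unit 0 is not in the APU mask
```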





Method and system for heterogeneous filtering framework for shared memory data access hazard reports

A system and method for detecting, filtering, prioritizing and reporting shared memory hazards are disclosed. The method includes, for a unit of hardware operating on a block of threads, mapping a plurality of shared memory locations assigned to the unit to a tracking table. The tracking table comprises initialization information for each shared memory location. The method also includes, for an instruction of a program within a barrier region, identifying a potential conflict by identifying a second access to a location in shared memory within a block of threads executed by the hardware unit. First information associated with a first access and second information associated with the second access to the location are determined. Filter criteria are applied to the first and second information to determine whether the instruction causes a reportable hazard. The instruction is reported when it causes the reportable hazard.
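
A minimal sketch of the tracking-table and filtering idea for one block of threads between barriers, assuming a read/write conflict test and a single filter criterion that suppresses same-thread pairs; the real tool's tracking table holds hardware-specific initialization state.

```python
def detect_hazards(accesses):
    """`accesses`: (thread_id, 'read' | 'write', shared_mem_location) tuples issued
    between two barriers; returns the reportable hazards."""
    tracking = {}                     # location -> (thread_id, kind) of the first access
    reports = []
    for thread, kind, loc in accesses:
        if loc not in tracking:
            tracking[loc] = (thread, kind)
            continue
        first_thread, first_kind = tracking[loc]
        conflict = kind == "write" or first_kind == "write"
        same_thread = thread == first_thread          # assumed filter criterion
        if conflict and not same_thread:
            reports.append((loc, (first_thread, first_kind), (thread, kind)))
    return reports

print(detect_hazards([(0, "write", 0x10), (1, "read", 0x10),
                      (0, "read", 0x20), (0, "write", 0x20)]))
# [(16, (0, 'write'), (1, 'read'))] -- the same-thread pair at 0x20 is filtered out
```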





Computing job management based on priority and quota

In one embodiment, the invention provides a method of managing a computing job based on a job priority and a submitter quota.





Resource abstraction via enabler and metadata

Embodiments of the invention provide systems and methods for managing an enabler and dependencies of the enabler. According to one embodiment, a method of managing an enabler can comprise requesting a management function via a management interface of the enabler. The management interface can provide an abstraction of one or more management functions for managing the enabler and/or dependencies of the enabler. In some cases, prior to requesting the management function, metadata associated with the management interface can be read and a determination can be made as to whether the management function is available or unavailable. Requesting the management function via the management interface of the enabler can be performed in response to determining the management function is available. In response to determining the management function is unavailable, one or more alternative functions can be identified based on the metadata and the one or more alternative functions can be requested.





Virtual machine provisioning based on tagged physical resources in a cloud computing environment

A cloud system may create physical resource tags to store relationships between cloud computing offerings, such as computing service offerings, storage offerings, and network offerings, and the specific physical resources in the cloud computing environment. Cloud computing offerings may be presented to cloud customers, the offerings corresponding to various combinations of computing services, storage, networking, and other hardware or software resources. After a customer selects one or more cloud computing offerings, a cloud resource manager or other component within the cloud infrastructure may retrieve a set of tags and determine a set of physical hardware resources associated with the selected offerings. The physical hardware resources associated with the selected offerings may be subsequently used to provision and create the new virtual machine and its operating environment.
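
A minimal sketch of resolving selected offerings to physical resources through tags; the tag store, offering names, and resource identifiers are illustrative assumptions.

```python
# Illustrative tag store: offering names and physical resource identifiers are assumptions.
physical_resource_tags = {
    "compute:large":  ["host-07", "host-12"],
    "storage:ssd":    ["san-array-2"],
    "network:10gbps": ["switch-fabric-b"],
}

def resolve_offerings(selected_offerings):
    """Return the physical resources to use when provisioning the new virtual machine."""
    resources = []
    for offering in selected_offerings:
        resources.extend(physical_resource_tags.get(offering, []))
    return resources

print(resolve_offerings(["compute:large", "storage:ssd"]))
# ['host-07', 'host-12', 'san-array-2']
```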





Managing utilization of physical processors of a shared processor pool in a virtualized processor environment

Systems, methods and computer program products may provide managing utilization of one or more physical processors in a shared processor pool. A method of managing utilization of one or more physical processors in a shared processor pool may include determining a current amount of utilization of the one or more physical processors and generating an instruction message. The instruction message may be at least partially determined by the current amount of utilization. The method may further include sending the instruction message to a guest operating system, the guest operating system having a number of enabled virtual processors.
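
A minimal sketch of the decision step. The abstract does not define the instruction message's contents, so this assumes it tells the guest how many virtual processors to keep enabled; the thresholds and message format are illustrative.

```python
def build_instruction_message(pool_utilization, enabled_virtual_processors):
    """Derive an instruction message from the shared processor pool's utilization
    (thresholds and message contents are illustrative assumptions)."""
    if pool_utilization > 0.85 and enabled_virtual_processors > 1:
        return {"action": "disable_virtual_processor",
                "target_count": enabled_virtual_processors - 1}
    if pool_utilization < 0.40:
        return {"action": "enable_virtual_processor",
                "target_count": enabled_virtual_processors + 1}
    return {"action": "no_change", "target_count": enabled_virtual_processors}

print(build_instruction_message(pool_utilization=0.92, enabled_virtual_processors=4))
# {'action': 'disable_virtual_processor', 'target_count': 3}
```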