m

Handling interrupts in a multi-processor system

A data processing apparatus has a plurality of processors and a plurality of interrupt interfaces each for handling interrupt requests from a corresponding processor. An interrupt distributor controls routing of interrupt requests to the interrupt interfaces. A shared interrupt request is serviceable by multiple processors. In response to the shared interrupt request, a target interrupt interface issues an interrupt ownership request to the interrupt distributor, without passing the shared interrupt request to the corresponding processor, if it estimates that the corresponding processor is available for servicing the shared interrupt request. The shared interrupt request is passed to the corresponding processor when an ownership confirmation is received from the interrupt distributor indicating that the processor has been selected for servicing the shared interrupt request.
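
The handshake described above can be illustrated with a small sketch. The following Python sketch uses hypothetical class and method names (Processor, InterruptDistributor, InterruptInterface, estimated_available) and a first-come-first-served selection policy as an assumption; it only shows the control flow in which the interface asks for ownership before forwarding the shared interrupt to its processor.

    class Processor:
        def __init__(self, name, busy=False):
            self.name, self.busy, self.taken = name, busy, []

        def estimated_available(self):
            return not self.busy             # the interface's availability estimate

        def deliver(self, irq_id):
            self.taken.append(irq_id)        # the interrupt finally reaches the processor

    class InterruptDistributor:
        def __init__(self):
            self.owner = {}                  # shared interrupt id -> winning interface

        def request_ownership(self, irq_id, interface):
            # Assumed policy: the first interface to ask for a still-unowned shared
            # interrupt is selected; later requests are refused.
            if irq_id not in self.owner:
                self.owner[irq_id] = interface
                return True                  # ownership confirmation
            return False

    class InterruptInterface:
        def __init__(self, distributor, processor):
            self.distributor, self.processor = distributor, processor

        def on_shared_interrupt(self, irq_id):
            # The shared interrupt is not passed to the processor yet: first estimate
            # availability locally, then ask the distributor for ownership.
            if not self.processor.estimated_available():
                return
            if self.distributor.request_ownership(irq_id, self):
                self.processor.deliver(irq_id)

    distributor = InterruptDistributor()
    cpu0, cpu1 = Processor("cpu0", busy=True), Processor("cpu1")
    if0, if1 = InterruptInterface(distributor, cpu0), InterruptInterface(distributor, cpu1)
    for iface in (if0, if1):                 # shared interrupt 42 reaches both interfaces
        iface.on_shared_interrupt(42)
    print(cpu0.taken, cpu1.taken)            # [] [42]: only the selected processor services it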




m

Dongle device with video encoding and methods for use therewith

A universal serial bus (USB) dongle device includes a USB interface that receives selection data from a host device that indicates a selection of a first video format from a plurality of available formats. The USB interface also receives an input video signal from the host device in the first video format and a power signal from the host device. An encoding module generates a processed video signal in a second video format based on the input video signal, wherein the first video format differs from the second video format. The USB interface transfers the processed video signal to the host device.




m

Information processing apparatus, method thereof, and storage medium

An information processing apparatus includes a plurality of modules connected in a ring via a bus, and the modules process packets flowing in a single direction around the ring in a predetermined order. Each module includes a communication unit that transmits a packet received from a first direction in the ring, via the bus, to a second direction; a discrimination unit that identifies, from among the packets received from the first direction, a processing packet to be processed by the module; and a processing unit that is connected one-to-one with the communication unit and processes the processing packet. Among the packets that the communication unit transmits to the second direction, packets processed by the processing unit are sent at an interval equal to or longer than the processing time required for a processing packet by a module at a later stage in the predetermined order.




m

Optimizing a rate of transfer of data between an RF generator and a host system within a plasma tool

A bus interconnect interfaces a host system to a radio frequency (RF) generator that is coupled to a plasma chamber. The bus interconnect includes a first set of host ports, which are used to provide a power component setting and a frequency component setting to the RF generator. The ports of the first set of host ports are used to receive distinct variables that change over time. The bus interconnect further includes a second set of generator ports used to send a power read back value and a frequency read back value to the host system. The bus interconnect includes a sampler circuit integrated with the host system. The sampler circuit is configured to sample signals at the ports of the first set at selected clock edges to capture operating state data of the plasma chamber and the RF generator.




m

PCI express channel implementation in intelligent platform management interface stack

Certain embodiments of the present disclosure are directed to a baseboard management controller (BMC) that includes a PCI express (PCIe) interface controller configured to provide access to a PCIe channel over a PCIe link, and firmware. The firmware includes a PCIe module configured to access the PCIe channel through the PCIe interface controller and registered as a PCIe function. A software stack of the BMC communicates, through the PCIe module, with a PCIe device over the PCIe channel.




m

Bridge between a peripheral component interconnect express interface and a universal serial bus 3.0 device

A bridge includes a Peripheral Component Interconnect Express interface supporting at least two lanes, an Extensible Host Controller Interface, and a Universal Serial Bus 3.0 root hub. The Peripheral Component Interconnect Express interface is used for coupling to a host. Each of the at least two lanes provides a highest data transmission speed. The Extensible Host Controller Interface is coupled to the Peripheral Component Interconnect Express interface for storing data transmitted by the Peripheral Component Interconnect Express interface. The Universal Serial Bus 3.0 root hub includes a first controller and a second controller. The first controller and the second controller are used for controlling data transmission of four ports, and the highest data transmission speed provided by each of the four ports is not more than the highest data transmission speed provided by each lane.




m

Method to facilitate fast context switching for partial and extended path extension to remote expanders

A method, apparatus, and system for switching from an existing target end device to a next target end device in a multi-expander storage topology using Fast Context Switching. The method enhances Fast Context Switching by allowing it to reuse or extend part of an existing connection path to an end device directly attached to a remote expander. The method can include reusing or extending at least a partial path of an established connection between an initiator and the existing target end device for a connection between the initiator and the next target end device, where the existing target end device and the next target end device are locally attached to different expanders.




m

System and method for detecting accidental peripheral device disconnection

A detection device for detecting the manner in which a peripheral device is removed from an electronic device is proposed. The detection device can be on the peripheral device or the electronic device and detects whether the peripheral device was removed in a manner that indicates the removal was intentional or unintentional.




m

Media file synchronization

The description generally relates to a system designed to synchronize the rendering of a media file between a master device and a sister device. The system is designed so that a media file is simultaneously rendered on a master device and a sister device beginning from identical temporal starting points.




m

Data transfer control apparatus, data transfer control method, and computer product

A data transfer control apparatus includes a transferring unit that transfers data from a transfer source memory to a transfer destination memory according to an instruction from a first processor; and the first processor, configured to detect a process executed by the first processor, determine whether transfer of the data is urgent based on the type of the detected process, and control the transferring unit or the first processor to transfer the data based on the determination result.
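
As an illustration only, the sketch below (Python, hypothetical names; the mapping of "urgent" to the processor-copy path and "non-urgent" to the transferring unit is an assumption, since the abstract only states that the determination result controls which of the two performs the transfer) shows the decision step.

    from collections import deque

    URGENT_PROCESS_TYPES = {"interrupt_handler", "real_time_task"}   # assumed classification

    class TransferringUnit:                   # stands in for the DMA-style transferring unit
        def __init__(self):
            self.queue = deque()
        def enqueue(self, data):
            self.queue.append(data)

    def control_transfer(detected_process_type, data, transferring_unit, cpu_copied):
        """Route the transfer based on whether the detected process makes it urgent."""
        urgent = detected_process_type in URGENT_PROCESS_TYPES
        if urgent:
            cpu_copied.append(data)           # assumed: urgent data is copied by the processor
        else:
            transferring_unit.enqueue(data)   # assumed: non-urgent data is offloaded
        return urgent

    unit, cpu_copied = TransferringUnit(), []
    control_transfer("interrupt_handler", b"cfg", unit, cpu_copied)
    control_transfer("batch_job", b"log", unit, cpu_copied)
    print(len(cpu_copied), len(unit.queue))   # 1 1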




m

Apparatuses enabling concurrent communication between an interface die and a plurality of dice stacks, interleaved conductive paths in stacked devices, and methods for forming and operating the same

Various embodiments include apparatuses, stacked devices and methods of forming dice stacks on an interface die. In one such apparatus, a dice stack includes at least a first die and a second die, and conductive paths coupling the first die and the second die to the interface die, which serves as a common control die. In some embodiments, the conductive paths may be arranged to connect with circuitry on alternating dice of the stack. In other embodiments, a plurality of dice stacks may be arranged on a single interface die, and some or all of the dice may have interleaving conductive paths.




m

Electronic devices and methods for sharing peripheral devices in dual operating systems

A method for sharing peripheral devices in dual operating systems is provided for an electronic device having at least one peripheral device. The method includes: receiving, from a user, a setting value for the peripheral device under a first operating system; activating a second operating system; transmitting the setting value to the second operating system; and switching from the first operating system to the second operating system, wherein the second operating system sets the peripheral device with the setting value and enables the electronic device to operate under the second operating system.




m

Determination of physical connectivity status of devices based on electrical measurement

Embodiments of the invention are generally directed to determination of physical connectivity status of devices based on electrical measurement. An embodiment of a method includes discovering a connection of a first device with a second device, and performing an electrical measurement of the second device by the first device via the connection between the first device and the second device, where performing the electrical measurement includes sensing by the first device of an element of the second device. The method further includes, if the sensing by the first device fails to detect the element of the second device and a predetermined condition for the electrical measurement is enabled, then determining by the first device that the connection with the second device has been lost.
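
The final determination reduces to a small predicate, sketched below in Python with hypothetical names (sense_element, measurement_enabled); the electrical details of the sensing itself are outside the sketch.

    def connection_lost(sense_element, measurement_enabled):
        """sense_element() -> True when the element of the second device is detected."""
        if not measurement_enabled:
            return False                 # predetermined condition not enabled: no determination
        return not sense_element()       # sensing failed while enabled -> connection lost

    print(connection_lost(lambda: True, measurement_enabled=True))    # False: still connected
    print(connection_lost(lambda: False, measurement_enabled=True))   # True: connection lost
    print(connection_lost(lambda: False, measurement_enabled=False))  # False: no determination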




m

System and method for a workload management and scheduling module to manage access to a compute environment according to local and non-local user identity information

A system, method and computer-readable media for managing a compute environment are disclosed. The method includes importing identity information from an identity manager into a module that performs workload management and scheduling for a compute environment and, unless a conflict exists, modifying the behavior of the workload management and scheduling module to incorporate the imported identity information such that access to and use of the compute environment occurs according to the imported identity information. The compute environment may be a cluster or a grid wherein multiple compute environments communicate with multiple identity managers.




m

Reducing cross queue synchronization on systems with low memory latency across distributed processing nodes

A method for efficient dispatch/completion of a work element within a multi-node data processing system. The method comprises: selecting specific processing units from among the processing nodes to complete execution of a work element that has multiple individual work items that may be independently executed by different ones of the processing units; generating an allocated processor unit (APU) bit mask that identifies at least one of the processing units that has been selected; placing the work element in a first entry of a global command queue (GCQ); associating the APU mask with the work element in the GCQ; and, responsive to receipt at the GCQ of work requests from each of the multiple processing nodes or the processing units, enabling only the selected specific ones of the processing nodes or processing units to retrieve work from the work element in the GCQ.
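
A minimal Python sketch of the APU-mask gate is shown below; WorkElement, GlobalCommandQueue and request_work are hypothetical names, and the single-queue loop is a simplification of the patented structure.

    from dataclasses import dataclass, field
    from collections import deque

    @dataclass
    class WorkElement:
        items: deque                 # independently executable work items
        apu_mask: int                # bit i set => processing unit i may take work

    @dataclass
    class GlobalCommandQueue:
        entries: list = field(default_factory=list)

        def request_work(self, unit_id):
            for element in self.entries:
                if element.apu_mask & (1 << unit_id) and element.items:
                    return element.items.popleft()
            return None              # nothing this unit is allowed to retrieve

    # Example: only processing units 0 and 2 were selected for this work element.
    gcq = GlobalCommandQueue([WorkElement(deque(["item-a", "item-b"]), apu_mask=0b101)])
    print(gcq.request_work(1))       # None: unit 1 is not in the APU mask
    print(gcq.request_work(2))       # item-a: unit 2 may retrieve work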




m

Method and system for heterogeneous filtering framework for shared memory data access hazard reports

A system and method for detecting, filtering, prioritizing and reporting shared memory hazards are disclosed. The method includes, for a unit of hardware operating on a block of threads, mapping a plurality of shared memory locations assigned to the unit to a tracking table. The tracking table comprises initialization information for each shared memory location. The method also includes, for an instruction of a program within a barrier region, identifying a potential conflict by identifying a second access to a location in shared memory within a block of threads executed by the hardware unit. First information associated with a first access and second information associated with the second access to the location are determined. Filter criteria are applied to the first and second information to determine whether the instruction causes a reportable hazard. The instruction is reported when it causes a reportable hazard.
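
A toy Python sketch of the tracking-table idea follows; the access tuples, the single filter criterion (ignore same-thread pairs) and the "at least one write" conflict rule are simplifying assumptions, not the patented filter set.

    def find_hazards(accesses, ignore_same_thread=True):
        """accesses: iterable of (location, thread_id, is_write) in program order."""
        tracking = {}                # location -> first (thread_id, is_write) seen
        hazards = []
        for loc, tid, is_write in accesses:
            if loc not in tracking:
                tracking[loc] = (tid, is_write)      # initialization info for the location
                continue
            first_tid, first_write = tracking[loc]
            conflict = first_write or is_write       # at least one access must be a write
            filtered_out = ignore_same_thread and first_tid == tid
            if conflict and not filtered_out:
                hazards.append((loc, (first_tid, first_write), (tid, is_write)))
        return hazards

    # Two threads touch location 0x10 and one of them writes -> reportable hazard.
    print(find_hazards([(0x10, 0, False), (0x10, 1, True)]))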




m

Computing job management based on priority and quota

In one embodiment, the invention provides a method of managing a computing job based on a job priority and a submitter quota.
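
One plausible reading of such a method is sketched below in Python (JobManager, submit and next_job are hypothetical names): a job is admitted only while its submitter is under quota, and admitted jobs are dispatched highest priority first.

    import heapq, itertools

    class JobManager:
        def __init__(self, quotas):
            self.quotas = quotas                       # submitter -> maximum queued jobs
            self.in_use = {}                           # submitter -> currently queued jobs
            self.queue = []                            # min-heap of (-priority, seq, submitter, job)
            self._seq = itertools.count()              # tie-breaker so jobs are never compared

        def submit(self, submitter, priority, job):
            if self.in_use.get(submitter, 0) >= self.quotas.get(submitter, 0):
                return False                           # over quota: the job is rejected
            self.in_use[submitter] = self.in_use.get(submitter, 0) + 1
            heapq.heappush(self.queue, (-priority, next(self._seq), submitter, job))
            return True

        def next_job(self):
            if not self.queue:
                return None
            _, _, submitter, job = heapq.heappop(self.queue)
            self.in_use[submitter] -= 1
            return job

    mgr = JobManager({"alice": 2, "bob": 1})
    mgr.submit("alice", priority=5, job="train-model")
    mgr.submit("bob", priority=9, job="nightly-backup")
    print(mgr.submit("bob", priority=1, job="extra"))  # False: bob's quota is exhausted
    print(mgr.next_job())                              # nightly-backup: highest priority first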




m

Resource abstraction via enabler and metadata

Embodiments of the invention provide systems and methods for managing an enabler and dependencies of the enabler. According to one embodiment, a method of managing an enabler can comprise requesting a management function via a management interface of the enabler. The management interface can provide an abstraction of one or more management functions for managing the enabler and/or dependencies of the enabler. In some cases, prior to requesting the management function, metadata associated with the management interface can be read and a determination can be made as to whether the management function is available or unavailable. Requesting the management function via the management interface of the enabler can be performed in response to determining the management function is available. In response to determining the management function is unavailable, one or more alternative functions can be identified based on the metadata and the one or more alternative functions can be requested.




m

Virtual machine provisioning based on tagged physical resources in a cloud computing environment

A cloud system may create physical resource tags to store relationships between cloud computing offerings, such as computing service offerings, storage offerings, and network offerings, and the specific physical resources in the cloud computing environment. Cloud computing offerings may be presented to cloud customers, the offerings corresponding to various combinations of computing services, storage, networking, and other hardware or software resources. After a customer selects one or more cloud computing offerings, a cloud resource manager or other component within the cloud infrastructure may retrieve a set of tags and determine a set of physical hardware resources associated with the selected offerings. The physical hardware resources associated with the selected offerings may be subsequently used to provision and create the new virtual machine and its operating environment.
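
The tag lookup step can be pictured with the short Python sketch below; the offering names, tag strings and OFFERING_TAGS table are invented placeholders, and real provisioning would of course involve far more than returning a dictionary.

    OFFERING_TAGS = {                 # offering -> tags naming physical resources
        "compute.large": ["rack-3/host-12"],
        "storage.ssd":   ["san-1/pool-ssd"],
        "network.10g":   ["switch-7/vlan-42"],
    }

    def resolve_physical_resources(selected_offerings):
        resources = []
        for offering in selected_offerings:
            resources.extend(OFFERING_TAGS.get(offering, []))
        return resources

    def provision_vm(selected_offerings):
        resources = resolve_physical_resources(selected_offerings)
        # ...the VM and its operating environment would be created on these resources...
        return {"offerings": selected_offerings, "placed_on": resources}

    print(provision_vm(["compute.large", "storage.ssd"]))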




m

Managing utilization of physical processors of a shared processor pool in a virtualized processor environment

Systems, methods and computer program products may provide for managing utilization of one or more physical processors in a shared processor pool. A method of managing utilization of one or more physical processors in a shared processor pool may include determining a current amount of utilization of the one or more physical processors and generating an instruction message. The instruction message may be at least partially determined by the current amount of utilization. The method may further include sending the instruction message to a guest operating system, the guest operating system having a number of enabled virtual processors.




m

System, method and program product for cost-aware selection of stored virtual machine images for subsequent use

A system, method and computer program product for allocating shared resources. Upon receiving requests for resources, the cost of bundling software in a virtual machine (VM) image is automatically generated. Software is selected by cost for each bundle, according to the time required to install it where required, offset by the time to uninstall it where not required. A number of VM images having the highest software bundle value (i.e., the highest-cost bundles) are selected and stored, e.g., in a machine image store. With subsequent requests for resources, VMs may be instantiated from one or more stored VM images and, further, stored images may be selectively updated with new images.
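
The bundle value can be illustrated with the small Python sketch below (hypothetical names and made-up timings): software that a request needs contributes the install time it saves, while software the request does not need is offset by the time to uninstall it.

    def bundle_value(image_software, required, install_time, uninstall_time):
        value = 0
        for sw in image_software:
            if sw in required:
                value += install_time[sw]      # time saved by not having to install it
            else:
                value -= uninstall_time[sw]    # time spent removing what is not wanted
        return value

    install_time = {"db": 40, "web": 15, "cache": 10}
    uninstall_time = {"db": 8, "web": 3, "cache": 2}
    images = {"img-a": ["db", "web"], "img-b": ["db", "cache"]}
    required = {"db", "web"}

    best = max(images, key=lambda i: bundle_value(images[i], required,
                                                  install_time, uninstall_time))
    print(best)   # img-a: both required packages are already bundled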




m

End to end modular information technology system

Embodiments of the invention are directed to a system, method, or computer program product for providing an information technology build service for building a platform in response to a service request. The invention receives a service request for the platform build from a requester, receives a plurality of platform parameters from the requester, determines whether the service request requires one or more physical machines or one or more virtual machines, and if the service request requires one or more virtual machines, initiates build of the one or more virtual machines. The invention also provisions physical and virtual storage based on received parameters, provisions physical and virtual processing power based on received parameters, and manages power of resources during the build, the managing comprising managing power ups, power downs, standbys, idles and reboots of one or more physical components being used for the build.




m

Method and apparatus for generating metadata for digital content

A method and an apparatus for generating metadata for digital content are described, which allow the generated metadata to be reviewed while generation of the metadata is still ongoing. The metadata generation is split into a plurality of processing tasks, which are allocated to two or more processing nodes. The metadata generated by the two or more processing nodes is gathered and visualized on an output unit.




m

System and method for managing mainframe computer system usage

In a mainframe computer system, workload tasks are accomplished using a logically partitioned data processing system that is divided into multiple logical partitions. In a system and method for managing such a computer system, each logical partition runs workload tasks that can be classified based on time criticality, and groups of logical partitions can be freely defined. Processing capacity limits for the logical partitions in a group are set based upon defined processing capacity thresholds and upon an iterative determination of how much capacity is needed for time-critical workload tasks. Workload can be balanced between logical partitions within a group, to prevent surplus processing capacity from being used to run non-time-critical workload on one logical partition while another logical partition running only time-critical workload tasks faces a processing deficit.




m

System and method for below-operating system trapping and securing loading of code into memory

A system for protecting an electronic device against malware includes a memory, an operating system configured to execute on the electronic device, and a below-operating-system security agent. The below-operating-system security agent is configured to trap an attempted access of a resource of the electronic device, access one or more security rules to determine whether the attempted access is indicative of malware, and operate at a level below all of the operating systems of the electronic device accessing the memory. The attempted access includes attempting to write instructions to the memory and attempting to execute the instructions.




m

System and method for event-driven prioritization

Methods include receiving, at a receiving device, a plurality of reports, each corresponding to at least one item and comprising data associated with one or more performance metrics. The methods further include identifying events for each report corresponding to at least one item using the data in the report. In addition, the methods include determining a report score for each report based on the number and type of the identified events. The methods also include outputting the report scores.
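
A minimal Python sketch of the scoring step follows; the event types, weights and the single threshold-breach rule are assumptions used only to make the number-and-type idea concrete.

    EVENT_WEIGHTS = {"threshold_breach": 5, "trend_change": 3, "missing_data": 1}  # assumed

    def identify_events(report):
        events = []
        for metric, value in report["metrics"].items():
            if value > report["thresholds"].get(metric, float("inf")):
                events.append("threshold_breach")
        return events

    def report_score(report):
        events = identify_events(report)
        return sum(EVENT_WEIGHTS.get(e, 0) for e in events)

    report = {"metrics": {"latency_ms": 120, "errors": 2},
              "thresholds": {"latency_ms": 100, "errors": 5}}
    print(report_score(report))   # one threshold breach -> 5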




m

System and method for performing memory management using hardware transactions

The systems and methods described herein may be used to implement a shared dynamic-sized data structure using hardware transactional memory to simplify and/or improve memory management of the data structure. An application (or thread thereof) may indicate (or register) the intended use of an element of the data structure and may initialize the value of the data structure element. Thereafter, another thread or application may use hardware transactions to access the data structure element while confirming that the data structure element is still part of the dynamic data structure and/or that memory allocated to the data structure element has not been freed. Various indicators may be used to determine whether memory allocated to the element can be freed.




m

Network control apparatus and method for port isolation

Some embodiments provide a method for managing a logical switching element that includes several logical ports. The logical switching element receives and sends data packets through the logical ports. The logical switching element is implemented in a set of managed switching elements that forward data packets in a network. The method provides a set of tables for specifying forwarding behaviors of the logical switching element. The method performs a set of database join operations on the tables to specify in the tables that the logical switching element drops a data packet received through a first logical port when the data packet is headed to a second logical port different than the first logical port.




m

Management of inter-dependent configurations of virtual machines in a cloud

A server computer system determines that configuring a first virtual machine in a cloud depends on a configuration result of configuring a second virtual machine. The server computer system configures the second virtual machine in the cloud and configures the first virtual machine in the cloud using the configuration result of the second virtual machine.




m

System and method for automated assignment of virtual machines and physical machines to hosts

A system and method for reconfiguring a computing environment comprising a consumption analysis server, a placement server, an infrastructure management client and a data warehouse in communication with a set of data collection agents and a database. The consumption analysis server operates on measured resource utilization data to yield a set of resource consumptions in regularized time blocks, collects host and virtual machine configurations from the computing environment and determines available capacity for a set of target hosts. The placement server assigns a set of target virtual machines to the target set of hosts in a new placement. In one mode of operation the new placement is nearly optimal. In another mode of operation, the new placement is “good enough” to achieve a threshold score based on an objective function of resource capacity headroom. The new placement is implemented in the computing environment.
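
The "good enough" mode can be sketched as follows in Python; headroom_score and good_enough are hypothetical names, and using the worst per-host headroom as the objective is an assumption standing in for the patented objective function.

    def headroom_score(placement, host_capacity, vm_demand):
        """placement: vm -> host. The score is the worst remaining headroom across hosts."""
        used = {h: 0.0 for h in host_capacity}
        for vm, host in placement.items():
            used[host] += vm_demand[vm]
        return min(host_capacity[h] - used[h] for h in host_capacity)

    def good_enough(candidates, host_capacity, vm_demand, threshold):
        for placement in candidates:
            if headroom_score(placement, host_capacity, vm_demand) >= threshold:
                return placement          # stop early instead of searching for the optimum
        return None

    hosts = {"h1": 32.0, "h2": 32.0}                  # capacity units per host
    demand = {"vm1": 8.0, "vm2": 12.0, "vm3": 6.0}
    candidates = [{"vm1": "h1", "vm2": "h1", "vm3": "h2"},
                  {"vm1": "h1", "vm2": "h2", "vm3": "h2"}]
    print(good_enough(candidates, hosts, demand, threshold=14.0))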




m

Managing safe removal of a passthrough device in a virtualization system

Methods and systems for managing the removal of a passthrough device from a guest managed by a hypervisor in a virtualized computing environment. A hypervisor receives a request from the guest for access to a passthrough device. The hypervisor sets, in a memory, a last accessed state associated with a virtual machine executing the guest. The hypervisor forwards the request to the passthrough device and configures the host CPU to send a subsequent access request directly to the passthrough device. In response to a virtual machine reset, the hypervisor clears the last accessed state and instructs the host CPU to send a post-reset access request to the hypervisor.




m

Virtualization and dynamic resource allocation aware storage level reordering

A system and method for reordering storage levels in a virtualized environment includes identifying a virtual machine (VM) to be transitioned and determining a new storage level order for the VM. The new storage level order reduces the VM live state during a transition, and accounts for hierarchical shared storage memory and criteria imposed by an application to reduce recovery operations after dynamic resource allocation actions. The new storage level order recommendation is propagated to the VMs, and the new storage level order is applied in the VMs. A different storage-level order is recommended after the transition.




m

Method and system for providing storage services

Method and system are provided for managing components of a storage operating environment having a plurality of virtual machines that can access a storage device managed by a storage system. The virtual machines are executed by a host platform that also executes a processor-executable host services module that interfaces with at least a processor-executable plug-in module for providing information regarding the virtual machines and assists in storage related services, for example, replicating the virtual machines.




m

Verification of controls in information technology infrastructure via obligation assertion

A processing device comprises a processor coupled to a memory and implements an obligation management system for information technology infrastructure, with the obligation management system being configured to process a plurality of obligations on behalf of a relying party to verify implementation of corresponding controls in information technology infrastructure of a claimant. A given one of the obligations has an associated obligation fulfiller that is inserted or otherwise deployed as a component within the information technology infrastructure of the claimant and is configured to provide evidence of the implementation of one or more of the controls responsive to an obligation assertion so as to establish an associated trust aspect of the claimant. The information technology infrastructure may comprise distributed virtual infrastructure of a cloud service provider. The claimant may comprise the cloud service provider and the relying party may comprise a tenant of the cloud service provider.




m

Apparatus and methods for adaptive thread scheduling on asymmetric multiprocessor

Techniques for adaptive thread scheduling on a plurality of cores for reducing system energy are described. In one embodiment, a thread scheduler receives leakage current information associated with the plurality of cores. The leakage current information is employed to schedule a thread on one of the plurality of cores to reduce system energy usage. On-chip calibration of the leakage current sensors is also described.
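
A deliberately simplified Python sketch of the placement decision is given below; in an asymmetric multiprocessor the real scheduler would also weigh performance and dynamic power, so picking the idle core with the lowest measured leakage is only an illustrative criterion, and the names are hypothetical.

    def pick_core(leakage_ma, idle_cores):
        """leakage_ma: core id -> calibrated leakage-current reading; idle_cores: candidates."""
        return min(idle_cores, key=lambda core: leakage_ma[core])

    leakage_ma = {0: 35.0, 1: 22.5, 2: 41.0, 3: 19.8}
    print(pick_core(leakage_ma, idle_cores={0, 1, 3}))   # core 3: least leakage energy wasted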




m

Using pause on an electronic device to manage resources

An electronic device for using pause to manage resources is described. The electronic device includes a processor and instructions stored in memory. The electronic device monitors a pause duration and determines whether to perform a resource management operation based on the pause duration. The electronic device performs the resource management operation based on the pause duration.
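
As a toy illustration, the sketch below (Python; the thresholds and operation names are invented) maps a monitored pause duration to a resource management operation.

    def resource_action(pause_seconds):
        """Choose a resource management operation from the monitored pause duration."""
        if pause_seconds >= 300:
            return "release_decoder_and_network"   # long pause: free heavyweight resources
        if pause_seconds >= 30:
            return "reduce_buffering"              # medium pause: trim memory and bandwidth use
        return None                                # short pause: keep resources held

    print(resource_action(12), resource_action(45), resource_action(900))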




m

Remediating gaps between usage allocation of hardware resource and capacity allocation of hardware resource

A usage allocation of a hardware resource to each of a number of workloads over time is determined using a demand model. The usage allocation of the resource includes a current and past actual usage allocation of the resource, a future projected usage allocation of the resource, and current and past actual usage of the resource. A capacity allocation of the resource is determined using a capacity model. The capacity allocation of the resource includes a current and past capacity and a future projected capacity of the resource. Whether a gap exists between the usage allocation and the capacity allocation is determined using a mapping model. Where the gap exists between the usage allocation of the resource and the capacity allocation of the resource, a user is presented with options determined using the mapping model and selectable by the user to implement a remediation strategy to close the gap.
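
The gap test itself is simple and is sketched below in Python with hypothetical names; real demand and capacity models would produce the projected series rather than take literal dictionaries.

    def find_gaps(usage_allocation, capacity):
        """Both arguments map a time point to an amount of the hardware resource."""
        gaps = {}
        for t, used in usage_allocation.items():
            if used > capacity.get(t, 0):
                gaps[t] = used - capacity[t]
        return gaps

    usage = {"2024-Q1": 80, "2024-Q2": 120, "2024-Q3": 150}
    cap   = {"2024-Q1": 100, "2024-Q2": 100, "2024-Q3": 160}
    print(find_gaps(usage, cap))   # {'2024-Q2': 20}: remediation options would be offered here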




m

Managing access to a shared resource by tracking active requestor job requests

The technology of the present application provides a networked computer system with at least one workstation and at least one shared resource such as a database. Access to the database by the workstation is managed by a database management system. An access engine reviews job requests for access to the database and allows job requests access to the resource based on protocols stored by the system.




m

Two-tiered dynamic load balancing using sets of distributed thread pools

By employing a two-tier load balancing scheme, embodiments of the present invention may reduce the overhead of shared resource management, while increasing the potential aggregate throughput of a thread pool. As a result, the techniques presented herein may lead to increased performance in many computing environments, such as graphics intensive gaming.




m

Converting dependency relationship information representing task border edges to generate a parallel program

According to an embodiment, task border edges are identified based on task border information and first-type dependency relationship information. The first-type dependency relationship information contains N nodes corresponding to data accesses to one set of data, contains edges representing dependency relationships between the nodes, and has at least one node with an access reliability flag indicating the reliability or unreliability of the corresponding data access. Of the edges extending over task borders, the task border edges identified are those that have an unreliable access node linked to at least one end, and presentation information containing the unreliable access nodes is generated. According to dependency existence information input for the set of data, conversion information indicating the absence of data access to the unreliable access nodes is output. According to the conversion information, the first-type dependency relationship information is converted into second-type dependency relationship information containing M nodes (0 ≤ M ≤ N) corresponding to data accesses to the set of data and containing edges representing inter-node dependency relationships.




m

Parallel computer system and program

There is provided a parallel computer system that performs barrier synchronization using a master node and a plurality of worker nodes based on time, to allow an adaptive setting of the synchronization time. When a task process in a worker node has not been completed by a worker determination time, that worker node sends a communication to the master node indicating that the process has not been completed. When the communication has been received by a master determination time, the master node sends a communication indicating that the process time is extended by a correction process time, in order to adjust and extend the synchronization time. In this way, it is possible to reduce the synchronization overhead associated with the execution of an application with a relatively large variation in process time from one synchronization point to the next.
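
The timing rule can be condensed into the Python sketch below (hypothetical names; times are plain numbers rather than synchronized clocks): the synchronization point is extended only if a late-worker notice arrives by the master determination time.

    def next_sync_time(sync_time, worker_notice_time, master_determination_time, correction):
        """Extend the synchronization point only when a timely late-worker notice arrived."""
        if worker_notice_time is not None and worker_notice_time <= master_determination_time:
            return sync_time + correction      # a worker reported it is still running
        return sync_time                       # no timely notice: keep the planned time

    print(next_sync_time(100.0, 92.0, 95.0, correction=10.0))   # 110.0: extended
    print(next_sync_time(100.0, None, 95.0, correction=10.0))   # 100.0: unchanged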




m

Reconfigurable processor and method

Disclosed are a reconfigurable processor and processing method, a reconfiguration control apparatus and method, and a thread modeler and modeling method. A memory area of a reconfigurable processor may be divided into a plurality of areas, and a context enabling a thread process may be stored in each of the divided areas in advance. Accordingly, when context switching is performed from one thread to another thread, the other thread may be executed using the information stored in the area corresponding to that thread.




m

Information processing device and task switching method

Disclosed are an information processing device and a task switching method that can reduce the time required for switching tasks in a plurality of coprocessors. The information processing device (30) includes a processor core (301); coprocessors (311 to 31n) including operation units (321 to 32n) that perform operations in response to a request from the processor core (301) and operation storage units (331 to 33n) that store the contents of operation of the operation units (321 to 32n); save storage units (351 to 35n) that store the saved contents of operation; a task switching control unit (302) that outputs a save/restore request signal when switching a task on which operation is performed by the coprocessors (311 to 31n); and save/restore units (341 to 34n) that, in response to the save/restore request signal, perform at least one of saving the contents of operation in the operation storage units (331 to 33n) to the save storage units (351 to 35n) and restoring the contents of operation in the save storage units (351 to 35n) to the operation storage units (331 to 33n).




m

Policy enforcement in virtualized environment

Policy enforcement in an environment that includes virtualized systems is disclosed. Virtual machine information associated with a first virtual machine instance executing on a host machine is received. The information can be received from a variety of sources, including an agent, a log server, and a management infrastructure associated with the host machine. A policy is applied based at least in part on the received virtual machine information.




m

***WITHDRAWN PATENT AS PER THE LATEST USPTO WITHDRAWN LIST***Data transfer control apparatus, data transfer control method, and computer product

A data transfer control apparatus includes a transferring unit that transfers data from a transfer source memory to a transfer destination memory according to an instruction from a first processor; and the first processor, configured to detect a process executed by the first processor, determine whether transfer of the data is urgent based on the type of the detected process, and control the transferring unit or the first processor to transfer the data based on the determination result.




m

Methods and apparatus for resource capacity evaluation in a system of virtual containers

Methods and apparatus are provided for evaluating potential resource capacity in a system where there is elasticity and competition between a plurality of containers. A dynamic potential capacity is determined for at least one container in a plurality of containers competing for a total capacity of a larger container. A current utilization by each of the plurality of competing containers is obtained, and an equilibrium capacity is determined for each of the competing containers. The equilibrium capacity indicates a capacity that the corresponding container is entitled to. The dynamic potential capacity is determined based on the total capacity, a comparison of one or more of the current utilizations to one or more of the corresponding equilibrium capacities, and a relative resource weight of each of the plurality of competing containers. The dynamic potential capacity is optionally recalculated when the plurality of containers changes or after the assignment of each work element.
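
One plausible reading of the computation, not the patented formula, is sketched below in Python: a container's potential capacity is its weighted equilibrium share plus whatever the competing containers are currently leaving unused, capped by the total capacity of the larger container.

    def potential_capacity(container, total, weights, utilization):
        total_weight = sum(weights.values())
        equilibrium = {c: total * w / total_weight for c, w in weights.items()}
        slack = sum(max(equilibrium[c] - utilization[c], 0.0)
                    for c in weights if c != container)
        return min(equilibrium[container] + slack, total)

    weights = {"a": 2, "b": 1, "c": 1}
    utilization = {"a": 10.0, "b": 2.0, "c": 5.0}
    print(potential_capacity("a", total=40.0, weights=weights, utilization=utilization))  # 33.0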




m

Method and apparatus for continuously producing 1,1,1,2,3-pentafluoropropane with high yield

A method and apparatus for continuously producing 1,1,1,2,3-pentafluoropropane with high yield are provided. The method includes (a) bringing a CoF3-containing cobalt fluoride in a reactor into contact with 3,3,3-trifluoropropene to produce a CoF2-containing cobalt fluoride and 1,1,1,2,3-pentafluoropropane, (b) transferring the CoF2-containing cobalt fluoride in the reactor to a regenerator and bringing the transferred CoF2-containing cobalt fluoride into contact with fluorine gas to regenerate a CoF3-containing cobalt fluoride, and (c) transferring the CoF3-containing cobalt fluoride in the regenerator to the reactor and employing the transferred CoF3-containing cobalt fluoride in operation (a). Accordingly, 1,1,1,2,3-pentafluoropropane can be continuously produced with high yield from 3,3,3-trifluoropropene using a cobalt fluoride (CoF2/CoF3) as a fluid catalyst, thereby improving reaction stability and readily adjusting the optimum conversion rate and selectivity.




m

Liquid crystal compound having fluorovinyl group, liquid crystal composition and liquid crystal display device

Provided are a liquid crystal compound having a high stability to heat, light and so forth, a high clearing point, a low minimum temperature of a liquid crystal phase, a small viscosity, a suitable optical anisotropy, a large dielectric anisotropy, a suitable elastic constant and an excellent solubility in other liquid crystal compounds, a liquid crystal composition containing the compound, and a liquid crystal display device including the composition. The compound is represented by formula (1): wherein, for example, R1 is fluorine or alkyl having 1 to 10 carbons; ring A1 and ring A2 are 1,4-phenylene, or 1,4-phenylene in which at least one hydrogen is replaced by fluorine; Z1, Z2 and Z3 are a single bond; L1 and L2 are hydrogen or fluorine; X1 is fluorine or —CF3; and m is 1, and n is 0.




m

Cyclohexene-3,6-diyl compound, liquid crystal composition and liquid crystal display device

Provided is a compound that has both a high clearing point and a low crystallization temperature, and therefore a wide temperature range of a liquid crystal phase, as well as an excellent solubility in other liquid crystal compounds, and that further has the general physical properties necessary for such a compound, namely stability to heat, light and so forth, a suitable optical anisotropy and a suitable dielectric anisotropy. The compound is represented by formula (1): wherein, for example, Ra and Rb are alkyl having 1 to 10 carbons; A1, A2, A3 and A4 are 1,4-phenylene; Z1, Z2, Z3 and Z4 are a single bond or alkylene having 1 to 4 carbons; and m, n, q and r are independently 0, 1, or 2, and the sum of m, n, q and r is 1, 2, 3 or 4.




m

Optically active ammonium salt compound, production intermediate thereof, and production method thereof

An optically active bisbenzyl compound or a racemic bisbenzyl compound represented by formula (2) that has axial chirality, where: R1 represents a halogen, or an optionally substituted: linear, branched, or cyclic C1-8 alkyl, C2-8 alkenyl, C2-8 alkynyl, C6-14 aryl, C3-8 heteroaryl, linear, branched, or cyclic C1-8 alkoxy, or C7-16 aralkyl; R21 each independently represents hydrogen, halogen, nitro, or an optionally substituted: linear, branched, or cyclic C1-8 alkyl, C2-8 alkenyl, C2-8 alkynyl, C6-14 aryl, linear, branched, or cyclic C1-8 alkoxy, or a C7-16 aralkyl; R3 represents hydrogen, or an optionally substituted: C6-14 aryl, a C3-8 heteroaryl, or a C7-16 aralkyl; and Y2 represents a halogen, or an optionally substituted: C1-8 alkylsulfonyloxy, C6-14 arylsulfonyloxy, or C7-16 aralkylsulfonyloxy.