Family +06 IBM Power Server (MME)

IBM Europe Sales Manual
Revised: September 8,




Product life cycle dates



Type Model    Announced    Available    Marketing Withdrawn    Service Discontinued
MME-



Abstract



The IBM Power E is a high-performance, secure enterprise system optimized for the compute-intensive performance demands of large-scale, mission-critical transaction, database, and analytics applications. With up to 80 IBM POWER8 processor cores, an efficient modular design, built-in IBM PowerVM virtualization technologies, and Capacity on Demand innovation, the Power E server is designed to enable applications to run faster and sustain service levels for the most demanding data center applications and private cloud deployments.

Model abstract MME

The IBM Power System E Model MME features: System with processor, memory, and base I/O; Up to sixty-four POWER8 processor cores (8-core SCMs); Up to eighty POWER8 processor cores (10-core SCMs); Up to 4 TB of DDR3 CDIMM memory; Eight PCIe Gen3 x16 I/O expansion slots per system node enclosure, a maximum of 16 per system; System control unit, providing redundant system master clock and redundant system master Flexible Service Processor (FSP) and support for the Op Panel, the system VPD, and the base DVD; PCIe Gen3 4U I/O expansion drawer and PCIe FanOut modules, supporting a maximum of 48 slots; PCIe Gen1, Gen2, and Gen3 adapter cards supported in both the system node and I/O expansion drawer; EXP24S SFF Gen2-bay Drawer with twenty-four small form-factor (SFF) SAS bays; Dynamic logical partition (LPAR) support for adjusting workload placement of processor and memory resources; Active Memory Expansion that is optimized onto the processor chip; 90-day temporary Elastic CoD processor and memory enablement features; and Power Enterprise Pools, supporting unsurpassed enterprise flexibility for workload balancing and system maintenance.




Highlights



The IBM Power System E server is designed as an industry-leading enterprise class system with outstanding price/performance, mainframe-inspired reliability and availability features, flexible capacity upgrades, and innovative virtualization technologies. The Power E model MME features:

  • System with processor, memory, and base I/O
    • Up to sixty-four POWER8 processor cores (8-core SCMs)
    • Up to eighty POWER8 processor cores (10-core SCMs)
    • Up to 4 TB of DDR3 CDIMM memory (up to 2 TB per system node)
    • Eight PCIe Gen3 x16 slots per system node enclosure, a maximum of 16 per 2-node system
  • System control unit, providing redundant system master clock and redundant system master Flexible Service Processor (FSP) and support for the Op Panel, the system VPD, and the base DVD
  • Optional PCIe Gen3 4U I/O Expansion Drawers, each providing 12 PCIe slots
  • EXP24S SFF Gen2-bay Drawer with twenty-four small form-factor (SFF) SAS bays
  • Dynamic logical partition (LPAR) support for adjusting workload placement of processor and memory resources
  • Active Memory Expansion for IBM AIX that is optimized onto the processor chip
  • 90-day temporary Elastic CoD processor and memory enablement features
  • Power Enterprise Pools, supporting unsurpassed enterprise flexibility for workload balancing and system maintenance



Description



Power E enhancements

The Power E (MME) server doubles its previous maximum memory capacity by adding support for 128 GB memory CDIMMs (memory feature #EM8M). Filling all memory slots with these CDIMMs provides 4 TB per system node or a maximum of 8 TB in a two-node system. The same memory plugging rules as previously announced for the Power E server apply, such as the need for a minimum of 4 CDIMMs per SCM and the need for all CDIMMs on an SCM to be the same capacity. Note that an SCM using the 128 GB CDIMMs runs at a slightly lower memory channel bandwidth to ease cooling requirements. Firmware level , or later, is a prerequisite for the #EM8M memory feature.
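
As an informal illustration of the plugging rules just described (at least four CDIMMs per SCM, and all CDIMMs on an SCM the same capacity), the short Python sketch below checks a proposed CDIMM population. It is illustrative only and not an IBM configuration tool; the helper name and example values are hypothetical.

  # Illustrative sketch only: check the CDIMM plugging rules described above.
  def check_scm_cdimms(cdimm_sizes_gb):
      """cdimm_sizes_gb: capacities of the CDIMMs installed behind one SCM."""
      if len(cdimm_sizes_gb) < 4:
          return False, "each SCM needs a minimum of 4 CDIMMs"
      if len(set(cdimm_sizes_gb)) != 1:
          return False, "all CDIMMs on an SCM must be the same capacity"
      return True, "ok"

  print(check_scm_cdimms([128, 128, 128, 128]))  # (True, 'ok')
  print(check_scm_cdimms([64, 64, 128, 128]))    # fails the same-capacity rule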

Power E/E enhancements

The PCIe I/O slot maximum is doubled for both the Power E and E servers. Previously, a maximum of two PCIe Gen3 I/O drawers per system node was announced. Now up to four drawers per node can be attached. Using two 6-slot fan-out modules per drawer provides a maximum of 48 PCIe slots per system node. With two system nodes, up to 96 PCIe Gen3 slots (8 I/O drawers) are supported. With a 4-node Power E server, up to 192 PCIe Gen3 slots (16 I/O drawers) are supported. Firmware level , or later, is required to use the doubled number of I/O drawers.

Additional PCIe I/O drawer configuration flexibility is provided for the Power E and E servers. Previously, only zero or two PCIe I/O drawers could be attached to a system node. Now zero, one, two, three, or four PCIe I/O drawers can be attached per system node. In addition, a "half" drawer consisting of just one PCIe fan-out module in the I/O drawer is also supported, allowing a lower-cost configuration if fewer PCIe slots are required. Thus a system node supports one, two, three, or four half drawers. Because there is a maximum of four EMX0 drawers per node, a single system node cannot have more than four half drawers. A server with more system nodes can support more half drawers, up to four per node. A system can also mix half drawers and full PCIe Gen3 I/O drawers. The maximum of four PCIe Gen3 drawers per system node applies whether full or half drawers are used.
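
The drawer and slot arithmetic above can be restated in a few lines. The sketch below is illustrative only (not an IBM configurator) and simply multiplies the figures given in the text: six slots per fan-out module, up to two modules per drawer, and up to four drawers per system node.

  # Illustrative slot arithmetic for the PCIe Gen3 I/O drawers described above.
  SLOTS_PER_FANOUT = 6
  MAX_MODULES_PER_DRAWER = 2
  MAX_DRAWERS_PER_NODE = 4

  def drawer_slots(system_nodes, drawers_per_node, modules_per_drawer=2):
      if drawers_per_node > MAX_DRAWERS_PER_NODE:
          raise ValueError("a system node supports at most 4 EMX0 drawers")
      if modules_per_drawer > MAX_MODULES_PER_DRAWER:
          raise ValueError("an EMX0 drawer holds at most 2 fan-out modules")
      return system_nodes * drawers_per_node * modules_per_drawer * SLOTS_PER_FANOUT

  print(drawer_slots(system_nodes=2, drawers_per_node=4))   # 96 slots (8 drawers)
  print(drawer_slots(1, 1, modules_per_drawer=1))           # 6 slots ("half" drawer)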

Drawers can be added to the server at a later time, but system downtime must be scheduled for adding a PCIe3 Optical Cable Adapter or a PCIe Gen3 I/O drawer (EMX0) or fan-out module (#EMXF).

An additional AOC cable length (#ECC9) is announced for the Power E and E servers, providing additional cabling and distance flexibility.

The number of PCIe adapters supported in the Power E/E system node is more than doubled. This provides additional configuration flexibility. The following additional adapters, which had been previously announced on other servers, are now supported in the system node:

  • PCIe LP POWER GXT Graphics Accelerator (#)
  • PCIe LP 4Gb 2-Port Fibre Channel Adapter (#)
  • PCIe2 LP 2-Port 4X IB QDR Adapter 40Gb (#)
  • PCIe2 LP 2-port 10GbE SR Adapter (#)
  • PCIe2 LP 3D Graphics Adapter x1 (#EC41)
  • PCIe2 LP 4-Port (10GbE+1GbE) SR+RJ45 Adapter (#EN0T)
  • PCIe2 LP 4-port (10GbE+1GbE) Copper SFP+RJ45 Adapter (#EN0V)
  • PCIe LP 2-Port Async EIA Adapter (#EN28)
  • PCIe LP 10Gb FCoE 2-port Adapter (#)
  • PCIe LP 4-Port 10/100/1000 Base-TX Ethernet Adapter (#)
  • PCIe LP 2-Port 1GbE SX Adapter (#)
  • PCIe LP 4-Port Async EIA Adapter (#)
  • PCIe LP 2-Port 1GbE TX Adapter (#)
  • PCIe2 LP 2-Port 10GbE RoCE SFP+ Adapter (#EC27)
  • PCIe2 LP 4-port (10Gb FCoE & 1GbE) LR&RJ45 Adapter (#EN0N)

The first eight adapters above were listed in an October statement of direction (SOD). This SOD listed 10 low-profile adapters; however, IBM no longer plans to announce support of the following two low-profile Ethernet adapters in the Power E/E system node:

  • PCIe LP 10GbE SR 1-port Adapter (#)
  • PCIe2 LP 4-Port 10GbE&1GbE SR&RJ45 Adapter (#)

Thus the combination of the above eight newly supported adapters plus the two adapters that will not be supported reflects the modified completion of the SOD. Note that the two low-profile adapters that will not be supported have equivalent full-high adapters, which are supported in the PCIe Gen3 I/O Expansion Drawer.

In addition to the above adapters being supported, two new PCIe3 adapters are supported in the system node. These are a 4-port 10Gb Ethernet (#EN16/#EN18) adapter and a 2-port 10Gb Ethernet (#EC37/#EC2M) adapter.

Four additional PCIe adapters that were previously supported on other servers are now supported in the PCIe Gen3 I/O drawer:

  • 10Gb FCoE PCIe Dual Port Adapter (#)
  • PCIe2 4-Port (10GbE and 1GbE) SR and RJ45 Adapter (#)
  • PCIe MB Cache Dual Port 3Gb SAS RAID Adapter (#)
  • PCIe3 4-port (10Gb FCoE and 1GbE) LR and RJ45 Adapter (#EN0M)

In addition to the above four adapters, two new PCIe3 adapters are supported in the PCIe Gen3 I/O Drawer. These are a 4-port 10Gb Ethernet adapter (#EN15/#EN17), and a 2-port 10Gb Ethernet adapter (#EC2N/#EC38).

With additional PCIe Gen3 I/O drawers, the maximum number of PCIe adapters is increased. Associated with the increase in the number of SAS adapters is an increase in the maximum number of EXP24S HDD/SSD drawers (#). The E now supports up to 3, drives with a new maximum of EXP24S drawers. The E now supports up to 4, drives with a new maximum of EXP24S drawers. The maximum of 16 EXP24S drawers per PCIe Gen3 I/O drawer, due to cabling considerations, remains unchanged.




PCIe Gen3 adapters



4-port 10Gb Ethernet Adapter

The PCIe3 4-port 10GbE Adapter doubles the number of 10GbE ports per adapter, saving PCIe slots on POWER8 servers. The adapter is supported in full-high slots such as the PCIe Gen3 I/O drawer and the 4U scale-out system units and in the Power E/E system node slots. It does not fit in the 2U scale-out server low-profile slots. In addition to Ethernet NIC support, it also provides SR-IOV NIC support, which can allow further virtualization and slot savings.

The adapter has four SFP+ ports. All four ports use SR optical fiber cabling or all four use copper twinax cabling. Optical transceivers are included with the SR adapter feature. Copper twinax transceivers are included on the separately ordered copper cables.

                      LP (system node)    Full-high (tail stock)
  SR optical fiber    #EN16               #EN15
  Copper twinax       #EN18               #EN17

IBM i, AIX, and Linux support this adapter, both with and without VIOS. IBM i supports the use of this adapter for IBM i LAN console. NIM and Linux Network Install function is supported. See the feature code description for prerequisite software levels.
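
Selecting the right feature code for this adapter comes down to the cabling type and the slot form factor shown in the table above. The lookup below is a hypothetical convenience sketch (not an IBM ordering tool); the feature numbers are taken from the table.

  # Illustrative lookup only: PCIe3 4-port 10GbE adapter features from the table above.
  FEATURES_4PORT_10GBE = {
      ("sr-optical", "low-profile"): "#EN16",     # system node slots
      ("sr-optical", "full-high"): "#EN15",       # PCIe Gen3 I/O drawer / 4U scale-out
      ("copper-twinax", "low-profile"): "#EN18",
      ("copper-twinax", "full-high"): "#EN17",
  }

  def pick_4port_10gbe(cabling, form_factor):
      return FEATURES_4PORT_10GBE[(cabling, form_factor)]

  print(pick_4port_10gbe("sr-optical", "low-profile"))  # #EN16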

2-port 10Gb NIC/RoCE Adapter

The PCIe3 2-port 10GbE NIC/RoCE Adapter is a refresh of the current 10Gb NIC/RoCE Adapter, helping to ensure adapter availability for POWER8 servers. The adapter can be configured for either NIC or for RoCE capability. NIM and Linux Install is supported, an enhancement compared to the existing PCIe2 NIC/RoCE adapters (#EC27/#EC28/#EC29/#EC30).

RoCE can provide a performance and efficiency boost for the applications using the interface. The adapter is available either for SR optical fiber cabling or for copper twinax cabling. Optical transceivers are included with the SR adapter feature. Copper twinax transceivers are included on the separately ordered copper cables.

              Low-profile    Full-high      Low-profile      Full-high
              SR optical     SR optical     copper twinax    copper twinax
  Linux-only  #EL40          #EL54          #EL3X            #EL53
  Multi-OS    #EC2M          #EC2N          #EC37            #EC38

AIX and Linux support this adapter with and without VIOS. IBM i supports the adapter with VIOS. See the feature code description for prerequisite software levels.

SR-IOV for POWER8 servers

Fulfilling the October SOD, SR-IOV NIC capability is announced for a full range of POWER8 servers, expanding on the Power E/E system node.

SR-IOV can provide more efficient hardware virtualization and can provide quality of service controls improving the manageability of virtualized adapters.

The following Ethernet PCIe adapters are supported with the SR-IOV NIC capability:

                                 Low profile   Full high    Low profile   Full high
                                 multi-OS      multi-OS     Linux only    Linux only
  PCIe3 4-port (10GbE+1GbE)
    SR optical fiber             #EN0J (1)     #EN0H        #EL38         #EL56
  PCIe3 4-port (10GbE+1GbE)
    copper twinax                #EN0L (1)     #EN0K        #EL3C         #EL57
  PCIe3 4-port (10GbE+1GbE)
    LR optical fiber             #EN0N         #EN0M        N/A           N/A
  PCIe3 4-port 10GbE
    SR optical fiber             #EN16 (2)     #EN15        N/A           N/A
  PCIe3 4-port 10GbE
    copper twinax                #EN18 (2)     #EN17        N/A           N/A

Note 1: SR-IOV announced in February for the Power E/E system node; now available in other POWER8 servers.
Note 2: Adapter is available only in the Power E/E system node, not the 2U server.

These adapters each have four ports, and all four ports are enabled with SR-IOV function. Either the entire adapter (all four ports) is configured for SR-IOV, or none of the ports is.

The client can choose to configure the adapter for SR-IOV and assign a percentage of a port to one partition. The FCoE capability of the adapter is not supported when using SR-IOV. SR-IOV can provide simple virtualization without VIOS with greater server efficiency, as more of the virtualization work is done in the hardware and less in the software. SR-IOV can also provide bandwidth quality of service (QoS) controls to help ensure that client-specified partitions have a minimum level of Ethernet port bandwidth and thus improve the ability to share Ethernet ports more effectively.

As a QoS example, assume there are three partitions: Partition A, B, and C. If Partition A is assigned a minimum of 20% of the bandwidth of a port, Partitions B and C cannot use more than 80% of the bandwidth unless Partition A is using less than 20%. Partition A can use more than 20% if bandwidth is available. This applies to outbound NIC traffic from the port. It does not apply to inbound NIC traffic into the port, but remember that these ports run in full duplex and thus QoS for outgoing traffic remains in effect even with high levels of inbound NIC traffic.
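
The QoS arithmetic in the example above can be made concrete with a short sketch. This is only an illustration of the rule as stated (a guaranteed minimum the other partitions cannot take away, plus the ability to borrow idle bandwidth); it is not a model of the actual hypervisor scheduler, and the helper names are hypothetical.

  # Illustrative sketch of the outbound SR-IOV QoS rule described above.
  def others_available_now(port_pct, a_in_use_pct):
      # At any instant, the other partitions can use whatever Partition A is not using.
      return port_pct - a_in_use_pct

  def others_sustained_cap(port_pct, a_guaranteed_pct):
      # Partition A may reclaim its guaranteed minimum at any time, so the others
      # cannot count on more than this in steady state.
      return port_pct - a_guaranteed_pct

  print(others_available_now(100, 20), others_sustained_cap(100, 20))  # 80 80
  print(others_available_now(100, 5), others_sustained_cap(100, 20))   # 95 80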

SR-IOV is considered simple virtualization and does not include higher-level virtualization functions provided by VIOS, such as Live Partition Mobility. SR-IOV can optionally be combined with VIOS to leverage VIOS's higher level of functionality while continuing to provide bandwidth QoS. The portion of the Ethernet port's bandwidth controlled by VIOS would use software virtualization.

The expanded POWER8 SR-IOV NIC capability with a planned availability of June requires:

  • AIX TL9 SP5 and APAR IV, or later
  • AIX TL3 SP5 and APAR IV, or later
  • IBM i TR10, or later
  • IBM i TR2, or later
  • Red Hat Enterprise Linux , or later
  • Red Hat Enterprise Linux 7, or later
  • SUSE Linux Enterprise Server 11 SP3, or later
  • SUSE Linux Enterprise Server 12, or later
  • Ubuntu , or later
  • VIOS , or later
  • Firmware level , or later

This expanded capability applies to the Power scale-out servers, to the PCIe Gen3 I/O drawer, and, with the new 4-port 10Gb Ethernet adapter, to the enterprise system node.

IBM Manufacturing does not preconfigure SR-IOV on SR-IOV-capable I/O. Such hardware will be shipped from IBM when ordered with a server in the same way adapters have been shipped previously; the client will need to configure the SR-IOV capability.

The following POWER8 PCIe slots are SR-IOV capable:

  • All Power E/E system node slots.
  • Slots C1 and C4 of the 6-slot Fan-out Module in a PCIe Gen3 I/O drawer.
  • Slots C6, C7, C10, and C12 of a Power S (1S 4U) or SL (1S 2U) server.
  • Slots C2, C3, C4, C5, C6, C7, C10, and C12 of a S or SL server (2-socket, 4U) with both sockets populated. If only one socket is populated, then C6, C7, C10, and C12 are SR-IOV capable.
  • Slots C2, C3, C5, C6, C7, C10, and C12 of a S or SL server (2-socket, 2U) with both sockets populated. If only one socket is populated, then C6, C7, C10, and C12 are SR-IOV capable.

An HMC is required to configure SR-IOV.
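
For placement planning, the slot-eligibility list above can be condensed into a simple lookup. The sketch below is an illustrative summary only (the dictionary keys are informal shorthand for the server classes in the list, not official model names), and it is not an IBM placement tool.

  # Illustrative summary of the SR-IOV-capable slot list above.
  SRIOV_CAPABLE_SLOTS = {
      "enterprise system node": "all slots",
      "PCIe Gen3 I/O drawer fan-out module": ["C1", "C4"],
      "1-socket scale-out (4U or 2U)": ["C6", "C7", "C10", "C12"],
      "2-socket 4U, both sockets populated": ["C2", "C3", "C4", "C5", "C6", "C7", "C10", "C12"],
      "2-socket 2U, both sockets populated": ["C2", "C3", "C5", "C6", "C7", "C10", "C12"],
      "2-socket, one socket populated": ["C6", "C7", "C10", "C12"],
  }

  def is_sriov_capable(server, slot):
      slots = SRIOV_CAPABLE_SLOTS[server]
      return slots == "all slots" or slot in slots

  print(is_sriov_capable("PCIe Gen3 I/O drawer fan-out module", "C4"))   # True
  print(is_sriov_capable("2-socket 2U, both sockets populated", "C4"))   # False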

e-config recognizes that VIOS is not a prerequisite for the low-profile feature EN0J and EN0L adapters in the Power E and E system node because of their SR-IOV capability. However, e-config will not be updated until June 2 with the same insight for the full-high feature EN0H/EN0K/EN0M adapters in IBM i configurations. Until that planned enhancement, these full-high adapters can be ordered if VIOS is present in the configuration. e-config recognizes that features EN15, EN16, EN17, and EN18 do not require VIOS for IBM i configurations.

Built with innovation that puts data to work across the enterprise, IBM Power Systems provide the foundation for organizations to bring insight to the point of impact twice as fast. The Power E server delivers exceptional performance using enterprise POWER8 processors, each a Single Chip Module (SCM) with cores running over 4 GHz and executing up to eight threads per core (SMT8). Each SCM has dual memory controllers to support up to 1 TB of memory and uses off-chip eDRAM L4 cache. I/O bandwidth is also dramatically increased via dual PCIe Gen3 I/O controllers integrated onto each SCM to further reduce latency.

The IBM Power System E POWER8 system node uses either 8-core or 10-core symmetric multiprocessing (SMP) processor chips with L2 cache and 8 MB of L3 cache per core, DDR3 CDIMM memory, dual memory controllers, and an industry-standard Gen3 PCIe I/O bus designed to use 32 lanes organized in two sets of x16. The peak memory and I/O bandwidths per system node have increased substantially compared to POWER7+ servers. The two primary system building blocks are one system control unit and one or more system nodes. Additional I/O support is provided with a PCIe Gen3 I/O expansion drawer and an EXP24S SFF Gen2 expansion drawer. The processors, memory, and base I/O are packaged within the system nodes. The system nodes are rack based.

The POWER8 processor single chip modules (SCMs) are provided in each system node. Processors are interconnected by two sets of system buses. Each SCM contains two memory controllers per processor module. Four 8-core SCMs or four 10-core SCMs are used in each system node, providing either 32 cores (#EPBA) or 40 cores (#EPBC). As few as eight cores or as many as 100% of the cores can be activated. Incrementing one core at a time is available through built-in capacity on demand (CoD) functions up to the full capacity of the system.

The Power E server can have up to two system nodes per system. With 32-core nodes, the maximum is a 64-core system. With 40-core nodes, the maximum is an 80-core system.

The system control unit provides redundant system master clocks and redundant system master service processors (FSPs). Additionally, it contains the Op Panel, the System VPD, and a DVD.

Memory CDIMMs are available as 64 GB (#EM8J), 128 GB (#EM8K), and 256 GB (#EM8L) memory features. Each memory feature provides four CDIMMs. CDIMMs are custom DIMMs that enhance memory performance and memory reliability. Each system node has 32 CDIMM slots that support a maximum of eight memory features. Using 256 GB memory features yields a maximum of 2 TB per node. A two-node system has a maximum of sixteen memory features and 4 TB capacity. Memory activations of at least 50% of the installed capacity are required.
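
The memory arithmetic just described (four CDIMMs per memory feature, a maximum of eight features per system node, and activations for at least half of the installed capacity) can be sketched as follows. This is an illustration only, not an IBM configurator; the feature size passed in is the total capacity of one memory feature.

  # Illustrative sketch of the memory capacity and activation rules above.
  CDIMMS_PER_FEATURE = 4
  MAX_FEATURES_PER_NODE = 8   # 32 CDIMM slots per node / 4 CDIMMs per feature

  def node_memory_gb(feature_count, feature_size_gb):
      if feature_count > MAX_FEATURES_PER_NODE:
          raise ValueError("a system node holds at most 8 memory features")
      return feature_count * feature_size_gb

  def minimum_activation_gb(installed_gb):
      # At least 50% of installed capacity must be activated.
      return installed_gb * 0.5

  installed = node_memory_gb(8, 256)                  # eight 256 GB (#EM8L) features
  print(installed, minimum_activation_gb(installed))  # 2048 1024.0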

The PCIe Gen3 4U I/O expansion drawer (#EMX0) provides slots to hold PCIe adapters that cannot be placed into a system node. Up to two PCIe I/O expansion drawers were initially supported per system node, providing 24 PCIe Gen3 adapter slots per node. Thus a two-node system has a maximum of 48 PCIe Gen3 slots in I/O drawers plus 16 PCIe slots in the system nodes.

Direct attached storage is supported with the EXP24S SFF Gen2-bay Drawer (#), an expansion drawer with twenty-four small form-factor (SFF) SAS bays.

The 90-day temporary Elastic CoD processor and memory enablement features enable a system to temporarily activate all inactive processor and memory CoD resources for a maximum of 90 days before you must order another temporary elastic enablement feature number.

With Power Enterprise Pools, IBM continues to enhance the ability to freely move processor and memory activations from one system to another system in the same pool, without the need for IBM involvement. This capability now allows the movement of resources not only between like systems, but also between generations of Power Systems, and thus delivers unsurpassed flexibility for workload balancing and system maintenance. Power Enterprise Pools deliver the support to meet clients' business goals when it comes to the following:

  • Providing organizations with a dynamic infrastructure, reduced cost of performance management, improved service levels, and controlled risk management
  • Improving the flexibility, load balancing, and disaster recovery planning and operations of your Power Systems infrastructure
  • Enhanced reliability, availability, and serviceability (RAS) to handle the requirements to accommodate a global economy

Power Enterprise Pools mobile activations are available for use on the Power , , and systems and now on the Power E and E systems.

Summary of features

The following features are available on the Power E

  • One or two 5U 19-inch rack-mount system node drawers
  • One 2U 19-inch rack-mount system control unit drawer
  • Only 12U for a system with two system node drawers
  • One processor feature per system node:
    • (4 x 0/8) 32-core POWER8 processor (#EPBA)
    • (4 x 0/10) 40-core POWER8 processor (#EPBC)
  • Static or mobile processor activation features available on a per core basis
  • 32 CDIMM slots per system node, a minimum of 16 populated per node
  • POWER8 DDR3 memory CDIMMs:
    • 0/64 GB (4 x 16 GB) (#EM8J)
    • 0/128 GB (4 x 32 GB) (#EM8K)
    • 0/256 GB (4 x 64 GB) (#EM8L)
  • Active Memory Expansion -- optimized onto the processor chip (#EM82)
  • Elastic CoD Processor Days -- no-charge, temporary usage of inactive Elastic CoD resources on initial orders (#EPJ3 or #EPJ5)
  • 90 Days Elastic CoD Temporary Processor Enablement (#EP9T)
  • Eight PCIe Gen3 x16 I/O low profile expansion slots per system node (maximum 16 with 2-node system or 32 in a 4-node system)
  • One slim-line, SATA media bay per system control unit enclosure (DVD drive defaulted on order, option to de-select)
  • Redundant hot-swap ac power supplies in each system node drawer
  • Two HMC ports per Flexible Service Processor (FSP) in the system control unit enclosure (maximum of one # and one #)
  • Dynamic logical partition (LPAR) support and processor and memory CUoD
  • PowerVM Virtualization built in:
    • Micro-Partitioning
    • Dynamic logical partitioning
    • Shared processor pools
    • Shared storage pools
    • Live Partition Mobility
    • Active Memory Sharing
    • Active Memory Deduplication
    • NPIV support
    • PowerVP Performance Monitor
  • Optional PowerHA for AIX, IBM i, and Linux
  • Optional PCIe Gen3 I/O Expansion Drawer with 12 PCIe Gen3 slots:
    • Zero or two PCIe Gen3 Drawers per system node (#EMX0)
    • Zero or two or four PCIe Gen3 drawers per 2-node system
    • Each Gen3 I/O drawer holds two 6-slot PCIe3 Fan-out Modules (#EMXF)
    • Each Gen3 I/O drawer attaches to the system node through two PCIe3 Optical Cable Adapters (#EJ07)

Processor cores and memory

  • Each system must have a minimum of eight active processor cores. Each processor feature (#EPBA, #EPBC) will deliver a set of four identical single chip modules (SCMs). All processor features in the system must be identical.
  • Cable features are required to connect system node drawers to the system control unit and to other system nodes.
    • For a single system node configuration, feature ECCA is required.
    • For a dual system node configuration, features ECCA and ECCB are required.
  • Each system node drawer has 32 memory CDIMM slots holding up to eight DDR3 memory features.
  • Each system node drawer must have a minimum of four memory features or 16 DDR3 CDIMMs. Select from features EM8J (64 GB), EM8K (128 GB), or EM8L (256 GB), with four CDIMMs per feature.
  • The minimum activations ordered with all initial orders of memory features EM8J, EM8K, and EM8L must be 50% of their installed capacity.
  • The minimum activations ordered with MES orders of memory features EM8J, EM8K, and EM8L will depend on the total installed capacity of features EM8J, EM8K, and EM8L. This enables you to purchase newly ordered memory with less than 50% activations when the currently activated capacity exceeds 50% of the installed EM8J, EM8K, and EM8L capacity.
  • The minimum activations installed for all memory must be 50% of their installed capacity.
  • DDR3 memory features EM8J, EM8K, and EM8L can be mixed on the same POWER8 system node drawer. If placing two memory features on the same SCM they must be identical.
  • It is recommended that memory be installed evenly across all system node drawers and all SCMs in the system. Balancing memory across the installed system planar cards allows memory access in a consistent manner and typically results in better performance for your configuration.
  • Though maximum memory bandwidth is achieved by filling up all the memory slots, plans for future memory additions should be taken into account when deciding which memory feature size to use at the time of initial system order.

System node PCIe slots

  • Each Power E system node enclosure provides excellent configuration flexibility and expandability with eight half-length, half-high (low profile) x16 PCIe Gen3 slots. The slots are labeled C1 through C8.
    • These PCIe slots can be used for either low-profile PCIe adapters or for attaching a PCIe I/O drawer.
    • A new form factor blind swap cassette (BSC) is used to house the low-profile adapters that go into these slots. The server is shipped with a full set of BSCs, even if some are empty. A feature code to order additional low-profile BSCs is not required or announced.
    • If additional Gen3 PCIe slots beyond the system node slots are required, a system node x16 slot is used to attach a six-slot expansion module in the I/O drawer. An I/O drawer holds two expansion modules that are attached to any two x16 PCIe slots in the same system node or in different system nodes.
  • PCIe Gen1, Gen2, and Gen3 adapter cards are supported in these Gen3 slots. The set of PCIe adapters that are supported is found in the Sales Manual, identified by feature code number.
  • Concurrent repair and add/removal of PCIe adapter cards is done by HMC guided menus or by operating system support utilities.
  • The system nodes sense which IBM PCIe adapters are installed in their PCIe slots; and if an adapter requires higher levels of cooling, they automatically speed up the fans to increase airflow across the PCIe adapters.
  • Each system node supports up to four CAPI adapters, which can be located in slots C2, C4, C6, or C8.

PCIe Gen3 I/O Expansion Drawer:

  • The 4 EIA (4U) PCIe Gen3 I/O Expansion Drawer (#EMX0) with two PCIe FanOut Modules (#EMXF) provides 12 full-length, full-height PCIe I/O slots. One FanOut Module provides six PCIe slots labeled C1 through C6. C1 and C4 are x16 slots, and C2, C3, C5, and C6 are x8 slots.
  • PCIe Gen1, Gen2, and Gen3 full-high adapter cards are supported. The set of full-high PCIe adapters that are supported is found in the Sales Manual, identified by feature code number. See the PCI Adapter Placement manual for the MHE or MME for details and rules associated with specific adapters supported and their supported placement in x8 or x16 slots.
  • A PCIe x16 to Optical CXP converter adapter (#EJ07) and CXP 16X Active Optical Cables (AOC) of two available lengths (#ECC6 or #ECC8) connect the system node to a PCIe FanOut module in the I/O expansion drawer. One feature ECC6 or one ECC8 ships two AOC cables from IBM.
  • The two AOC cables connect to two CXP ports on the fan-out module and to two CXP ports on the EJ07 adapter. The top port of the fan-out module must be cabled to the top port of the EJ07 port. Likewise, the bottom two ports must be cabled together.
  • It is recommended but not required that one I/O drawer be attached to two different system nodes in the same server (one drawer module attached to one system node and the other drawer module attached to a different system node). This can help provide cabling for higher availability configurations.
  • It is generally recommended that any attached PCIe Gen3 I/O Expansion Drawer be located in the same rack as the POWER8 server for ease of service, but they can be installed in separate racks if the application or other rack content requires it. If attaching a large number of cables such as SAS cables or CAT5/CAT6 Ethernet cables to a PCIe Gen3 I/O drawer, then it is generally better to place that EMX0 in a separate rack for better cable management. Limitation: When this cable is ordered with a system in a rack specifying IBM Plant integration, IBM Manufacturing will ship SAS cables longer than 3 meters in a separate box and not attempt to place the cable in the rack. This is because the longer SAS cable is probably used to attach to an EXP24S drawer in a different rack.
  • Concurrent repair and add/removal of PCIe adapter cards is done by HMC guided menus or by operating system support utilities.
  • A blind swap cassette (BSC) is used to house the full-high adapters that go into these slots. The BSC is the same BSC as used with the previous generation server's 12X attached I/O drawers (#, #, #, #). The drawer is shipped with a full set of BSC, even if the BSC is empty. A feature code to order additional full-high BSC is not required or announced.
  • There is a maximum of 16 EXP24S drawers per PCIe Gen3 drawer (#EMX0) to enable SAS cables to be properly handled by the EMX0 cable management bracket.

EXP24S Disk/SSD Drawer

  • The EXP24S SFF Gen2-bay Drawer (#) is an expansion drawer with twenty-four small form-factor (SFF) SAS bays. Slot filler panels are included for empty bays when initially shipped. The EXP24S supports up to 24 hot-swap SFF-2 SAS hard disk drives (HDDs) or solid-state drives (SSDs). It uses only 2 EIA of space in a 19-inch rack. The EXP24S includes redundant ac power supplies and uses two power cords.
  • With AIX, Linux, and VIOS, you can order the EXP24S with four sets of six bays, two sets of 12 bays, or one set of 24 bays (mode 4, 2, or 1). With IBM i, you can order the EXP24S as one set of 24 bays (mode 1). Mode setting is done by IBM Manufacturing and there is no option provided to change the mode after it is shipped from IBM.
  • The EXP24S SAS ports are attached to a SAS PCIe adapter or pair of adapters using SAS YO or X cables.
  • To maximize configuration flexibility and space utilization, the system node does not have integrated SAS bays or integrated SAS controllers. PCIe adapters and the EXP24S can be used to provide direct access storage.
  • To further reduce possible single points of failure, EXP24S configuration rules consistent with previous Power Systems are used. IBM i configurations require the drives to be protected (RAID or mirroring). Protecting the drives is highly recommended, but not required for other operating systems. All Power operating system environments that are using SAS adapters with write cache require the cache to be protected by using pairs of adapters.
  • It is recommended for SAS cabling ease that the EXP24S drawer be located in the same rack in which the PCIe adapter is located. Note, however, it is often a good availability practice to split a SAS adapter pair across two PCIe drawers/nodes for availability and that may make the SAS cabling ease recommendation difficult or impossible to implement.
  • HDDs and SSDs that were previously located in POWER7 system units or in feature or 12X-attached I/O drawers (SFF-1 bays) can be "re-trayed" and placed in EXP24S drawers. See feature conversions previously announced on the POWER7 servers. Ordering a conversion ships an SFF-2 tray or carriage onto which the client can place their existing drive after removing it from the existing SFF-1 tray/carriage. The order also changes the feature code so that IBM configuration tools can better understand what is required.
  • There is a maximum of 16 EXP24S drawers per PCIe Gen3 drawer (#EMX0) to enable SAS cables to be properly handled by the EMX0 cable management bracket.

DVD and boot devices

  • A device capable of reading a DVD must be attached to the system and available to perform operating system installation, maintenance, problem determination, and service actions such as maintaining system firmware and I/O microcode at their latest levels. Alternatively, the system must be attached to a network with software such as AIX NIM server or Linux Install Manager configured to perform these functions.
  • System boot is supported through three options:
    1. Disk or SSD located in an EXP24S drawer attached to a PCIe adapter
    2. A network through LAN adapters
    3. A SAN attached to Fibre Channel or FCoE adapters and indicated to the server by the specify code
  • Assuming option 1 above, the minimum system configuration requires at least one SAS disk drive in the system for AIX and Linux and two for IBM i. If you are using option 3 above, a disk or SSD drive is not required.
  • Each system control unit enclosure can have one slim-line bay that can support one DVD drive (#EU13). The feature EU13 DVD is cabled to a USB PCIe adapter located in either the system node or in a PCIe Gen3 I/O drawer. A USB to SATA converter is included in the configuration without a separate feature code. The USB cable (#EBK4) is used only to attach the #EU13 DVD drive.
  • For IBM i, a DVD drive must be available on the server when required. The DVD can be in the system control unit or it can be located in an external enclosure such as a U3 Multimedia drawer.

Racks

The Power E is designed to fit a standard 19-inch rack. IBM Development has tested and certified the system in the IBM Enterprise rack (T42, or #). The client can choose to place the server in other racks if they are confident those racks have the strength, rigidity, depth, and hole pattern characteristics that are needed. The client should work with IBM Service to determine other racks' appropriateness. The Power E rails can adjust their depth to fit racks within a supported range of depths.

The Power E must be ordered with an IBM 42U enterprise rack (T42 or #). An initial system order is placed in a T42 rack. A same-serial-number model upgrade MES is placed in a feature rack. This is done to ease and speed client installation, provide a more complete and higher quality environment for IBM Manufacturing system assembly and testing, and provide a more complete shipping package.

The T42 or feature is a 2-meter enterprise rack providing 42U or 42 EIA of space. Clients who don't want this rack can remove it from the order, and IBM Manufacturing will then remove the server from the rack after testing and ship the server in separate packages without a rack. Use the chargeable factory-deracking code ER21 on the order to do this.

Additional E PCIe Gen3 I/O drawers (#EMX0) for an already installed server can be MES ordered with or without a rack. When clients want IBM Manufacturing to place these MES I/O drawers into a rack and ship them together (factory integration), then the racks should be ordered as feature codes on the same order as the I/O drawers. Use feature (42U enterprise rack) for this order.

Three rack front door options are supported with Power E system nodes for the 42U enterprise rack (T42 or #): the acoustic door (#), the attractive geometrically accented door (#ERG7), and the cost-effective plain front door (#). The front trim kit is also supported (#). The Power logo rack door (#) is not supported.

It is strongly recommended that the bottom 2U of the rack be left open for cable management when below-floor cabling is used. Likewise, if overhead cabling is used, it is strongly recommended that the top 2U be left open for cable management. If clients are using both overhead and below-floor cabling, leaving 2U open on both the top and bottom of the rack is a good practice. Rack configurations placing equipment in these 2U locations can be more difficult to service if there are a lot of cables running by them in the rack.

The system node and system control unit must be immediately physically adjacent to each other in a contiguous space. The cables connecting the system control unit and the system node are built to very specific lengths. In a two-node configuration, system node 1 is on top, and then the system control unit in the middle and system node 2 is on the bottom. Use specify feature ER16 to reserve 5U space in the rack for a future system node and avoid the work of shifting equipment in the rack in the future.

With the 2-meter T42 or feature rack, a rear rack extension of 8 inches (#ERG0) provides space to hold cables on the side of the rack and keep the center area clear for cooling and service access. Including this extension is very strongly recommended where large numbers of thicker I/O cables are present or may be added in the future. The definition of a "large number" depends on the type of I/O cables used. Around 64 short-length SAS cables per side of a rack, or around 50 longer-length (thicker) SAS cables per side of a rack, is a good rule of thumb. Generally, other I/O cables are thinner and easier to fit in the sides of the rack, and the number of cables can be higher. SAS cables are most commonly found with multiple EXP24S SAS drawers (#) driven by multiple PCIe SAS adapters. For this reason, it can be a very good practice to keep multiple EXP24S drawers in the same rack as the PCIe Gen3 I/O drawer, or in a separate rack close to the PCIe Gen3 I/O drawer, using shorter, thinner SAS cables. The feature ERG0 extension can be good to use even with smaller numbers of cables, as it enhances the ease of cable management with the extra space it provides.

Multiple service personnel are required to manually remove or insert a system node drawer into a rack, given its dimensions, weight, and content. To avoid any delay in service, it is very strongly recommended that the client obtain an optional lift tool (#EB2Z). One EB2Z lift tool can be shared among many servers and I/O drawers. The EB2Z lift tool provides a hand crank to lift and position up to kg ( lb). The EB2Z lift tool is meters x meters (44 in. x in.). Note that a single system node can weigh up to kg ( lb).

System node power

  • Four ac power supplies provide 2 + 2 redundant power for enhanced system availability. A system node is designed to continue functioning with just two working power supplies. A failed power supply can be hot swapped but must remain in the system until the replacement power supply is available for exchange.
  • Four ac power cords are used for each system node (one per power supply) and are ordered using the AC Power Chunnel feature (#EBAA). The chunnel carries power from the rear of the system node to the hot swap power supplies located in the front of the system node where they are more accessible for service.

System control unit power

  • The system control unit is powered from the system nodes. UPIC cables provide redundant power to the system control unit. Two UPIC cables attach to system node drawer 1 and two UPIC cables attach to system node drawer 2. They are ordered with features ECCA and ECCB. Just one UPIC cord is enough to power the system control unit and the rest are in place for redundancy.

Power distribution units (PDU)

  • The Power E server factory integrated into an IBM rack uses horizontal PDUs located in the EIA drawer space of the rack instead of the typical vertical PDUs found in the side pockets of a rack. This is done to aid cable routing. Each horizontal PDU occupies 1U. Vertically mounting the PDUs to save rack space can cause cable routing challenges and interfere with optimal service access.
  • When mounting the horizontal PDUs, it is a good practice to place them almost at the top or almost at the bottom of the rack, leaving 2U or more of space at the very top or very bottom of the rack open for cable management. Mounting a horizontal PDU in the middle of the rack is generally not optimal for cable management.
  • Two possible PDU ratings are supported: 60A/63A (orderable in most countries) and 30A/32A.
    • The 60A/63A PDU supports four system node power supplies and one I/O expansion drawer or eight I/O expansion drawers.
    • The 30A/32A PDU supports two system node power supplies and one I/O expansion drawer or four I/O expansion drawers.
  • Rack-integrated system orders require two of either feature or
    • Feature -- Intelligent PDU with Universal UTG Connector is for an intelligent ac power distribution unit (PDU+) that will allow the user to monitor the amount of power being used by the devices that are plugged in to this PDU+. This ac power distribution unit provides twelve C13 power outlets. It receives power through a UTG connector. It can be used for many different countries and applications by varying the PDU to Wall Power Cord, which must be ordered separately. Each PDU requires one PDU to Wall Power Cord. Supported power cords include the following features: , , , , , , , , and
    • Feature -- Power Distribution Unit mounts in a 19-inch rack and provides twelve C13 power outlets. It has six 16A circuit breakers, with two power outlets per circuit breaker. System units and expansion units must use a power cord with a C14 plug to connect to this feature. One of the following line cords must be used to distribute power from a wall outlet to this feature: feature , , , , , , , , or

Hot-plug options

The following options are hot-plug capable:

  • PCIe I/O adapters.
  • System node ac power supplies: Two functional power supplies must remain installed at all times while the system is operating.
  • System node fans.
  • System control unit fans.
  • System control unit Op Panel.
  • System control unit DVD drive.
  • UPIC power cables from system node to system control unit.

If the system boot device or system console is attached using an I/O adapter feature, that adapter may not be hot-plugged if a nonredundant topology has been implemented.

You can access hot-plug procedures in the product documentation at

cromwellpsi.com

PowerVM

PowerVM Enterprise virtualization is built into the Power E system and provides the complete set of PowerVM virtualization functionality needed for Power enterprise servers with POWER8 technology. This enables efficient resource sharing through virtualization, which allows workload consolidation and secure workload isolation as well as the flexibility to redeploy resources dynamically.

Other PowerVM technologies include the following:

  • System Planning Tool simplifies the process of planning and deploying Power Systems LPARs and virtual I/O.
  • Virtual I/O Server (VIOS) is a single-function appliance that resides in an IBM Power processor-based partition. It facilitates the sharing of physical I/O resources between client partitions (AIX, IBM i, or Linux) within the server.
  • With Live Partition Mobility, you can move a running AIX, Linux, or IBM i LPAR from one physical server to another with no downtime. Use this capability to do the following:
    • Migrate from older generation Power servers to the Power E system.
    • Evacuate workloads from a system before performing scheduled maintenance.
    • Move workloads across a pool of different physical resources as business needs shift.
    • Move workloads away from under-utilized machines so that they can be powered off to save on energy and cooling costs.
  • Active Memory Sharing allows memory to be dynamically moved between running partitions for optimal resource usage.
  • PowerVP Virtualization Performance monitor provides real-time monitoring of a virtualized system, showing the mapping of VMs to physical hardware.

Note: Alternative configuration options are available on a special bid basis from your IBM representative or Business Partner.

Active Memory Expansion

Active Memory Expansion is an innovative POWER7, POWER7+, and POWER8 technology supporting the AIX operating system that allows the effective maximum memory capacity to be much larger than the true physical memory maximum. Sophisticated compression/decompression of memory content can allow substantial memory expansion. This can allow a partition to do significantly more work or support more users with the same physical amount of memory. Similarly, it can allow a server to run more partitions and do more work for the same physical amount of memory.

Active Memory Expansion uses CPU resource to compress/decompress the memory contents. The trade-off of memory capacity for processor cycles can be an excellent choice, but the degree of expansion depends on how compressible the memory content is. It also depends on having adequate spare CPU capacity available for this compression/decompression. Tests in IBM laboratories using sample workloads showed excellent results for many workloads in terms of memory expansion per additional CPU utilized. Other test workloads had more modest results. Feedback from many POWER7 and POWER7+ clients using the function has been very positive.
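
A back-of-the-envelope sketch of the capacity-for-CPU trade follows. The expansion factor and CPU overhead used here are placeholders, not measured values; as noted above, actual results depend on how compressible the memory content is and on the spare CPU available.

  # Back-of-the-envelope sketch only; the factor and overhead are placeholders.
  def effective_memory_gb(physical_gb, expansion_factor):
      # Effective capacity seen by the AIX partition with Active Memory Expansion.
      return physical_gb * expansion_factor

  def cpu_left_for_workload(entitled_cores, ame_overhead_cores):
      # Compression/decompression work comes out of the partition's own entitlement.
      return entitled_cores - ame_overhead_cores

  print(effective_memory_gb(64, 1.5))      # 96.0 GB effective (placeholder factor)
  print(cpu_left_for_workload(4.0, 0.3))   # 3.7 cores for the workload (placeholder)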

POWER7+ and POWER8 chips include a hardware accelerator designed to boost Active Memory Expansion efficiency and use less POWER core resource. The POWER8 accelerator includes some minor enhancements and also leverages POWER8 higher bandwidth and lower latency characteristics.

You have a great deal of control over Active Memory Expansion usage. Each individual AIX partition can turn on or turn off Active Memory Expansion. Control parameters set the amount of expansion desired in each partition to help control the amount of CPU used by the Active Memory Expansion function. An IPL is required for the specific partition that is turning memory expansion on or off. Once turned on, monitoring capabilities are available in standard AIX performance tools such as lparstat, vmstat, topas, and svmon.

A planning tool is included with AIX, allowing you to sample actual workloads and estimate both how expandable the partition's memory is and how much CPU resource is needed. Any Power Systems model can run the planning tool. In addition, a one-time trial of Active Memory Expansion is available to enable more exact memory expansion and CPU measurements. You can request the trial using the Capacity on Demand web page

cromwellpsi.com

Active Memory Expansion is enabled by chargeable hardware feature EM82, which can be ordered with the initial order of the system node or as an MES order. A software key is provided when the enablement feature is ordered, which is applied to the system node. An IPL is not required to enable the system node. The key is specific to an individual system node and is permanent. It cannot be moved to a different server.

The additional CPU resource used to expand memory is part of the CPU resource assigned to the AIX partition running Active Memory Expansion. Normal licensing requirements apply.

IBM i operating system

For clients loading the IBM i operating system, the four-digit numeric QPRCFEAT value used on the MME or MHE is the same as the four-digit numeric feature number for the processors used in the system. For example, if the processor feature number in a system is EPBA, the QPRCFEAT value for the system would be EPBA.

  • The QPRCFEAT value in a Power E system node does not change with the addition of more processors or additional CEC enclosures.
  • The QPRCFEAT value in a Power E system node would change only if the feature number of the processors was changed due to a processor upgrade.

Capacity backup offering (applies to IBM i only)

The Power System Capacity Backup (CBU) designation can help meet your requirements for a second system to use for backup, high availability, and disaster recovery. It enables you to temporarily transfer IBM i processor license entitlements and Enterprise Enablement entitlements purchased for a primary machine to a secondary CBU-designated system. Temporarily transferring these resources instead of purchasing them for your secondary system may result in significant savings. Processor activations cannot be transferred as part of this CBU offering, but programs such as Power Enterprise Pools are available for the function.

The CBU specify feature number is available only as part of a new server purchase or during an MES upgrade from an existing system to a MME. Certain system prerequisites must be met, and system registration and approval are required before the CBU specify feature can be applied on a new server. A used system that has an existing CBU feature cannot be registered. The only way to attain a CBU feature that can be registered is with a plant order.

Standard IBM i terms and conditions do not allow either IBM i processor license entitlements or OLTP (Enterprise Enablement) entitlements to be transferred permanently or temporarily. These entitlements remain with the machine they were ordered for. When you register the association between your primary and on-order CBU system, you must agree to certain terms and conditions regarding the temporary transfer.

After a CBU system designation is approved and the system is installed, you can temporarily move your optional IBM i processor license entitlement and Enterprise Enablement entitlements from the primary system to the CBU system when the primary system is down or while the primary system processor cores are inactive. The CBU system can then better support fail-over and role swapping for a full range of test, disaster recovery, and high availability scenarios. Temporary entitlement transfer means that the entitlement is a property transferred from the primary system to the CBU system and may remain in use on the CBU system as long as the registered primary and CBU system are in deployment for the high availability or disaster recovery operation.

The primary system for an E server can be any of the following:

  • FHB
  • MHE
  • MME
  • MMB
  • MMC
  • MMD
  • MHB
  • MHC
  • MHD

These systems have IBM i software licenses with an IBM i P30 software tier, or higher. The primary machine must be in the same enterprise as the CBU system.

Before you can temporarily transfer IBM i processor license entitlements from the registered primary system, you must have more than one IBM i processor license on the primary machine and at least one IBM i processor license on the CBU server. An activated processor must be available on the CBU server to use the transferred entitlement. You may then transfer any IBM i processor entitlements above the minimum one, assuming the total IBM i workload on the primary system does not require the IBM i entitlement you would like to transfer during the time of the transfer. During this temporary transfer, the CBU system's internal records of its total number of IBM i processor license entitlements are not updated, and you may see IBM i license noncompliance warning messages from the CBU system. Such messages that arise in this situation do not mean you are not in compliance.

Before you can temporarily transfer entitlements, you must have more than one Enterprise Enablement entitlement on the primary server and at least one Enterprise Enablement entitlement on the CBU system. You may then transfer the entitlements that are not required on the primary server during the time of transfer and that are above the minimum of one entitlement. Note that if you are using software replication (versus PowerHA), you may well need more than a minimum of one entitlement on the CBU to support the replication workload.

For example, if you have an core Power as your primary system with four IBM i processor license entitlements (three above the minimum) and two Enterprise Enablement entitlements (one above the minimum), you can temporarily transfer up to three IBM i entitlements and one Enterprise Enablement entitlement. During the temporary transfer, the CBU system's internal records of its total number of IBM i processor entitlements is not updated, and you may see IBM i license noncompliance warning messages from the CBU system.
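
The transfer arithmetic in the example above reduces to: keep at least one entitlement of each type on the primary system, and anything above that minimum that the primary does not need during the transfer may move to the registered CBU system. The helper below is an illustrative restatement of that rule, not an IBM licensing tool.

  # Illustrative restatement of the temporary transfer rule described above.
  def transferable(primary_entitlements, minimum_kept=1):
      # At least one entitlement of each type must remain on the primary system.
      return max(primary_entitlements - minimum_kept, 0)

  # The worked example above: four IBM i processor license entitlements and
  # two Enterprise Enablement entitlements on the primary system.
  print(transferable(4))  # 3 IBM i processor license entitlements may move
  print(transferable(2))  # 1 Enterprise Enablement entitlement may move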

If your primary or CBU machine is sold or discontinued from use, any temporary entitlement transfers must be returned to the machine on which they were originally acquired.

For CBU registration and further information, visit

cromwellpsi.com

Power Enterprise Pools

Power Enterprise Pools provide a level of flexibility and value for systems that operate together as a pool of resources. Power Enterprise Pools mobile activations are available for use on the Power , , and systems and now on the new Power E and E systems. They can be assigned to any system in a predefined pool by the user with simple HMC commands. IBM does not need to be notified when these resources are reassigned within a pool. The simplicity of operations offers new flexibility when managing large workloads in a pool of systems. This capability is especially appealing to aid in providing continuous application availability during maintenance windows. Not only can workloads easily move to alternate systems, but now the activations can move as well.

Now with the availability of the new Power E and E systems, IBM continues to enhance the ability to freely move processor and memory activations from one system to another system in the same pool, without the need for IBM involvement. This capability now allows the movement of resources not only between like systems but also between generations of Power Systems, and thus delivering unsurpassed flexibility for workload balancing and system maintenance.

Now, more than ever, Power Enterprise Pools deliver the support to meet clients' business goals when it comes to the following:

  • Providing organizations with a dynamic infrastructure, reduced cost of performance management, improved service levels, and controlled risk management
  • Improving the flexibility, load balancing, and disaster recovery planning and operations of your Power Systems infrastructure
  • Enhanced reliability, availability, and serviceability (RAS) to handle the requirements to accommodate a global economy

Prerequisites for Power Enterprise Pools:

  • All systems in a pool must be attached to the same HMC (or redundant set of HMCs).
  • Systems must be one of the following models: MMD, MHD, FHB, MME, or MHE.
  • Systems must be within one country.

Two types of pools are available: one that enables Power (MMD) or Power E (MME) class systems to run in the same pool, and another that enables Power (MHD), Power (FHB), and Power E (MHE) class systems to run in the same pool. Both pool types allow both processor and memory activations to move between servers within the pool. Systems with different clock speeds can coexist within the same pool. All systems in the pool must be attached to the same HMC.

Power Systems users now have the satisfaction of knowing that as their requirements change, so can their systems. A simple movement of activations from one system to another helps users rebalance resources and respond to business needs. Maintenance windows now open up more easily as both workloads and activations move transparently across systems. Even disaster recovery planning becomes more manageable with the ability to move activations where and when they are needed. Power Enterprise Pools are just one more reason why enterprise class servers from Power Systems deliver value for your ever-changing business.

Mobile and static activations

A more flexible activation type is employed for Power Enterprise Pools. Historically, only "static" activations that could not move from server to server were available. These static activations remain available on the Power , , and systems and are announced on the E and E; a certain number are required per server. Mobile activation features, however, can be moved within the Power Enterprise Pool. Mobile activations apply to both processor core activations and memory activations.

A base number of static cores must continue to be activated on each system in the pool, consistent with previous minimum configurations. The new Power E and E must have at least eight cores activated in static capability. The Power and Power continue to have the requirement of at least four cores activated in static capability. The Power must have at least 24 cores or 25% of the installed cores, whichever is higher, activated in static capability. All remaining processor core activations on these systems can optionally be mobile activations, be static activations, or be a mixture. Static and mobile core activations can co-reside in the same system and in the same partition.

A maximum of 75% of all physically installed memory can have mobile activations. A minimum of 25% of all memory activations on a server must have static activations. Static and mobile memory activations can co-reside in the same system and in the same partition. Mobile activation feature codes are GB increments.
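
The static and mobile minimums above can be expressed as a short check. The sketch below is illustrative only; the eight-core static minimum is the one stated for the new enterprise systems in this announcement, and other pool members have their own minimums as described in the text.

  # Illustrative check of the static/mobile activation split described above.
  def max_mobile_memory_gb(installed_gb):
      # At most 75% of physically installed memory may carry mobile activations.
      return 0.75 * installed_gb

  def min_static_memory_gb(activated_gb):
      # At least 25% of the memory activations on a server must be static.
      return 0.25 * activated_gb

  def meets_static_core_minimum(static_cores, minimum_static_cores=8):
      return static_cores >= minimum_static_cores

  print(max_mobile_memory_gb(2048))      # 1536.0 GB of a 2 TB system may be mobile
  print(min_static_memory_gb(1024))      # 256.0 GB of 1 TB of activations must be static
  print(meets_static_core_minimum(8))    # True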

Existing static activation features can be converted to mobile activations for memory and cores. To provide administrative and pricing advantages, there are "regular" static core activations and "mobile-enabled" static core activations. The price of a mobile-enabled core activation is higher than a regular static activation. However, the total price of a static core activation plus a conversion to a mobile activation is higher than the total price of a mobile-enabled static core activation plus a conversion to a mobile core activation.

The mobile activation features are as follows:

  • For the Power E and E GB Mobile Memory Activation (#EMA7)
  • For the Power E and E GB Mobile Enabled Memory Activation (#EMA9)
  • For the Power E 1-core Mobile-Enabled Activation for EPBA (#EPBN)
  • For the Power E 1-core Mobile-Enabled Activation for EPBC (#EPBQ)
  • For the Power E 1-core Mobile Activation for EPBB (#EPBK)
  • For the Power E 1-core Mobile-Enabled Activation for EPBB (#EPBP)

The Power Enterprise Pools mobile activation feature codes continue to exist for the Power , , and servers and can co-exist in the same pool as the new Power E and E feature codes. The Power , , and mobile activations feature codes are as follows:

  • For Power , , GB Mobile Memory Activation (#EMA4)
  • For Power 1-Core Mobile Activation (#EP22)
  • For Power 1-core Mobile-enabled activation (#EPMC, #EPMD)
  • For Power , 1-Core Mobile Activation (#EP23)
  • For Power 1-core Mobile-enabled activation (#EPHL, #EPHM)
  • For Power 1-core Mobile-enabled activation (#, #)

Power Enterprise Pools and the HMC

Each Power Enterprise Pool is managed by a single master HMC (a physical hardware HMC or a virtual appliance (vHMC)). The HMC that was used to create a Power Enterprise Pool is set as the master HMC of that pool. After a Power Enterprise Pool is created, a redundant HMC can be configured as a backup. All Power Enterprise Pool resource assignments must be performed by the master HMC. When powering on or restarting a server, ensure that the server is connected to the master HMC so that the required Mobile CoD resources are assigned to the server.

The maximum number of systems in a Power Enterprise Pool is 32 high-end or 48 mid-range systems. An HMC can manage multiple Power Enterprise Pools but is limited to total partitions. The HMC can also manage systems that are not part of the Power Enterprise Pool. Powering down an HMC does not limit the assigned resources of participating systems in a pool but does limit the ability to perform pool change operations.

After a Power Enterprise Pool is created, the HMC can be used to perform the following functions:

  • Mobile CoD processor and memory resources can be assigned to systems with inactive resources. Mobile CoD resources remain on the system to which they are assigned until they are removed from the system.
  • New systems can be added to the pool and existing systems can be removed from the pool.
  • New resources can be added to the pool or existing resources can be removed from the pool.
  • Pool information can be viewed, including pool resource assignments, compliance, and history logs.

Power Enterprise Pools qualifying machines

To qualify for use of the Power Enterprise Pool offering, a participating system must be one of the following:

  • IBM Power E with POWER8 processors, designated as MHE
  • IBM Power E with POWER8 processors, designated as MME
  • IBM Power with POWER7 processors, designated as FHB
  • IBM Power with POWER7+ processors, designated as MHD
  • IBM Power with POWER7+ processors, designated as MMD

Each system must have installed Machine Code release level , or later, and be configured with at least the minimum amount of permanently active processor cores (listed below). Processor and memory activations that are enabled for movement within the pool will be in addition to these base minimum configurations.

Ordering Power Enterprise Pools

Ordering and enabling mobile activations for enterprise class systems is accomplished by following these steps:

  1. Complete and submit the Power Enterprise Pools contract and addendum (Z and Z), specifying all system serial numbers to be included in the pool. To generate a pool ID number, send a copy to the Power Systems CoD Project Office at pcod@cromwellpsi.com. The IBM License Supplement for Power Enterprise Pools (Z) is required prior to ordering mobile resources but is only required once per client. The IBM License Supplement for Power Enterprise Pools Addendum (Z) is used to assign or remove systems to or from a pool.
  2. Order mobile enablement, processor, and memory activation features for participating systems. Every system in the pool must have feature EB35 as an identifier.
  3. Ensure all participating systems and controlling HMCs have the proper level of supporting software (eFW , or later, for systems; , or later, for HMCs)
  4. When the order is fulfilled, a configuration file will be generated that contains a Power Enterprise Pool membership activation code for each of the systems in the pool along with the mobile processor and memory activations. This file will be made available on the IBM COD website at
    cromwellpsi.com

    Download the client-specific configuration file with mobile activations to the controlling HMC for the pool. The file will work only for the specified system serial numbers. A new file will be generated when systems or mobile resources are added or removed from the pool.

Adding or removing systems from Power Enterprise Pools

Adding or removing a system from an established Power Enterprise Pool requires notification to IBM. An updated addendum must be submitted to the Power Systems CoD Project Office (pcod@cromwellpsi.com) to make this change. When the update is processed, a new pool configuration file will be posted on the CoD website and must be downloaded to the controlling HMC.

Before removal from a pool, all assets (including mobile resources) that were originally purchased with the system must be returned to that same system serial number. Mobile assets belonging to a system may qualify for transfer to another system serial number, depending on specific qualifying guidelines, and will require additional administrative action.

Systems removed from a pool can join another pool and contribute mobile activation resources to the new pool or use another system's mobile activation resources. Mobile activations require a pool ID to be recognized.

Capacity on demand

Several types of capacity on demand (CoD) processors are optionally available for the Power E system node. They help meet changing resource requirements in an on demand environment by using resources installed on the system but not activated.

Capacity upgrade on demand (CUOD) enables you to purchase additional permanent processor or memory capacity and dynamically activate it when needed.

Elastic capacity on demand (elastic CoD) enables processors or memory to be temporarily activated in full-day increments as needed. Charges are based on usage reporting collected monthly. Processors and memory can be activated and turned off an unlimited number of times, whenever you want additional processing resources. With this offering, system administrators have an interface at the HMC to manage the activation and deactivation of resources. A monitor that resides on the server logs the usage activity. You must send this usage data to IBM monthly. A bill is then generated based on the total amount of processor and memory resources utilized, in increments of processor days and memory (1 GB) days. Before using temporary capacity on your server, you must enable your server. To do this, order an enablement feature (MES only) and sign the required contracts.
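
As a minimal sketch of the billing arithmetic described above (illustrative only, not an IBM billing tool), the monthly report boils down to multiplying each temporary activation by the number of full days it was requested. All names below are invented for illustration; only the processor-day and GB-day units come from the text.

// Illustrative only: converts elastic CoD activation requests into the
// billable units reported to IBM each month.
interface ElasticRequest {
  cores: number;       // processor cores temporarily activated
  memoryGB: number;    // memory temporarily activated, in GB
  days: number;        // full-day increments requested
}

function billableUnits(requests: ElasticRequest[]) {
  return requests.reduce(
    (totals, r) => ({
      processorDays: totals.processorDays + r.cores * r.days,
      memoryGBDays: totals.memoryGBDays + r.memoryGB * r.days,
    }),
    { processorDays: 0, memoryGBDays: 0 }
  );
}

// Example: 4 cores for 3 days plus 64 GB for 2 days
// -> 12 processor days and 128 GB days to report for the month.
console.log(billableUnits([
  { cores: 4, memoryGB: 0, days: 3 },
  { cores: 0, memoryGB: 64, days: 2 },
]));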

If a Power E system node uses the IBM i operating system and the temporarily activated cores were used for IBM i partitions, you must inform the sales team placing the billing feature order which operating system caused the temporary elastic CoD processor use so that the correct feature can be used for billing.

Use the following features to order enablement features and support billing charges on the Power E

Elastic CoD processor features:

Model | Processor feature | Elastic CoD processor enablement feature | Elastic CoD AIX/Linux processor billing feature | Elastic CoD IBM i processor billing feature
MME | EPBA | EP9T | EPJ6 | EPJ7 (billing increment: 1 Proc-Day)
MME | EPBA | EP9T | EPJ8 | EPJ9 (billing increment: Proc-Days)

Elastic CoD memory features:

Model | Memory feature | Elastic CoD memory enablement feature | Elastic CoD memory billing feature
MME | EM8J | EM9T | EMA5, EMA6
MME | EM8K | EM9T | EMA5, EMA6
MME | EM8L | EM9T | EMA5, EMA6
Note: Feature EMA5 is for a 1 GB Memory activation, and feature EMA6 is for a block of feature EMA5 Memory activations.

The Elastic CoD process consists of three steps: enablement, activation, and billing

  • Elastic CoD enablement: Description

    Before requesting temporary capacity on a server, you must "enable" it for elastic CoD. To do this, order a no-charge enablement feature (MES only) and sign the required contracts. IBM will generate an enablement code, mail it to you, and post it on the web for you to retrieve and enter on your server.

    A processor enablement code lets you request up to 90 processor days of temporary unused CoD processor capacity for all your processor cores that have not been permanently activated. For example, if you have 20 processor cores that are not permanently activated, the processor enablement code allows up to 1,800 processor days (20 x 90). If you have reached or are about to reach the limit of 90 processor days per unactivated processor core, place an order for another processor enablement code to reset the number of days you can request.

    Similarly, a memory enablement code lets you request up to 90 days of temporary unused CoD memory capacity for all your gigabytes of memory that have not been permanently activated. For example, if you had GB of memory that was not permanently activated, the memory enablement code allows up to GB memory days ( x 90). If you have reached the limit of 90 memory days per unactivated memory, place an order for another memory enablement code to reset the number of days you can request.

    Before ordering a new enablement code for either memory or processor temporary CoD, you must first process an MES delete order, deleting the current enablement code installed in the server configuration file.

    Elastic CoD enablement: Step-by-step

    Prerequisite 1: The sales channel (IBM Business Partner) must sign one of the following contracts, if applicable:

    • IBM Business Partner Agreement, Distributor Attachment for Elastic Capacity On Demand
    • IBM Business Partner Agreement for Solution Providers -- Attachment for Elastic Capacity On Demand
    • IBM Business Partner Agreement -- Attachment for Elastic Capacity On Demand

    Prerequisite 2: The sales channel (IBM Business Partner or IBM Direct) must register at

    cromwellpsi.com od
    • Step 1: The client initiates the request for elastic CoD use by asking the sales channel to enable the machine for temporary capacity.
    • Step 2: The client must complete and sign the following contracts. It is the sales channel's responsibility to return the signed contract to the responsible CSO organization and fax a copy to IBM at or email a copy to tcod@cromwellpsi.com
      • Required: IBM Customer Agreement, Attachment for Elastic Capacity On Demand; IBM Supplement for On/Off Capacity On Demand
      • Optional: IBM Addendum for Elastic Capacity On Demand Alternative Reporting
    • Step 3: The sales channel places an order for processor or memory enablement features.
    • Step 4: The sales channel updates the website registration data (see prerequisite 2 above) with information about the client machine being enabled for temporary capacity.
      Note: The order for an enablement feature will not be fulfilled until this step is completed.
    • Step 5: IBM generates an enablement code, mails it, and posts it.
    • Step 6: The client retrieves the enablement code and applies it to the system node.
  • Elastic activation requests: Description

    When Elastic CoD temporary capacity is needed, simply use the HMC menu for elastic CoD and specify how many of the inactive processors or how many gigabytes of memory you would like temporarily activated for some number of days. You will be billed for the days requested, whether the capacity is assigned to partitions or left in the shared processor pool. At the end of the temporary period (days you requested), you must ensure the temporarily activated capacity is available to be reclaimed by the server (not assigned to partitions) or you will be billed for any processor days not returned (per the contract you signed).

    Elastic CoD activation requests: Step-by-step

    When you need temporary capacity, use the Elastic CoD temporary capacity HMC menu for the server and specify how many of the inactive processors or how many gigabytes of memory you would like temporarily activated for some number of days. The user must assign the temporary capacity to a partition (whether or not the machine is configured for LPAR) to begin using temporary capacity.

  • Elastic CoD billing: Description

    The contract, signed by the client before receiving the enablement feature, requires the elastic CoD user to report billing data at least once a month (whether there is activity or not). This data is used to determine the proper amount to bill at the end of each billing period (calendar quarter). Failure to report billing data for use of temporary processor or memory capacity during a billing quarter will result in default billing equivalent to 90 processor days of temporary capacity. The sales channel will be notified of client requests for temporary capacity. As a result, the sales channel must order a quantity of billing features (using the appropriate billing features EPJ6, EPJ7, EPJ8, EPJ9, EPJJ, EPJK, EPJL, or EPJM for each billable processor and memory day reported less any outstanding credit balance of processor and memory days).

    Elastic CoD billing: Step-by-step

    The client must report billing data (requested and unreturned processor and memory days) at a minimum of once per month either electronically or by fax (stated requirement in the signed contract). At the end of each billing period (calendar quarter), IBM will process the accumulated data reported and notify the sales channel for proper billing. The sales channel places an order for the appropriate quantity of billing features (one processor billing feature ordered for each processor day used, or one memory day for each memory day utilized). IBM will ship a billing notice (notifies client of billing actions) to the ship-to address on the order as part of the fulfillment process. The client pays the sales channel and the sales channel pays IBM for the fulfillment of the billing features.

For more information regarding registration, enablement, and usage of elastic CoD, visit

cromwellpsi.com
Utility CoD

Utility CoD provides additional processor performance on a temporary basis within the shared processor pool. Utility CoD enables you to place a quantity of inactive processors into the system node's shared processor pool, which then becomes available to the pool's resource manager. When the system node recognizes that the combined processor utilization within the shared pool exceeds % of the level of base (purchased/active) processors assigned across uncapped partitions, then a Utility CoD Processor Minute is charged and this level of performance is available for the next minute of use. If additional workload requires a higher level of performance, the system will automatically enable the additional Utility CoD processors to be used. The system continuously monitors and charges for the performance needed above the base (permanent) level. Registration and usage reporting for Utility CoD is made using a public website and payment is based on reported usage. Utility CoD requires PowerVM Enterprise Edition to be active on the MME.
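
The following is a simplified, illustrative model of that metering behavior. The actual threshold percentage is not shown above and the real measurement is done by the system firmware, so the threshold is left as a parameter; the function name and the per-processor rounding are assumptions made only for this sketch.

// Simplified, illustrative model of Utility CoD metering (not IBM's implementation).
function utilityProcessorMinutes(
  baseProcessors: number,              // purchased/active processors in the shared pool
  utilizationPerMinute: number[],      // measured pool utilization, in processor units, per minute
  thresholdFraction: number            // fraction of the base level at which utility capacity engages
): number {
  let minutes = 0;
  for (const used of utilizationPerMinute) {
    if (used > baseProcessors * thresholdFraction) {
      // One Utility CoD Processor Minute per utility processor needed above the base level.
      minutes += Math.ceil(used - baseProcessors);
    }
  }
  return minutes;
}

// Example: 8 base processors, utility capacity engaged whenever the pool runs above the base level.
console.log(utilityProcessorMinutes(8, [7.5, 8.4, 9.2, 10.1], 1.0)); // -> 6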

If a Power E system node uses the IBM i operating system and the temporarily activated cores were used for IBM i partitions, the client must inform the sales team placing the billing feature order which operating system caused the temporary Utility CoD processor use so that the correct feature can be used for billing.

Model | Utility billing processor feature | Utility CoD feature description
MME | EPJA | Processor minutes for #EPBA
MME | EPJB | Processor minutes for #EPBA, IBM i
MME | EPJN | Processor minutes for #EPBC
MME | EPJP | Processor minutes for #EPBC, IBM i

For more information regarding registration, enablement, and use of Utility CoD, visit

cromwellpsi.com
Trial Capacity on Demand (Trial CoD)

You can request either a standard or an exception trial by visiting

cromwellpsi.com?OpenForm

Software licensing

For software licensing considerations with the various CoD offerings, refer to the latest revision of the Capacity on Demand Planning Guide at

cromwellpsi.com

Accessibility by people with disabilities

A US Section Voluntary Product Accessibility Template (VPAT) containing details on accessibility compliance can be requested at

cromwellpsi.com tml


Back to top

Reliability, Availability, and Serviceability



Reliability, fault tolerance, and data correction

The reliability of systems starts with components, devices, and subsystems that are designed to be highly reliable. During the design and development process, subsystems go through rigorous verification and integration testing processes. During system manufacturing, systems go through a thorough testing process to help ensure the highest level of product quality.

Redundant infrastructure

Considerable redundancy is included in the infrastructure of these systems to avoid failing components leading to system outages.

Such components include power supplies, fans, processor and memory voltage regulation outputs, global service processors, and processor clocks.

All of these redundant elements are present, even in single-system node systems.

Processor and memory availability functions

The Power Systems family continues to offer and introduce significant enhancements designed to increase system availability.

POWER8 processor functions

As previously provided in POWER7 and POWER7+, the POWER8 processor has the ability to do processor instruction retry for some transient errors and alternate processor recovery for a number of core-related faults. This significantly reduces exposure to both hard (logic) and soft (transient) errors in the processor core. Soft failures in the processor core are transient (intermittent) errors, often due to cosmic rays or other sources of radiation, and generally are not repeatable. When such an error is encountered in the core, the POWER8 processor will first automatically retry the instruction. If the source of the error was truly transient, the instruction will succeed and the system will continue as before.

Hard failures are more difficult, being true logical errors that will be replicated each time the instruction is repeated. Retrying the instruction will not help in this situation. As with POWER7/POWER7+ technology, processors have the ability to extract the failing instruction from the faulty core and retry it elsewhere in the system for a number of faults, after which the failing core is dynamically deconfigured and called out for replacement in the PowerVM environment. These features are designed to avoid a full system outage.

As in POWER7/POWER7+, the POWER8 processor includes single processor check stopping for certain faults that cannot be handled by the availability enhancements described in the preceding section. This significantly reduces the probability of any one processor affecting total system availability.

Partition availability priority

Also available is the ability to assign availability priorities to partitions. In the PowerVM environment if an alternate processor recovery event requires spare processor resources in order to protect a workload, when no other means of obtaining the spare resources is available, the system will determine which partition has the lowest priority and attempt to claim the needed resource. On a properly configured POWER8 processor-based server, this allows that capacity to be first obtained from, for example, a test partition instead of a financial accounting system.

Cache availability

The L2 and L3 caches in the POWER8 processor and the L4 cache in the memory buffer chip are protected with double-bit detect, single-bit correct error correction code (ECC). In addition, a threshold of correctable errors detected on cache lines can result in the data in the cache lines being purged and the cache lines removed from further operation without requiring a reboot in the PowerVM environment. The L2 and L3 caches also have the ability to dynamically substitute a spare bit-line for a faulty bit-line, allowing an entire faulty "column" of cache, impacting multiple cache lines, to be repaired. An ECC uncorrectable error detected in these caches can also trigger a purge and delete of cache lines. This results in no loss of operation if the cache lines contained data unmodified from what was stored in system memory.

Modified data would be handled through Special Uncorrectable Error handling. L1 data and instruction caches also have a retry capability for intermittent errors and a cache set delete mechanism for handling solid failures.

Special Uncorrectable Error handling

Special Uncorrectable Error (SUE) handling is designed to prevent an uncorrectable error in memory or cache from immediately causing the system to terminate. Rather, the system tags the data and determines whether it will ever be used again. If the error is irrelevant, it will not force a check stop. If the data is used, termination may be limited to the program/kernel or hypervisor owning the data; or the I/O adapters controlled by an I/O hub controller would freeze if data were transferred to an I/O device.

Memory error correction and recovery

The memory has error detection and correction circuitry designed such that the failure of any one specific memory module within an ECC word by itself can be corrected absent any other fault.

In addition, a spare DRAM per rank on each memory port provides for dynamic DRAM device replacement during runtime operation. Also, dynamic lane sparing on the DMI link allows for repair of a faulty data lane.

Other memory protection features include retry capabilities for certain faults detected at both the memory controller and the memory buffer. Memory is also periodically scrubbed to allow for soft errors to be corrected and for solid single-cell errors reported to the hypervisor, which supports operating system deallocation of a page associated with a hard single-cell fault.

Active memory mirroring for the hypervisor

The POWER8 memory subsystem is capable of mirroring sections of memory by writing to two different memory locations, and when an error is detected when reading from one location, taking data from the alternate location. This is used by the POWER hypervisor in these systems to mirror critical memory within the hypervisor so that a fault, even a solid uncorrectable error in the data, can be tolerated using the mirrored memory.

Dynamic processor and memory deallocation

When correctable solid faults occur in components of the processor and memory subsystem, the system will attempt to correct the problem by using spare capacity in the failing component: for example, a spare column in an L2 or L3 cache, a spare data lane on a memory or processor bus, or a spare DRAM in memory. Use of such spare capacity restores the system to full functionality without the need to take a repair action.

When such spare capacity is not available, the service processor and POWER hypervisor may request deallocation of the component experiencing the fault. When there are sufficient resources to continue running partitions at requested capacity, the system will continue to do so. This includes taking advantage of unlicensed Capacity Upgrade on Demand processor and memory resources as well as licensed but unallocated resources.

When such unlicensed or unused capacity is used in this manner, a request for service will be made.

PCI extended error handling

PCI extended error handling (EEH)-enabled adapters respond to a special data packet generated from the affected PCI slot hardware by calling system firmware, which will examine the affected bus, allow the device driver to reset it, and continue without a system reboot. For Linux, EEH support extends to the majority of frequently used devices, although some third-party PCI devices may not provide native EEH support.

Mutual surveillance

The service processor monitors the operation of the firmware during the boot process and also monitors the hypervisor for termination. The hypervisor monitors the service processor and reports the service reference code when it detects surveillance loss. In the PowerVM environment, it will perform a reset/reload if it detects the loss of the service processor.

Environmental monitoring functions

The Power Systems family performs ambient and over-temperature monitoring and reporting.

Uncorrectable error recovery

When the auto-restart option is enabled, the system can automatically restart following an unrecoverable software error, hardware failure, or environmentally induced (ac power) failure.

Serviceability

The purpose of serviceability is to efficiently repair the system while attempting to minimize or eliminate impact to system operation. Serviceability includes system installation, MES (system upgrades/downgrades), and system maintenance/repair. Depending upon the system and warranty contract, service may be performed by the customer, an IBM representative, or an authorized warranty service provider.

The serviceability features delivered in this system provide a highly efficient service environment by incorporating the following attributes:

  • Design for SSR Set Up and Customer Installed Features (CIF).
  • Detection and Fault Isolation (ED/FI).
  • First Failure Data Capture (FFDC).
  • Guiding Light service indicator architecture is used to control a system of integrated LEDs that lead the individual servicing the machine to the correct part as quickly as possible.
  • Service labels, service cards, and service diagrams available on the system and delivered through the HMC.
  • Step-by-step service procedures available through the HMC.

Service environment

The POWER8 processor-based system requires attachment to one or more HMCs.

The HMC is a dedicated server that provides functions for configuring and managing servers for either partitioned or full-system partition using a GUI or command-line interface (CLI). An HMC attached to the system allows support personnel (with client authorization) to remotely log in to review error logs and perform remote maintenance if required.


The Nutanix Bible

prism - /'prizɘm/ - noun - control plane
one-click management and interface for datacenter operations.

Design Methodology and Iterations

Building a beautiful, empathetic and intuitive product is core to the Nutanix platform and something we take very seriously. This section will cover our design methodologies and how we iterate on design. More coming here soon!

In the meantime feel free to check out this great post on our design methodology and iterations by our Product Design Lead, Jeremy Sallee (who also designed this) - cromwellpsi.com

You can download the Nutanix Visio stencils here: cromwellpsi.com

Architecture

Prism is a distributed resource management platform which allows users to manage and monitor objects and services across their Nutanix environment, whether hosted locally or in the cloud.

These capabilities are broken down into two key categories:

  • Interfaces
    • HTML5 UI, REST API, CLI, PowerShell CMDlets, etc.
  • Management Capabilities
    • Platform management, VM / Container CRUD, policy definition and compliance, service design and status, analytics and monitoring

The following figure illustrates the conceptual nature of Prism as part of the Nutanix platform:

Prism is broken down into two main components:

  • Prism Central (PC)
    • Multi-cluster manager responsible for managing multiple Nutanix Clusters to provide a single, centralized management interface.  Prism Central is an optional software appliance (VM) which can be deployed in addition to the AOS Cluster (can run on it).
    • 1-to-many cluster manager
  • Prism Element (PE)
    • Localized cluster manager responsible for local cluster management and operations.  Every Nutanix Cluster has Prism Element built-in.
    • 1-to-1 cluster manager

The figure shows an image illustrating the conceptual relationship between Prism Central and Prism Element:

Note
Pro tip

For larger or distributed deployments (e.g. more than one cluster or multiple sites) it is recommended to use Prism Central to simplify operations and provide a single management UI for all clusters / sites.

Prism Services

A Prism service runs on every CVM with an elected Prism Leader which is responsible for handling HTTP requests.  Similar to other components which have a Leader, if the Prism Leader fails, a new one will be elected.  When a CVM which is not the Prism Leader gets a HTTP request it will permanently redirect the request to the current Prism Leader using HTTP response status code

Here we show a conceptual view of the Prism services and how HTTP request(s) are handled:

Note
Prism ports

Prism listens on ports 80 and , if HTTP traffic comes in on port 80 it is redirected to HTTPS on port

When using the cluster external IP (recommended), it will always be hosted by the current Prism Leader.  In the event of a Prism Leader failure the cluster IP will be assumed by the newly elected Prism Leader and a gratuitous ARP (gARP) will be used to clean any stale ARP cache entries.  In this scenario any time the cluster IP is used to access Prism, no redirection is necessary as that will already be the Prism Leader.

Note
Pro tip

You can determine the current Prism leader by running 'curl localhost/prism/leader' on any CVM.

Authentication and Access Control (RBAC)

Authentication

Prism currently supports integrations with the following authentication providers:

  • Prism Element (PE)
    • Local
    • Active Directory
    • LDAP
  • Prism Central (PC)
    • Local
    • Active Directory
    • LDAP
    • SAML Authn (IDP)
Note
SAML / 2FA

SAML Authn allows Prism to integrate with external identity providers (IDP) that are SAML compliant (e.g. Okta, ADFS, etc.).

This also allows you to leverage the multi-factor authentication (MFA) / two-factor authentication (2FA) capabilities these providers support for users logging into Prism.

Access Control

Coming soon!

Navigation

Prism is fairly straightforward and simple to use; however, we'll cover some of the main pages and basic usage.

Prism Central (if deployed) can be accessed using the IP address specified during configuration or corresponding DNS entry.  Prism Element can be accessed via Prism Central (by clicking on a specific cluster) or by navigating to any Nutanix CVM or cluster IP (preferred).

Once the page has been loaded you will be greeted with the Login page where you will use your Prism or Active Directory credentials to login.

Upon successful login you will be sent to the dashboard page which will provide overview information for managed cluster(s) in Prism Central or the local cluster in Prism Element.

Prism Central and Prism Element will be covered in more detail in the following sections.

Prism Central

The figure shows a sample Prism Central dashboard where multiple clusters can be monitored / managed:

From here you can monitor the overall status of your environment, and dive deeper if there are any alerts or items of interest.

Prism Central contains the following main pages (NOTE: Search is the preferred / recommended method to navigation):

  • Home Page
    • Environment wide monitoring dashboard including detailed information on service status, capacity planning, performance, tasks, etc.  To get further information on any of them you can click on the item of interest.
  • Virtual Infrastructure
    • Virtual entities (e.g. VMs, containers, Images, categories, etc.)
  • Policies
    • Policy management and creation (e.g. security (FLOW), Protection (Backup/Replication), Recovery (DR), NGT)
  • Hardware
    • Physical devices management (e.g. clusters, hosts, disks, GPU)
  • Activity
    • Environment wide alerts, events and tasks
  • Operations
    • Operations dashboards, reporting and actions (X-Play)
  • Administration
    • Environment construct management (e.g. users, groups, roles, availability zones)
  • Services
    • Add-on service management (e.g. Calm, Karbon)
  • Settings
    • Prism Central configuration

To access the menu, click on the hamburger icon:

The menu expands to display the available options:

Search

Search is now the primary mechanism for navigating the Prism Central UI (menus are still available).

To use the search bar to navigate you can use the search bar in the top left corner next to the menu icon.

Note
Search Semantics

PC Search allows a great deal of semantics to be leveraged; some examples include:

Rule | Example
Entity type | vms
Entity type + metric perspective (io, cpu, memory) | vms io
Entity type + alerts | vm alerts
Entity type + alerts + alert filters | vm alerts severity=critical
Entity type + events | vm events
Entity type + events + event filters | vm events classification=anomaly
Entity type + filters (both metric and attribute) | vm “power state”=on
Entity type + filters + metric perspective (io, cpu, memory) | vm “power state”=on io
Entity type + filters + alerts | vm “power state”=on alerts
Entity type + filters + alerts + (alert filters) | vm “power state”=on alerts severity=critical
Entity type + filters + events | vm “power state”=on events
Entity type + filters + events + event filters | vm “power state”=on events classification=anomaly
Entity instance (name, ip address, disk serial etc) | vm1, , BHTXSPWRM
Entity instance + Metric perspective (io, cpu, memory) | vm1 io
Entity instance + alerts | vm1 alerts
Entity instance + alerts + alert filters | vm1 alerts severity=critical
Entity instance + events | vm1 events
Entity instance + events + event filters | vm1 events classification=anomaly
Entity instance + pages | vm1 nics, c1 capacity
Parent instance + entity type | c1 vms
Alert title search | Disk bad alerts
Page name search | Analysis, tasks

The prior is just a small subset of the semantics; the best way to get familiar with them is to give it a shot!

Prism Element

Prism Element contains the following main pages:

  • Home Page
    • Local cluster monitoring dashboard including detailed information on alerts, capacity, performance, health, tasks, etc.  To get further information on any of them you can click on the item of interest.
  • Health Page
    • Environment, hardware and managed object health and state information.  Includes NCC health check status as well.
  • VM Page
    • Full VM management, monitoring and CRUD (AOS)
  • Storage Page
    • Container management, monitoring and CRUD
  • Hardware
    • Server, disk and network management, monitoring and health.  Includes cluster expansion as well as node and disk removal.
  • Data Protection
    • DR, Cloud Connect and Metro Availability configuration.  Management of PD objects, snapshots, replication and restore.
  • Analysis
    • Detailed performance analysis for cluster and managed objects with event correlation
  • Alerts
    • Local cluster and environment alerts

The home page will provide detailed information on alerts, service status, capacity, performance, tasks, and much more.  To get further information on any of them you can click on the item of interest.

The figure shows a sample Prism Element dashboard where local cluster details are displayed:

Note

Keyboard Shortcuts

Accessibility and ease of use is a very critical construct in Prism.  To simplify things for the end-user a set of shortcuts have been added to allow users to do everything from their keyboard.

The following characterizes some of the key shortcuts:

Change view (page context aware):

  • O - Overview View
  • D - Diagram View
  • T - Table View

Activities and Events:

Drop down and Menus (Navigate selection using arrow keys):

  • M - Menu drop-down
  • S - Settings (gear icon)
  • F - Search bar
  • U - User drop down
  • H - Help

Features and Usage

In the following sections we'll cover some of the typical Prism uses as well as some common troubleshooting scenarios.

Anomaly Detection

In the world of IT operations there is a lot of noise. Traditionally systems would generate a great deal of alerts, events and notifications, often leading to the operator either a) not seeing critical alerts since they are lost in the noise or b) disregarding the alerts/events.

With Nutanix Anomaly Detection the system will monitor seasonal trends for time-series data (e.g. CPU usage, memory usage, latency, etc.) and establish a "band" of expected values. Only values that hit outside the "band" will trigger an event / alert. You can see the anomaly events / alerts from any entity or events page.

The following chart shows a lot of I/O and disk usage anomalies as we were performing some large batch loads on these systems:

The following image shows the time-series values for a sample metric and the established "band":

This reduces unnecessary alerts as we don't want alerts for a "normal" state. For example, a database system will normally run at >95% memory utilization due to caching, etc. In the event this drops to say 10% that would be an anomaly as something may be wrong (e.g. database service down).

Another example would be how some batched workloads run on the weekend. For example, I/O bandwidth may be low during the work week, however on the weekends when some batch processes run (e.g. backups, reports, etc.) there may be a large spike in I/O. The system would detect the seasonality of this and bump up the band during the weekend.

Here you can see an anomaly event has occurred as the values are outside the expected band:

Another topic of interest for anomalies is seasonality. For example, during the holiday period retailers will see higher demand than other times of the year, or during the end of month close.

Anomaly detection accounts for this seasonality and leverages the following periods to compare between micro (daily) and macro (quarterly) trends:

You can also set your own custom alerts or static thresholds:

Note
Anomaly Detection Algorithm

Nutanix leverages a method for determining the bands called 'Generalized Extreme Studentized Deviate Test'. A simple way to think about this is similar to a confidence interval where the values are between the lower and upper limits established by the algorithm.

The algorithm requires 3 x the granularity (e.g. daily, weekly, monthly, etc.) to calculate the seasonality and expected bands. For example, the following amounts of data would be required to adapt to each seasonality:

  • Daily: 3 days
  • Weekly: 3 weeks (21 days)
  • Monthly: 3 months (90 days)

Twitter has a good resource on how they leverage this which goes into more detail on the logic: LINK
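
As a conceptual stand-in only: Prism's actual implementation uses the Generalized Extreme Studentized Deviate Test described above, not the simplified statistic below. This sketch (all names invented) just illustrates the idea of a seasonal "band", built from at least three cycles of history as per the guidance above, with values outside the band flagged as anomalies.

// Conceptual sketch of a seasonal band; not the Nutanix algorithm.
function seasonalBand(
  history: number[],        // one sample per bucket (e.g. per day), oldest first
  period: number,           // buckets per seasonal cycle (e.g. 7 for weekly)
  widthInStdDevs = 3
): { lower: number; upper: number }[] {
  const band: { lower: number; upper: number }[] = [];
  for (let slot = 0; slot < period; slot++) {
    // Collect all historical samples that fall on this slot of the cycle.
    const samples = history.filter((_, i) => i % period === slot);
    const mean = samples.reduce((a, b) => a + b, 0) / samples.length;
    const variance = samples.reduce((a, b) => a + (b - mean) ** 2, 0) / samples.length;
    const std = Math.sqrt(variance);
    band.push({ lower: mean - widthInStdDevs * std, upper: mean + widthInStdDevs * std });
  }
  return band;
}

// A new value is an anomaly if it falls outside the band for its slot in the cycle:
const isAnomaly = (value: number, slot: number, band: { lower: number; upper: number }[]) =>
  value < band[slot].lower || value > band[slot].upper;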

Nutanix Software Upgrade

Performing a Nutanix software upgrade is a very simple and non-disruptive process.

To begin, start by logging into Prism and clicking on the gear icon on the top right (settings) or by pressing 'S' and selecting 'Upgrade Software':

This will launch the 'Upgrade Software' dialog box and will show your current software version and if there are any upgrade versions available.  It is also possible to manually upload a NOS binary file.

You can then download the upgrade version from the cloud or upload the version manually:

Note
Upload software from the CVM

In certain cases you may want to download the software and upload from the CVM itself. I use this in my environment when I want to download builds locally to the CVM.

First SSH into a CVM and find the Prism leader:

curl localhost/prism/leader && echo

SSH to the Prism leader and download the software bundle and metadata JSON

Run the following command to "upload" the software to Prism:

ncli software upload file-path=<PATH_TO_SOFTWARE> meta-file-path=<PATH_TO_METADATA_JSON> software-type=<SOFTWARE_TYPE>

The following shows an example for Prism Central:

ncli software upload file-path=/home/nutanix/tmp/leader-prism_cromwellpsi.com meta-file-path=/home/nutanix/tmp/leader-prism_cromwellpsi.com software-type=prism_central_deploy

It will then upload the upgrade software onto the Nutanix CVMs:

After the software is loaded click on 'Upgrade' to start the upgrade process:

You'll then be prompted with a confirmation box:

The upgrade will start with pre-upgrade checks then start upgrading the software in a rolling manner:

Once the upgrade is complete you'll see an updated status and have access to all of the new features:

Note
Note

Your Prism session will briefly disconnect during the upgrade when the current Prism Leader is upgraded.  All VMs and services running remain unaffected.

Hypervisor Upgrade

Similar to Nutanix software upgrades, hypervisor upgrades can be fully automated in a rolling manner via Prism.

To begin follow the similar steps above to launch the 'Upgrade Software' dialogue box and select 'Hypervisor'.

You can then download the hypervisor upgrade version from the cloud or upload the version manually:

It will then load the upgrade software onto the Hypervisors.  After the software is loaded click on 'Upgrade' to start the upgrade process:

You'll then be prompted with a confirmation box:

The system will then go through host pre-upgrade checks and upload the hypervisor upgrade to the cluster:

Once the pre-upgrade checks are complete the rolling hypervisor upgrade will then proceed:

Similar to the rolling nature of the Nutanix software upgrades, each host will be upgraded in a rolling manner with zero impact to running VMs.  VMs will be live-migrated off the current host, the host will be upgraded, and then rebooted.  This process will iterate through each host until all hosts in the cluster are upgraded.

Note
Pro tip

You can also get cluster wide upgrade status from any Nutanix CVM by running 'host_upgrade --status'.  The detailed per host status is logged to ~/data/logs/host_cromwellpsi.com on each CVM.

Once the upgrade is complete you'll see an updated status and have access to all of the new features:

Cluster Expansion (add node)

The ability to dynamically scale the Nutanix cluster is core to its functionality. To scale a Nutanix cluster, rack / stack / cable the nodes and power them on. Once the nodes are powered up they will be discoverable by the current cluster using mDNS.

The figure shows an example 7 node cluster with 1 node which has been discovered:


Released 10/14/

Features:

  • You can now run to delete all installed Cypress versions from the cache except for the currently-installed version. Addresses #
  • There's a new option for the command that prints the sizes of the Cypress cache folders. Addresses #
  • For video recordings of runs, there is now a video chapter key for each test. If your video player supports chapters, you can move to the start of each test right away. Addresses #
  • In Windows, you can now append the browser type to the end of the path passed to the flag, like , to help detect the browser type. Addresses #
  • has new , , and presets. Addressed in #
  • When there is a new version of Cypress available, the update modal has a new design with 'copy to clipboard' buttons to copy the upgrade commands. Addressed in #
  • The Command Log can be hidden by passing the environment variable during or to be used as a tool to debug performance issues. Addressed in #

Bugfixes:

  • We fixed a regression in where the option had no effect in Electron. Fixes #
  • Tests will no longer hang and now properly throw when there is an error thrown from a event listener. Fixes # and #
  • When a command is chained after and is called inside it, the scope will no longer permanently change. Fixes #, #, #, and #
  • Dual commands like when used after an commands now query as expected. Fixes #
  • is no longer added to the URL when has param(s). Fixes #
  • When using the route handler timeouts will no longer leak into other tests and cause random failures. Addressed in #
  • Re-running failed build steps in Bitbucket will no longer create a new run on the Cypress Dashboard. Fixes #
  • The forced garbage collection timer will no longer display when using a version of Firefox newer than Fixes #
  • The browser dropdown is no longer covered when opened from the Runs tab in the Test Runner. Fixed in #
  • Fixed an issue where preprocessor-related plugins would cause tests not to run and a duplicate instance of Cypress to be spawned. Fixes #

Misc:

  • Improved type definitions for . Addresses # and #
  • The Test Runner now shows an indicator in the footer and a toast notification if there is a new version available. Addressed in # and #

Dependency Updates:

  • Upgraded Chrome browser version used during and when selecting Electron browser in from to . Addressed in #
  • Upgraded bundled Node.js version from to . Addressed in #
  • Upgraded from to . Addressed in #
  • Upgraded from to . Addressed in #
  • Upgraded from to . Addressed in #

Released 9/28/

Features:

  • Firefox support is now out of beta!  Firefox 80 and newer are now officially stable when used with Cypress. Addresses #
    • Versions of Firefox older than 80 can still be used, but will be subject to the workaround. The desktop GUI will display a warning if such versions are used.

Bugfixes:

  • Fixed a bug where would not automatically JSONify an empty array handler. Addresses #
  • Fixed a bug where objects yielded by using on a alias would not always have a property. Addresses #
  • Fixed an issue where routes would not be able to intercept requests to HTTPS destinations on a different origin. Addresses #
  • Fixed an issue where subjects became after certain assertion failures. Addresses #
  • Fixed an issue where a with no arguments passed would receive as the first argument instead of . Addresses #
  • Fixed an issue preventing users from passing the config-file argument when starting cypress through the node module API. Addresses #
  • Fixed an issue where s to a relative URL containing would not work. Addresses #
  • Fixed an issue where Mocha hooks could still be triggered after the Test Runner was manually stopped. Addresses #
  • Fixed an issue where failed when given a cookie name with a prefix. Addresses #
  • Fixed an issue where a misleading error was displayed when test code threw an exception with a non- object. Addresses #

Misc:

  • The proxy now omits the header the same way that it does for . Addresses #
  • Added a property to objects. Addresses #
  • Updated types to no longer use deprecated Mocha interfaces. Addresses #
  • Passing an empty string to now takes precedence over npm config. Addresses #

Released 9/15/

Features:

  • Added the configuration option for enabling shadow DOM querying globally, per-suite, per-test, or programmatically. Addresses #
  • Added a option to request interception with , allowing redirects to be followed before continuing to response interception. Addresses #
  • Added the capability to specify and when stubbing static responses with . Addresses #
  • Installing Cypress pre-releases no longer requires setting the environment variable. Addresses #

Performance Improvements:

  • Fixed a performance issue which led to CPU bottlenecking during Cypress runs. Addresses # and #

Bugfixes:

  • Fixed an issue where using TypeScript path aliases in the plugins file would error. Addresses #
  • Fixed an issue where using within a shadow root would not yield the correct element. Addresses #
  • Fixed an issue where querying the shadow DOM in a callback would throw the error . Addresses #
  • Fixed an issue with special characters moving the cursor to the current line instead of the entire text editable when typing in a element. Addresses #
  • Fixed an issue where typing into a manually-focused number input would prepend the number instead of appending it. Addresses #
  • now fires a event instead of an event. Addresses # and #
  • Fixed long selectors in the selector playground text input overflowing other page elements. Addresses # and #
  • Fixed an issue where assertions on would be called twice. Addresses #
  • Fixed an issue that caused the Open in IDE button on hooks and tests not to appear in Firefox. Addresses #
  • Fixed an issue causing Cypress to hang on test retry in run mode with certain assertions. Addresses #

Documentation Changes:

  • Fixed examples of delaying and throttling responses with . Addresses #
  • Added examples of using a response function with . Addresses #
  • Removed unmaintained languages. The English docs are the only ones supported by the Cypress team. We greatly appreciate the contributions from the community for other languages, but these docs are largely stale, unmaintained, and partial. The Cypress team will seek out a more scalable docs internationalization implementation in the future.

Misc:

  • The configuration flag has been removed. It is no longer necessary to enable shadow DOM testing.
  • Improved the error message when the subject provided to is not a shadow host. Addresses #
  • Improved the error message when the Cypress binary is not executable. It now recommends trying to clear the cache and re-install. Addresses #
  • Added missing type declarations for the command.
  • Updated the type declaration for , adding to the list of allowed return types. Addresses #

Released 9/1/

Features:

  • Introducing experimental full network stubbing support .
    • With enabled, the command is available.
    • By using , your tests can intercept, modify, and wait on any type of HTTP request originating from your app, including s, requests, beacons, and subresources (like iframes and scripts).
    • Outgoing HTTP requests can be modified before reaching the destination server, and the HTTP response can be intercepted as well before it reaches the browser.
    • See the docs for more information on how to enable this experiment.
  • now accepts an option for specifying the constructor with which to create the event to trigger. Addresses #

Bugfixes:

  • Improved warnings for when user is exceeding test limits of the free Dashboard plan. Addresses #
  • Added to types. Addresses #
  • Added types for field on . Addresses #
  • Fixed a typo in type definitions. Addresses #
  • Cypress now resolves and loads cromwellpsi.com for TypeScript projects starting from the plugins directory. Addresses #
  • Fixed an issue where, if npm config's is set, unexpected behavior could occur. Addresses #
  • Fixed an issue where nesting hooks within other hooks caused the test to never finish. Addresses #
  • Fixed an issue in where tests would unexpectedly fail with a Can't resolve 'async_hooks' error. Addresses #
  • Fixed an issue where return values from blob utils were mistaken for promises and could cause errors. Addresses #
  • Fixed an issue with loading files. Addresses #
  • Fixed an issue causing tests to run slowly in Electron. Addresses #
  • Using with only chainer assertions will now throw an error. Addresses #
  • now includes the property in the event object when appropriate. Addresses #
  • Fixed an issue where Cypress would not detect newer bit installations of Chrome on Windows. Addresses #
  • Fixed an issue where Cypress would not detect per-user Firefox installations on Windows. Addresses #

Dependency Updates:

  • Updated dependency to version . Addresses #
  • Updated dependency to version . Addresses #
  • Updated dependency to version . Addresses #

Released 8/19/

Summary:

Cypress now includes support for test retries! Similar to how Cypress will retry assertions when they fail, test retries will allow you to automatically retry a failed test prior to marking it as failed. Read our new guide on Test Retries for more details.
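
As a quick illustration: the configuration key names below (retries, runMode, openMode) come from the published Cypress test retries guide rather than from the changelog text itself, and the spec skeleton is a generic example.

// Per-suite / per-test configuration inside a spec file:
describe('user login', { retries: { runMode: 2, openMode: 0 } }, () => {
  it('logs in', () => {
    cy.visit('/login');
    // ... assertions here
  });
});

// Or globally in cypress.json:
// { "retries": { "runMode": 2, "openMode": 0 } }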

Breaking Changes:

Please read our Migration Guide which explains the changes in more detail and how to change your code to migrate to Cypress

  • The plugin has been deprecated in favor of test retries built into Cypress. Addresses #
  • The option has been renamed to to more closely reflect its behavior. Addressed in #
  • The configuration has been renamed to to more closely reflect its behavior. Addressed in #
  • The option has been renamed to to more closely reflect its behavior. Addresses #
  • is now a requirement to run Cypress on Linux. Addressed in #
  • Values yielded by , , and will now contain the property if specified. Addresses #
  • The configuration flag has been removed, since this behavior is now the default. Addresses #
  • The return type of the methods , , , and have changed from to . Addresses #
  • Cypress no longer supports file paths with a question mark or exclamation mark in them. We now use the webpack preprocessor by default and it does not support files with question marks or exclamation marks. Addressed in #
  • For TypeScript compilation of spec, support, and plugins files, the option is no longer coerced to . If you need to utilize , set it in your . Addresses #
  • Cypress now requires TypeScript +. Addressed in #
  • Installing Cypress on your system now requires Node.js 10+. Addresses #
  • In spec files, the values for the globals and no longer include leading slashes. Addressed in #

Features:

  • There's a new configuration option to configure the number of times to retry a failing test. Addresses #
  • , , and now accept options , , , and to hold down key combinations while clicking. Addresses #
  • You can now chain off of and to disable snapshots during those commands. For example: . Addresses #

Bugfixes:

  • The error will no longer incorrectly throw when rerunning tests in the Test Runner. Fixes # and #
  • Cypress will no longer throw a error during on Firefox versions >= Fixes #
  • The error will no longer throw when calling on an element in the shadow dom. Fixes #
  • Cypress environment variables that accept arrays as their value will now properly evaluate as arrays. Fixes #
  • Elements having will no longer be considered hidden if they have child elements within them that are visible. Fixes #
  • When is enabled, and commands now work correctly in shadow dom as well as passing a selector to when the subject is in the shadow dom. Fixed in #
  • Screenshots will now be correctly taken when a test fails in an or hook after the hook has already passed. Fixes #
  • Cypress will no longer report screenshots overwritten in a option as a unique screenshot. Fixes #
  • Taking screenshots will no longer fail when the screenshot names are too long for the filesystem to accept. Fixes #
  • The last used browser will now be correctly remembered during if a non-default-channel browser was selected. Fixes #
  • For TypeScript projects, will now be loaded and used to configure TypeScript compilation of spec and support files. Fixes # and #
  • now correctly show the number of passed and failed tests when a test passes but the fails. Fixes #
  • The Developer Tools menu will now always display in Electron when switching focus from Specs to the Test Runner. Fixes #

Documentation Changes:

Misc:

  • Cypress now uses the webpack preprocessor by default to preprocess spec files.
  • The Runs tab within the Test Runner has a new improved design when the project has not been set up or login is required. Addressed in #
  • The type for the object returned from is now correct. Addresses #
  • The type definition for Cypress's can now be extended. Addresses #
  • The type definition for has been added. Addresses #

Dependency Updates

  • Upgraded Chrome browser version used during cypress run and when selecting Electron browser in cypress open from to . Addressed in #
  • Upgraded bundled Node.js version from to . Addressed in #
  • Upgraded from to . Addressed in #
  • Upgraded from to . Addressed in #
  • Upgraded from to . Addressed in # and #
  • Upgraded from to . Addressed in #
  • Upgraded from to . Addressed in #
  • Upgraded from to . Addressed in #
  • Upgraded from to . Addressed in #
  • Upgraded from to . Addressed in #

Released 8/5/

Bugfixes:

  • The error will no longer incorrectly throw when rerunning tests in the Test Runner. Fixes #
  • Skipping the last test before a nested suite with a hook will now correctly run the tests in the suite following the skipped test. Fixes #

Dependency Updates:

  • Upgraded from to . Addressed in #
  • Upgraded from to . Addressed in #

Released 8/3/

Features:

  • Now you can control whether screenshots are automatically taken on test failure during by setting in your configuration. Addresses #
  • The now has access to a readonly property within the object that returns the current Cypress version being run. This will allow plugins to better target specific Cypress versions (see the plugins-file sketch after this list). Addresses #
  • During , you can now run a subset of all specs by entering a text search filter and clicking 'Run n tests'. Addresses #
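
  A minimal plugins-file sketch of the read-only version property described above. The property name (config.version) and the screenshot option name (screenshotOnRunFailure) are assumptions, since the identifiers were stripped from this copy of the changelog.

      // cypress/plugins/index.js
      module.exports = (on, config) => {
        // Only register a hypothetical task when running on Cypress 4.x or newer.
        const major = Number(String(config.version || '0').split('.')[0]);
        if (major >= 4) {
          on('task', {
            logVersion() {
              console.log(`Running under Cypress ${config.version}`);
              return null;
            },
          });
        }
        // Example of turning off automatic failure screenshots during `cypress run`
        // (assumes the option is named screenshotOnRunFailure).
        config.screenshotOnRunFailure = false;
        return config;
      };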

Bugfixes:

  • elements that have a parent with will now correctly evaluate as visible. Fixes #
  • Applications using custom elements will no longer trigger infinite XHR request loops. Fixes #
  • When snapshotting the DOM, Cypress no longer causes to be triggered on custom elements. Fixes #
  • Spec files containing characters now properly run in Cypress. Fixes #
  • When using the shortcut in , an error is now thrown when the fixture file cannot be found. Fixes #
  • Cypress no longer throws an error when passing a file containing content to . Fixes #
  • Values containing exponential operators passed to via the command line are now properly read. Fixes #
  • The Open in IDE button no longer disappears from hooks when the tests are manually rerun. Fixes #
  • When is enabled, AST rewriting will no longer return an output before the body is done being written. This would happen when the response body was too large and the response would be sent while the body was still being modified. Fixes #
  • When using , Cypress now properly types into an input within an iframe that auto focuses the input. Fixes #

Misc:

  • Dependencies for our npm package are no longer pinned to a specific version. This allows the use of to fix security vulnerabilities without needing a patch release from Cypress. Addresses #
  • We now collect environment variables for AWS CodeBuild when recording to the Dashboard. Addressed #
  • Types inside Module API are now accessible via the namespace. Addresses #
  • We added more type definitions for the command. Addresses #
  • Cookie command's property type is now a Number instead of a String. Addresses #
  • There are some minor visual improvements to the Test Runner's Command Log when hovering, focusing and clicking on hook titles and pending tests. Addressed in #

Dependency Updates:

  • Upgraded from to . Addressed in #
  • Upgraded from to . Addressed in #

Released 7/21/

Features:

  • You can now pass an option to to skip checking whether the element is scrollable (see the sketch after this list). Addresses #
  • now accepts Dates as well as a Number for now. Fixes #
  • The Module API has a new function to assist in parsing user-supplied command line arguments using the same logic as uses. Addresses #
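
  A short sketch of the first two items, assuming they refer to cy.scrollTo()'s ensureScrollable option and to cy.clock() accepting a Date; both names are inferred because the identifiers were stripped from this copy, and the selector is hypothetical.

      // Skip the "is this element scrollable?" check before scrolling.
      cy.get('#activity-feed').scrollTo('bottom', { ensureScrollable: false });

      // Freeze the application clock at a specific Date instead of a millisecond count.
      cy.clock(new Date(2020, 6, 21));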

Bugfixes:

  • Running multiple specs within Firefox during on Windows will no longer fail trying to make a connection to the browser. Fixes #
  • Cypress will no longer throw a error during on Firefox versions >= Fixes #
  • Fixed an issue where Cypress tests in Chromium-family browsers could randomly fail with the error WebSocket is already in CLOSING or CLOSED state. Fixes #
  • Taking a screenshot of an element that changes height upon scroll will no longer throw an error. Fixes #
  • Setting or from within the test configuration now properly changes the viewport size for the duration of the suite or test.
  • Setting deep objects and arrays on within the now sets the values correctly. Fixes #
  • The progress bar for now reflects the correct and of the command. Fixes #
  • The command's progress bar will no longer restart when its parent test is collapsed in the Command Log. Fixes #
  • Key value pairs sent to as will now be properly read in. Fixes #
  • Stubbed responses responding with an empty string to now correctly display as 'xhr stub' in the Test Runner's Command Log. Fixes #
  • Quickly reclicking the Run All Tests button in the Test Runner's Command Log will no longer throw errors about undefined properties and the tests will no longer hang. Fixes #

Misc:

  • The error messages thrown from and now mention that extensions are supported. Addresses #
  • The style when focusing on tests in the Command Log has been updated. Addresses #

Dependency Updates:

  • Upgraded from to . Addressed in #
  • Upgraded from to . Addressed in #
  • Upgraded from to . Addressed in #
  • Upgraded from to . Addressed in #
  • Upgraded from to . Addressed in #
  • Upgraded from to . Addressed in #
  • Upgraded from to . Addressed in #
  • Upgraded from to . Addressed in #
  • Upgraded from to . Addressed in #
  • Upgraded from to . Addressed in #, #, and #

Released 7/7/

Features:

  • You can open a , , , and hook definition in your IDE from the Test Runner's Command Log by clicking the Open in IDE button. Addresses #
  • , , , and hook definitions now display separately in the Test Runner's Command Log when defined in separate hook definitions. Addresses #
  • You can now open a spec file directly from the Tests tab in the Test Runner by clicking the Open in IDE button. Addresses #

Bugfixes:

  • HTTP requests taking longer than the default will no longer be prematurely canceled by the Cypress proxy layer. Fixes #
  • Using Cypress commands to traverse the DOM on an application with a global variable will no longer throw Illegal Invocation errors. Fixes #
  • When is enabled, using on an input in the Shadow DOM will not result in an error. Fixes #
  • When is enabled, checking for visibility on a shadow dom host element will no longer hang if the host element was the foremost element and had an ancestor with fixed position. Fixes #
  • Debug logs from the module will no longer appear if any environment variable was set. Fixed #

Misc:

  • We made some minor UI updates to the Test Runner. Addresses # and #

Dependency Updates:

  • Upgraded from to . Addressed in #

Released 6/23/

Features:

  • An animated progress bar now displays on every command in the Command Log indicating how long the command has left to run before reaching its command timeout. Addresses #
  • There is now an configuration option. When this option is , Cypress will automatically replace with a polyfill that Cypress can spy on and stub (see the sketch after this list). Addresses #
  • You can now pass a flag to to silence any Cypress specific output from stdout. Addresses #
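
  A sketch of how the polyfill option above is commonly used, assuming it is the experimentalFetchPolyfill configuration flag (the identifier was stripped from this copy). With the polyfill enabled in the configuration, requests the application makes via fetch() can be stubbed with the classic cy.server()/cy.route() commands; the route and response shown here are hypothetical.

      cy.server();
      cy.route('GET', '/api/users', { users: [] }).as('getUsers');

      cy.visit('/');          // the app calls fetch('/api/users') on load
      cy.wait('@getUsers');   // the stubbed response applies and can be awaited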

Bugfixes:

  • now correctly resolves when waiting for XHR requests that contain resource-like text in the XHR's query params or hash (like , ., ). #
  • We fixed a regression in where errors thrown from the application under test as strings would not be correctly handled. Fixes #
  • We fixed a regression in where would hang if the subject had a shadow root and was not enabled. Fixes #
  • We fixed a regression in so that now properly asserts against , or element's values. Fixes #
  • Cypress no longer responds with responses during a recorded when the stdout is too large. Fixes #
  • We fixed an issue where Cypress could exit successfully even with failing tests when launched in global mode. Fixes #
  • Assertion logs now properly display as parent commands in the Command Log regardless of what is in the hook. Fixes #
  • When is enabled, querying shadow dom in certain situations will no longer cause the error during . Fixes #
  • Highlighting of elements upon hover of a command in the Command Log are now visible when targeting absolute positioned elements. Fixes #
  • will no longer crash when provided an empty string to the flag. Fixes #

Misc:

  • There is now a loading state to indicate when tests are loading in the Command Log. Addresses #
  • The type definitions for , , and have been updated to allow TypeScript types. Addresses #
  • The type definitions for now correctly yield the type of the previous subject. Addresses #
  • The type definitions now allow for the 'key' keyword when chaining off 'any' or 'all' assertion chainers. Addresses #

Dependency Updates:

  • Upgraded from to . Addressed in #
  • Upgraded from to . Addressed in # and #
  • Upgraded from to . Addressed in #
  • Upgraded from to . Addressed in #
  • Upgraded from to . Addressed in #
  • Upgraded from to . Addressed in #
  • Upgraded from to . Addressed in #

Released 6/8/

Features:

Bugfixes:

  • Upon domain navigation, and hooks defined in completed suites no longer erroneously rerun. Fixes #
  • Errors thrown within root level hooks now correctly display in the Test Runner's Command Log. Fixes #
  • We fixed a regression in where an XHR response without a body would cause Cypress to throw . Fixes #
  • We fixed a regression in where using to an authenticated URL would error with Fixes #
  • We now properly load code from the or when they are TypeScript files. Fixes #
  • utf-8 characters now properly display within error code frames. Fixes #
  • Errors thrown in a fail handler now display a stack trace and code frame pointing to the origin of the error. Fixes #
  • now properly clicks on wrapped inline elements when the first child element in the parent element has no width or height. Fixes # and #
  • now properly respects the option. It also better handles situations when passed a promise that never resolves. Fixes #
  • When is enabled, Cypress will no longer exit with SIGABRT in certain situations. Fixes #
  • We fixed a regression in where the Tests button in the Test Runner wouldn't take you back to the tests list in all browsers. Fixes #
  • Using the shortcut during no longer does anything. This prevents the Test Runner from getting into a 'stuck' state. Fixes #

Misc:

  • The design of errors and some iconography displayed in the Test Runner's Command Log have been updated. Addresses #, # and #
  • The commands in the Test Runner's Command Log now display in the same casing as the original command. Addresses #
  • The navigation links in the Test Runner now display the correct CSS styles when focused. Addresses #
  • now has TypeScript types for the option. Addresses #
  • TypeScript types for options and have been updated to be more accurate. Addresses #
  • TypeScript types for have been added. Addresses #
  • We now display a more accurate error message when passing a browser to the flag that is not supported by Cypress. Addresses #
  • We're continuing to make progress in converting our codebase from CoffeeScript to JavaScript. Addresses # in # and #

Dependency Updates:

  • Upgraded from to . Addressed in #
  • Upgraded from to . Addressed in #

Released 5/26/

Features:

  • now supports an option that can be used to set the encoding of the response body, defaulting to (see the sketch below). Addresses # and #
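
  A minimal sketch, assuming the entry refers to cy.request()'s encoding option (the command name was stripped from this copy); the URL is hypothetical.

      cy.request({
        url: '/images/logo.png',
        encoding: 'binary',   // read the response body as binary instead of the utf8 default
      }).then((response) => {
        expect(response.status).to.eq(200);
      });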

Bugfixes:

  • We fixed a regression in where the address bar of the application under test would disappear when scrolling commands ran and the application under test would visually shift after taking screenshots. Fixes # and #
  • We fixed a regression in where test runs could hang when loading spec files with source maps. Fixes #

Misc:

  • We now display a more descriptive error message when the plugins file does not export a function. Addresses #

Released 5/20/

Features:

  • Errors in the Test Runner now display a code frame to preview where the failure occurred with the relevant file, line number, and column number highlighted. Clicking on the file link will open the file in your preferred file opener and highlight the line and column in editors that support it. Addresses #
  • Cypress now utilizes source maps to enhance the error experience. Stack traces are translated so that your source files are shown instead of the generated file that is loaded by the browser. Cypress will include an inline source map in your spec file. If you modify the preprocessor, ensure that inline source maps are enabled to get the same experience. Users of should upgrade to v or later of the package which will correctly inline source maps. Addresses #, # and #
  • Cypress now enables AST-based JS/HTML rewriting when setting the configuration option to . Addresses #
  • Number arguments passed to , , , , and assertion chainers are now automatically cast to strings for comparison (see the example after this list). Addresses #
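
  A small illustration of the relaxed assertion behaviour described in the last item; the chainer name ('contain') is used as a representative example and the selector is hypothetical, because the original identifiers were stripped from this copy.

      // Suppose the element renders the text "Total: 42".
      cy.get('[data-cy=total]')        // hypothetical selector
        .should('contain', 42);        // 42 is cast to "42" before the comparison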

Bugfixes:

  • Default TypeScript options are now set to which cromwellpsi.com and the browser expect. This fixes a situation where setting a different module in a would cause errors to throw if you had , or keywords in your code. Fixes #, #, #, and #
  • When is enabled, setting or to a relative href, or using or with a relative href will no longer navigate the AUT to the wrong URL. Fixes # and #
  • When is enabled, the use of and will no longer cause the AUT to break out of the Cypress iframe. Fixes # and #
  • When is enabled, calls to , , and other will no longer point to the wrong reference after being proxied through Cypress. Fixes #
  • When is enabled, scripts using the attribute for sub-resource integrity (SRI) will now load after being proxied through Cypress. Fixes #
  • When is enabled, the use of to set the URL will no longer navigate the AUT to the wrong URL. Fixes #
  • Type definitions will no longer conflict when running Cypress in a project with Jest. Fixes #
  • We increased the timeout for launching Firefox from seconds to 50 seconds. Previously, users hitting this limit would encounter a cannot open socket error; now, the error will be wrapped. Fixes #
  • will now click in the correct coordinates when either x or y coordinate options are zero. Fixes #
  • Cypress no longer displays when a browser can't connect. Fixes #
  • You can now pass the option to to select options within a disabled . Addresses #
  • We now throw an error when attempting to an within a disabled . Fixes #
  • We fixed a regression in where the message output during errors was not formatted correctly. Fixes #
  • Using now correctly behaves the same as Lodash's capitalize method. Fixes #
  • When is enabled, clicking on a component spec now watches the correct file without assuming it is an integration file. Fixes #
  • Firefox video recording no longer crashes Cypress when running very short spec files. Fixes #
  • Applications containing a DOM element with an id attribute containing 'jquery' will no longer throw an error during . Fixes #
  • Long errors generated when compiling or bundling the test file are now horizontally scrollable. Fixes #

Misc:

  • Cypress no longer requires write access to the root of the project; instead, it displays a warning when write access is not given. Addresses #
  • We increased the timeout for launching Chrome from 20 seconds to 50 seconds. Addressed in #
  • We increased the timeout for macOS or Linux to exit from a command when looking for available browsers from 5 seconds to 30 seconds. Addressed in #
  • We improved error handling when Cypress launches Chromium-family browsers. Addresses #
  • We now export types as a partial of the full options interface. Addresses #
  • We're continuing to make progress in converting our codebase from CoffeeScript to JavaScript. Addresses # in #, #, #, #, #, and #

Dependency Updates:

  • Upgraded from to . Addressed in #
  • Upgraded from to . Addressed in #
  • Upgraded from to . Addressed in #
  • Upgraded from to . Addressed in # and #
  • Upgraded from to . Addressed in #
  • Upgraded from to . Addressed in #
  • Upgraded from to . Addressed in #

Released 4/28/

Features:
