When using MySQL you may need to ensure the availability or scalability of your MySQL installation. Availability refers to the ability to cope with, and if necessary recover from, failures on the host, including failures of MySQL, the operating system, or the hardware. Scalability refers to the ability to spread the load of your application queries across multiple MySQL servers. As your application and usage increases, you may need to spread the queries for the application across multiple servers to improve response times.
There are a number of solutions available for solving issues of availability and scalability. The two primary solutions supported by MySQL are MySQL Replication and MySQL Cluster. Further options are available using third-party solutions such as DRBD (Distributed Replicated Block Device) and Heartbeat, and more complex scenarios can be solved through a combination of these technologies. These tools work in different ways:
The information and suitability of the various technologies in different scenarios are summarized in the following table.
Using MySQL with DRBD
The Distributed Replicated Block Device (DRBD) is a Linux Kernel module that constitutes a distributed storage system. You can use DRBD to share block devices between Linux servers and, in turn, share file systems and data.
DRBD implements a block device which can be used for storage and which is replicated from a primary server to one or more secondary servers. The distributed block device is handled by the DRBD service. Writes to the DRBD block device are distributed among the servers. Each DRBD service writes the information from the DRBD block device to a local physical block device (hard disk).
On the primary, data writes are written both to the underlying physical block device and distributed to the secondary DRBD services. On the secondary, the writes received through DRBD are written to the local physical block device. On both the primary and the secondary, reads from the DRBD block device are handled by the underlying physical block device. The information is shared between the primary DRBD server and the secondary DRBD server synchronously and at a block level, which means that DRBD can be used in high-availability solutions where you need failover support.
Figure DRBD Architecture Overview
When used with MySQL, DRBD can be used to ensure availability in the event of a failure. MySQL is configured to store information on the DRBD block device, with one server acting as the primary and a second machine available to operate as an immediate replacement in the event of a failure.
For automatic failover support you can combine DRBD with the Linux Heartbeat project, which will manage the interfaces on the two servers and automatically configure the secondary (passive) server to replace the primary (active) server in the event of a failure. You can also combine DRBD with MySQL Replication to provide both failover and scalability within your MySQL environment.
For information on how to configure DRBD and MySQL, including Heartbeat support, see Section , “Configuring the DRBD Environment”.
An FAQ for using DRBD and MySQL is available. See Section A, “MySQL FAQ: MySQL, DRBD, and Heartbeat”.
Note
Because DRBD is a Linux Kernel module it is currently not supported on platforms other than Linux.
Configuring the DRBD Environment
To set up DRBD, MySQL and Heartbeat you need to follow a number of steps that affect the operating system, DRBD and your MySQL installation.
Before starting the installation process, you should be aware of the following information, terms and requirements on using DRBD:
DRBD is a solution for enabling high-availability, and therefore you need to ensure that the two machines within your DRBD setup are as identically configured as possible so that the secondary machine can act as a direct replacement for the primary machine in the event of system failure.
DRBD works through two (or more) servers, each called a node.

The node that contains the primary data, has read/write access to the data, and in an HA environment is the currently active node, is called the primary.

The server to which the data is replicated is referred to as the secondary.
A collection of nodes that are sharing information are referred to as a DRBD cluster.
For DRBD to operate you must have a block device on which the information can be stored on each DRBD node. The lower level block device can be a physical disk partition, a partition from a volume group or RAID device or any other block device.
Typically you use a spare partition on which the physical data will be stored. On the primary node, this disk will hold the raw data that you want replicated. On the secondary nodes, the disk will hold the data replicated to the secondary server by the DRBD service. Ideally, the size of the partition on the two DRBD servers should be identical, but this is not necessary as long as there is enough space to hold the data that you want distributed between the two servers.
For the distribution of data to work, DRBD is used to create a logical block device that uses the lower level block device for the actual storage of information. To store information on the distributed device, a file system is created on the DRBD logical block device.
When used with MySQL, once the file system has been created, you move the MySQL data directory (including InnoDB data files and binary logs) to the new file system.
When you set up the secondary DRBD server, you set up the physical block device and the DRBD logical block device that will store the data. The block device data is then copied from the primary to the secondary server.
The overview for the installation and configuration sequence is as follows:
You may optionally want to configure high availability using the Linux Heartbeat service. See Section , “Using Linux HA Heartbeat”, for more information.
Setting Up Your Operating System for DRBD
To set up your Linux environment for using DRBD, there are a number of system configuration steps that you must follow.
Make sure that the primary and secondary DRBD servers have the correct host name, and that the host names are unique. You can verify this by using the uname command:
shell> uname -n
drbd-one

If the host name is not set correctly, edit the appropriate host name file for your distribution and set the name correctly.
Each DRBD node must have a unique IP address. Make sure that the IP address information is set correctly within the network configuration and that the host name and IP address have been set correctly within the /etc/hosts file.
Although you can rely on the DNS or NIS system for host resolving, in the event of a major network failure these services may not be available. If possible, add the IP address and host name of each DRBD node into the /etc/hosts file for each machine. This will ensure that the node information can always be determined even if the DNS/NIS servers are unavailable.
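As a quick check of this advice, a short shell sketch like the following can confirm that each node's name is present in a hosts-format file. The node names drbd-one and drbd-two are hypothetical examples:

```shell
# Check that each DRBD node name appears in a hosts-format file, so
# name resolution still works if DNS/NIS is down. Names are examples.
hosts_missing() {
  file="$1"; shift
  missing=""
  for node in "$@"; do
    grep -qw "$node" "$file" || missing="$missing $node"
  done
  echo "$missing"
}

# Demonstration against a sample file (not your real /etc/hosts):
sample=$(mktemp)
printf '192.168.0.1 drbd-one\n192.168.0.2 drbd-two\n' > "$sample"
result=$(hosts_missing "$sample" drbd-one drbd-two)
rm -f "$sample"
echo "missing:${result:- none}"
```

Run against the real /etc/hosts on each node, an empty "missing" list indicates both nodes will resolve even without DNS/NIS.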
As a general rule, the faster your network connection the better. Because the block device data is exchanged over the network, everything that will be written to the local disk on the DRBD primary will also be written to the network for distribution to the DRBD secondary.
For tips on configuring a faster network connection see Section , “Optimizing Performance and Reliability”.
You must have a spare disk or disk partition that you can use as the physical storage location for the DRBD data that will be replicated. You do not have to have a complete disk available, a partition on an existing disk is acceptable.
If the disk is unpartitioned, partition the disk using fdisk, cfdisk or other partitioning solution. Do not create a file system on the new partition.
Remember that you must have a physical disk available for the storage of the replicated information on each DRBD node. Ideally the partitions that will be used on each node should be of an identical size, although this is not strictly necessary. Do, however, ensure that the physical partition on the DRBD secondary is at least as big as the partitions on the DRBD primary node.
If possible, upgrade your system to the latest available Linux kernel for your distribution. Once the kernel has been installed, you must reboot to make the kernel active. To use DRBD you will also need to install the relevant kernel development and header files that are required for building kernel modules. Platform specification information for this is available later in this section.
Before you compile or install DRBD, you must make sure the following tools and files are in place:
Kernel header files
Kernel source files
GCC Compiler
flex
Here are some operating system specific tips for setting up your installation:
Tips for Red Hat (including CentOS and Fedora):
Use up2date or yum to update and install the latest kernel and kernel header files:
root-shell> up2date kernel-smp-devel kernel-smp

Reboot. If you are going to build DRBD from source, then update your system with the required development packages:

root-shell> up2date glib-devel openssl-devel libgcrypt-devel glib2-devel \
    pkgconfig ncurses-devel rpm-build rpm-devel redhat-rpm-config gcc \
    gcc-c++ bison flex gnutls-devel lm_sensors-devel net-snmp-devel \
    python-devel bzip2-devel libselinux-devel perl-DBI

If you are going to use the pre-built DRBD RPMs:

root-shell> up2date gnutls lm_sensors net-snmp ncurses libgcrypt glib2 openssl glib

Tips for Debian, Ubuntu, Kubuntu:
Use apt-get to install the kernel packages:
root-shell> apt-get install linux-headers linux-image-server

If you are going to use the pre-built Debian packages for DRBD then you should not need any additional packages.

If you want to build DRBD from source, you will need to use the following command to install the required components:

root-shell> apt-get install devscripts flex bison build-essential \
    dpkg-dev kernel-package debconf-utils dpatch debhelper \
    libnet1-dev e2fslibs-dev libglibdev automake \
    libgnutls-dev libtool libltdl3 libltdl3-dev

Tips for Gentoo:
Gentoo is a source based Linux distribution and therefore many of the source files and components that you will need are either already installed or will be installed automatically by emerge.
To install DRBD, you must unmask the build by adding the following lines to /etc/portage/package.keywords:

sys-cluster/drbd ~x86
sys-cluster/drbd-kernel ~x86

If your kernel does not already have the userspace to kernelspace linker enabled, then you will need to rebuild the kernel with this option. The best way to do this is to use genkernel with the --menuconfig option and then rebuild the kernel. For example, at the command line as root:

root-shell> genkernel --menuconfig all

Then through the menu options, select Device Drivers, then Connector - unified userspace <-> kernelspace linker, and press 'y' or 'space' to select that option. Then exit the menu configuration. The kernel will be rebuilt and installed. If this is a new kernel, make sure you update your bootloader accordingly. Now reboot to enable the new kernel.
Installing and Configuring DRBD
To install DRBD you can choose either the pre-built binary installation packages or you can use the source packages and build from source. If you want to build from source you must have installed the source and development packages.
If you are installing using a binary distribution then you must ensure that the kernel version number of the binary package matches your currently active kernel. You can use uname to find out this information:
shell> uname -r
gentoo-r6

Once DRBD has been built and installed, you need to edit the /etc/drbd.conf file and then run a number of commands to build the block device and set up the replication.
Although the steps below are split into those for the primary node and the secondary node, it should be noted that the configuration files for all nodes should be identical, and many of the same steps have to be repeated on each node to enable the DRBD block device.
Building from source:
To download and install from the source code:
Download the source code.
Unpack the package:
shell> tar zxf

Change to the extracted directory, and then run make to build the DRBD driver:

shell> cd drbd
shell> make

Install the kernel driver and commands:

shell> make install
Binary Installation:
SUSE Linux Enterprise Server (SLES)
For SUSE, use yast:
shell> yast -i drbd

Alternatively:

shell> rug install drbd

Debian
Use apt-get to install the modules. You do not need to install any other components.
shell> apt-get install drbd8-utils drbd8-module

Older Debian releases

You must install the kernel module build tools to build the DRBD kernel module, in addition to the DRBD components.

shell> apt-get install drbdutils drbdmodule-source \
    build-essential module-assistant
shell> module-assistant auto-install drbd

CentOS
DRBD can be installed using yum:
shell> yum install drbd kmod-drbd

Ubuntu

You must enable the universe component for your preferred Ubuntu mirror in /etc/apt/sources.list, and then issue these commands:

shell> apt-get update
shell> apt-get install drbd8-utils drbd8-module-source \
    build-essential module-assistant
shell> module-assistant auto-install drbd8

Gentoo
You can now emerge DRBD into your Gentoo installation:

root-shell> emerge drbd

Once the package has been downloaded and installed, you need to decompress and copy the default configuration file from the DRBD documentation directory into /etc.
Setting Up a DRBD Primary Node
To set up a DRBD primary node you need to configure the DRBD service, create the first DRBD block device and then create a file system on the device so that you can store files and data.
The DRBD configuration file defines a number of parameters for your DRBD configuration, including the frequency of updates and block sizes, security information and the definition of the DRBD devices that you want to create.
The key elements to configure are the on sections, which specify the configuration of each node.
To follow the configuration, the sequence below shows only the changes from the default file. Configuration directives within the file can be either global or tied to a specific resource.
Set the synchronization rate between the two nodes. This is the rate at which devices are synchronized in the background after a disk failure, device replacement or during the initial setup. You should keep this in check compared to the speed of your network connection. Gigabit Ethernet can support up to about 125MB/second, 100Mbps Ethernet slightly less than a tenth of that (12MBps). If you are using a shared network connection, rather than a dedicated one, then you should gauge accordingly.

To set the synchronization rate, edit the rate setting within the syncer block:

syncer {
    rate 10M;
}

You may additionally want to set the al-extents parameter, which controls the number of active extents in the activity log; see the DRBD documentation for its default value and effects.
For more detailed information on synchronization, the effects of the synchronization rate and the effects on network performance, see Section , “Optimizing the Synchronization Rate”.
Set up some basic authentication. DRBD supports a simple password hash exchange mechanism. This helps to ensure that only those hosts with the same shared secret are able to join the DRBD node group.
cram-hmac-alg "sha1";
shared-secret "";

Now you must configure the host information. Remember that you must have the node information for the primary and secondary nodes in the configuration file on each host. You need to configure the following information for each node:

device: The path of the logical block device that will be created by DRBD.

disk: The block device that will be used to store the data.

address: The IP address and port number of the host that will hold this DRBD device.

meta-disk: The location where the metadata about the DRBD device will be stored. You can set this to internal and DRBD will use the physical block device to store the information, by recording the metadata within the last sections of the disk. The exact size will depend on the size of the logical block device you have created, but it may involve up to 128MB.
A sample configuration for our primary server might look like this:
on drbd-one {
    device    /dev/drbd0;
    disk      /dev/hdd1;
    address   ;
    meta-disk internal;
}

The configuration block should be repeated for the secondary node (and any further nodes):

on drbd-two {
    device    /dev/drbd0;
    disk      /dev/hdd1;
    address   ;
    meta-disk internal;
}

The IP address of each block must match the IP address of the corresponding host. Do not set this value to the IP address of the corresponding primary or secondary in each case.
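Putting the pieces together, a complete resource definition might look like the following sketch. The resource name, host names, IP addresses, port, and shared secret here are hypothetical examples, not values taken from this manual:

```
resource r0 {
  protocol C;

  syncer {
    rate 10M;
  }

  net {
    cram-hmac-alg "sha1";
    shared-secret "example-secret";
  }

  on drbd-one {
    device    /dev/drbd0;
    disk      /dev/hdd1;
    address   192.168.0.1:7789;
    meta-disk internal;
  }

  on drbd-two {
    device    /dev/drbd0;
    disk      /dev/hdd1;
    address   192.168.0.2:7789;
    meta-disk internal;
  }
}
```

Remember that this file must be identical on both nodes; each node determines its own role by matching its host name against the on sections.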
Before starting the primary node, you should create the metadata for the devices:
root-shell> drbdadm create-md all

You are now ready to start DRBD:

root-shell> /etc/init.d/drbd start

DRBD should now start and initialize, creating the DRBD devices that you have configured.
DRBD creates a standard block device; to make it usable, you must create a file system on the block device just as you would with any standard disk partition. Before you can create the file system, you must mark the new device as the primary device (that is, where the data will be written and stored), and initialize the device. Because this is a destructive operation, you must specify the --overwrite-data-of-peer command-line option to overwrite the raw data:
root-shell> drbdadm -- --overwrite-data-of-peer primary all

If you are using an earlier version of DRBD, you need to use a different command-line option:

root-shell> drbdadm -- --do-what-I-say primary all

Now create a file system using your chosen file system type:

root-shell> mkfs.ext3 /dev/drbd0

You can now mount the file system and if necessary copy files to the mount point:

root-shell> mkdir /mnt/drbd
root-shell> mount /dev/drbd0 /mnt/drbd
root-shell> echo "DRBD Device" >/mnt/drbd/samplefile
Your primary node is now ready to use. You should now configure your secondary node or nodes.
Setting Up a DRBD Secondary Node
The configuration process for setting up a secondary node is the same as for the primary node, except that you do not have to create the file system on the secondary node device, as this information will automatically be transferred from the primary node.
To set up a secondary node:
Copy the configuration file from your primary node to your secondary node. It should already contain all the information and configuration that you need, since you had to specify the secondary node IP address and other information for the primary node configuration.
Create the DRBD metadata on the underlying disk device:
root-shell> drbdadm create-md all

Start DRBD:

root-shell> /etc/init.d/drbd start
Once DRBD has started, it will begin copying the data from the primary node to the secondary node. Even with an empty file system this will take some time, since DRBD is copying the block information from a block device, not simply copying the file system data.
You can monitor the progress of the copy between the primary and secondary nodes by viewing the output of /proc/drbd:
root-shell> cat /proc/drbd
version: (api/proto)
SVN Revision: build by root@drbd-one,
 0: cs:SyncSource st:Primary/Secondary ds:UpToDate/Inconsistent C r
    ns nr:0 dw:0 dr al:0 bm lo:0 pe:7 ua ap:0
    [==>] sync'ed: % (/)K
    finish: speed: 4, (4,) K/sec
    resync: used:1/31 hits misses starving:0 dirty:0 changed
    act_log: used:0/ hits:0 misses:0 starving:0 dirty:0 changed:0

You can monitor the synchronization process by using the watch command to run cat /proc/drbd at specific intervals:
root-shell> watch -n 10 'cat /proc/drbd'

Monitoring DRBD Device
Once the primary and secondary machines are configured and synchronized, you can get the status information about your DRBD device by viewing the output from /proc/drbd:

root-shell> cat /proc/drbd
version: (api/proto)
SVN Revision: build by root@drbd-one,
 0: cs:Connected st:Primary/Secondary ds:UpToDate/UpToDate C r
    ns nr:0 dw dr al bm lo:0 pe:0 ua:0 ap:0
    resync: used:0/31 hits misses starving:0 dirty:0 changed
    act_log: used:0/ hits misses starving:0 dirty:0 changed

The first line provides the version/revision and build information.
The second line starts the detailed status information for an individual resource. The individual field headings are as follows:
cs: connection state
st: node state (local/remote)
ld: local data consistency
ds: data consistency
ns: network send
nr: network receive
dw: disk write
dr: disk read
pe: pending (waiting for ack)
ua: unack'd (still need to send ack)
al: access log write count
In the previous example, the information shown indicates that the nodes are connected, the local node is the primary (because it is listed first), and the local and remote data is up to date with each other. The remainder of the information is statistical data about the device, and the data exchanged that kept the information up to date.
You can also get the status information for DRBD by using the startup script with the status option:

root-shell> /etc/init.d/drbd status
 * status: started
 * drbd driver loaded OK; device status:
[ ok ]
version: (api/proto)
GIT-hash: 9ba8b93e24df0dd3fb1f9b90eddb build by root@drbd-one,
 0: cs:Connected ro:Secondary/Secondary ds:UpToDate/UpToDate C r
    ns:0 nr:0 dw:0 dr al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0

The information and statistics are the same.
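The status fields described above can also be pulled apart programmatically, which is useful for monitoring scripts. The following sketch parses a sample status line of the form shown; the line is hard-coded for illustration, whereas on a live system you would read /proc/drbd directly:

```shell
# Extract the cs: (connection state), st: (node roles) and ds: (disk
# states) fields from a /proc/drbd status line. Sample line only.
line='0: cs:Connected st:Primary/Secondary ds:UpToDate/UpToDate C r'

cs=$(printf '%s\n' "$line" | sed -n 's/.*cs:\([^ ]*\).*/\1/p')
st=$(printf '%s\n' "$line" | sed -n 's/.*st:\([^ ]*\).*/\1/p')
ds=$(printf '%s\n' "$line" | sed -n 's/.*ds:\([^ ]*\).*/\1/p')

echo "connection=$cs roles=$st disks=$ds"
```

A monitoring check might alert whenever the connection state is not Connected or either disk state is not UpToDate.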
Managing your DRBD Installation
For administration, the main command is drbdadm. There are a number of commands supported by this tool that control the connectivity and status of the DRBD devices.
Note
For convenience, a bash completion script is available. This will provide tab completion for options to drbdadm. The file can be found within the scripts directory of the standard DRBD source package. To enable it, copy the file to /etc/bash_completion.d. You can load it manually by using:
shell> source /etc/bash_completion.d/drbdadm

The most common commands are those to set the primary/secondary status of the local device. You can manually set this information for a number of reasons, including when you want to check the physical status of the secondary device (since you cannot mount a DRBD device in primary mode), or when you are temporarily moving the responsibility of keeping the data in check to a different machine (for example, during an upgrade or physical move of the normal primary node). You can set the state of all local devices to primary using this command:
root-shell> drbdadm primary all

Or switch the local device to be the secondary using:

root-shell> drbdadm secondary all

To change only a single DRBD resource, specify the resource name instead of all.
You can temporarily disconnect the DRBD nodes:
root-shell> drbdadm disconnect all

Reconnect them using:

root-shell> drbdadm connect all

For other commands and help with drbdadm see the DRBD documentation.
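The role-change commands above are normally run as part of a fixed sequence during a manual failover. The sketch below prints, rather than executes, the command sequence for a given target role so it can be reviewed first; the device path and mount point are hypothetical examples:

```shell
# Print (not run) the command sequence for switching this node's role.
# /dev/drbd0 and /mnt/drbd are example paths, not values from a real
# installation.
failover_plan() {
  case "$1" in
    primary)
      echo "drbdadm primary all"
      echo "mount /dev/drbd0 /mnt/drbd"
      ;;
    secondary)
      echo "umount /mnt/drbd"
      echo "drbdadm secondary all"
      ;;
  esac
}

plan=$(failover_plan secondary)
echo "$plan"
```

Note the ordering: the file system must be unmounted before the device is demoted, since a DRBD device in secondary mode cannot remain mounted.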
Additional DRBD Configuration Options
Additional options you may want to configure:
protocol: Specifies the level of consistency to be used when information is written to the block device. The option is similar in principle to the innodb_flush_log_at_trx_commit option within MySQL. Three levels are supported:

A: Data is considered written when the information reaches the TCP send buffer and the local physical disk. There is no guarantee that the data has been written to the remote server or the remote physical disk.

B: Data is considered written when the data has reached the local disk and the remote node's network buffer. The data has reached the remote server, but there is no guarantee it has reached the remote server's physical disk.

C: Data is considered written when the data has reached the local disk and the remote node's physical disk.

The preferred and recommended protocol is C, as it is the only protocol which ensures the consistency of the local and remote physical storage.
size: If you do not want to use the entire partition space with your DRBD block device, you can specify the size of the DRBD device to be created. The size specification can include a quantifier. For example, to set the maximum size of the DRBD partition to 1GB you would use:
size 1G;
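Combined in a resource definition, these options might look like the sketch below. The resource name is hypothetical, and placing size inside a disk section follows common drbd.conf layout, assumed here rather than taken from this manual:

```
resource r0 {
  protocol C;        # safest choice: wait for the remote physical disk

  disk {
    size 1G;         # use only 1GB of the underlying partition
  }

  # ... on <host> sections as shown earlier ...
}
```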
With the configuration file suitably configured and ready to use, you now need to populate the lower-level device with the metadata information, and then start the DRBD service.
Configuring MySQL for DRBD
Once you have configured DRBD and have an active DRBD device and file system, you can configure MySQL to use the chosen device to store the MySQL data.
When performing a new installation of MySQL, you can either install MySQL entirely onto the DRBD device, or just configure the data directory to be located on the new file system.
In either case, the files and installation must take place on the primary node, because that is the only DRBD node on which you can mount the DRBD device file system as read/write.
You should store the following files and information on your DRBD device:
MySQL data files, including the binary log, and InnoDB data files.
MySQL configuration file (my.cnf).
To set up MySQL to use your new DRBD device and file system:
If you are migrating an existing MySQL installation, stop MySQL:
shell> mysqladmin shutdown

Copy the my.cnf file onto the DRBD device. If you are not already using a configuration file, copy one of the sample configuration files from the MySQL distribution.

root-shell> mkdir /mnt/drbd/mysql
root-shell> cp /etc/my.cnf /mnt/drbd/mysql

Copy your MySQL data directory to the DRBD device and mounted file system.

root-shell> cp -R /var/lib/mysql /drbd/mysql/data

Edit the configuration file to reflect the change of directory by setting the value of the datadir option. If you have not already enabled the binary log, also set the value of the log-bin option.

datadir = /drbd/mysql/data
log-bin = mysql-bin

Create a symbolic link from /etc/my.cnf to the new configuration file on the DRBD device file system.

root-shell> ln -s /drbd/mysql/my.cnf /etc/my.cnf

Now start MySQL and check that the data that you copied to the DRBD device file system is present.
root-shell> /etc/init.d/mysql start
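The configuration edit in the steps above can be scripted. The sketch below demonstrates the datadir rewrite against a temporary copy of a minimal my.cnf-style file; the target path is the example used in this section, and anything like this should be run against a backup copy of your real configuration:

```shell
# Rewrite datadir and ensure log-bin is set in a my.cnf-style file.
# A temporary sample file stands in for the real configuration.
cnf=$(mktemp)
cat > "$cnf" <<'EOF'
[mysqld]
datadir = /var/lib/mysql
EOF

# Point datadir at the DRBD file system and enable the binary log
# if it is not already configured.
sed -i 's|^datadir *=.*|datadir = /drbd/mysql/data|' "$cnf"
grep -q '^log-bin' "$cnf" || echo 'log-bin = mysql-bin' >> "$cnf"

new_datadir=$(sed -n 's/^datadir *= *//p' "$cnf")
has_binlog=$(grep -c '^log-bin' "$cnf")
rm -f "$cnf"
echo "datadir=$new_datadir log-bin entries=$has_binlog"
```

After the rewrite, restart MySQL and confirm with SHOW VARIABLES LIKE 'datadir' that the server is using the DRBD-backed directory.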
Your MySQL data should now be located on the file system running on your DRBD device. The data will be physically stored on the underlying device that you configured for the DRBD device. Meanwhile, the content of your MySQL databases will be copied to the secondary DRBD node.
Note that you cannot access the information on your secondary node, as a DRBD device working in secondary mode is not available for use.
Optimizing Performance and Reliability
Because of the nature of the DRBD system, the critical requirements are for a very fast exchange of the information between the two hosts. To ensure that your DRBD setup is available to switch over in the event of a failure as quickly as possible, you must transfer the information between the two hosts using the fastest method available.
Typically, a dedicated network circuit should be used for exchanging DRBD data between the two hosts. You should then use a separate, additional, network interface for your standard network connection. For an example of this layout, see Figure , “DRBD Architecture Using Separate Network Interfaces”.
Figure DRBD Architecture Using Separate Network Interfaces
The dedicated DRBD network interfaces should be configured to use a nonrouted TCP/IP network configuration. For example, you might assign the primary one address in a private range and the secondary another address in the same range. These networks and IP addresses should not be part of your normal network subnet.
Note
The preferred setup, whenever possible, is to use a direct cable connection (using a crossover cable with Ethernet, for example) between the two machines. This eliminates the risk of loss of connectivity due to switch failures.
Using Bonded Ethernet Network Interfaces
For a setup where there is a high throughput of information being written, you may want to use bonded network interfaces. This is where you combine the connectivity of more than one network port, increasing the throughput linearly according to the number of bonded connections.

Bonding also provides an additional benefit: with multiple network interfaces effectively supporting the same communications channel, a fault within a single network interface in a bonded group does not stop communication. For example, imagine you have a bonded setup with four network interfaces providing a single interface channel between two DRBD servers. If one network interface fails, communication can continue on the other three without interruption, although it will be at a lower speed.
To enable bonded connections you must enable bonding within the kernel. You then need to configure the bonding module to specify the bonded devices, and then configure each new bonded device just as you would a standard network device:
To configure the bonded devices, you need to edit the /etc/modprobe.conf file (RedHat) or add a file to the /etc/modprobe.d directory. In each case you will define the parameters for the kernel module. First, you need to specify each bonding device:

alias bond0 bonding

You can then configure additional parameters for the kernel module. Typical parameters are the mode option and the miimon option.

The mode option specifies how the network interfaces are used. The default setting is 0, which means that each network interface is used in a round-robin fashion (this supports aggregation and fault tolerance). Using setting 1 sets the bonding mode to active-backup. This means that only one network interface is used at a time, but that the link will automatically fail over to a new interface if the primary interface fails. This setting only supports fault tolerance.

The miimon option enables MII link monitoring. A positive value greater than zero indicates the monitoring frequency in milliseconds for checking each slave network interface that is configured as part of the bonded interface. A typical value is 100.

You set the options within the module parameter file, and you must set the options for each bonded device individually:

options bond0 miimon=100 mode=1

Reboot your server to enable the bonded devices.
Configure the network device parameters. There are two parts to this: you need to set up the bonded device configuration, and then configure the original network interfaces as 'slaves' of the new bonded interface.
For RedHat Linux:
Edit the configuration file for the bonded device. For device bond0 this would be /etc/sysconfig/network-scripts/ifcfg-bond0:

DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes
GATEWAY=
NETWORK=
NETMASK=
IPADDR=
USERCTL=no

Then for each network interface that you want to be part of the bonded device, configure the interface as a slave to the 'master' bond. For example, the configuration of eth0 in /etc/sysconfig/network-scripts/ifcfg-eth0 might look like this:

DEVICE=eth0
BOOTPROTO=none
HWADDR=
ONBOOT=yes
TYPE=Ethernet
MASTER=bond0
SLAVE=yes

For Debian Linux:
Edit the /etc/iftab file and configure the logical name and MAC address for each device. For example:

eth0 mac

Now you need to set the configuration of the devices in /etc/network/interfaces:

auto bond0
iface bond0 inet static
    address
    netmask
    network
    gateway
    up /sbin/ifenslave bond0 eth0
    up /sbin/ifenslave bond0 eth1

For Gentoo:
Use emerge to add the required bonding package to your system.
Edit the /etc/conf.d/net file and specify the network interface slaves in a bond, the dependencies, and then the configuration for the bond itself. A sample configuration might look like this:

slaves_bond0="eth0 eth1 eth2"
config_bond0=( " netmask " )
depend_bond0() {
    need net.eth0 net.eth1 net.eth2
}

Then make sure that you add the new network interface to the list of interfaces configured during boot:

root-shell> rc-update add net.bond0 default
Once the bonded devices are configured you should reboot your systems.
You can monitor the status of a bonded connection using the /proc file system:

root-shell> cat /proc/net/bonding/bond0
Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth1
MII Status: up
MII Polling Interval (ms):
Up Delay (ms):
Down Delay (ms):

Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Permanent HW addr:

Slave Interface: eth2
MII Status: up
Link Failure Count: 0
Permanent HW addr:

Optimizing the Synchronization Rate
The syncer rate configuration parameter should be set with care, as the synchronization rate can have a significant effect on the performance of the DRBD setup in the event of a node or disk failure where the information is being synchronized from the primary to the secondary node.
In DRBD, there are two distinct ways of data being transferred between peer nodes:
Replication refers to the transfer of modified blocks from the primary to the secondary node. This happens automatically when the block is modified on the primary node, and the replication process uses whatever bandwidth is available over the replication link. The replication process cannot be throttled, because you want the transfer of the block information to happen as quickly as possible during normal operation.
Synchronization refers to the process of bringing peers back in sync after some sort of outage, due to manual intervention, node failure, disk swap, or the initial setup. Synchronization is limited to the syncer rate configured for the DRBD device.
Both replication and synchronization can take place at the same time. For example, a block device can be synchronized while it is actively being used by the primary node. Any I/O update on the primary node automatically triggers replication of the modified block. In the event of a failure within an HA environment, it is highly likely that synchronization and replication will take place at the same time.
Unfortunately, if the synchronization rate is set too high, then the synchronization process will use up all the available network bandwidth between the primary and secondary nodes. In turn, the bandwidth available for replication of changed blocks is zero, which means replication will stall and I/O will block, and ultimately the application will fail or degrade.
To prevent the synchronization process from consuming all the available network bandwidth and blocking the replication of changed blocks, set the syncer rate to less than the maximum network bandwidth.
You should avoid setting the sync rate to more than 30% of the maximum bandwidth available to your device and network. For example, if your network bandwidth is based on Gigabit Ethernet, you can expect to achieve approximately 110MB/s. Assuming your disk interface is capable of handling data at 110MB/s or more, then the sync rate should be configured as 33M (33MB/s). If your disk system works at a rate lower than your network interface, use 30% of your disk interface speed.
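The 30% guideline above can be sketched as a quick shell calculation. The throughput figures below are the assumed values from the worked example, not measured numbers:

```shell
# Pick the slower of network and disk throughput (in MB/s), then use
# 30% of it as the DRBD syncer rate. 110 MB/s approximates usable
# Gigabit Ethernet throughput; the disk figure is an assumption.
net=110
disk=110
limit=$(( net < disk ? net : disk ))
rate=$(( limit * 30 / 100 ))
echo "${rate}M"
```

Running this prints 33M, matching the 33MB/s figure used in the text.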
Depending on the application, you may wish to limit the synchronization rate. For example, on a busy server you may wish to configure a significantly slower synchronization rate to ensure the replication rate is not affected.
The al-extents parameter controls the number of 4MB extents of the underlying disk that can be written to at the same time. Increasing this parameter lowers the frequency of the meta data transactions required to log the changes to the DRBD device, which in turn lowers the number of interruptions in your I/O stream when synchronizing changes. This can lower the latency of changes to the DRBD device. However, if a crash occurs on your primary, then all of the extents in the activity log (that is, the configured number of al-extents) will need to be completely resynchronized before replication can continue.
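Both the sync rate and the activity-log setting live in the syncer section of a DRBD resource definition. A minimal sketch; the resource name and the values shown are illustrative assumptions, not prescriptions:

```
resource r0 {
  syncer {
    rate 33M;       # 30% of an assumed 110MB/s link, per the guideline above
    al-extents 257; # 257 x 4MB active extents; a commonly cited starting point
  }
}
```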
Using Linux HA Heartbeat
The Heartbeat program provides a basis for verifying the availability of resources on one or more systems within a cluster. In this context a resource includes MySQL, the file systems on which the MySQL data is being stored and, if you are using DRBD, the DRBD device being used for the file system. Heartbeat also manages a virtual IP address, which should be used for all communication to the MySQL instance.
A cluster within the context of Heartbeat is defined as two computers notionally providing the same service. By definition, each computer in the cluster is physically capable of providing the same services as all the others in the cluster. However, because the cluster is designed for high availability, only one of the servers is actively providing the service at any one time. Each additional server within the cluster is a “hot spare” that can be brought into service in the event of a failure of the master, its network connectivity, or the connectivity of the network in general.
The basics of Heartbeat are very simple. Within the Heartbeat cluster (see the figure “Heartbeat Architecture”), each machine sends a 'heartbeat' signal to the other hosts in the cluster. The other cluster nodes monitor this heartbeat. The heartbeat can be transmitted over many different systems, including shared network devices, dedicated network interfaces and serial connections. Failure to get a heartbeat from a node is treated as failure of the node. Although we do not know the reason for the failure (it could be an OS failure, a hardware failure in the server, or a failure in the network switch), it is safe to assume that if no heartbeat is produced there is a fault.
Figure: Heartbeat Architecture
In addition to checking the heartbeat from the server, the system can also check the connectivity (using ping) to another host on the network, such as the network router. This allows Heartbeat to detect a failure of communication between a server and the router (and therefore failure of the server, since it is no longer capable of providing the necessary service), even if the heartbeat between the servers in the clusters is working fine.
In the event of a failure, the resources on the failed host are disabled, and the resources on one of the replacement hosts are enabled instead. In addition, the virtual IP address for the cluster is redirected to the new host in place of the failed device.
When used with MySQL and DRBD, the MySQL data is replicated from the master to the slave using the DRBD device, but MySQL is only running on the master. When the master fails, the slave switches the DRBD devices to be primary, the file systems on those devices are mounted, and MySQL is started. The original master (if still available) has its resources disabled, which means shutting down MySQL and unmounting the file systems and switching the DRBD device to secondary.
Heartbeat Configuration
Heartbeat configuration requires three files located in /etc/ha.d. The ha.cf file contains the main heartbeat configuration, including the list of the nodes and times for identifying failures. The haresources file contains the list of resources to be managed within the cluster. The authkeys file contains the security information for the cluster.
The contents of these files should be identical on each host within the Heartbeat cluster. It is important that you keep these files in sync across all the hosts. Any changes in the information on one host should be copied to all the others.
An example of the ha.cf file is shown below:
logfacility local0
keepalive ms
deadtime 10
warntime 5
initdead 30
mcast bond0 2 0
mcast bond1 1 0
auto_failback off
node drbd1
node drbd2

The individual lines in the file can be identified as follows:
logfacility: Sets the logging, in this case setting the logging to use syslog.
keepalive: Defines how frequently the heartbeat signal is sent to the other hosts.
deadtime: The delay in seconds before other hosts in the cluster are considered 'dead' (failed).
warntime: The delay in seconds before a warning is written to the log that a node cannot be contacted.
initdead: The period in seconds to wait during system startup before the other host is considered to be down.
mcast: Defines a method for sending a heartbeat signal. In the above example, a multicast network address is being used over a bonded network device. If you have multiple clusters, then the multicast address for each cluster should be unique on your network. Other choices for the heartbeat exchange exist, including a serial connection.
If you are using multiple network interfaces (for example, one interface for your server connectivity and a secondary and/or bonded interface for your DRBD data exchange), then you should use both interfaces for your heartbeat connection. This decreases the chance of a transient failure causing an invalid failure event.
auto_failback: Sets whether the original (preferred) server should be enabled again if it becomes available. Switching this to on may cause problems if the preferred node went offline and then comes back online again. If the DRBD device has not been synced properly, or if the problem with the original server happens again, you may end up with two different datasets on the two servers, or with a continually changing environment where the two servers flip-flop as the preferred server reboots and then starts again.
node: Sets the nodes within the Heartbeat cluster group. There should be one node entry for each server.
An optional additional set of information provides the configuration for a ping test that will check the connectivity to another host. You should use this to ensure that you have connectivity on the public interface for your servers, so the ping test should be to a reliable host such as a router or switch. The additional lines specify the destination machine for the ping, which should be specified as an IP address rather than a host name; the command to run when a failure occurs; the authority for the failure; and the timeout before a nonresponse triggers a failure. A sample configuration is shown below:
ping
respawn hacluster /usr/lib64/heartbeat/ipfail
apiauth ipfail gid=haclient uid=hacluster
deadping 5

In the above example, the ipfail command, which is part of the Heartbeat solution, is called on a failure and 'fakes' a fault on the currently active server. You need to configure the user and group ID under which the command should be executed (using the apiauth directive). The failure will be triggered after 5 seconds.
Note
The deadping value must be less than the deadtime value.
The authkeys file holds the authorization information for the Heartbeat cluster. The authorization relies on a single unique 'key' that is used to verify the two machines in the Heartbeat cluster. The file is used only to confirm that the two machines are in the same cluster, and to ensure that multiple clusters can co-exist within the same network.
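The authkeys format is simple: an auth line selecting which numbered key to use, followed by the key definitions. A minimal sketch using SHA1 hashing; the key string is a placeholder you must replace, and the file should be readable by root only:

```
auth 1
1 sha1 ReplaceThisWithYourOwnSecret
```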
Using Heartbeat with MySQL and DRBD
To use Heartbeat in combination with MySQL you should be using DRBD (see “Using MySQL with DRBD”) or another solution that allows for sharing of the MySQL database files in the event of a system failure. In these examples, DRBD is used as the data sharing solution.
Heartbeat manages the configuration of different resources to manage the switching between two servers in the event of a failure. The resource configuration defines the individual services that should be brought up (or taken down) in the event of a failure.
The haresources file within /etc/ha.d defines the resources that should be managed, and each individual resource mentioned in this file in turn relates to a script located within /etc/ha.d/resource.d. The resource definition is defined all on one line:
drbd1 drbddisk Filesystem::/dev/drbd0::/drbd::ext3 mysql

The line is notionally split by whitespace. The first entry (drbd1) is the name of the preferred host; that is, the server that is normally responsible for handling the service. The last field is the virtual IP address or name that should be used to share the service. This is the IP address that should be used to connect to the MySQL server. It will automatically be allocated to the server that is active when Heartbeat starts.
The remaining fields between these two fields define the resources that should be managed. Each field should contain the name of the resource (and each name should refer to a script within /etc/ha.d/resource.d). In the event of a failure, these resources are started on the backup server by calling the corresponding script (with a single argument, start), in order from left to right. If there are additional arguments to the script, you can use a double colon to separate each additional argument.
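The double-colon convention can be seen by pulling a resource specification apart in the shell. The Filesystem entry below, with its device and mount point, is an illustrative assumption:

```shell
# Split "name::arg1::arg2::arg3" into the script name and its arguments.
entry="Filesystem::/dev/drbd0::/drbd::ext3"
script=${entry%%::*}                          # text before the first ::
rest=${entry#*::}                             # text after the first ::
args=$(printf '%s\n' "$rest" | sed 's/::/ /g')
echo "$script"   # Filesystem
echo "$args"     # /dev/drbd0 /drbd ext3
```

Heartbeat performs the equivalent split before invoking /etc/ha.d/resource.d/Filesystem with those arguments.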
In the above example, we manage the following resources:
drbddisk: The DRBD resource script; this will switch the DRBD disk on the secondary host into primary mode, making the device read/write.
Filesystem: Manages the Filesystem resource. In this case we have supplied additional arguments to specify the DRBD device, mount point, and file system type. When executed this should mount the specified file system.
mysql: Manages the MySQL instances and starts the MySQL server. You should copy the mysql.server script from the support-files directory of any MySQL release into the /etc/ha.d/resource.d directory.
If this file is not available in your distribution, you can use the following as the contents of the file:
#!/bin/bash
#
# This script is intended to be used as a resource script by heartbeat
#
# Mar by Monty Taylor
#
. /etc/ha.d/shellfuncs

case "$1" in
start)
    res=`/etc/init.d/mysql start`
    ret=$?
    ha_log $res
    exit $ret
    ;;
stop)
    res=`/etc/init.d/mysql stop`
    ret=$?
    ha_log $res
    exit $ret
    ;;
status)
    # The bracketed pattern matches mysqld without matching the grep itself
    if ps -ef | grep -q '[m]ysqld' ; then
        echo "running"
    else
        echo "stopped"
    fi
    ;;
*)
    echo "Usage: mysql {start|stop|status}"
    exit 1
    ;;
esac
exit 0
If you want to be notified of the failure by email, you can add another line to the haresources file with the address for warnings and the warning text:
MailTo::youremail@example.com::DRBDFailure

With the Heartbeat configuration in place, copy the ha.cf, haresources, and authkeys files across your primary and secondary servers to make sure that the configuration is identical. Then start the Heartbeat service, either by calling /etc/init.d/heartbeat start or by rebooting both primary and secondary servers.
You can test the configuration by running a manual failover. Connect to the primary node and run:
root-shell> /usr/lib64/heartbeat/hb_standby

This will cause the current node to relinquish its resources cleanly to the other node.
Using Heartbeat with DRBD and dopd
As a further extension to using DRBD and Heartbeat together, you can enable dopd. The dopd daemon handles the situation where a DRBD node is out of date compared to the master and prevents the slave from being promoted to master in the event of a failure. This prevents a situation where two machines that have both been masters end up with different data on the underlying device.
For example, imagine that you have a two-server DRBD setup, master and slave. If the DRBD connectivity between master and slave fails, then the slave would be out of sync with the master. If Heartbeat identifies a connectivity issue for the master and then switches over to the slave, the slave DRBD device will be promoted to the primary device, even though the data on the slave and the master is not in synchronization.
In this situation, with dopd enabled, the connectivity failure between the master and slave would be identified and the metadata on the slave would be set to Outdated. Heartbeat will then refuse to switch over to the slave even if the master failed. In a dual-host solution this would effectively render the cluster out of action, as there is no additional failover server. In an HA cluster with three or more servers, control would be passed to the slave that has an up-to-date version of the DRBD device data.
To enable dopd, you need to modify the Heartbeat configuration and specify dopd as part of the commands executed during the monitoring process. Add the following lines to your file:
respawn hacluster /usr/lib/heartbeat/dopd
apiauth dopd gid=haclient uid=hacluster

Make sure you make the same modification on both your primary and secondary nodes.
You will need to reload the Heartbeat configuration:
root-shell> /etc/init.d/heartbeat reload

You will also need to modify your DRBD configuration by configuring the outdate-peer handler. Add the configuration line into the common section of /etc/drbd.conf on both hosts. An example of the full common block is shown below:
common {
  handlers {
    outdate-peer "/usr/lib/heartbeat/drbd-peer-outdater";
  }
}

Finally, set the fencing option on your DRBD configured resources:
resource my-resource {
  disk {
    fencing resource-only;
  }
}

Now reload your DRBD configuration:
root-shell> drbdadm adjust all

You can test the system by unplugging your DRBD link and monitoring the output from /proc/drbd.
Dealing with System Level Errors
Because a kernel panic or oops may indicate a potential problem with your server, you should configure your server to remove itself from the cluster in the event of a problem. Typically on a kernel panic your system will automatically trigger a hard reboot. For a kernel oops a reboot may not happen automatically, but the issue that caused the oops may still lead to potential problems.
You can force a reboot by setting the kernel.panic and kernel.panic_on_oops parameters in the kernel control file /etc/sysctl.conf. For example:
kernel.panic_on_oops = 1
kernel.panic = 1

You can also set these parameters during runtime by using the sysctl command. You can either specify the parameters on the command line:
shell> sysctl -w kernel.panic=1

Or you can edit your /etc/sysctl.conf file and then reload the configuration information:
shell> sysctl -p

By setting both these parameters to a positive value (actually the number of seconds to wait before triggering the reboot), the system will reboot. Your second heartbeat node should then detect that the server is down and switch over to the failover host.
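A quick sanity check that both lines made it into the configuration can be scripted. Here the file content is inlined so the sketch is self-contained; on a real host you would read /etc/sysctl.conf instead:

```shell
# Count the kernel.panic* entries in a sysctl.conf-style buffer.
conf='kernel.panic_on_oops = 1
kernel.panic = 1'
count=$(printf '%s\n' "$conf" | grep -c '^kernel\.panic')
echo "$count"   # 2: both reboot triggers are present
```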