Install and Test the MySQL Cluster NDB API

9 Jan

Here are the steps I followed to get the C++ NDB API example code working.

Configure API Node in MySQL Cluster

  • In this step, we configure an API Node for the MySQL Cluster.
  • Run ALL these commands on the instance running the Mgmt Node
    • First, shut down the Mgmt and Data Nodes.
      • ndb_mgm -e shutdown
    • Edit the config.ini file and add an entry for the API node
      • gedit /opt/mysqlcluster/deploy/conf/config.ini
        ...
        [api]
        ...
    • For now, this single-line entry is sufficient; it allows API nodes from any host to connect. In
      the future, hostname sub-entries can be added for security purposes.
    • Restart the Management Node
      • ndb_mgmd -f /opt/mysqlcluster/deploy/conf/config.ini --initial --configdir=/opt/mysqlcluster/deploy/conf
    • Verify that the API node is active
      • ndb_mgm -e show
  • Log in to EACH instance running a Data Node and restart it:
    • ndbd -c localhost:1186 OR ndbd -c domU-12-31-39-04-D6-A3.compute-1.internal:1186
  • Ensure that the MySQL Node is running
    • mysqld --defaults-file=/opt/mysqlcluster/deploy/conf/my.cnf --user=root &
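The single [api] slot above can later be locked down by giving each permitted API node its own entry. A hedged sketch of what that could look like in config.ini (the hostnames are just the example instance names used elsewhere in this post):

```
[api]
hostname=domU-12-31-39-04-D6-A3.compute-1.internal

[api]
hostname=ip-10-72-50-247.ec2.internal
```

Each [api] section reserves one connection slot; an [api] section with no hostname leaves the slot open to any host.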

Setup NDB API example directory

  • mkdir -p /opt/ndbapi/example
  • cd /opt/ndbapi/example
  • gedit example.cpp
  • Copy/Paste the following code to the C++ file example.cpp:
    • NOTE: IF RUNNING ON EC2 REPLACE CONNECTSTR below with hostname of the Management Node
      #include <stdio.h>
      #include <stdlib.h>
      
      #include <NdbApi.hpp>
      
      #define CONNECTSTR "localhost"
      
      Ndb_cluster_connection* example_init()
      {
        Ndb_cluster_connection* conn;
      
        // initialise MySQL and Ndb client libraries
        if( ndb_init() )
        {
          exit(EXIT_FAILURE);
        }
      
        // prepare connection to cluster
        conn = new Ndb_cluster_connection(CONNECTSTR);
      
        // initiate connection
        if( conn->connect(4, 5, 1) )
        {
          fprintf(stderr, "Unable to connect to cluster within 30 seconds.\n");
          exit(EXIT_FAILURE);
        }
      
        // wait for data (ndbd) nodes
        if(conn->wait_until_ready(30, 0) < 0)
        {
          fprintf(stderr, "Cluster was not ready within 30 seconds.\n");
          exit(EXIT_FAILURE);
        }
      
        return conn;
      }
      
      void example_end(Ndb_cluster_connection* conn)
      {
        // terminate connection
        delete conn;
      
        // shut down MySQL and Ndb client libraries
        ndb_end(2);
      }
      
      int main(int argc, char** argv)
      {
        Ndb_cluster_connection* conn;
      
        conn = example_init();
      
        printf("Connection established.\n");
      
        //example_end(conn); 
        return EXIT_SUCCESS;
      }
  • Note the #include <NdbApi.hpp>: this refers to an NDB API header file that all client programs must include.
  • The Ndb_cluster_connection class and other classes referred to in this example are available in either a static library (libndbclient.a) or
    a shared library (libndbclient.so)

Resolving NDB API dependencies

NDB API Library Dependencies

  • Compiling C++ programs that use the NDB API requires either the libndbclient.so (dynamic) or libndbclient.a (static) library.
  • Neither of these is available in the MySQL Cluster 7.2.1 Beta Binary download
    • There is a libndbclient_static.a file in the download. It’s not clear if this is a bug or intentional
[root@myhost ndb-bindings-0.7.1]# find / -name *ndbclient* -print

--- MYSQL CLUSTER 7.1.15 GA, BINARY DOWNLOAD
/opt/mysqlcluster/home/mysql-cluster-gpl-7.1.15a-linux-i686-glibc23/lib/libndbclient.so
/opt/mysqlcluster/home/mysql-cluster-gpl-7.1.15a-linux-i686-glibc23/lib/libndbclient.a
/opt/mysqlcluster/home/mysql-cluster-gpl-7.1.15a-linux-i686-glibc23/lib/libndbclient.so.4.0.0
/opt/mysqlcluster/home/mysql-cluster-gpl-7.1.15a-linux-i686-glibc23/lib/libndbclient.so.4
/opt/mysqlcluster/home/mysql-cluster-gpl-7.1.15a-linux-i686-glibc23/lib/libndbclient.la

-- MYSQL CLUSTER 7.2.1 BETA, BINARY DOWNLOAD
/opt/mysqlcluster/home/mysql-cluster-gpl-7.2.1-linux2.6-i686/lib/libndbclient_static.a
  • To verify, I compared the contents of the two ndbclient libraries:
    • ar -vt /opt/mysqlcluster/home/mysql-cluster-gpl-7.1.15a-linux-i686-glibc23/lib/libndbclient.a > file1
    • ar -vt /opt/mysqlcluster/home/mysql-cluster-gpl-7.2.1-linux2.6-i686/lib/libndbclient_static.a > file2
    • diff file1 file2
      • There are numerous classes missing in the libndbclient_static.a version.

NDB API Header Files

  • Compiling C++ programs that use the NDB API also requires the NDB API header files
  • Again, the 7.2.1 BETA BINARY DOWNLOAD doesn’t have these header files
--- MYSQL CLUSTER 7.1.15 GA, BINARY DOWNLOAD
[root@myhost /]# ls /opt/mysqlcluster/home/mysql-cluster-gpl-7.1.15a-linux-i686-glibc23/include/storage/ndb/ndbapi/
NdbApi.hpp       ndb_cluster_connection.hpp  NdbError.hpp           NdbIndexOperation.hpp      NdbInterpretedCode.hpp  NdbPool.hpp      NdbScanFilter.hpp
ndbapi_limits.h  NdbDictionary.hpp           NdbEventOperation.hpp  NdbIndexScanOperation.hpp  NdbOperation.hpp        NdbRecAttr.hpp   NdbScanOperation.hpp
NdbBlob.hpp      ndberror.h                  Ndb.hpp                NdbIndexStat.hpp           ndb_opt_defaults.h      NdbReceiver.hpp  NdbTransaction.hpp

-- MYSQL CLUSTER 7.2.1 BETA, BINARY DOWNLOAD
[root@myhost /]# ls /opt/mysqlcluster/home/mysql-cluster-gpl-7.2.1-linux2.6-i686/include/storage/ndb/ndbapi/
ls: /opt/mysqlcluster/home/mysql-cluster-gpl-7.2.1-linux2.6-i686/include/storage/ndb/ndbapi/: No such file or directory

Resolving the Library/Header File Issues

  • To resolve the issues mentioned above we download the source code distribution of MySQL Cluster 7.2.1.
  • mkdir -p /opt/ndbapi
  • cd /opt/ndbapi
  • Download the Source code distribution:
  • tar xvf mysql-cluster-gpl-7.2.1.tar.gz
  • cd mysql-cluster-gpl-7.2.1
  • Install CMAKE
    • yum install cmake
    • Note: Previous releases of MySQL Cluster used the GNU Autotools (configure/make/make install). Starting with this version, CMake is used instead of configure.
  • Generate the Makefile, compile and install
  • cmake .
  • make
  • make install
    • This will install the MySQL Cluster software in /usr/local/mysql
  • Note that compiling the source generates not only the libndbclient_static.a file but also the libndbclient.so file in the src
    directory, as shown below.
-- MYSQL CLUSTER 7.2.1 BETA, SOURCE DOWNLOAD, COMPILED and INSTALLED

/usr/local/mysql/lib/libndbclient_static.a

-- Still no header files in installed location
ls /usr/local/mysql/include/storage/ndb/ndbapi/
ls: /usr/local/mysql/include/storage/ndb/ndbapi/: No such file or directory

-- But shared library is generated and header files are in the source code tree

ls /opt/ndbapi/mysql-cluster-gpl-7.2.1/storage/ndb/src/libndbclient.so
ls /opt/ndbapi/mysql-cluster-gpl-7.2.1/storage/ndb/src/libndbclient_static.a

[root@myhost mysql-cluster-gpl-7.2.1]# ls /opt/ndbapi/mysql-cluster-gpl-7.2.1/storage/ndb/include/ndbapi
NdbApi.hpp       ndb_cluster_connection.hpp  NdbError.hpp           NdbIndexOperation.hpp      NdbInterpretedCode.hpp  NdbPool.hpp      NdbScanFilter.hpp
ndbapi_limits.h  NdbDictionary.hpp           NdbEventOperation.hpp  NdbIndexScanOperation.hpp  NdbOperation.hpp        NdbRecAttr.hpp   NdbScanOperation.hpp
NdbBlob.hpp      ndberror.h                  Ndb.hpp                NdbIndexStat.hpp           ndb_opt_defaults.h      NdbReceiver.hpp  NdbTransaction.hpp

Compile example.cpp

  • In the scenarios below:
    • -I options point to the header files required for compilation
    • -L options point to the directories that contain the libraries required for linking
      • There are 2 locations specified in the examples below. One contains the libndbclient library and the second contains other MySQL libraries required for
        linking.
    • -l options name the libraries, found in the directories set up by -L, to link against
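The long gcc invocations below are easier to audit when broken into the three option families. A hedged sketch that only prints the resulting command (it assumes the source tree at /opt/ndbapi/mysql-cluster-gpl-7.2.1 and the install prefix /usr/local/mysql used in this post):

```shell
# Build up the gcc flags from the three option families described above.
SRC=/opt/ndbapi/mysql-cluster-gpl-7.2.1
INCLUDES="-I$SRC/include -I$SRC/storage/ndb/include \
 -I$SRC/storage/ndb/include/ndbapi -I$SRC/storage/ndb/include/mgmapi"
LIBDIRS="-L/usr/local/mysql/lib -L$SRC/storage/ndb/src"
LIBS="-lmysqlclient_r -lpthread -lm -lrt -ldl -lndbclient"

# Print the full command (the successful shared-library compile below):
echo gcc $INCLUDES example.cpp -o example $LIBDIRS $LIBS
```

Swapping -lndbclient for -lndbclient_static reproduces the failing static-link variant.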

Scenario 1. Compile against libndbclient_static.a, FAILS

  • cd /opt/ndbapi/example
  • Run the following gcc command to compile
    gcc -I/opt/ndbapi/mysql-cluster-gpl-7.2.1/include/  -I/opt/ndbapi/mysql-cluster-gpl-7.2.1/storage/ndb/include -I/opt/ndbapi/mysql-cluster-gpl-7.2.1/storage/ndb/include/ndbapi -I/opt/ndbapi/mysql-cluster-gpl-7.2.1/storage/ndb/include/mgmapi  example.cpp -o example  -L/usr/local/mysql/lib -L/opt/ndbapi/mysql-cluster-gpl-7.2.1/storage/ndb/src/ -lmysqlclient_r -lpthread -lm -lrt -ldl -lndbclient_static

Scenario 2. Compile against libndbclient.so, SUCCESS

  • cd /opt/ndbapi/example
  • Run the following gcc command to compile
    gcc -I/opt/ndbapi/mysql-cluster-gpl-7.2.1/include/  -I/opt/ndbapi/mysql-cluster-gpl-7.2.1/storage/ndb/include -I/opt/ndbapi/mysql-cluster-gpl-7.2.1/storage/ndb/include/ndbapi -I/opt/ndbapi/mysql-cluster-gpl-7.2.1/storage/ndb/include/mgmapi  example.cpp -o example  -L/usr/local/mysql/lib -L/opt/ndbapi/mysql-cluster-gpl-7.2.1/storage/ndb/src/ -lmysqlclient_r -lpthread -lm -lrt -ldl -lndbclient

Copy libndbclient.so to standard location

  • Copy the shared object to a standard location so it is available at both compile time and runtime
    • cp /opt/ndbapi/mysql-cluster-gpl-7.2.1/storage/ndb/src/libndbclient.so /usr/lib
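An alternative that avoids copying into /usr/lib is to point the runtime loader at the build tree instead. A minimal sketch, assuming the same source-tree path used above:

```shell
# Alternative: leave libndbclient.so in the source tree and extend the
# loader search path for this shell session instead of copying to /usr/lib.
NDB_SRC_LIB=/opt/ndbapi/mysql-cluster-gpl-7.2.1/storage/ndb/src
export LD_LIBRARY_PATH=$NDB_SRC_LIB${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
echo "$LD_LIBRARY_PATH"
# ./example now resolves libndbclient.so without the copy step
```

The export only lasts for the current shell; copying to /usr/lib (or adding the directory to /etc/ld.so.conf and running ldconfig) is the persistent option.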

Test example.cpp

  • cd /opt/ndbapi/example
  • ./example
    • You should see the message “Connection established.”.

ndbapi_simple.cpp: Full Insert/Update/Delete/Read Example

  • The ndbapi_simple.cpp code was copied from the MySQL Cluster API site at http://dev.mysql.com/doc/ndbapi/en/ndb-examples-synchronous-transactions.html
  • In this example:
    • A database called ndb_examples is created
    • A table called api_simple is created in that database with 2 columns, ATTR1 and ATTR2, and ENGINE=NDB
    • 5 transactions are started, together inserting 10 rows (2 per transaction)
    • 5 transactions are started, each updating ATTR2 of one row (adding 10)
    • 1 transaction is started which deletes the row with ATTR1=3
    • Multiple reads are performed on the api_simple table
  • cd /opt/ndbapi/example
  • gedit ndbapi_simple.cpp
  • Copy/Paste the following code
    /*
     *  ndbapi_simple.cpp: Using synchronous transactions in NDB API
     *
     *  Correct output from this program is:
     *
     *  ATTR1 ATTR2
     *    0    10
     *    1     1
     *    2    12
     *  Detected that deleted tuple doesn't exist!
     *    4    14
     *    5     5
     *    6    16
     *    7     7
     *    8    18
     *    9     9
     *
     */
    
    #include <mysql.h>
    #include <mysqld_error.h>
    #include <NdbApi.hpp>
    // Used for cout
    #include <stdio.h>
    #include <iostream>
    
    static void run_application(MYSQL &, Ndb_cluster_connection &);
    
    #define PRINT_ERROR(code,msg) \
      std::cout << "Error in " << __FILE__ << ", line: " << __LINE__ \
                << ", code: " << code \
                << ", msg: " << msg << "." << std::endl
    #define MYSQLERROR(mysql) { \
      PRINT_ERROR(mysql_errno(&mysql),mysql_error(&mysql)); \
      exit(-1); }
    #define APIERROR(error) { \
      PRINT_ERROR(error.code,error.message); \
      exit(-1); }
    
    int main(int argc, char** argv)
    {
      if (argc != 4)
      {
        std::cout << "Arguments are <mysqld hostname> <mysqld port> <mysql cluster connect_string >.\n";
        exit(-1);
      }
      // ndb_init must be called first
      ndb_init();
    
      // connect to mysql server and cluster and run application
      {
    
        char * mysqld  = argv[1];
        int mysqld_port = atoi(argv[2]);
        const char *connectstring = argv[3];
    
        // Object representing the cluster
        Ndb_cluster_connection cluster_connection(connectstring);
    
        // Connect to cluster management server (ndb_mgmd)
        if (cluster_connection.connect(4 /* retries               */,
                                       5 /* delay between retries */,
                                       1 /* verbose               */))
        {
          std::cout << "Cluster management server was not ready within 30 secs.\n";
          exit(-1);
        }
    
        // Optionally connect and wait for the storage nodes (ndbd's)
        if (cluster_connection.wait_until_ready(30,0) < 0)
        {
          std::cout << "Cluster was not ready within 30 secs.\n";
          exit(-1);
        }
    
        // connect to mysql server
        MYSQL mysql;
        if ( !mysql_init(&mysql) ) {
          std::cout << "mysql_init failed\n";
          exit(-1);
        }
        if ( !mysql_real_connect(&mysql, mysqld, "root", NULL, NULL, mysqld_port, NULL, 0) )
           MYSQLERROR(mysql);
    
        //if (mysql_real_connect(&mysql, "127.0.0.1", "root", NULL, NULL, 5000, NULL, 0) == NULL) {
        //  printf("Error %u: %s\n", mysql_errno(&mysql), mysql_error(&mysql));
        //  exit(1);
        //}
        // run the application code
        run_application(mysql, cluster_connection);
      }
    
      ndb_end(0);
    
      return 0;
    }
    
    static void create_table(MYSQL &);
    static void do_insert(Ndb &);
    static void do_update(Ndb &);
    static void do_delete(Ndb &);
    static void do_read(Ndb &);
    
    static void run_application(MYSQL &mysql,
           Ndb_cluster_connection &cluster_connection)
    {
      /********************************************
       * Connect to database via mysql-c          *
       ********************************************/
      mysql_query(&mysql, "CREATE DATABASE ndb_examples");
      if (mysql_query(&mysql, "USE ndb_examples") != 0) MYSQLERROR(mysql);
      create_table(mysql);
    
      /********************************************
       * Connect to database via NdbApi           *
       ********************************************/
      // Object representing the database
      Ndb myNdb( &cluster_connection, "ndb_examples" );
      if (myNdb.init()) APIERROR(myNdb.getNdbError());
    
      /*
       * Do different operations on database
       */
      do_insert(myNdb);
      do_update(myNdb);
      do_delete(myNdb);
      do_read(myNdb);
    }
    
    /*********************************************************
     * Create a table named api_simple if it does not exist *
     *********************************************************/
    static void create_table(MYSQL &mysql)
    {
      while (mysql_query(&mysql,
        "CREATE TABLE"
        " api_simple"
        " (ATTR1 INT UNSIGNED NOT NULL PRIMARY KEY,"
        " ATTR2 INT UNSIGNED NOT NULL)"
        " ENGINE=NDB"))
      {
        if (mysql_errno(&mysql) == ER_TABLE_EXISTS_ERROR)
        {
          std::cout << "MySQL Cluster already has example table: api_simple. "
          << "Dropping it..." << std::endl;
          mysql_query(&mysql, "DROP TABLE api_simple");
        }
        else MYSQLERROR(mysql);
      }
    }
    
    /**************************************************************************
     * Using 5 transactions, insert 10 tuples in table: (0,0),(1,1),...,(9,9) *
     **************************************************************************/
    static void do_insert(Ndb &myNdb)
    {
      const NdbDictionary::Dictionary* myDict= myNdb.getDictionary();
      const NdbDictionary::Table *myTable= myDict->getTable("api_simple");
    
      if (myTable == NULL)
        APIERROR(myDict->getNdbError());
    
      for (int i = 0; i < 5; i++) {
        NdbTransaction *myTransaction= myNdb.startTransaction();
        if (myTransaction == NULL) APIERROR(myNdb.getNdbError());
    
        NdbOperation *myOperation= myTransaction->getNdbOperation(myTable);
        if (myOperation == NULL) APIERROR(myTransaction->getNdbError());
    
        myOperation->insertTuple();
        myOperation->equal("ATTR1", i);
        myOperation->setValue("ATTR2", i);
    
        myOperation= myTransaction->getNdbOperation(myTable);
        if (myOperation == NULL) APIERROR(myTransaction->getNdbError());
    
        myOperation->insertTuple();
        myOperation->equal("ATTR1", i+5);
        myOperation->setValue("ATTR2", i+5);
    
        if (myTransaction->execute( NdbTransaction::Commit ) == -1)
          APIERROR(myTransaction->getNdbError());
    
        myNdb.closeTransaction(myTransaction);
      }
    }
    
    /*****************************************************************
     * Update the second attribute in half of the tuples (adding 10) *
     *****************************************************************/
    static void do_update(Ndb &myNdb)
    {
      const NdbDictionary::Dictionary* myDict= myNdb.getDictionary();
      const NdbDictionary::Table *myTable= myDict->getTable("api_simple");
    
      if (myTable == NULL)
        APIERROR(myDict->getNdbError());
    
      for (int i = 0; i < 10; i+=2) {
        NdbTransaction *myTransaction= myNdb.startTransaction();
        if (myTransaction == NULL) APIERROR(myNdb.getNdbError());
    
        NdbOperation *myOperation= myTransaction->getNdbOperation(myTable);
        if (myOperation == NULL) APIERROR(myTransaction->getNdbError());
    
        myOperation->updateTuple();
        myOperation->equal( "ATTR1", i );
        myOperation->setValue( "ATTR2", i+10);
    
        if( myTransaction->execute( NdbTransaction::Commit ) == -1 )
          APIERROR(myTransaction->getNdbError());
    
        myNdb.closeTransaction(myTransaction);
      }
    }
    
    /*************************************************
     * Delete one tuple (the one with primary key 3) *
     *************************************************/
    static void do_delete(Ndb &myNdb)
    {
      const NdbDictionary::Dictionary* myDict= myNdb.getDictionary();
      const NdbDictionary::Table *myTable= myDict->getTable("api_simple");
    
      if (myTable == NULL)
        APIERROR(myDict->getNdbError());
    
      NdbTransaction *myTransaction= myNdb.startTransaction();
      if (myTransaction == NULL) APIERROR(myNdb.getNdbError());
    
      NdbOperation *myOperation= myTransaction->getNdbOperation(myTable);
      if (myOperation == NULL) APIERROR(myTransaction->getNdbError());
    
      myOperation->deleteTuple();
      myOperation->equal( "ATTR1", 3 );
    
      if (myTransaction->execute(NdbTransaction::Commit) == -1)
        APIERROR(myTransaction->getNdbError());
    
      myNdb.closeTransaction(myTransaction);
    }
    
    /*****************************
     * Read and print all tuples *
     *****************************/
    static void do_read(Ndb &myNdb)
    {
      const NdbDictionary::Dictionary* myDict= myNdb.getDictionary();
      const NdbDictionary::Table *myTable= myDict->getTable("api_simple");
    
      if (myTable == NULL)
        APIERROR(myDict->getNdbError());
    
      std::cout << "ATTR1 ATTR2" << std::endl;
    
      for (int i = 0; i < 10; i++) {
        NdbTransaction *myTransaction= myNdb.startTransaction();
        if (myTransaction == NULL) APIERROR(myNdb.getNdbError());
    
        NdbOperation *myOperation= myTransaction->getNdbOperation(myTable);
        if (myOperation == NULL) APIERROR(myTransaction->getNdbError());
    
        myOperation->readTuple(NdbOperation::LM_Read);
        myOperation->equal("ATTR1", i);
    
        NdbRecAttr *myRecAttr= myOperation->getValue("ATTR2", NULL);
        if (myRecAttr == NULL) APIERROR(myTransaction->getNdbError());
    
        if(myTransaction->execute( NdbTransaction::Commit ) == -1)
          APIERROR(myTransaction->getNdbError());
    
        if (myTransaction->getNdbError().classification == NdbError::NoDataFound)
          if (i == 3)
            std::cout << "Detected that deleted tuple doesn't exist!" << std::endl;
          else
            APIERROR(myTransaction->getNdbError());
    
        if (i != 3) {
          printf(" %2d %2d\n", i, myRecAttr->u_32_value());
        }
        myNdb.closeTransaction(myTransaction);
      }
    }
  • Compile the ndbapi_simple.cpp file
    gcc -I/opt/ndbapi/mysql-cluster-gpl-7.2.1/include/  -I/opt/ndbapi/mysql-cluster-gpl-7.2.1/storage/ndb/include -I/opt/ndbapi/mysql-cluster-gpl-7.2.1/storage/ndb/include/ndbapi -I/opt/ndbapi/mysql-cluster-gpl-7.2.1/storage/ndb/include/mgmapi  ndbapi_simple.cpp -o ndbapi_simple  -L/usr/local/mysql/lib -L/opt/ndbapi/mysql-cluster-gpl-7.2.1/storage/ndb/src/ -lmysqlclient_r -lpthread -lm -lrt -ldl -lndbclient
  • Run the program
    • ./ndbapi_simple 127.0.0.1 5000 localhost
      • 127.0.0.1 is the IP address of the MySQL Node
      • 5000 is the port of the MySQL Node
      • localhost is the hostname of the Mgmt Node

More NDB API Examples

References

Installing MySQL Cluster on EC2

4 Jan

Here is how I installed MySQL Cluster on Amazon EC2. Note: this is a basic setup that I used to test MySQL Cluster. A real-world production setup would probably involve a more complex variation.

Create 3 Amazon EC2 instances.

  • This post assumes 3 Amazon EC2 t1.micro instances
    • 1 t1.micro for SQL Node and Mgmt Node
    • 1 t1.micro for Data Node
    • 1 t1.micro for Data Node

On the SQL/Mgmt Node, execute these steps:

Common Steps

  • Follow all steps in Common Steps on all Nodes section below.

Create the Deployment Directory and Setup Config Files

  • mkdir -p /opt/mysqlcluster/deploy
  • cd /opt/mysqlcluster/deploy
  • mkdir conf
  • mkdir mysqld_data
  • mkdir ndb_data
  • cd conf
  • gedit my.cnf and enter the following
    [mysqld]
    ndbcluster
    datadir=/opt/mysqlcluster/deploy/mysqld_data
    basedir=/opt/mysqlcluster/home/mysqlc
    port=3306


  • gedit config.ini and enter the following
    • NOTE: REPLACE the hostname entries below with names of the SQL/MGMT Node and Data Nodes.
      [ndb_mgmd]
      hostname=domU-12-31-39-04-D6-A3.compute-1.internal
      datadir=/opt/mysqlcluster/deploy/ndb_data
      nodeid=1
      
      [ndbd default]
      noofreplicas=2
      datadir=/opt/mysqlcluster/deploy/ndb_data
      
      [ndbd]
      hostname=ip-10-72-50-247.ec2.internal
      nodeid=3
      
      [ndbd]
      hostname=ip-10-194-139-246.ec2.internal
      nodeid=4
      
      [mysqld]
      nodeid=50


Initialize the Database

  • Just like any other MySQL Server, the mysqld process requires a ‘mysql’ database to be created and populated with essential system data.
  • cd /opt/mysqlcluster/home/mysqlc
  • scripts/mysql_install_db --no-defaults --datadir=/opt/mysqlcluster/deploy/mysqld_data

START MANAGEMENT NODE

  • ndb_mgmd -f /opt/mysqlcluster/deploy/conf/config.ini --initial --configdir=/opt/mysqlcluster/deploy/conf
    • MGMT SERVER RUNS LISTENING ON PORT 1186

On Data Node 1 host:

Common Steps

  • Follow all steps in Common Steps on all Nodes below.

Create NDB DATA directory

  • mkdir -p /opt/mysqlcluster/deploy/ndb_data

START DATA NODE

  • Start up Data Node passing in Private DNS name of Mgmt Node
    • ndbd -c domU-12-31-39-04-D6-A3.compute-1.internal:1186

On Data Node 2 host:

  • Repeat Steps listed for Data Node 1 above.

Back On SQL/Mgmt Node:

CHECK STATUS OF MGMT/DATA NODES

  • ndb_mgm -e show

START SQL NODE

  • mysqld --defaults-file=/opt/mysqlcluster/deploy/conf/my.cnf --user=root &
  • WAIT FOR THE FOLLOWING TO BE DISPLAYED BEFORE PROCEEDING:
    111104 12:01:50 [Note] Plugin 'FEDERATED' is disabled.
    111104 12:02:25 [Warning] NDB: server id set to zero - changes logged to bin log with server id zero will be logged with another server id by slave mysqlds
    111104 12:02:25 [Note] Starting Cluster Binlog Thread
    111104 12:02:25 InnoDB: The InnoDB memory heap is disabled
    111104 12:02:25 InnoDB: Mutexes and rw_locks use GCC atomic builtins
    111104 12:02:25 InnoDB: Compressed tables use zlib 1.2.3
    111104 12:02:25 InnoDB: Using Linux native AIO
    111104 12:02:25 InnoDB: Initializing buffer pool, size = 128.0M
    111104 12:02:25 InnoDB: Completed initialization of buffer pool
    111104 12:02:25 InnoDB: highest supported file format is Barracuda.
    InnoDB: 127 rollback segment(s) active.
    111104 12:02:26  InnoDB: Waiting for the background threads to start
    111104 12:02:27 InnoDB: 1.1.8 started; log sequence number 44233
    111104 12:02:27 [Note] Event Scheduler: Loaded 0 events
    111104 12:02:27 [Note] mysqld: ready for connections.
    Version: '5.5.15-ndb-7.2.1-gpl'  socket: '/tmp/mysql.sock'  port: 5000  MySQL Cluster Community Server (GPL)
    111104 12:02:57 [Warning] NDB : Tables not available after 30 seconds.  Consider increasing --ndb-wait-setup value
    111104 12:03:11 [Note] NDB: NodeID is 50, management server 'localhost:1186'
    111104 12:03:12 [Note] NDB Binlog: DISCOVER TABLE Event: REPL$mysql/ndb_schema
    111104 12:03:12 [Note] NDB Binlog: logging ./mysql/ndb_schema (UPDATED,USE_WRITE)
    111104 12:03:12 [Note] NDB Binlog: DISCOVER TABLE Event: REPL$mysql/ndb_apply_status
    111104 12:03:12 [Note] NDB Binlog: logging ./mysql/ndb_apply_status (UPDATED,USE_WRITE)
    2011-11-04 12:03:12 [NdbApi] INFO     -- Flushing incomplete GCI:s < 628/6
    2011-11-04 12:03:12 [NdbApi] INFO     -- Flushing incomplete GCI:s < 628/6
    111104 12:03:12 [Note] NDB Binlog: starting log at epoch 628/6
    111104 12:03:12 [Note] NDB Binlog: ndb tables writable


CHECK STATUS OF MGMT/DATA/SQL NODES

  • ndb_mgm -e show

SECURE MYSQL INSTALLATION

  • Note: the temporary password for the “root” and “app” users is testpwd. This will change once we are in production
  • mysql_secure_installation
    [root@domU-12-31-39-04-D6-A3 ndb_data]# mysql_secure_installation
    
    NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MySQL
          SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP CAREFULLY!
    
    In order to log into MySQL to secure it, we'll need the current
    password for the root user.  If you've just installed MySQL, and
    you haven't set the root password yet, the password will be blank,
    so you should just press enter here.
    
    Enter current password for root (enter for none): 
    OK, successfully used password, moving on...
    
    Setting the root password ensures that nobody can log into the MySQL
    root user without the proper authorisation.
    
    You already have a root password set, so you can safely answer 'n'.
    
    Change the root password? [Y/n] n
     ... skipping.
    
    By default, a MySQL installation has an anonymous user, allowing anyone
    to log into MySQL without having to have a user account created for
    them.  This is intended only for testing, and to make the installation
    go a bit smoother.  You should remove them before moving into a
    production environment.
    
    Remove anonymous users? [Y/n] 
     ... Success!
    
    Normally, root should only be allowed to connect from 'localhost'.  This
    ensures that someone cannot guess at the root password from the network.
    
    Disallow root login remotely? [Y/n] y
     ... Success!
    
    By default, MySQL comes with a database named 'test' that anyone can
    access.  This is also intended only for testing, and should be removed
    before moving into a production environment.
    
    Remove test database and access to it? [Y/n] y
     - Dropping test database...
    ERROR 1008 (HY000) at line 1: Can't drop database 'test'; database doesn't exist
     ... Failed!  Not critical, keep moving...
     - Removing privileges on test database...
     ... Success!
    
    Reloading the privilege tables will ensure that all changes made so far
    will take effect immediately.
    
    Reload privilege tables now? [Y/n] y
     ... Success!
    
    Cleaning up...
    
    All done!  If you've completed all of the above steps, your MySQL
    installation should now be secure.
    
    Thanks for using MySQL!


Setup Remote Users

[root@domU-12-31-39-04-D6-A3 ndb_data]# mysql -h 127.0.0.1 -u root -p
Enter password: 
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 21
Server version: 5.5.15-ndb-7.2.1-gpl MySQL Cluster Community Server (GPL)

Copyright (c) 2000, 2010, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> CREATE USER 'myapp'@'%' IDENTIFIED BY 'testpwd';
Query OK, 0 rows affected (0.00 sec)

mysql> GRANT ALL PRIVILEGES ON * . * TO 'myapp'@'%' IDENTIFIED BY 'testpwd' WITH GRANT OPTION MAX_QUERIES_PER_HOUR 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0 MAX_USER_CONNECTIONS 0 ;
Query OK, 0 rows affected (0.01 sec)

TESTS on SQL/Mgmt Node:

  • mysql -h 127.0.0.1 -u myapp -p
    mysql> create database clusterdb;use clusterdb;
    mysql> create table simples (id int not null primary key) engine=ndb;
    mysql> insert into simples values (1),(2),(3),(4);
    mysql> select * from simples;


TESTS from a Remote MySQL client:

  • /opt/mysqlcluster/home/mysqlc/bin/mysql -h ec2-50-17-110-253.compute-1.amazonaws.com -u w -p
    mysql> use clusterdb;
    mysql> select * from simples;


Shutdown Services on SQL/Mgmt Node

  • SHUTDOWN MYSQLD
    • mysqladmin -u myapp -h 127.0.0.1 -p shutdown
  • SHUTDOWN MANAGEMENT AND DATA NODES:
    • ndb_mgm -e shutdown

Common Steps on all Nodes

Uninstall any existing Mysql packages

  • service mysqld stop
  • yum remove mysql-server mysql mysql-devel

Download and Extract MySQL Cluster Binaries

Setup Executable Path Globally

  • echo 'export MYSQLC_HOME=/opt/mysqlcluster/home/mysqlc' > /etc/profile.d/mysqlc.sh
  • echo 'export PATH=$MYSQLC_HOME/bin:$PATH' >> /etc/profile.d/mysqlc.sh
  • source /etc/profile.d/mysqlc.sh
    • This will set the environment variables for the current shell

References

Configuring HandlerSocket plugin for MySQL

21 Dec

This post assumes you are installing the HandlerSocket plugin on MySQL Cluster, but the steps should be applicable to a non-cluster MySQL installation as well.

Download and Extract MySQL Cluster Source Code

  • IMPORTANT: If you are installing HandlerSocket plugin on MySQL Cluster, download the source code to the instance that contains the SQL/Mgmt Server node.

Download, Extract and Build HandlerSocket Source Code

  • cd /opt/mysqlcluster/src
  • wget --no-check-certificate https://github.com/ahiguti/HandlerSocket-Plugin-for-MySQL/tarball/master
  • tar xvf master
  • ln -s ahiguti-HandlerSocket-Plugin-for-MySQL-0986a75 handler_src
  • cd handler_src
  • ./autogen.sh
  • ./configure --with-mysql-source=/opt/mysqlcluster/src/mysqlc_src --with-mysql-bindir=/opt/mysqlcluster/home/mysqlc/bin --with-mysql-plugindir=/opt/mysqlcluster/home/mysqlc/lib/plugin
    • IF YOU GET THE FOLLOWING ERROR:
      ...
      checking mysql source... yes: Using /opt/mysqlcluster/src/mysqlc_src, version 5.5.15
      checking mysql binary... yes: Using /opt/mysqlcluster/home/mysqlc/bin/mysql_config, version 5.5.15-ndb-7.2.1
      configure: error: MySQL source version does not match MySQL binary version
      ...
      • According to the install docs https://github.com/ahiguti/HandlerSocket-Plugin-for-MySQL/blob/master/docs-en/installation.en.txt: --with-mysql-source refers to the top of the MySQL source directory (which
        contains the VERSION file or the configure.in file)
      • The VERSION file contains:
        MYSQL_VERSION_MAJOR=5
        MYSQL_VERSION_MINOR=5
        MYSQL_VERSION_PATCH=15
        MYSQL_VERSION_EXTRA=-ndb-7.2.1
      • But looking at the generated configure script, only the first 3 entries are compared against the output of mysql_config --version, which returns the EXTRA information in addition to those 3 entries. This causes the match to fail
    • AS A WORKAROUND we do the following:
  • cd /opt/mysqlcluster/src/mysqlc_src
  • cp VERSION VERSION.bkup
  • gedit VERSION and append the EXTRA value to the PATCH value as follows:
    MYSQL_VERSION_MAJOR=5
    MYSQL_VERSION_MINOR=5
    MYSQL_VERSION_PATCH=15-ndb-7.2.1
    MYSQL_VERSION_EXTRA=-ndb-7.2.1
    • Re-run the configure command above
  • Compile and install HandlerSocket:
    • make && make install
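The version mismatch that breaks configure can be reproduced locally with a few lines of shell. This is a sketch, not the actual configure logic: the temporary directory and the binary version string are illustrative assumptions, but the comparison mirrors what the script effectively does.

```shell
#!/bin/sh
# Sketch of the version comparison HandlerSocket's configure performs.
# Uses a throwaway directory; values are illustrative.
WORK=$(mktemp -d)

# A VERSION file like the one shipped with the MySQL Cluster source:
cat > "$WORK/VERSION" <<'EOF'
MYSQL_VERSION_MAJOR=5
MYSQL_VERSION_MINOR=5
MYSQL_VERSION_PATCH=15
MYSQL_VERSION_EXTRA=-ndb-7.2.1
EOF

# configure builds "5.5.15" from the first three entries...
. "$WORK/VERSION"
src_version="$MYSQL_VERSION_MAJOR.$MYSQL_VERSION_MINOR.$MYSQL_VERSION_PATCH"

# ...but mysql_config --version reports the EXTRA suffix too:
bin_version="5.5.15-ndb-7.2.1"

if [ "$src_version" = "$bin_version" ]; then
  echo "match"
else
  echo "mismatch: $src_version vs $bin_version"
fi
rm -rf "$WORK"
```

Appending the EXTRA value to PATCH (the workaround above) makes the two strings equal, which is why configure then succeeds.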

Configure mysqld to use the HandlerSocket plugin

  • Shutdown any running mysqld process
    • mysqladmin -u myuser -h 127.0.0.1 -p shutdown
      • Note if you don’t have the myuser user setup in your environment then use:
        • mysqladmin -u root -h 127.0.0.1 shutdown
  • Edit the my.cnf file to add the setting for HandlerSocket.
    • cd /opt/mysqlcluster/deploy/conf
    • gedit my.cnf and should look like this:
      [mysqld]
      ndbcluster
      datadir=/opt/mysqlcluster/deploy/mysqld_data
      basedir=/opt/mysqlcluster/home/mysqlc
      port=3306
      loose_handlersocket_port = 9998
          # the port number to bind to (for read requests)
      loose_handlersocket_port_wr = 9999
          # the port number to bind to (for write requests)
      loose_handlersocket_threads = 16
          # the number of worker threads (for read requests)
      loose_handlersocket_threads_wr = 1
          # the number of worker threads (for write requests)
      open_files_limit = 65535
          # to allow handlersocket accept many concurrent
          # connections, make open_files_limit as large as
          # possible.
  • Restart mysqld
    • mysqld --defaults-file=/opt/mysqlcluster/deploy/conf/my.cnf --user=root &
  • Install the plugin
    • mysql -h 127.0.0.1 -u myuser -p OR mysql -h 127.0.0.1 -u root depending on your environment
    • mysql> install plugin handlersocket soname 'handlersocket.so';
      • You should see a bunch of messages and then a confirmation that the plugin was installed.
    • mysql> show processlist ;
      • This should list the handlersocket process
  • If handlersocket.so is successfully installed, it starts accepting connections on port 9998 (Reads) and 9999 (Writes).
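A quick TCP probe confirms the listeners are up without needing a HandlerSocket client. This sketch assumes bash (for its /dev/tcp virtual device) and a server on the local host; adjust HOST if mysqld runs elsewhere.

```shell
#!/bin/bash
# Probe the HandlerSocket read (9998) and write (9999) ports.
# Prints "open" or "closed" per port; assumes bash's /dev/tcp support.
HOST=127.0.0.1
for PORT in 9998 9999; do
  if (exec 3<>"/dev/tcp/$HOST/$PORT") 2>/dev/null; then
    echo "port $PORT open"
    exec 3>&-    # close the probe descriptor
  else
    echo "port $PORT closed"
  fi
done
```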

Benchmarking MySQL Cluster with mysqlslap

18 Dec

Differences with pgbench

  • There is no option in mysqlslap to specify the time in seconds to run the test for
  • There is no built-in TPC-B test
  • Does not report transactions per second automatically, but it can be calculated manually.
  • No custom variables or function support (setrandom etc).
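Since mysqlslap only reports the average number of seconds to run all queries per iteration, throughput has to be derived by hand. A sketch with hypothetical numbers (1000 queries executed in an average of 2.5 seconds; both values are assumptions for illustration):

```shell
#!/bin/sh
# Derive queries-per-second from mysqlslap's reported average run time.
# QUERIES and AVG_SECONDS are hypothetical; substitute your own run's numbers.
QUERIES=1000
AVG_SECONDS=2.5
awk -v q="$QUERIES" -v s="$AVG_SECONDS" 'BEGIN { printf "%.1f queries/sec\n", q / s }'
```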

Installation

  • mysqlslap is part of the MySQL Cluster distribution. See the post on installing MySQL Cluster.

Test Connectivity to MySQL Cluster

  • mysqlslap -h ec2-xx-xx-xx-x.compute-1.amazonaws.com -u testuser --password=testpwd --auto-generate-sql

Add verbose option to see behind-the-scenes-action

  • mysqlslap -vvv -h ec2-xx-xx-xx-x.compute-1.amazonaws.com -u testuser --password=testpwd --auto-generate-sql
    • Note that a schema called “mysqlslap” is created at the beginning of the test and deleted at the end of the test

Concurrent User Test

  • 10 concurrent users, repeat the test 5 times (iterations)
    • mysqlslap -h ec2-xx-xx-xx-x.compute-1.amazonaws.com -u testuser --password=testpwd --auto-generate-sql --concurrency=10 --iterations=5

Custom Queries with Schema Generate on the fly

Create Schema with Tables/Data and specify SELECT statement – Single User

mysqlslap -vvv -h ec2-xx-xx-xx-x.compute-1.amazonaws.com -u testuser --password=testpwd --delimiter=";" --create="CREATE TABLE a (b int);INSERT INTO a VALUES (23)" --query="SELECT * FROM a"

Create Schema with Tables/Data and specify SELECT statement – 10 concurrent users, repeated 5 times

 mysqlslap -vvv -h ec2-xx-xx-xx-x.compute-1.amazonaws.com -u testuser --password=testpwd --delimiter=";" --create="CREATE TABLE a (b int);INSERT INTO a VALUES (23)" --query="SELECT * FROM a" --concurrency=10 --iterations=5

Reading SQL from files

  • create.sql file
    CREATE TABLE a (b int);INSERT INTO a VALUES (21)

  • query.sql file
    SELECT * FROM a

  • mysqlslap command
     mysqlslap -vvv -h ec2-xx-xx-xx-x.compute-1.amazonaws.com -u testuser --password=testpwd --delimiter=";" --create=create.sql --query=query.sql --concurrency=10 --iterations=5

Simulating the TPC-B Test

Initialize TPC-B Test Database
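mysqlslap has no built-in TPC-B mode, so the closest approximation is to hand-write pgbench-style tables and a transaction script and feed them in through --create/--query, the same way as the create.sql/query.sql files above. The sketch below is an assumption-laden starting point: the table and column names mirror pgbench's convention (they are not from any official TPC kit), the row counts are trivially small, and the files are written under a throwaway /tmp directory.

```shell
#!/bin/sh
# Write a minimal pgbench-style TPC-B schema and transaction for mysqlslap.
# Names and engine choice are illustrative; scale the INSERTs for a real test.
DIR=$(mktemp -d)

cat > "$DIR/create.sql" <<'EOF'
CREATE TABLE branches (bid INT PRIMARY KEY, bbalance INT) ENGINE=ndbcluster;
CREATE TABLE tellers  (tid INT PRIMARY KEY, bid INT, tbalance INT) ENGINE=ndbcluster;
CREATE TABLE accounts (aid INT PRIMARY KEY, bid INT, abalance INT) ENGINE=ndbcluster;
CREATE TABLE history  (tid INT, bid INT, aid INT, delta INT, mtime TIMESTAMP);
INSERT INTO branches VALUES (1, 0);
INSERT INTO tellers  VALUES (1, 1, 0);
INSERT INTO accounts VALUES (1, 1, 0)
EOF

cat > "$DIR/query.sql" <<'EOF'
UPDATE accounts SET abalance = abalance + 10 WHERE aid = 1;
UPDATE tellers  SET tbalance = tbalance + 10 WHERE tid = 1;
UPDATE branches SET bbalance = bbalance + 10 WHERE bid = 1;
INSERT INTO history VALUES (1, 1, 1, 10, NOW())
EOF

echo "wrote $DIR/create.sql and $DIR/query.sql"
# These could then be run the same way as the file-based example above:
#   mysqlslap ... --delimiter=";" --create="$DIR/create.sql" --query="$DIR/query.sql"
```

ENGINE=ndbcluster targets MySQL Cluster tables; drop it to run the same test against InnoDB.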

Installing the PHP HandlerSocket Client

18 Dec

Download and Extract HandlerSocket Source Code

  • NOTE: If you are in a development environment where the Server-Side MySQL HandlerSocket service and the Apache/PHP service are in the
    same box, then you can skip this step.

Compile and Build HandlerSocket Client Code

  • cd /opt/mysqlcluster/src/handler_src
  • ./autogen.sh
  • ./configure --disable-handlersocket-server
  • make && make install

Install/Setup PHP HandlerSocket Client

  1. mkdir -p /opt/php_hs
  2. cd /opt/php_hs
  3. wget http://php-handlersocket.googlecode.com/files/php-handlersocket-0.3.0.tar.gz
  4. tar xvf php-handlersocket-0.3.0.tar.gz
  5. cd php-handlersocket
  6. Ensure the phpize command from /usr/bin is executed.
    1. export PATH=/usr/bin:$PATH
  7. make clean (If this is your first time building PHP HandlerSocket, this command will fail and that is ok)
  8. phpize
  9. ./configure
  10. make
  11. make install
    • This will copy handlersocket.so to /usr/lib64/php/modules on a 64-bit OS (production systems)
    • On a 32-bit OS (dev systems) it will copy handlersocket.so to /usr/lib/php/modules
  12. Specify handlersocket.so as a PHP extension
    1. echo "extension=handlersocket.so" > /etc/php.d/handlersocket.ini
  13. Restart Apache
    1. service httpd restart
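After restarting Apache it is worth confirming that PHP actually sees the extension. A hedged check via the PHP CLI (this assumes the module registers itself under the name handlersocket, which is what the php-handlersocket package uses):

```shell
#!/bin/sh
# Check whether the handlersocket extension is visible to the PHP CLI.
# Guarded so the script still runs on a box without php installed.
if command -v php >/dev/null 2>&1; then
  if php -m | grep -qi handlersocket; then
    echo "handlersocket extension loaded"
  else
    echo "handlersocket extension NOT loaded"
  fi
else
  echo "php CLI not found"
fi
```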

Setup Sample Database

  • Setup a test database and innodb table:
    • mysql -h 127.0.0.1 -u myuser -p OR mysql -h 127.0.0.1 -u root depending on your environment.
    • mysql> create database hstestdb;
    • mysql> CREATE TABLE user (user_id INT UNSIGNED PRIMARY KEY,user_name VARCHAR(50),user_email VARCHAR(255),created Timestamp DEFAULT CURRENT_TIMESTAMP) ENGINE=InnoDB;
    • mysql> insert into user values (101, "Hulk Hogan", "hulk@hogan.com", current_timestamp());
    • mysql> insert into user values (102, "The Rock", "the@rock.com", current_timestamp());
    • mysql> SELECT user_name, user_email, created FROM user WHERE user_id=101;

Sample PHP Code to access the above Database/Table using HandlerSocket API:

Client API

SELECT example

$host = 'ec2-50-17-110-253.compute-1.amazonaws.com';
$port = 9998;
$port_wr = 9999;
$dbname = 'hstestdb';
$table = 'user';

// SELECT SINGLE AND SELECT MULTI
$hs = new \HandlerSocket($host, $port);
if (!($hs->openIndex(1, $dbname, $table, \HandlerSocket::PRIMARY, 'user_name,user_email,created')))
{
    echo $hs->getError(), PHP_EOL;
    die();
}

$retval = $hs->executeSingle(1, '=', array('101'), 1, 0);

var_dump($retval);

$retval = $hs->executeMulti(
    array(array(1, '=', array('101'), 1, 0),
          array(1, '=', array('102'), 1, 0)));

var_dump($retval);

unset($hs);

UPDATE Example

$host = 'ec2-50-17-110-253.compute-1.amazonaws.com';
$port = 9998;
$port_wr = 9999;
$dbname = 'hstestdb';
$table = 'user';
// UPDATE
$hs = new \HandlerSocket($host, $port_wr);
if (!($hs->openIndex(2, $dbname, $table, '', 'user_name')))
{
    echo $hs->getError(), PHP_EOL;
    die();
}

if ($hs->executeUpdate(2, '=', array('101'), array('Hulkier Hogan'), 1, 0) === false)
{
    echo $hs->getError(), PHP_EOL;
    die();
}

unset($hs);

INSERT EXAMPLE

$host = 'ec2-50-17-110-253.compute-1.amazonaws.com';
$port = 9998;
$port_wr = 9999;
$dbname = 'hstestdb';
$table = 'user';
// INSERT
$hs = new \HandlerSocket($host, $port_wr);
if (!($hs->openIndex(3, $dbname, $table, '', 'user_id,user_name,user_email')))
{
    echo $hs->getError(), PHP_EOL;
    die();
}

if ($hs->executeInsert(3, array(103,'Mr. Perfect', 'mr@perfect.com')) === false)
{
    echo $hs->getError(), PHP_EOL;
}

unset($hs);

exit;

DELETE EXAMPLE

$host = 'ec2-50-17-110-253.compute-1.amazonaws.com';
$port = 9998;
$port_wr = 9999;
$dbname = 'hstestdb';
$table = 'user';
// DELETE
$hs = new \HandlerSocket($host, $port_wr);
if (!($hs->openIndex(4, $dbname, $table, '', '')))
{
    echo $hs->getError(), PHP_EOL;
    die();
}

if ($hs->executeDelete(4, '=', array(103)) === false)
{
    echo $hs->getError(), PHP_EOL;
    die();
}


Install and Configure phpPgAdmin

18 Nov

Install and Configure phpPgAdmin

  • Option 1: Using Yum
    • Note that this installs the latest stable version. As of this writing, phpPgAdmin 4.x.
    • yum install phpPgAdmin
  • Option 2: Building from source
    • Note that this is required to build a beta package. As of this writing phpPgAdmin 5.x.
    • Download a beta package from http://phppgadmin.sourceforge.net/?page=download
    • Extract the zip file to the document root. For example: /var/www/html/phppgadmin
    • Follow the instructions in the INSTALL file in the extracted directory.
  • Setup a userid/password for phpPgAdmin
    • su - postgres
    • createuser -P pgadmin
    • Enter password for new role: xxxx
    • Enter it again: xxxx
    • Shall the new role be a superuser? (y/n) y
  • Restart Postgres
    • service postgresql-9.0 restart
  • Ensure RewriteRules allow phpPgAdmin requests. For example, the following RewriteRule redirects all requests except xhprof, phpMyAdmin, and phpPgAdmin to abc.php.

    RewriteRule !^(/phpmyadmin|/xhprof_html|/phpPgAdmin) /var/www/html/myapp/abc.php

Setup PhpPgAdmin remote access

  • If you want to access phpPgAdmin remotely ensure you set the “Allow from all” entry in /etc/httpd/conf.d/phpPgAdmin.conf

Configure PhpPgAdmin hostname

  • cd /usr/share/phpPgAdmin/conf
  • rm config.inc.php (remove the existing soft link)
  • cp config.inc.php-dist config.inc.php
  • vi config.inc.php and set the $conf['servers'][0]['host'] entry to the server running Postgres

Edit open_basedir setting in /etc/php.ini

  • If open_basedir is set in /etc/php.ini add the following to open_basedir
    • open_basedir = /var/www/html:/usr/share/phpPgAdmin/conf:/usr/share/phpPgAdmin

Setup Export/Import tools.

  • If you want to use Export/Import features in phpPgAdmin, run the following command:
  • cp /usr/pgsql-9.0/bin/* /usr/bin

Setting up Postgres 9.0 Streaming Replication

1 Nov

Goal

  • The instructions on this page are specific to the CentOS VM used for development; for development, all masters/slaves run on the same server/VM. In production, the masters/slaves will
    be on different servers. Make sure you account for different server IP addresses and shared-folder settings if you use this document for a production deployment.
  • 8 total pgsql instances (listening on different ports).
    • pgsql master shard 1 (pg1)
    • 3 pgsql slaves to master shard 1 (pg1_s1, pg1_s2, pg1_s3)
    • pgsql master shard 2 (pg2)
    • 3 pgsql slaves to master shard 2 (pg2_s1, pg2_s2, pg2_s3)

Shutdown the existing Postgres Service

  • service postgresql-9.0 stop

Create the directories that will contain the 8 pgsql instances:

  • NOTE: It is uncommon to have multiple Postgres instances/services running on the same
    server. However, for development purposes we will create multiple instances by creating
    separate DB directories and specifying a different port for each Postgres service.

  • There is already a database in /var/lib/pgsql/9.0/data. Copy it into a new folder as follows:
    • mkdir -p /var/lib/pgsql/9.0/shards/pg1
    • cp -r /var/lib/pgsql/9.0/data /var/lib/pgsql/9.0/shards/pg1

Now the remaining 7 pgsql instances:

  • Shard1
    • mkdir -p /var/lib/pgsql/9.0/shards/pg1_s1
    • mkdir -p /var/lib/pgsql/9.0/shards/pg1_s2
    • mkdir -p /var/lib/pgsql/9.0/shards/pg1_s3
  • Shard2
    • mkdir -p /var/lib/pgsql/9.0/shards/pg2
    • mkdir -p /var/lib/pgsql/9.0/shards/pg2_s1
    • mkdir -p /var/lib/pgsql/9.0/shards/pg2_s2
    • mkdir -p /var/lib/pgsql/9.0/shards/pg2_s3
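The directory layout above is regular enough to script as a loop. This sketch uses a throwaway base directory so it can run without root; on the real server the base would be /var/lib/pgsql/9.0/shards.

```shell
#!/bin/sh
# Create all 8 shard instance directories in one loop.
# BASE is a temp dir here so the sketch runs unprivileged; substitute
# /var/lib/pgsql/9.0/shards on the actual server.
BASE=$(mktemp -d)

for inst in pg1 pg1_s1 pg1_s2 pg1_s3 pg2 pg2_s1 pg2_s2 pg2_s3; do
  mkdir -p "$BASE/$inst"
done

ls "$BASE"
```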

Setup File permissions

  • The postgres process runs as the user postgres and so we make this directory and everything under it owned by the user postgres
    • chown -R postgres:postgres /var/lib/pgsql/9.0/shards
  • Postgres requires that DB directory folders are not world readable/writeable. Set the permissions to restrict this:
    • chmod -R go-rwx /var/lib/pgsql/9.0/shards

Setup and Configure Master and Slaves for Shard 1 (Ports 5432, 5433, 5434, 5435)

Create an archive directory

  • This is a directory that is shared by the Shard1 masters and slaves.
  • In production this would probably be a Network shared directory. For example, NFS.
  • mkdir -p /var/lib/pgsql/9.0/shards/pg1/archive
  • chown postgres:postgres /var/lib/pgsql/9.0/shards/pg1/archive
  • chmod -R go-rwx /var/lib/pgsql/9.0/shards/pg1/archive

Configure Master Server for Shard1

Edit Master Server pg_hba.conf file

  • Edit /var/lib/pgsql/9.0/shards/pg1/data/pg_hba.conf
  • This sets up authentication for remote slave servers.
  • Add a line at the end of this file as follows. Note for production servers, replace samehost with
    the appropriate slave ip addresses.

/var/lib/pgsql/9.0/shards/pg1/data/pg_hba.conf

host    replication     pgadmin     samehost           md5

Edit Master Server postgresql.conf file

  • Edit /var/lib/pgsql/9.0/shards/pg1/data/postgresql.conf
  • Edit the following entries as follows:

/var/lib/pgsql/9.0/shards/pg1/data/postgresql.conf

listen_addresses = '*'

wal_level = hot_standby  

archive_mode = on

archive_command = 'cp %p /var/lib/pgsql/9.0/shards/pg1/archive/%f'  

max_wal_senders = 5  

wal_keep_segments = 32

Setup and Start Postgres Service Start/Stop Scripts for Master Shard1

  • Remove the existing Postgres service registration from CentOS (do this before renaming the script):
    • chkconfig postgresql-9.0 off
    • chkconfig --del postgresql-9.0
  • Rename the existing Postgres init script from postgresql-9.0 to pg1
    • cd /etc/init.d
    • mv postgresql-9.0 pg1
  • Edit the pg1 script so it points to the Database directory. Verify the following entries (LINES 61-63):

    /etc/init.d/pg1

    PGPORT=5432
    PGDATA=/var/lib/pgsql/${PGMAJORVERSION}/shards/pg1/data
    PGLOG=/var/lib/pgsql/${PGMAJORVERSION}/pg1.log
  • Setup the Postgres service name in CentOS:
    • chkconfig --add pg1
    • chkconfig pg1 on
  • Start the service for Master Shard1
  • service pg1 start
  • If startup fails, or to verify proper startup, check the latest log file in:
    /var/lib/pgsql/9.0/shards/pg1/data/pg_log/

Get a Snapshot of the Master Server Database

  • Start a psql session:
    • psql --username=pgadmin --port=5432 --dbname=shard_1
  • After login, execute the following statement to prepare the backup:
    • SELECT pg_start_backup('shard1_backup');
    • This command prepares the master server for a file-system snapshot
  • Open another terminal window and archive the entire Master Server Data folder
    • cd /var/lib/pgsql/9.0/shards/pg1
    • tar -cjf postgres-pg1.tar.bz2 data
  • In the psql session execute the following command to stop backup mode
    • SELECT pg_stop_backup();

      NOTE: Executing pg_stop_backup() is very important; otherwise Postgres remains in
      backup mode and might not respond to regular queries.

Create Slave instances for Master Shard1

Update Slave DB folder using Master Server Snapshot

  • For pg1_s1
    • cd /var/lib/pgsql/9.0/shards/pg1_s1
    • tar -xjf ../pg1/postgres-pg1.tar.bz2
    • cd data
    • rm -f postmaster.pid
    • rm -f pg_log/*
  • For pg1_s2
    • cd /var/lib/pgsql/9.0/shards/pg1_s2
    • tar -xjf ../pg1/postgres-pg1.tar.bz2
    • cd data
    • rm -f postmaster.pid
    • rm -f pg_log/*
  • For pg1_s3
    • cd /var/lib/pgsql/9.0/shards/pg1_s3
    • tar -xjf ../pg1/postgres-pg1.tar.bz2
    • cd data
    • rm -f postmaster.pid
    • rm -f pg_log/*

Setup the Slave Service Start/Stop Scripts

  • cd /etc/init.d
  • cp pg1 pg1_s1
  • cp pg1 pg1_s2
  • cp pg1 pg1_s3
  • Edit pg1_s1 and change the following entries (LINES 61-63):

    /etc/init.d/pg1_s1

    PGPORT=5433
    PGDATA=/var/lib/pgsql/${PGMAJORVERSION}/shards/pg1_s1/data
    PGLOG=/var/lib/pgsql/${PGMAJORVERSION}/pg1_s1.log
  • Edit pg1_s2 and change the following entries (LINES 61-63):

    /etc/init.d/pg1_s2

    PGPORT=5434
    PGDATA=/var/lib/pgsql/${PGMAJORVERSION}/shards/pg1_s2/data
    PGLOG=/var/lib/pgsql/${PGMAJORVERSION}/pg1_s2.log
  • Edit pg1_s3 and change the following entries (LINES 61-63):

    /etc/init.d/pg1_s3

    PGPORT=5435
    PGDATA=/var/lib/pgsql/${PGMAJORVERSION}/shards/pg1_s3/data
    PGLOG=/var/lib/pgsql/${PGMAJORVERSION}/pg1_s3.log
  • Enable the Slaves as a CentOS service:
    • chkconfig --add pg1_s1
    • chkconfig pg1_s1 on
    • chkconfig --add pg1_s2
    • chkconfig pg1_s2 on
    • chkconfig --add pg1_s3
    • chkconfig pg1_s3 on

Configure the Slave instances for Streaming Replication

  • For pg1_s1
    • cd /var/lib/pgsql/9.0/shards/pg1_s1/data
    • Edit postgresql.conf and mark this instance as hot standby

      /var/lib/pgsql/9.0/shards/pg1_s1/data/postgresql.conf

      hot_standby=on
    • Create a new file called recovery.conf with the entries below.

      /var/lib/pgsql/9.0/shards/pg1_s1/data/recovery.conf

      standby_mode          = 'on'
      primary_conninfo      = 'host=localhost port=5432 user=pgadmin password=pgadmin'
      # Specifies a trigger file whose presence should cause streaming replication to
      # end (i.e., failover).
      trigger_file = '/var/lib/pgsql/9.0/shards/pg1/data/trigger'
      restore_command = 'cp /var/lib/pgsql/9.0/shards/pg1/archive/%f %p'
  • For pg1_s2
    • cd /var/lib/pgsql/9.0/shards/pg1_s2/data
    • cp ../../pg1_s1/data/postgresql.conf .
      • enter yes to overwrite
    • cp ../../pg1_s1/data/recovery.conf .
  • For pg1_s3
    • cd /var/lib/pgsql/9.0/shards/pg1_s3/data
    • cp ../../pg1_s1/data/postgresql.conf .
      • enter yes to overwrite
    • cp ../../pg1_s1/data/recovery.conf .

Setup Slave File permissions

We repeat the file-permission setup to ensure any newly created files have the same ownership and privileges.

  • The postgres process runs as the user postgres and so we make this directory and everything under it owned by the user postgres
    • chown -R postgres:postgres /var/lib/pgsql/9.0/shards
  • Postgres requires that DB directory folders are not world readable/writeable. Set the permissions to restrict this:
    • chmod -R go-rwx /var/lib/pgsql/9.0/shards

Start Slave Instances

  • service pg1_s1 start
  • service pg1_s2 start
  • service pg1_s3 start
  • If startup fails, or to verify proper startup, check these log files:
    • For pg1_s1 see the latest log file in /var/lib/pgsql/9.0/shards/pg1_s1/data/pg_log/
    • For pg1_s2 see the latest log file in /var/lib/pgsql/9.0/shards/pg1_s2/data/pg_log/
    • For pg1_s3 see the latest log file in /var/lib/pgsql/9.0/shards/pg1_s3/data/pg_log/

Verify Streaming Replication processes

  • To verify streaming replication run the following command:
    ps -ef | grep wal
  • If your setup is valid you should see:
    • 1 “wal writer” process for the master
    • 3 “wal receiver” processes, one per slave
    • 3 “wal sender” processes, one per slave

Test Replication

  • In a terminal window start a psql session and connect to the Master:
    psql --username=pgadmin --port=5432 --dbname=shard_1
  • Execute the following SQL statements:
    CREATE TABLE test (test varchar(30));
    INSERT INTO test VALUES ('Testing 1 2 3');
  • In another terminal window start a psql session and connect to slave 1:
    psql --username=pgadmin --dbname=shard_1 --port=5433
  • Execute the following SQL statement:
    SELECT * from test;

    • If replication was successful you should see one record displayed.
  • In another terminal window start a psql session and connect to slave 2:
    psql --username=pgadmin --dbname=shard_1 --port=5434
  • Execute the following SQL statement:
    SELECT * from test;

    • If replication was successful you should see one record displayed.
  • In another terminal window start a psql session and connect to slave 3:
    psql --username=pgadmin --dbname=shard_1 --port=5435
  • Execute the following SQL statement:
    SELECT * from test;

    • If replication was successful you should see one record displayed.
  • Now in the Master psql session execute the statement:
    DROP TABLE test;
  • In any one of the slave psql sessions run:
    SELECT * from test;
  • The previous command will generate an error.

Setup and Configure Master and Slaves for Shard 2 (Ports 7432, 7433, 7434, 7435)

Initialize Shard2 Master with Shard1 Master Database files

  1. cd /var/lib/pgsql/9.0/shards/pg2
  2. tar -xjf ../pg1/postgres-pg1.tar.bz2
  3. cd data
  4. rm -f postmaster.pid
  5. rm -f pg_log/*

Create an archive directory for Shard2

  • This is a directory that is shared by the Shard2 masters and slaves.

  • NOTE: In production this would probably be a network shared directory, for example NFS.

  • mkdir -p /var/lib/pgsql/9.0/shards/pg2/archive
  • chown postgres:postgres /var/lib/pgsql/9.0/shards/pg2/archive
  • chmod -R go-rwx /var/lib/pgsql/9.0/shards/pg2/archive

Configure Master Server for Shard2

Edit Shard2 Master Server pg_hba.conf file

  • Edit /var/lib/pgsql/9.0/shards/pg2/data/pg_hba.conf
  • This sets up authentication for remote slave servers.
  • Add a line at the end of this file as follows. Note for production servers, replace samehost with
    the appropriate slave ip addresses.

/var/lib/pgsql/9.0/shards/pg2/data/pg_hba.conf

host    replication     pgadmin     samehost           md5

Edit Shard2 Master Server postgresql.conf file

  • Edit /var/lib/pgsql/9.0/shards/pg2/data/postgresql.conf
  • Edit the following entries as follows:

/var/lib/pgsql/9.0/shards/pg2/data/postgresql.conf

listen_addresses = '*'

wal_level = hot_standby  

archive_mode = on

archive_command = 'cp %p /var/lib/pgsql/9.0/shards/pg2/archive/%f'  

max_wal_senders = 5  

wal_keep_segments = 32

Setup Shard2 File Permissions

  • The postgres process runs as the user postgres and so we make this directory and everything under it owned by the user postgres
    • chown -R postgres:postgres /var/lib/pgsql/9.0/shards
  • Postgres requires that DB directory folders are not world readable/writeable. Set the permissions to restrict this:
    • chmod -R go-rwx /var/lib/pgsql/9.0/shards

Setup and Start Postgres Service Start/Stop Scripts for Master Shard2

  • Create pg2 startup script using pg1
    • cd /etc/init.d
    • cp pg1 pg2
  • Edit the pg2 script with the new port and database directory. Verify the following entries (LINES 61-63):

    /etc/init.d/pg2

    PGPORT=7432
    PGDATA=/var/lib/pgsql/${PGMAJORVERSION}/shards/pg2/data
    PGLOG=/var/lib/pgsql/${PGMAJORVERSION}/pg2.log
  • Setup the Postgres service name in CentOS:
    • chkconfig --add pg2
    • chkconfig pg2 on
  • Start the service for Master Shard2
  • service pg2 start
  • If startup fails, or to verify proper startup, check the latest log file in:
    /var/lib/pgsql/9.0/shards/pg2/data/pg_log/

Dump Shard1 Schema and Create Shard2 Database with schema only

  • cd /var/lib/pgsql/9.0/shards/pg2/data
  • pg_dump --schema-only --username=pgadmin --port=7432 shard_1 > myapp.schema
  • dropdb --username=pgadmin --port=7432 shard_1
  • createdb --username=pgadmin --port=7432 shard_2
  • psql --username=pgadmin --port=7432 -f myapp.schema shard_2

Get a Snapshot of the Shard2 Master Server Database

  • Start a psql session:
    • psql --username=pgadmin --port=7432 --dbname=shard_2
  • After login, execute the following statement to prepare the backup:
    • SELECT pg_start_backup('shard2_backup');
    • This command prepares the master server for a file-system snapshot
  • Open another terminal window and archive the entire Master Server Data folder
    • cd /var/lib/pgsql/9.0/shards/pg2
    • tar -cjf postgres-pg2.tar.bz2 data
  • In the psql session execute the following command to stop backup mode
    • SELECT pg_stop_backup();

      NOTE: Executing pg_stop_backup() is very important; otherwise Postgres remains in
      backup mode and might not respond to regular queries.

Create Shard2 Slave instances for Master Shard2

Update Shard2 Slave DB folder using Master Server Snapshot

  • For pg2_s1
    • cd /var/lib/pgsql/9.0/shards/pg2_s1
    • tar -xjf ../pg2/postgres-pg2.tar.bz2
    • cd data
    • rm -f postmaster.pid
    • rm -f pg_log/*
  • For pg2_s2
    • cd /var/lib/pgsql/9.0/shards/pg2_s2
    • tar -xjf ../pg2/postgres-pg2.tar.bz2
    • cd data
    • rm -f postmaster.pid
    • rm -f pg_log/*
  • For pg2_s3
    • cd /var/lib/pgsql/9.0/shards/pg2_s3
    • tar -xjf ../pg2/postgres-pg2.tar.bz2
    • cd data
    • rm -f postmaster.pid
    • rm -f pg_log/*

Setup the Shard2 Slave Service Start/Stop Scripts

  • cd /etc/init.d
  • cp pg2 pg2_s1
  • cp pg2 pg2_s2
  • cp pg2 pg2_s3
  • Edit pg2_s1 and change the following entries (LINES 61-63):

    /etc/init.d/pg2_s1

    PGPORT=7433
    PGDATA=/var/lib/pgsql/${PGMAJORVERSION}/shards/pg2_s1/data
    PGLOG=/var/lib/pgsql/${PGMAJORVERSION}/pg2_s1.log
  • Edit pg2_s2 and change the following entries (LINES 61-63):

    /etc/init.d/pg2_s2

    PGPORT=7434
    PGDATA=/var/lib/pgsql/${PGMAJORVERSION}/shards/pg2_s2/data
    PGLOG=/var/lib/pgsql/${PGMAJORVERSION}/pg2_s2.log
  • Edit pg2_s3 and change the following entries (LINES 61-63):

    /etc/init.d/pg2_s3

    PGPORT=7435
    PGDATA=/var/lib/pgsql/${PGMAJORVERSION}/shards/pg2_s3/data
    PGLOG=/var/lib/pgsql/${PGMAJORVERSION}/pg2_s3.log
  • Enable the Slaves as a CentOS service:
    • chkconfig --add pg2_s1
    • chkconfig pg2_s1 on
    • chkconfig --add pg2_s2
    • chkconfig pg2_s2 on
    • chkconfig --add pg2_s3
    • chkconfig pg2_s3 on

Configure the Shard2 Slave instances for Streaming Replication

  • For pg2_s1
    • cd /var/lib/pgsql/9.0/shards/pg2_s1/data
    • Edit postgresql.conf and mark this instance as hot standby

      /var/lib/pgsql/9.0/shards/pg2_s1/data/postgresql.conf

      hot_standby=on
    • Create a new file called recovery.conf with the entries below.

      /var/lib/pgsql/9.0/shards/pg2_s1/data/recovery.conf

      standby_mode          = 'on'
      primary_conninfo      = 'host=localhost port=7432 user=pgadmin password=pgadmin'
      # Specifies a trigger file whose presence should cause streaming replication to
      # end (i.e., failover).
      trigger_file = '/var/lib/pgsql/9.0/shards/pg2/data/trigger'
      restore_command = 'cp /var/lib/pgsql/9.0/shards/pg2/archive/%f %p'
  • For pg2_s2
    • cd /var/lib/pgsql/9.0/shards/pg2_s2/data
    • cp ../../pg2_s1/data/postgresql.conf .
      • enter yes to overwrite
    • cp ../../pg2_s1/data/recovery.conf .
  • For pg2_s3
    • cd /var/lib/pgsql/9.0/shards/pg2_s3/data
    • cp ../../pg2_s1/data/postgresql.conf .
      • enter yes to overwrite
    • cp ../../pg2_s1/data/recovery.conf .

Setup Shard2 Slave File permissions

We repeat the file-permission setup to ensure any newly created files have the same ownership and privileges.

  • The postgres process runs as the user postgres and so we make this directory and everything under it owned by the user postgres
    • chown -R postgres:postgres /var/lib/pgsql/9.0/shards
  • Postgres requires that DB directory folders are not world readable/writeable. Set the permissions to restrict this:
    • chmod -R go-rwx /var/lib/pgsql/9.0/shards

Start Shard2 Slave Instances

  • service pg2_s1 start
  • service pg2_s2 start
  • service pg2_s3 start
  • If startup fails, or to verify proper startup, check these log files:
    • For pg2_s1 see the latest log file in /var/lib/pgsql/9.0/shards/pg2_s1/data/pg_log/
    • For pg2_s2 see the latest log file in /var/lib/pgsql/9.0/shards/pg2_s2/data/pg_log/
    • For pg2_s3 see the latest log file in /var/lib/pgsql/9.0/shards/pg2_s3/data/pg_log/

Verify Streaming Replication processes for Shard2

  • To verify streaming replication run the following command:
    ps -ef | grep wal
  • If your setup is valid you should see:
    • 2 “wal writer” processes, one per master
    • 6 “wal receiver” processes, one per slave
    • 6 “wal sender” processes, one per slave

Test Replication for Shard2

  • In a terminal window start a psql session and connect to the Master:
    psql --username=pgadmin --port=7432 --dbname=shard_2
  • Execute the following SQL statements:
    CREATE TABLE test (test varchar(30));
    INSERT INTO test VALUES ('Testing 1 2 3');
  • In another terminal window start a psql session and connect to slave 1:
    psql --username=pgadmin --dbname=shard_2 --port=7433
  • Execute the following SQL statement:
    SELECT * from test;

    • If replication was successful you should see one record displayed.
  • In another terminal window start a psql session and connect to slave 2:
    psql --username=pgadmin --dbname=shard_2 --port=7434
  • Execute the following SQL statement:
    SELECT * from test;

    • If replication was successful you should see one record displayed.
  • In another terminal window start a psql session and connect to slave 3:
    psql --username=pgadmin --dbname=shard_2 --port=7435
  • Execute the following SQL statement:
    SELECT * from test;

    • If replication was successful you should see one record displayed.
  • Now in the Master psql session execute the statement:
    DROP TABLE test;
  • In any one of the slave psql sessions run:
    SELECT * from test;
  • The previous command will generate an error.
