Promoting MySQL slave to master


There are many reasons why you might do this: one is that the master is running out of space; another is to safely facilitate an OS upgrade. In the first case we could resize the droplet, but that takes time during which the DB will not be available, resulting in enforced downtime; in the latter case we could upgrade the OS on the running server, but you know not what perils may lie in wait, which could result in much more downtime.

By following this process we should be able to complete the task with minimal downtime – under 30 minutes, probably even less with good planning. By adding a new slave we also get the benefit of rolling out a new droplet running the latest OS.


The basic topology is one master with two slaves directly attached. This provides redundancy for running backups if a slave fails, and provides scope for a reasonably quick failover by promoting a slave should the master fail.

[add diagrams]

Step 1 – Re-structuring

When we promote one slave to be the new master, we want the other slave to become a slave of this new master. The slaves are all configured to write bin logs already, so all we need to do is select which role each of our slaves will have, stop the slaves and ensure they are at exactly the same point in respect of the current master, then update the slave which will not be promoted to become a slave of the other one. Essentially we are moving to a chain structure, which will allow us to move the application connections down the chain to the promoted master with its initial slave, and then deprecate the old master from the top of the chain. Later we can create a new slave, which we will also attach to the new master, restoring the original architecture of one master with two directly linked slaves.

Selecting the slave to promote

If one slave has a higher spec then this would be the one to select – memory and disk space being of most importance.

Otherwise, if the slaves are only being used for backups it doesn’t really matter, but if we are using the slaves for reads to reduce load on the master then the best bet is to pick the slave already being used for this purpose. If both slaves are being used for reads then migrate all the reads to the slave to be promoted first. Traffic can be checked from within the MySQL client,
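SHOW PROCESSLIST lists every open connection to the server and what it is doing, so application reads are easy to spot alongside the replication threads:

```sql
-- Run on the slave in question; client/application connections will show up
-- alongside the replication threads if this slave is serving reads.
SHOW PROCESSLIST;
```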


or on the command line

sudo tcpdump -i any port 3306

In both cases we want the slave which is not being promoted to only be displaying replication connections; the reasons for this are:

  1. it allows us to shut down this server and take a snapshot to create a new slave from, without impacting the application(s)
  2. it minimises impact on the application as we modify the hierarchy; we can change connections back to a slave later with minimal to no impact on the user experience

Moving the slave

The objective here is to create a chain, where the slave to be promoted will be in the middle. This will allow us to redirect traffic from the current slave and then remove the old master from the top of the chain, leaving us with a new master with at least one slave already in place. Adding a second during this process is also recommended. For this example let’s say we are moving slave (id: 3) to become a slave of slave (id: 2).

To do this we need to set up a replication user, bring the two slaves to a stop at exactly the same point in time, and then change the master config of slave (id: 3) to point to slave (id: 2).

  1. On the slave to be promoted (id: 2), create a user for replication to use:
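The exact statements depend on your setup; a minimal sketch, where 'repl' and the password are placeholders to substitute with your own:

```sql
-- On the slave to be promoted (id: 2)
CREATE USER 'repl'@'%' IDENTIFIED BY '<password>';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';
FLUSH PRIVILEGES;
```

Restricting the host part ('%') to the IP of slave (id: 3) is a sensible tightening if your network allows it.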

2. On the slave to be moved (id: 3), stop the slave from running and check Master_Log_File and Exec_Master_Log_Pos – these indicate the point up to which transactions replicated from the master have been executed on this slave. There is also Read_Master_Log_Pos; however, this indicates the point at which the server has read transactions into the local relay log – these have not yet been committed on this slave!

STOP SLAVE;
SHOW SLAVE STATUS\G

*********************** 1. row ***********************
 Master_Log_File: mysql-bin.000002
 Exec_Master_Log_Pos: 93425058

3. On the slave to be promoted (id: 2), also stop the slave and check the log position. The important thing here is that it must be ahead of the instance which will become a slave of this one (id: 3):

STOP SLAVE;
SHOW SLAVE STATUS\G

*********************** 1. row ***********************
 Master_Log_File: mysql-bin.000002
 Exec_Master_Log_Pos: 93427899

4. Restart the slave to be moved (id: 3), but have it stop at the same position as the one to be promoted, i.e. Master_Log_File = mysql-bin.000002, Master_Log_Pos = 93427899

START SLAVE UNTIL
    MASTER_LOG_FILE = '<Master_Log_File>',
    MASTER_LOG_POS = <Exec_Master_Log_Pos>;

5. Prepare the slave to be promoted (id: 2) by purging the binary logs (this requires the pre-planning of having the slave write bin logs and having the log-slave-updates setting in the MySQL config file; otherwise this server will need to be configured and restarted with those settings).

RESET MASTER;                             -- if the server currently has NO replicas
PURGE BINARY LOGS TO 'mysql-bin.XXXXXX';  -- if the server already has replicas

SHOW MASTER STATUS;

| File             | Position |
| mysql-bin.000001 |      154 |

6. Now that we have both slaves at the same point in time, we can switch slave (id: 3) to use slave (id: 2) as its master, creating the new structure. On the slave being moved (id: 3):

STOP SLAVE;  -- stop both threads, SQL & IO
RESET SLAVE; -- clean up relay logs read from the old master
             -- (even if we roll back from here these can be pulled again from the master)

-- Copy the slave status somewhere, so you know the state of the server if you
-- need to work back

-- change the details for the server to use as master
CHANGE MASTER TO
        MASTER_HOST='<IP OF NEW MASTER (slave id: 2)>',
        MASTER_USER='<replication user>',
        MASTER_PASSWORD='<replication password>',
        MASTER_LOG_FILE='<BIN FILE NAME>',   -- i.e. mysql-bin.000001
        MASTER_LOG_POS=<MASTER POSITION>;    -- i.e. 154

START SLAVE;

-- check Last_IO_Error in SHOW SLAVE STATUS; if empty this process should be
-- almost complete

7. Test – make an update we can check on the slave to be promoted (id: 2). Remember, as the new master is still also a slave itself, we need to take care not to make a change which could affect replication from the current actual master at the top of the chain. You could create a new table for this and drop it after, or in the case of the savewatersavemoney databases you can update the heartbeat table.

-- Slave to be promoted (id: 2)
USE heartbeat;
-- as we stopped this slave it should not be being updated
SELECT * FROM heartbeat; 
SELECT * FROM heartbeat; 

UPDATE heartbeat SET ts = NOW();
SELECT * FROM heartbeat;
-- Slave moved (id: 3), timestamp should match (maybe test again to be sure)
SELECT * FROM heartbeat.heartbeat; 

8. Restart the slave on the slave to be promoted (id: 2); now all updates on the master should trickle down through the slave to be promoted (id: 2) and on to the migrated slave (id: 3). The topology will now be as follows:

Step 2 – Promoting the new master, slave (id: 2)

This is less about DB work and more about service management; we need to:

1. Identify all services reading and writing to the master (id: 1), the tcpdump tool can help here

sudo tcpdump -i any port 3306

2. Disable applications – this may require shutting down services or load balancers; to ensure we are not at risk of data loss, ALL interactions should be stopped

3. Update the necessary configurations to direct all master (read and write) traffic to the new master, slave (id: 2)

4. Check traffic to the master again; if all is quiet, with the exception of the slave pulling the binary logs, we are good to continue.

5. Stop the slave thread on the slave to be promoted

6. Start re-enabling services, one at a time, and verify traffic is going to the new master. N.B. if you have multiple application servers behind a load balancer you can remove one from the load balancer, update its configuration, and then switch the traffic between the servers, creating an almost seamless switchover to the new master.

7. Continue to bring back up the applications and services until everything is running again.

8. Test, Test, Test – new servers can have different default configurations. Check SQL_MODE, which may only affect certain types of queries – so try to check a varied selection of use cases, most importantly anything client facing.
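SQL_MODE can be compared directly between the old and new master:

```sql
-- Run on both servers and compare; differences here (e.g. ONLY_FULL_GROUP_BY
-- or STRICT_TRANS_TABLES appearing in newer defaults) can change how the
-- same query behaves.
SELECT @@GLOBAL.sql_mode;
```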

9. Once everything is moved over, check for activity on the original master (id: 1), and if you’re happy then it’s time to shut down the mysql service and watch out for any reported errors. If all is good we can now power down the old master, leaving us with our promoted master and its slave.

! DO NOT DESTROY the old master yet – if you really want to save the bucks, be sure to snapshot it first… just in case we missed some STORED PROCEDURES or FUNCTIONS or… something we’d later regret being so rash about in the clean up.

Step 3 – Create a new second slave

If you have time I’d suggest doing this as part of the preparation, so that by the end of the previous step we already have our trio of master with two direct slaves. But as long as the slave is not yet responsible for serving reads we can still do this now – if it is, then we need to redirect that traffic first.

1. Shut down the slave

sudo service mysql stop
sudo shutdown -h now 

2. Snapshot the droplet if on DigitalOcean; otherwise it’s plan B: build a new server and restore from a backup of the slave (id: 3), which may be preferable in any case.

3. Create a new droplet from the snapshot

4. Log on and shut down the mysql server, delete auto.cnf from the data path to ensure the server gets a fresh UUID, and update the server_id in the mysql configuration file
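A sketch of this step – the data directory and config file paths are assumptions here, so check datadir and the location of the [mysqld] section on your own build:

```shell
sudo service mysql stop
# Remove the UUID copied across in the snapshot so mysqld generates a fresh one
sudo rm /var/lib/mysql/auto.cnf
# Then edit the config and set a unique id, e.g. server_id = 4 under [mysqld]
sudo vi /etc/mysql/my.cnf
```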

5. Create a slave user for the new server

6. Start the slave – all being well it’ll start ticking away nicely. If not, double check the other slave (id: 3): if the UUID was not updated it will have stopped as well due to the conflict – you can just restart that slave once the new one has been stopped again.

7. If you find you get a lot of update/write errors, chances are the slave stopped at a later point in time – check the error log of the slave the snapshot was taken from to find out at what position it restarted, set the master log details of the new slave to these and try starting again.

Step 4 – Done

At this point we should be back to square one, and as the new slave is not yet being used by any services it’s the perfect candidate for upgrading to the latest OS – unless you created a fresh one and restored from backup, in which case hopefully everything is already running the latest stable versions.

Hive lights with Siri

If you have Hive sockets, you can control them using Siri – for this you’ll need to utilise Apple’s ‘Shortcuts‘ App.

Once installed open the app and go to the add shortcuts screen by tapping on the ‘+’ in the top right corner.

At the bottom you’ll see a ‘search’ input. For each component shown below, start typing the component title to find and add that component, e.g. for the parameters we need a couple of ‘Text’ and ‘Set Variable’ components.

1. Login Parameters

Although not essential, it is a good idea to put the login details where they can be easily found and modified. Also, if you decide to share your shortcuts you can easily blank out these details – otherwise someone could access and play with your devices!

In the Settings you can set up ‘Import Questions’; this will allow you to set up prompts for these fields when the shortcut is added. (Try out the shortcuts at the end to see this in action.)

Built – Shortcuts


Socket – toggle a named Hive socket on/off based on its current state

Lights (GU10)

Tested on GU10 spots, but should work on any bulb. It may be necessary to check the device name of your bulbs and change 'warmwhitelight' accordingly.

Lights on - turns all 'warmwhitelights' on (tested on GU10)

Lights off - turns all 'warmwhitelights' off (tested on GU10)

Lights (GU10) Dimmable

Again these should probably work with any bulbs which are dimmable as long as the device name is set correctly.

Lights dim - set all 'warmwhitelights' to 30% brightness (tested on GU10) N.B. requires dimmable bulbs

Lights bright - increase brightness of all 'warmwhitelights' by 20% brightness (duplicate and subtract to create dimmer version):

Lights random - randomly sets the brightness of each bulb:

Banishing Loops

PHP has a number of functions for manipulating arrays of data, which will almost always achieve a result quicker than creating a solution in PHP itself, often involving one or more nested loops. Part of the reason for this is that PHP is interpreted whereas the underlying PHP functions are compiled. 

Output data as csv

A common solution for converting an associative array into a format which can be returned as a CSV file is to use one loop to output each value followed by a comma, nested inside another loop to traverse the rows.

$dataset = [
    ['a', 'b', 'c'],
    ['d', 'e', 'f'],
    ['g', 'h', 'i'],
];

$outputCSV = '';
foreach ($dataset as $row) {
    $outputRow = '';
    foreach ($row as $value) {
        $outputRow .= (empty($outputRow)) ? $value : ",$value";
    }
    $outputCSV .= $outputRow . "\n";
}

This can be improved quite easily by switching the internal loop for a call to implode:

$outputCSV = '';
foreach ($dataset as $row) {
    $outputRow = implode(',', $row);
    $outputCSV .= $outputRow . "\n";
}

But we can take this a step further to remove both loops, using array_map() to execute a callable on each row and array_fill() to provide the separator to the implode() function, followed by a second call to implode():

$outputRows = array_map(
    'implode',
    array_fill(0, count($dataset), ','),
    $dataset
);

$outputCSV = implode("\n", $outputRows);

What’s happening here is that array_fill() is creating an array of the same length as $dataset where each value = ‘,’. array_map() is calling implode() on each row and supplying the value at the same index of the result from the call to array_fill() as the first parameter – i.e.

$param = array_fill(0, count($dataset), ',');
$output = '';
for ($i = 0; $i < count($dataset); $i++) {
    $output .= implode($param[$i], $dataset[$i]);
}

As a footnote, if there is a possibility that empty rows may occur in the data set, the following can be used to remove them:

$dataset = array_filter($dataset);  

As opposed to building CSV output to be returned in the browser, it may be preferable to output to a file. If so, use fputcsv() in place of implode(), passing in the file handle as the first parameter. Note the order of the parameters passed to array_map(): after the first argument, which is the name of the callable function, the following parameters are the values to be passed to the callable in the order expected by the callable:

$dataset = [
    ['a', 'b', 'c'],
    ['d', 'e', 'f'],
    ['g', 'h', 'i'],
];

$file = fopen('out.csv', 'w');

array_map(
    'fputcsv',
    array_fill(0, count($dataset), $file),
    $dataset
);

fclose($file);



There are numerous sorting functions provided by PHP: sort() will sort an array, or in reverse, rsort(); asort() and arsort() sort an array maintaining the key mappings; and ksort() and krsort() sort an array by its keys as opposed to its values.
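A quick illustration of the differences, using a throwaway array (the variable names are arbitrary):

```php
<?php
$scores = ['bob' => 2, 'ann' => 3, 'cat' => 1];

$v = $scores;
sort($v);    // keys discarded, values ascending: [1, 2, 3]

$a = $scores;
asort($a);   // keys preserved, sorted by value: ['cat' => 1, 'bob' => 2, 'ann' => 3]

$k = $scores;
ksort($k);   // sorted by key: ['ann' => 3, 'bob' => 2, 'cat' => 1]
```

rsort(), arsort() and krsort() behave the same but in descending order.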

How do we sort an associative array containing a dataset, e.g. [date, clicks, impressions, acquisitions]? For this task the function in the toolbox to reach for is array_multisort(). This function will sort an associative array on a specified column, or columns:

$dataset = [
    [1, 3, 1], [2, 3, 2], [2, 2, 3],
    [1, 2, 1], [3, 1, 2], [1, 3, 3],
];

$sortColumn = array_column($dataset, 0);  // first column, e.g. the date
array_multisort($sortColumn, $dataset);

It can also be given a sort order and sort flags, i.e. whether the data should be treated as a numeric int or string value:

array_multisort(
    array_column($dataset, 2),  // sort on the 3rd column of each row
    SORT_DESC,
    SORT_NUMERIC,
    $dataset
);

As hinted above, this method can also be used to sort on multiple columns, e.g. date and impressions – this could be descending order for date and then ascending order for impressions. As many columns as are present in the dataset can be added for sorting on.


array_multisort(
    array_column($dataset, 2),
    SORT_DESC,
    array_column($dataset, 1),
    SORT_ASC,
    $dataset
);


DataSet to [Key] => value

This derived from a requirement to pull a number of rows from the database and map one field from each row as the key for a second field in each row. This is reasonably simple to do with a loop, traversing each row and building an array as:

foreach ($resultSet as $row) {
    $array[$row['field1']] = $row['field2'];
}

For smaller result sets this is fine, but it may not scale so well when the result set gets large. An alternative, more performant solution can be achieved by combining array_combine(), which creates an array using two other arrays for the keys and values respectively, and array_column(), which extracts a column from an associative array:

$keyValueArray = array_combine(
    array_column($resultSet, 'field1'),
    array_column($resultSet, 'field2')
);

Slicing horizontally

It’s simple enough to slice an array vertically, using array_pop(), array_shift(), array_slice(), array_splice() or array_chunk() to split an array into parts, and then use array_push(), array_unshift() or array_merge() to construct an array from any number of other arrays (preferably with the same number of columns). However, it may not be so obvious how to achieve a similar result horizontally.
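For contrast, the vertical case really is a one-liner with any of those functions – a minimal sketch with a throwaway dataset:

```php
<?php
$rows = [[1, 2], [3, 4], [5, 6], [7, 8]];

$last  = array_pop($rows);    // removes and returns the last row: [7, 8]
$first = array_shift($rows);  // removes and returns the first row: [1, 2]

// glue the rows back together in the original order
$rebuilt = array_merge([$first], $rows, [$last]);
```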

Say we have a dataset of usernames and ages, and both columns are formatted as strings although the age column contains numeric data. If we require the age column to be treated as an integer – possibly because we intend to send this data out via an API and don’t want the values surrounded by quotes in the output – we need to convert them. One option would be to loop over each row and build a new array, casting the age to an int as we go; here is an alternative approach:

$a = [
    ['Username 1', '25', ''],
    ['Username 2', '46', ''],
    ['Username 3', '23', ''],
];

$ages = array_column($a, 1);
$agesInt = array_map('intval', $ages);

$dataset = array_map(
    null,
    array_column($a, 0),
    $agesInt,
    array_column($a, 2)
);

print json_encode($dataset);

What this does is extract the age column (index 1) and run intval() on each element in that vector. Then, by passing null as the callback to array_map(), it splices the given arrays back together horizontally, reforming the original dataset but replacing the age column with the parsed one.

Non utf8 characters passed to json_encode()

A similar problem can arise when converting an array of data to JSON: if the dataset contains some non-UTF8 characters, json_encode() returns an empty string. The issue can be caught by checking the output of json_last_error(); from PHP 7.3 an exception can also be thrown (via the JSON_THROW_ON_ERROR flag) and caught in a try … catch. To resolve the issue it is necessary to traverse the array of data and UTF8 encode the values. If the column(s) are known, this can be achieved in a similar manner to the previous example, calling utf8_encode in place of intval:

$utf8EncodedOutput = array_map(
    'utf8_encode',
    array_column($dataSet, 'columnToParse')
);

array_column() returns the column to be encoded, and the resulting output can then be merged back into the original dataset. N.B. array_column() takes an optional 3rd parameter to specify another column from the dataset to use as the index for the resulting vector array, which can be useful if a loop is being used to generate the final output.

N.B. this does not maintain keys; if you still require the keys to be maintained, a third parameter can be passed to array_column() to specify a column to use to index the output array – this should be a column which is unique to the rows in the dataset. The original dataset can then be traversed using a loop, or a lambda callback passed to array_walk(), in order to replace the values in the dataset with the converted ones from the parsed column:

$a = [
    ['k1' => 'Username 1', 'k2' => '25'],
    ['k1' => 'Username 2', 'k2' => '46'],
    ['k1' => 'Username 3', 'k2' => '23'],
];

$ages = array_column($a, 'k2', 'k1');
$agesInt = array_map('intval', $ages);

// foreach ($a as $key => $row) {
//     $a[$key]['k2'] = $agesInt[$row['k1']];
// }

array_walk($a, function (&$row) use ($agesInt) {
    $row['k2'] = $agesInt[$row['k1']];
});

print json_encode($a);

Other Useful Array functions


There are many ways to skin a cat, but loops often provide a quick solution to a problem which can later become a performance issue, simply because more data === more iterations === more time to process. PHP’s provided utility functions can handle much of the heavy lifting for us and should be the first port of call before employing hand-rolled solutions, whether home grown or imported as a package.