

Web Technologies

Continuent, helping the big players shine in the market

Planet MySQL - Tue, 11/20/2018 - 10:56

We are honored to be ranked as one of the 10 fastest growing AWS solution providers to watch in the year 2018 by The Technology Headlines.

Read about our journey, the host of benefits for our customers, our exceptional team, and future roadmap in this Amazon Special Edition.


How To Install and Secure phpMyAdmin on Ubuntu 18.04 LTS

Planet MySQL - Tue, 11/20/2018 - 07:52
phpMyAdmin is a free and open source, web-based administration tool that allows you to easily manage MySQL and MariaDB databases. In this tutorial, we are going to explain how to install and secure phpMyAdmin on an Ubuntu 18.04 server.

401 Unauthorized - Evert Pot

Planet PHP - Tue, 11/20/2018 - 07:00

When a client makes an HTTP request but the server requires the request to be authenticated, a 401 Unauthorized status is returned.

This could mean that a user needs to log in first, or more generally that authentication credentials are required. It could also mean that the provided credentials were incorrect.

The name Unauthorized can be a bit confusing, and has been regarded as a bit of a misnomer. 401 is strictly used for authentication. In cases where you want to indicate to a client that they simply aren’t allowed to do something, you need 403 Forbidden instead.

When a server sends back 401, it must also send back a WWW-Authenticate header. This header tells a client what kind of authentication scheme the server expects.

Examples

This is an example of a server that wants the client to login using Basic authentication.

HTTP/1.1 401 Unauthorized
WWW-Authenticate: Basic realm="Secured area"

This is an example using Digest auth:

HTTP/1.1 401 Unauthorized
WWW-Authenticate: Digest realm="http-auth@example.org",
    qop="auth, auth-int",
    algorithm=SHA-256,
    nonce="7ypf/xlj9XXwfDPEoM4URrv/xwf94BcCAzFZH4GiTo0v",
    opaque="FQhe/qaU925kfnzjCev0ciny7QMkPqMAFRtzCUYo5tdS"

OAuth2 uses something called Bearer tokens, which is really just a secret string:

HTTP/1.1 401 Unauthorized
WWW-Authenticate: Bearer

It’s possible for a server to tell a client it supports more than one scheme. This example might be from an API that normally uses OAuth2, but also allows Basic for developing/debugging purposes.

HTTP/1.1 401 Unauthorized
WWW-Authenticate: Basic realm="Dev zone", Bearer

Due to how HTTP works, the above header is identical to the following:

HTTP/1.1 401 Unauthorized
WWW-Authenticate: Basic realm="Dev zone"
WWW-Authenticate: Bearer

If a client has the correct credentials, it generally sends them to the server using the Authorization header:

GET / HTTP/1.1
Authorization: Basic d2VsbCBkb25lOnlvdSBmb3VuZCB0aGUgZWFzdGVyIGVnZwo=

Other authentication schemes

IANA has a list of standard authentication schemes. Aside from Basic, Digest and Bearer there is also…

Truncated by Planet PHP, read more at the original (another 1488 bytes)


MariaDB 10.3.11, and MariaDB Connector/C 3.0.7, Connector/ODBC 3.0.7 and Connector/Node.js 2.0.1 now available

Planet MySQL - Tue, 11/20/2018 - 06:43

The MariaDB Foundation is pleased to announce the availability of MariaDB 10.3.11, the latest stable release in the MariaDB 10.3 series, as well as MariaDB Connector/C 3.0.7 and MariaDB Connector/ODBC 3.0.7, both stable releases, and MariaDB Connector/Node.js 2.0.1, the first beta release of the new 100% JavaScript non-blocking MariaDB client for Node.js, compatible with Node.js […]

The post MariaDB 10.3.11, and MariaDB Connector/C 3.0.7, Connector/ODBC 3.0.7 and Connector/Node.js 2.0.1 now available appeared first on MariaDB.org.


Push and ye shall receive

CSS-Tricks - Tue, 11/20/2018 - 06:42

Sometimes the seesaw of web tech is fascinating. Service workers have arrived, and beyond offline networking (read Jeremy's book) which is possibly their best feature, they can enable push notifications via the Push API.

I totally get the push (pun intended) to make that happen. There is an omnipresent sentiment that we want the web to win, as there should be in this industry. Losing on the web means losing to native apps on all the different platforms out there. Native apps aren't evil or anything — they are merely competitive and exclusionary in a way the web isn't. Making the web a viable platform for any type of "app" is a win for us and a win for humans.

One of the things native apps do well is push notifications, which gives them a competitive advantage. Some developers choose native for stuff like that. But now that we actually have them on the web, there is pushback from the community and even from the browsers themselves. Firefox supports them, but then rolled out a user setting to entirely block them.

We're seeing articles like Moses Kim's Don't @ me:

Push notifications are a classic example of good UX intentions gone bad because we know no bounds.

Very few people are singing the praises of push notifications. And yet! Jeremy Keith wrote up a great experiment by Sebastiaan Andeweg. Rather than an obnoxious and intrusive push notification...

Here’s what Sebastiaan wanted to investigate: what if that last step weren’t so intrusive? Here’s the alternate flow he wanted to test:

  1. A website prompts the user for permission to send push notifications.
  2. The user grants permission.
  3. A whole lot of complicated stuff happens behind the scenes.
  4. Next time the website publishes something relevant, it fires a push message containing the details of the new URL.
  5. The user’s service worker receives the push message (even if the site isn’t open).
  6. The service worker fetches the contents of the URL provided in the push message and caches the page. Silently.

It worked.

Imagine a PWA podcast app that works offline and silently receives and caches new podcasts. Sweet. Now we need a permissions model that allows for silent notifications.
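To make the idea concrete, here is a minimal sketch (not from Sebastiaan's experiment) of what a silent-caching push handler could look like in a service worker. The cache name and the payload shape ({ url }) are assumptions, and today's browsers may still insist on a visible notification for a push event that doesn't show one:

self.addEventListener('push', (event) => {
  // Assume the push message carries JSON like { "url": "/articles/new-post/" }
  const { url } = event.data.json();

  event.waitUntil(
    caches.open('content-cache').then((cache) =>
      // Fetch the new page and cache it silently, without showing a notification
      fetch(url).then((response) => cache.put(url, response))
    )
  );
});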

The post Push and ye shall receive appeared first on CSS-Tricks.


Interview with Lorna Mitchell - Voices of the ElePHPant

Planet PHP - Tue, 11/20/2018 - 06:00

@lornajane


The post Interview with Lorna Mitchell appeared first on Voices of the ElePHPant.



Review of TRIAD: Creating Synergies Between Memory, Disk and Log in Log Structured Key-Value Stores

Planet MySQL - Mon, 11/19/2018 - 12:32
This is a review of TRIAD, which was published in USENIX ATC 2017. It explains how to reduce write amplification for RocksDB leveled compaction, although the ideas are useful for many LSM implementations. I share a review here because the paper has good ideas. It isn't easy to keep up with all of the LSM research, even when limiting the search to papers that reference RocksDB, and I didn't notice this paper until recently.

TRIAD reduces write amplification for an LSM with leveled compaction and with a variety of workloads gets up to 193% more throughput, up to 4X less write amplification and spends up to 77% less time doing compaction and flush. Per the RUM Conjecture improvements usually come at a cost and the cost in this case is more cache amplification (more memory overhead/key) and possibly more read amplification. I assume this is a good tradeoff in many cases.

The paper explains the improvements via 3 components -- TRIAD-MEM, TRIAD-DISK and TRIAD-LOG -- that combine to reduce write amplification.

TRIAD-MEM

TRIAD-MEM reduces write-amp by keeping frequently updated keys (hot keys) in the memtable. It divides the keys in the memtable into two classes: hot and cold. On flush, the cold keys are written into a new L0 SST while the hot keys are copied over to the new memtable. The hot keys must be written again to the new WAL so that the old WAL can be dropped. TRIAD-MEM tries to keep the K hottest keys in the memtable, and there is work in progress to figure out a good value for K without being told by the DBA.
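A rough sketch of that flush path follows. This is illustrative JavaScript pseudocode, not the paper's implementation; writeL0Sst, rewriteWal and hotKeyLimit are hypothetical names:

// Flush a memtable TRIAD-MEM style: cold keys go to a new L0 SST,
// hot keys are carried over into the fresh memtable and re-logged to the new WAL.
function flushMemtable(memtable, hotKeyLimit) {
  const entries = [...memtable.entries()]                  // [key, { value, writeCount }]
    .sort((a, b) => b[1].writeCount - a[1].writeCount);    // hottest keys first

  const hot = entries.slice(0, hotKeyLimit);
  const cold = entries.slice(hotKeyLimit);

  writeL0Sst(cold);     // cold keys are flushed to disk as usual
  rewriteWal(hot);      // hot keys go into the new WAL so the old WAL can be dropped
  return new Map(hot);  // hot keys stay in the new memtable
}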

An extra 4 bytes/key is used in the memtable to track write frequency and identify hot keys. Note that RocksDB already uses 8 bytes/key for metadata. So TRIAD-MEM has a cost in cache-amp, but I don't think that is a big deal.

Assuming the per-level write-amp is 1 from the memtable flush this reduces it to 0 in the best case where all keys are hot.

TRIAD-DISK

TRIAD-DISK reduces write-amp by delaying L0:L1 compaction until there is sufficient overlap between keys to be compacted. TRIAD continues to use an L0:L1 compaction trigger based on the number of files in the L0 but can trigger compaction earlier when there is probably sufficient overlap between the L0 and L1 SSTs.

Overlap is estimated via Hyperloglog (HLL) which requires 4kb/SST and is estimated as the following where file-i is the i-th SST under consideration, UniqueKeys is the estimated number of distinct keys across all of the SSTs and Keys(file-i) is the number of keys in the i-th SST. The paper states that both UniqueKeys and Keys are approximated using HLL. But I assume that per-SST metadata already has an estimate or exact value for the number of keys in the SST. The formula for overlap is:
    UniqueKeys(file-1, file-2, ... file-n) / sum( Keys( file-i))

The benefit from early L0:L1 compaction is less read-amp, because there will be fewer sorted runs to search on a query. The cost from always doing early compaction is more per-level write-amp, which is estimated by size(L1 input) / size(L0 input). TRIAD-DISK provides the benefit with less cost.

In RocksDB today you can manually schedule early compaction by setting the trigger to 1 or 2 files, or you can always schedule it to be less early with a trigger set to 8 or more files. But this setting is static. TRIAD-DISK uses a cost-based approach to do early compaction when it won't hurt the per-level write-amp. This is an interesting idea.
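A hedged sketch of such a cost-based trigger, with estimateUniqueKeysWithHll, maxL0Files and overlapThreshold as hypothetical placeholders rather than the paper's actual code:

// Decide whether to schedule L0:L1 compaction now, based on estimated key overlap.
function shouldCompactL0(l0Ssts, maxL0Files, overlapThreshold) {
  const totalKeys = l0Ssts.reduce((sum, sst) => sum + sst.keyCount, 0);
  const uniqueKeys = estimateUniqueKeysWithHll(l0Ssts);  // merged HLL estimate
  const overlapRatio = uniqueKeys / totalKeys;           // close to 1.0 means little overlap

  // Compact early only when there is enough overlap; otherwise wait for the file-count trigger.
  return overlapRatio <= overlapThreshold || l0Ssts.length >= maxL0Files;
}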

TRIAD-LOG

TRIAD-LOG explains improvements to memtable flush that reduce write-amp. Data in an L0 SST has recently been written to the WAL. So they use the WAL in place of writing the L0 SST. But something extra, an index into the WAL, is written on memtable flush because everything in the L0 must have an index. The WAL in the SST (called the CL-SST for commit log SST) will be deleted when it is compacted into the L1.

There is cache-amp from TRIAD-LOG. Each key in the CL-SST (L0) and maybe in the memtable needs 8 extra bytes -- 4 bytes for CL-SST ID, 4 bytes for the WAL offset.

Assuming the per-level write-amp is one from the memtable flush for cold keys this reduces that to 0.

Reducing write amplification

The total write-amp for an LSM tree with leveled compaction is the sum of:
  • writing the WAL = 1
  • memtable flush = 1
  • L0:L1 compaction ~= size(L1) / size(L0)
  • Ln compaction for n>1 ~= fanout, the per-level growth factor, usually 8 or 10. Note that this paper explains why it is usually a bit less than fanout.
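Restating the list above as one approximate formula (my own summary, not the paper's notation), with f the per-level growth factor and Lmax the deepest level:

WA_{\text{total}} \;\approx\; \underbrace{1}_{\text{WAL}} \;+\; \underbrace{1}_{\text{flush}} \;+\; \underbrace{\tfrac{\text{size}(L_1)}{\text{size}(L_0)}}_{L_0:L_1} \;+\; \sum_{n=2}^{L_{\max}} \underbrace{f}_{L_{n-1}:L_n}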
TRIAD avoids the write-amp from memtable flush thanks to TRIAD-MEM for hot keys and TRIAD-LOG for cold keys. I will wave my hands and suggest that TRIAD-DISK reduces write-amp for L0:L1 from 3 to 1 based on the typical LSM configuration I use. So TRIAD reduces the total write-amp by 1 + 2 = 3.

Reducing total write-amp by 3 is a big deal when the total write-amp for the LSM tree is small, for example <= 10. But that only happens when there are few levels beyond the L1. Assuming you accept my estimate for total write-amp above then per-level write-amp is ~8 for both L1:L2 and L2:L3. The total write-amp for an LSM tree without TRIAD would be 1+1+3+8 = 13 if the max level is L2 and 1+1+3+8+8 = 21 if the max level is L3. And then TRIAD reduces that from 13 to 10 or from 21 to 18.

But my write-amp estimate above is more true for workloads without skew and less true for workloads with skew. Many of the workloads tested in the paper have a large amount of skew. So while I have some questions about the paper I am not claiming they are doing it wrong. What I am claiming is that the benefit from TRIAD is significant when total write-amp is small and less significant otherwise. Whether this matters is workload dependent. It would help to know more about the LSM tree from each benchmark. How many levels were in the LSM tree per benchmark? What is the per-level write-amp with and without TRIAD? Most of this can be observed from compaction statistics provided by RocksDB. The paper has some details on the workloads but that isn't sufficient to answer the questions above.
Questions

The paper documents the memory overhead, but limits the definition of read amplification to IO and measured none. I am interested in IO and CPU and suspect there might be some CPU read-amp from using the commit-log SST in the L0 both for searches and during compaction as logically adjacent data is no longer physically adjacent in the commit-log SST.
Impact of more levels?

Another question is how far down the LSM compaction occurs. For example, if the write working set fits in the L2, should compaction stop at the L2? It might with some values of compaction priority in RocksDB, but it doesn't for all. When the workload has significant write skew, the write working set is likely to fit into one of the smaller levels of the LSM tree.

An interesting variant on this is a workload with N streams of inserts that are each appending (right growing). When N=1 there is an optimization in RocksDB that limits write-amp to 2 (one for WAL, one for SST). I am not aware of optimizations in RocksDB for N>2 but am curious if we could do something better.

Updated: Doctrine and MySQL 8 - An Odd Connection Refused Error

Planet MySQL - Mon, 11/19/2018 - 11:23
I am currently working my way through the many PHP frameworks to see how they get on with MySQL 8. Frameworks that can use the MySQL Improved Extension (mysqli) can take advantage of the SHA256 Caching Authentication method, but those that are PDO based need to use the older MySQL Native Authentication method.

I wanted to check the PDO based frameworks and today I just happened to be wearing the very nice Symfony shirt I received as part of my presentation at Symfony USA.  So I started with a fresh install of Symfony.  All was going well until it came time to get it to work with MySQL 8 through Doctrine.

Doctrine
Symfony uses Doctrine as an ORM (Object Relational Mapper) and DBAL (Database Abstraction Layer) as an intermediary to the database. While I myself am not a big fan of ORMs, Doctrine does manage version migration very nicely. When I tried to tie the framework and the database together I received a stern connection refused error.

So I double checked the database connection parameters, making sure that I could get to where I wanted using the old MySQL shell. Yes, the account to be used is identified with native passwords, and I had spelled the account name correctly. Then I double checked for fat-fingering on my part in the .env file where the connection details are stored. Then I did some searching and found someone else had stumbled onto the answer.

What does not work:
DATABASE_URL=mysql://account:password@127.0.0.1:3306/databasename

What does work:
DATABASE_URL=mysql://account:password@localhost:3306/databasename

So a simple s/127.0.0.1/localhost/ got things going. I double checked the /etc/hosts file to make sure that alias was there (it was).


From then on I was able to create a table with VARCHAR and JSON columns and go about my merry way.

Update: An Oracle MySQL Engineer who works with the PHP connectors informed me that libmysql and all derived clients interpret "localhost" to mean "don't use TCP/IP, but the Unix domain socket". And there was a kind post on the Doctrine mailing list informing me that the problem was upstream from Doctrine. Thanks to all who responded to solve this mystery for me.




5 Ways to Convert React Class Components to Functional Components w/ React Hooks

Planet MySQL - Mon, 11/19/2018 - 09:02

In the latest alpha release of React, a new concept called Hooks was introduced. Hooks were introduced to React to solve many problems, as explained in the introduction to Hooks session; however, they primarily serve as an alternative for classes. With Hooks, we can create functional components that use state and lifecycle methods.


Hooks are relatively new; as a matter of fact, they are still a feature proposal. However, they are available for use at the moment if you'd like to play with them and have a closer look at what they offer. Hooks are currently available in React v16.7.0-alpha.

It's important to note that there are no plans to ditch classes. React Hooks just give us another way to write React. And that's a good thing!

Given that Hooks are still new, many developers are yet to grasp their concepts or understand how to apply them in their existing React applications, or even in creating new React apps. In this post, we'll demonstrate 5 simple ways to convert React class components to functional components using React Hooks.

Class without state or lifecycle methods

Let's start off with a simple React class that has neither state nor lifecycle methods. Let's use a class that simply alerts a name when a user clicks a button:

import React, { Component } from 'react';

class App extends Component {
  alertName = () => {
    alert('John Doe');
  };

  render() {
    return (
      <div>
        <h3> This is a Class Component </h3>
        <button onClick={this.alertName}> Alert </button>
      </div>
    );
  }
}

export default App;

Here we have a usual React class, nothing new and nothing unusual. This class doesn't have state or any lifecycle method in it. It just alerts a name when a button is clicked. The functional equivalent of this class will look like this:

import React from 'react';

function App() {
  const alertName = () => {
    alert(' John Doe ');
  };

  return (
    <div>
      <h3> This is a Functional Component </h3>
      <button onClick={alertName}> Alert </button>
    </div>
  );
}

export default App;

Like the class component we had earlier, there's nothing new or unusual here. We haven't even used Hooks or anything new as of yet. This is because we've only considered an example where we have no need for state or lifecycle. But let's change that now and look at situations where we have a class based component with state and see how to convert it to a functional component using Hooks.

Class with state

Let's consider a situation where we have a global name variable that we can update within the app from a text input field. In React, we handle cases like this by defining the name variable in a state object and calling setState() when we have a new value to update the name variable with:

import React, { Component } from 'react';

class App extends Component {
  state = { name: '' };

  alertName = () => {
    alert(this.state.name);
  };

  handleNameInput = e => {
    this.setState({ name: e.target.value });
  };

  render() {
    return (
      <div>
        <h3> This is a Class Component </h3>
        <input
          type="text"
          onChange={this.handleNameInput}
          value={this.state.name}
          placeholder="Your name"
        />
        <button onClick={this.alertName}> Alert </button>
      </div>
    );
  }
}

export default App;

When a user types a name in the input field and clicks the Alert button, it alerts the name we've defined in state. Once again this is a simple React concept; however, we can convert this entire class into a functional React component using Hooks like this:

import React, { useState } from 'react';

function App() {
  const [name, setName] = useState('John Doe');

  const alertName = () => {
    alert(name);
  };

  const handleNameInput = e => {
    setName(e.target.value);
  };

  return (
    <div>
      <h3> This is a Functional Component </h3>
      <input
        type="text"
        onChange={handleNameInput}
        value={name}
        placeholder="Your name"
      />
      <button onClick={alertName}> Alert </button>
    </div>
  );
}

export default App;

Here, we introduced the useState Hook. It serves as a way of making use of state in React functional components. With the useState() Hook, we've been able to use state in this functional component. It uses a syntax similar to destructuring assignment for arrays. Consider this line:

const [name, setName] = useState("John Doe")

Here, name is the equivalent of this.state.name in a normal class component, while setName is the equivalent of calling this.setState with a new name. The last thing to understand while using the useState() Hook is that it takes an argument that serves as the initial value of the state. Simply put, the useState() argument is the initial value of the state. In our case, we set it to "John Doe" so that the initial value of name in state is John Doe.

This is primarily how to convert a class-based React component with state to a functional component using Hooks. There are many other useful variations, as we'll see in subsequent examples.

Class with multiple state properties

It is one thing to easily convert one state property with useState; however, the same approach doesn't quite apply when you have to deal with multiple state properties. Say, for instance, we had input fields for userName, firstName and lastName; then we would have a class-based component with three state properties like this:

import React, { Component } from 'react'; class App extends Component { state = { userName: '', firstName: '', lastName: '' }; logName = () => { // do whatever with the names ... let's just log them here console.log(this.state.userName); console.log(this.state.firstName); console.log(this.state.lastName); }; handleUserNameInput = e => { this.setState({ userName: e.target.value }); }; handleFirstNameInput = e => { this.setState({ firstName: e.target.value }); }; handleLastNameInput = e => { this.setState({ lastName: e.target.value }); }; render() { return ( <div> <h3> This is a Class Component </h3> <input type="text" onChange={this.handleUserNameInput} value={this.state.userName} placeholder="Your username" /> <input type="text" onChange={this.handleFirstNameInput} value={this.state.firstName} placeholder="Your firstname" /> <input type="text" onChange={this.handleLastNameInput} value={this.state.lastName} placeholder="Your lastname" /> <button className="btn btn-large right" onClick={this.logName}> {' '} Log Names{' '} </button> </div> ); } } export default App;

To convert this class to a functional component with Hooks, we'll have to take a somewhat unconventional route. Using the useState() Hook, the above example can be written as:

import React, { useState } from 'react'; function App() { const [userName, setUsername] = useState(''); const [firstName, setFirstname] = useState(''); const [lastName, setLastname] = useState(''); const logName = () => { // do whatever with the names... let's just log them here console.log(userName); console.log(firstName); console.log(lastName); }; const handleUserNameInput = e => { setUsername(e.target.value); }; const handleFirstNameInput = e => { setFirstname(e.target.value); }; const handleLastNameInput = e => { setLastname(e.target.value); }; return ( <div> <h3> This is a functional Component </h3> <input type="text" onChange={handleUserNameInput} value={userName} placeholder="username..." /> <input type="text" onChange={handleFirstNameInput} value={firstName} placeholder="firstname..." /> <input type="text" onChange={handleLastNameInput} value={lastName} placeholder="lastname..." /> <button className="btn btn-large right" onClick={logName}> {' '} Log Names{' '} </button> </div> ); }; export default App;

This demonstrates how we can convert a class based component with multiple state properties to a functional component using the useState() Hook.

Here's the Codesandbox for this example.

https://codesandbox.io/s/ypjynxx16x

Class with state and componentDidMount

Let's consider a class with only state and componentDidMount. To demonstrate such a class, let's create a scenario where we set an initial state for the three input fields and have them all update to a different set of values after 5 seconds.

To achieve this, we'll have to declare an initial state value for the input fields and implement a componentDidMount() lifecycle method that will run after the initial render to update the state values.

import React, { Component, useEffect } from 'react'; class App extends Component { state = { // initial state userName: 'JD', firstName: 'John', lastName: 'Doe' } componentDidMount() { setInterval(() => { this.setState({ // update state userName: 'MJ', firstName: 'Mary', lastName: 'jane' }); }, 5000); } logName = () => { // do whatever with the names ... let's just log them here console.log(this.state.userName); console.log(this.state.firstName); console.log(this.state.lastName); }; handleUserNameInput = e => { this.setState({ userName: e.target.value }); }; handleFirstNameInput = e => { this.setState({ firstName: e.target.value }); }; handleLastNameInput = e => { this.setState({ lastName: e.target.value }); }; render() { return ( <div> <h3> The text fields will update in 5 seconds </h3> <input type="text" onChange={this.handleUserNameInput} value={this.state.userName} placeholder="Your username" /> <input type="text" onChange={this.handleFirstNameInput} value={this.state.firstName} placeholder="Your firstname" /> <input type="text" onChange={this.handleLastNameInput} value={this.state.lastName} placeholder="Your lastname" /> <button className="btn btn-large right" onClick={this.logName}> {' '} Log Names{' '} </button> </div> ); } } export default App;

When the app runs, the input fields will have the initial values we've defined in the state object. These values will then update to the values we've defined inside the componentDidMount() method after 5 seconds. Now, let's convert this class to a functional component using the React useState and useEffect Hooks.

import React, { useState, useEffect } from 'react';

function App() {
  const [userName, setUsername] = useState('JD');
  const [firstName, setFirstname] = useState('John');
  const [lastName, setLastname] = useState('Doe');

  useEffect(() => {
    setInterval(() => {
      setUsername('MJ');
      setFirstname('Mary');
      setLastname('Jane');
    }, 5000);
  });

  const logName = () => {
    // do whatever with the names ... let's just log them here
    console.log(userName);
    console.log(firstName);
    console.log(lastName);
  };

  const handleUserNameInput = e => {
    setUsername(e.target.value);
  };
  const handleFirstNameInput = e => {
    setFirstname(e.target.value);
  };
  const handleLastNameInput = e => {
    setLastname(e.target.value);
  };

  return (
    <div>
      <h3> The text fields will update in 5 seconds </h3>
      <input type="text" onChange={handleUserNameInput} value={userName} placeholder="Your username" />
      <input type="text" onChange={handleFirstNameInput} value={firstName} placeholder="Your firstname" />
      <input type="text" onChange={handleLastNameInput} value={lastName} placeholder="Your lastname" />
      <button className="btn btn-large right" onClick={logName}>
        {' '}
        Log Names{' '}
      </button>
    </div>
  );
}

export default App;

This component does exactly the same thing as the previous one. The only difference is that instead of using the conventional state object and componentDidMount() lifecycle method as we did in the class component, here we used the useState and useEffect Hooks. Here's a Codesandbox for this example.

https://codesandbox.io/s/jzoz2n97my

Class with state, componentDidMount and componentDidUpdate

Next, let's look at a React class with state and two lifecycle methods. So far you may have noticed that we've mostly been working with the useState Hook. In this example, let's pay more attention to the useEffect Hook.

To best demonstrate how this works, let's update our code to dynamically update the <h3> header of the page. Currently the header says This is a Class Component. Now we'll define a componentDidMount() method to update the header to say Welcome to React Hooks after 3 seconds:

import React, { Component } from 'react'; class App extends Component { state = { header: 'Welcome to React Hooks' } componentDidMount() { const header = document.querySelectorAll('#header')[0]; setTimeout(() => { header.innerHTML = this.state.header; }, 3000); } logName = () => { // do whatever with the names ... }; // { ... } render() { return ( <div> <h3 id="header"> This is a Class Component </h3> <input type="text" onChange={this.handleUserNameInput} value={this.state.userName} placeholder="Your username" /> <input type="text" onChange={this.handleFirstNameInput} value={this.state.firstName} placeholder="Your firstname" /> <input type="text" onChange={this.handleLastNameInput} value={this.state.lastName} placeholder="Your lastname" /> <button className="btn btn-large right" onClick={this.logName}> {' '} Log Names{' '} </button> </div> ); } } export default App;

At this point, when the app runs, it starts with the initial header This is a Class Component and changes to Welcome to React Hooks after 3 seconds. This is the classic componentDidMount() behaviour since it runs after the render function is executed successfully.

What if we want to dynamically update the header from another input field, so that the header gets updated with the new text while we type? To do that, we'll need to also implement the componentDidUpdate() lifecycle method like this:

import React, { Component } from 'react'; class App extends Component { state = { header: 'Welcome to React Hooks' } componentDidMount() { const header = document.querySelectorAll('#header')[0]; setTimeout(() => { header.innerHTML = this.state.header; }, 3000); } componentDidUpdate() { const node = document.querySelectorAll('#header')[0]; node.innerHTML = this.state.header; } logName = () => { // do whatever with the names ... let's just log them here console.log(this.state.username); }; // { ... } handleHeaderInput = e => { this.setState({ header: e.target.value }); }; render() { return ( <div> <h3 id="header"> This is a Class Component </h3> <input type="text" onChange={this.handleUserNameInput} value={this.state.userName} placeholder="Your username" /> <input type="text" onChange={this.handleFirstNameInput} value={this.state.firstName} placeholder="Your firstname" /> <input type="text" onChange={this.handleLastNameInput} value={this.state.lastName} placeholder="Your lastname" /> <button className="btn btn-large right" onClick={this.logName}> {' '} Log Names{' '} </button> <input type="text" onChange={this.handleHeaderInput} value={this.state.header} />{' '} </div> ); } } export default App;

Here, we have state, componentDidMount() and componentDidUpdate(). So far, when you run the app, the header updates to Welcome to React Hooks after 3 seconds as we defined in componentDidMount(). Then when you start typing in the header text input field, the <h3> text will update with the input text as defined in the componentDidUpdate() method. Now let's convert this class to a functional component with the useEffect() Hook.

import React, { useState, useEffect } from 'react'; function App() { const [userName, setUsername] = useState(''); const [firstName, setFirstname] = useState(''); const [lastName, setLastname] = useState(''); const [header, setHeader] = useState('Welcome to React Hooks'); const logName = () => { // do whatever with the names... console.log(userName); }; useEffect(() => { const newheader = document.querySelectorAll('#header')[0]; setTimeout(() => { newheader.innerHTML = header; }, 3000); }); const handleUserNameInput = e => { setUsername(e.target.value); }; const handleFirstNameInput = e => { setFirstname(e.target.value); }; const handleLastNameInput = e => { setLastname(e.target.value); }; const handleHeaderInput = e => { setHeader(e.target.value); }; return ( <div> <h3 id="header"> This is a functional Component </h3> <input type="text" onChange={handleUserNameInput} value={userName} placeholder="username..." /> <input type="text" onChange={handleFirstNameInput} value={firstName} placeholder="firstname..." /> <input type="text" onChange={handleLastNameInput} value={lastName} placeholder="lastname..." /> <button className="btn btn-large right" onClick={logName}> {' '} Log Names{' '} </button> <input type="text" onChange={handleHeaderInput} value={header} /> </div> ); }; export default App;

We've achieved exactly the same functionality using the useEffect() Hook. It's even better, or cleaner as some would say, because here we didn't have to write separate code for componentDidMount() and componentDidUpdate(). With the useEffect() Hook, we are able to achieve both behaviours. This is because, by default, useEffect() runs both after the initial render and after every subsequent update. Check out this example on this CodeSandbox.

https://codesandbox.io/s/ork242q3y

Convert PureComponent to React memo

React PureComponent works in a similar manner to Component. The major difference between them is that React.Component doesn’t implement the shouldComponentUpdate() lifecycle method while React.PureComponent implements it. If your application's render() function renders the same result given the same props and state, you can use React.PureComponent for a performance boost in some cases.

Related Reading: React 16.6: React.memo() for Functional Components Rendering Control

The same thing applies to React.memo(). While the former refers to class-based components, React memo refers to functional components, such that when your function component renders the same result given the same props, you can wrap it in a call to React.memo() to enhance performance. Using PureComponent and React.memo() gives React applications a considerable increase in performance as it reduces the number of render operations in the app.

Here, we'll demonstrate how to convert a PureComponent class to a React memo component. To understand what exactly they both do, first let's simulate a terrible situation where a component renders every 2 seconds whether or not there's a change in value or state. We can quickly create this scenario like this:

import React, { Component } from 'react'; function Unstable(props) { // monitor how many times this component is rendered console.log(' Rendered this component '); return ( <div> <p> {props.value}</p> </div> ); }; class App extends Component { state = { value: 1 }; componentDidMount() { setInterval(() => { this.setState(() => { return { value: 1 }; }); }, 2000); } render() { return ( <div> <Unstable value={this.state.value} /> </div> ); } } export default App;

When you run the app and check the logs, you'll notice that it renders the component every 2 seconds without any change in state or props. As terrible as it is, this is exactly the scenario we wanted to create so we can show you how to fix it with both PureComponent and React.memo().

Most of the time, we only want to re-render a component when there's been a change in state or props. Now that we have experienced this awful situation, let's fix it with PureComponent such that the component only re-renders when there's a change in state or props. We do this by importing PureComponent and extending it like this:

import React, { PureComponent } from 'react'; function Unstable(props) { console.log(' Rendered Unstable component '); return ( <div> <p> {props.value}</p> </div> ); }; class App extends PureComponent { state = { value: 1 }; componentDidMount() { setInterval(() => { this.setState(() => { return { value: 1 }; }); }, 2000); } render() { return ( <div> <Unstable value={this.state.value} /> </div> ); } } export default App;

Now if you run the app again, you only get the initial render. Nothing else happens after that. Why's that? Well, instead of class App extends Component {} we now have class App extends PureComponent {}.

This solves our problem of components re-rendering without regard to the current state. If, however, we change this method:

componentDidMount() {
  setInterval(() => {
    this.setState(() => {
      return { value: 1 };
    });
  }, 2000);
}

To:

componentDidMount() {
  setInterval(() => {
    this.setState(() => {
      return { value: Math.random() };
    });
  }, 2000);
}

The component will re-render each time the value updates to the next random number. So, PureComponent makes it possible to only re-render components when there's been a change in state or props. Now let's see how we can use React.memo() to achieve the same fix. To do this with React memo, simply wrap the component with React.memo() like this:

import React, { Component } from "react"; const Unstable = React.memo(function Unstable (props) { console.log(" Rendered Unstable component "); return <div>{props.val} </div>; }); class App extends Component { state = { val: 1 }; componentDidMount() { setInterval(() => { this.setState({ val: 1 }); }, 2000); } render() { return ( <div> <header className="App-header"> <Unstable val={this.state.val} /> </header> </div> ); } } export default App;

This achieves the same result as PureComponent did. Hence, the component only renders after the initial render and doesn't re-render again until there's a change in state or props. Here's the Codesandbox for this example.

https://codesandbox.io/s/100zmv7ljj

Conclusion

In this post we have demonstrated a few ways to convert an existing class-based component to a functional component using React Hooks. We have also looked at a special case of converting a React PureComponent class to React.memo(). It may be obvious, but I still feel the need to mention that in order to use Hooks in your applications, you'll need to update your React to the supported version:

"react": "^16.7.0-alpha", "react-dom": "^16.7.0-alpha",

React Hooks is still a feature proposal, however, we are hoping that it will be part of the next stable release as it makes it possible for us to eat our cake (use state in function components) and still have it back (retain the simplicity of writing functional components).


Migrating to Amazon Aurora: Design for Flexibility

Planet MySQL - Mon, 11/19/2018 - 08:55

In this Checklist for Success series, we will discuss reducing unknowns when hosting in the cloud and migrating to Amazon Aurora. These tips might also apply to other database as a service (DBaaS) offerings.

Previous blogs in the migrating to Amazon Aurora series:

The whole premise of a database as a service offering is that you do not need to worry about operating the service, you just need to use it. But all DBaaS offerings have limitations as well as strengths. You should not get too comfortable with all the niceties of such services. You need to remain flexible and ultimately design to prevent database failure.

Have a Grasp of Your Cluster Behavior

Disable lab mode. You should not depend on this setting to take advantage of new features. It can break and leave you in a bad position. If you rely on this feature, and designed your application around it, you might find yourself working around the same problem if, for example, you are running the same queries on a non-Aurora deployment. This is not to say that you shouldn’t take advantage of all Aurora features. Lab mode, however, is “lab mode” and should not be enabled on a production environment.

Use a separate parameter group per cluster to keep your configuration changes isolated. In some cases you might have a group of clusters that operate on the same workload, but this should be rare, and sharing a parameter group also prohibits you from making rolling changes against each cluster.

Some might point out that syncing the parameter groups in this situation might be difficult. It really isn’t, and you don’t need any complicated tools to do it. For example, you can use pt-config-diff to regularly inspect the differences between the runtime config on each cluster and identify or resolve differences.
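For example, something along these lines (the hostnames are placeholders for your own cluster endpoints) diffs the runtime variables of two clusters:

pt-config-diff h=aurora-cluster-1 h=aurora-cluster-2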

While it is ideal that your clusters are always up to date, it can be intrusive to let upgrades run on their own, especially if your workload does not have distinct high/low traffic periods. I recommend having more control over the upgrade process, and this excellent community blog post from Renato on how to do just that is worth a read.

Don’t Put All Your Eggs in One Basket

On another note, Aurora can hold up to 64TB of data. Yes, that’s big. It might not be a problem and some of you might even be excited about this potential. But when you think about it, do you really want to store that amount of data in a single basket? What if you need to analyze this data for a particular time period? Is the cost worth it? Surely at some point, you will need to transport that data somewhere.

We’ve seen problems even at sizes less than 2TB. If you need to rebuild an asynchronous replica, for example, it takes a while. You have to be really ahead of capacity planning to ensure that you add new read-replicas when needed. This can be a challenge when you are on the spot. A burst of traffic might already be over before the replica provisioning is complete no matter how fast Aurora replica provisioning is.

Another challenge with datasets that are too big is when you have large tables. Schema changes become increasingly difficult in these situations, especially when such tables are subject to highly concurrent reads and writes. Recall that in the blog Migrating to Amazon Aurora: Optimize for Binary Log Replication we recommend setting binlog_format to ROW to be able to use tools like gh-ost in these types of situations.

High Availability On Your Terms

One limitation with Aurora cluster instances is that there is no easy way of taking a misbehaving read-replica out of rotation. Sure, you can delete the read replica. That leads to transient errors to the application, however, and impacts performance due to the time lag required to replace it to cover the workload.

Similarly, a misbehaving query can easily spoil the whole cluster, even if that query is spread out evenly to the read-replicas. Depending on how quickly you can disable the query, it might result in losing some business in the process. It would be nice if you could blackhole, rewrite or redirect such queries on demand so as to isolate the impact (or even fix it immediately).

Lastly, certain situations require that you restart the cluster. However, doing so could violate your uptime SLA. These situations can occur when you need to apply a non-dynamic cluster parameter, or you need to perform a cluster upgrade.

You can avoid most of these problems by not solely relying on Aurora’s own implementation of high availability. I say this because they are continuously improving this process. For now, however, you can use tools like ProxySQL to redirect traffic both in-cluster and between clusters replicating asynchronously. Percona has existing blog posts on this topic: Leveraging ProxySQL with AWS Aurora to Improve Performance, Or How ProxySQL Out-performs Native Aurora Cluster Endpoints and How to Implement ProxySQL with AWS Aurora.

Meanwhile, we’d like to hear your success stories in migrating to Amazon Aurora in the comments below!

Don’t forget to come by and see us at AWS re:Invent, November 26-30, 2018 in booth 1605! Percona CEO Peter Zaitsev will deliver a keynote on MySQL High Availability & Disaster Recovery, Tuesday, November 27 at 1:45 PM – 2:45 PM in the Bellagio Hotel, Level 1, Gauguin 2


Why can’t we use Functional CSS and regular CSS at the same time?

CSS-Tricks - Mon, 11/19/2018 - 06:48

Harry Nicholls recently wrote all about simplifying styles with functional CSS and you should definitely check it out. In short, functional CSS is another name for atomic CSS or using “helper” or “utility” classes that would just handle padding or margin, background-color or color, for example.

Harry completely adores the use of adding multiple classes like this to an element:

So what I'm trying to advocate here is taking advantage of the work that others have done in building functional CSS libraries. They're built on solid foundations in design, people have spent many hours thinking about how these libraries should be built, and what the most useful classes will be.

And it's not just the classes that are useful, but the fundamental design principles behind Tachyons.

This makes a ton of sense to me. However, Chris notes that he hasn’t heard much about the downsides of a functional/atomic CSS approach:

What happens with big redesigns? Is it about the same, time- and difficulty-wise, or do you spend more time tearing down all those classes? What happens when you need a style that isn't available? Write your own? Or does that ruin the spirit of all this and put you in dangerous territory? How intense can all the class names get? I can think of areas I've styled that have three or more media queries that dramatically re-style an element. Putting all that information in HTML seems like it could get awfully messy. Is consistency harder or easier?

This also makes a ton of sense to me, but here’s the thing: I’m a big fan of both methods and even combine them in the same projects.

Before you get mad, hear me out

At Gusto, the company I work for today, I’ve been trying to design a system that uses both methods because I honestly believe that they can live in harmony with one another. Each solves very different use cases for writing CSS.

Here’s an example: let’s imagine we’re working in a big ol’ React web app and our designer has handed off a page design where a paragraph and a button need more spacing beneath them. Our code looks like this:

<p>Item 1 description goes here</p>
<Button>Checkout item</Button>

This is just the sort of problem for functional CSS to tackle. At Gusto, we would do something like this:

<div class="margin-bottom-20px"> <p>Item 1 description goes here</p> <button>Checkout item</button> </div>

In other words, we use functional classes to make layout adjustments that might be specific to a particular feature that we’re working on. However! That Button component is made up of a regular ol’ CSS file. In btn.scss, we have code like this which is then imported into our btn.jsx component:

.btn {
  padding: 10px 15px;
  margin: 0 15px 10px;
  // rest of the styles go here
}

I think making brand new CSS files for custom components is way easier than trying to make these components out of a ton of classes like margin-*, padding-*, etc. Although, we could be using functional styles in our btn.jsx component instead like this:

const Button = ({ onClick, className, children }) => {
  return (
    <button
      className={`padding-top-10px padding-bottom-10px padding-left-15px padding-right-15px margin-bottom-none margin-right-15px margin-left-15px margin-bottom-10px ${className}`}
      onClick={onClick}
    >
      {children}
    </button>
  );
};

This isn’t a realistic example because we're only dealing with two properties and we’d probably want to be styling this button’s background color, text color, hover states, etc. And, yes, I know these class names are a little convoluted but I think my point still stands even if you combine vertical and horizontal classes together.

So I reckon that we solve the following three issues with functional CSS by writing our custom styles in a separate CSS file for this particular instance:

  1. Readability
  2. Managing property dependencies
  3. Avoiding the painful fact that visual design doesn’t like math

As you can see in the earlier code example, it’s pretty difficult to read and immediately see which classes have been applied to the button. More classes makes it more difficult to scan.

Secondly, a lot of CSS property/value pairs are written in relation to one another. Say, for example, position: relative and position: absolute. In our stylesheets, I want to be able to see these dependencies and I believe it’s harder to do that with functional CSS. CSS often depends on other bits of CSS and it’s important to see those connections with comments or groupings of properties/values.

And, finally, visual design is an issue. A lot of visual design requires imperfect numbers that don’t properly scale. With a functional CSS system, you’ll probably want a system of base 10, or base 8, where each value is based on that scale. But when you’re aligning items together visually, you may need to do so in a way that it won’t align to those values. This is called optical adjustment and it’s because our brains are, well, super weird. What makes sense mathematically often doesn’t visually. So, in this case, we'd need to add more bottom padding to the button to make the text feel like it’s positioned in the center. With a functional CSS approach it’s harder to do stuff like that neatly, at least in my experience.

In those cases where you need to balance readability, dependencies, and optical adjustments, writing regular CSS in a regular old-fashioned stylesheet is still my favorite thing in the world. But functional CSS still solves a ton of other problems very eloquently.

For example, what we’re trying to prevent with functional classes at Gusto is creating tons of stylesheets that do a ton of very specific or custom stuff. Going back to that earlier example with the margin beneath those two elements for a second:

<div className='margin-bottom-20px'>
  <p>Item 1 description goes here</p>
  <Button>Checkout item</Button>
</div>

In the past our teams might have written something like this instead:

<div className='cool-feature-description-wrapper'>
  <p>Item 1 description goes here</p>
  <button>Checkout item</button>
</div>

A new CSS file called cool_feature_description_wrapper.scss would need to be created in our application like so:

.cool-feature-description-wrapper { margin-bottom: 20px; }

I would argue that styles like this make our code harder to understand, harder to read, and encourages diversions from our library of components. By replacing this with a class from our library of functional classes, it’s suddenly much easier to read, and to change in the future. It also solves a custom solution for our particular needs without forking our library of styles.

So, I haven’t read much about balancing both approaches this way, although I assume someone has covered this in depth already. I truly believe that a combination of these two methods is much more useful than trying to solve all problems with a single bag of tricks.

I know, right? Nuanced opinions are the worst.

The post Why can’t we use Functional CSS and regular CSS at the same time? appeared first on CSS-Tricks.


What’s new in Angular: Version 7.1 release candidate arrives

InfoWorld JavaScript - Mon, 11/19/2018 - 03:00

The release candidate of Version 7.1 of Angular, Google’s popular JavaScript framework for building mobile and desktop applications, is now available, with an improvement to the framework’s router.


Design A Highly Available MySQL Clusters With Orchestrator And ProxySQL In GCP — Part 2

Planet MySQL - Sun, 11/18/2018 - 23:28

In Part 1, we explained how we are going to approach the HA setup. Here we will see how to install and configure Orchestrator and ProxySQL, then do the failover testing.

Install and configure MySQL Replication:

We need a MySQL master with 4 read replicas, and the 4th replica will have a replica of its own. We must use GTID replication, because once the master failover is done the remaining replicas will start replicating from the new master. Without GTID this is not possible, though as an alternative Orchestrator provides Pseudo-GTID.

VM Details:
  • Subnet: 10.142.0.0/24
  • OS: Ubuntu 18.04LTS
Installing MySQL on all servers:

wget https://dev.mysql.com/get/mysql-apt-config_0.8.10-1_all.deb
dpkg -i mysql-apt-config_0.8.10-1_all.deb
apt-get update
apt-get install -y mysql-server

Enable GTID & Other settings:

Do the below changes on all the servers on my.cnf file and restart mysql service.
Note: server-id must be unique for all the servers. So use different ids for other servers.

vi /etc/mysql/mysql.conf.d/mysqld.cnf

server-id = 101
gtid_mode = ON
enforce_gtid_consistency = ON
log_slave_updates = ON
binlog-format = ROW
log_bin = /var/log/mysql/mysql-bin.log
master_info_repository = TABLE

service mysql restart

Create a database with sample data:

Run the below queries on mysql-01

mysql> create database sqladmin;
mysql> use sqladmin
mysql> create table test (id int );
mysql> insert into test values(1);

Backup the database:

Run the command on mysql-01

mysqldump -u root -p --databases sqladmin --routines --events --triggers > sqladmin.sql

Create the user for replication:

Run the query on mysql-01

create user 'rep_user'@'10.142.0.%' identified by 'rep_password';
GRANT REPLICATION SLAVE ON *.* TO 'rep_user'@'10.142.0.%';
flush privileges;

Establish the Replication:

Restore the database on the below servers and run the below query.

  1. mysql-ha
  2. replica-01
  3. replica-02
  4. report-01
-- Restore the database
mysql -u root -p < sqladmin.sql

-- Start replication
CHANGE MASTER TO MASTER_HOST='10.142.0.13',
MASTER_USER='rep_user',
MASTER_PASSWORD='rep_password',
MASTER_AUTO_POSITION = 1;

start slave;

-- Check the replication status
show slave status\G

Master_Host: 10.142.0.13
Master_User: rep_user
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000001
Read_Master_Log_Pos: 1381
Relay_Log_File: replica-03-relay-bin.000002
Relay_Log_Pos: 1033
Relay_Master_Log_File: mysql-bin.000001
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Setup Replication for Report Server:

Take the dump of report-01server and restore it on report-ha

mysqldump -u root -p --databases sqladmin --routines --events --triggers > sqladmin.sql

mysql -u root -p < sqladmin.sql

-- from mysql shell
CHANGE MASTER TO MASTER_HOST='10.142.0.21',
MASTER_USER='rep_user',
MASTER_PASSWORD='rep_password',
MASTER_AUTO_POSITION = 1;

start slave;

Enable SEMI-SYNC replication:

To prevent data loss and make sure the pending binlogs reach the failover instance during the failover process, we need to enable semi-sync replication between mysql-01 and mysql-ha.

Install the plugin on both servers.

INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';
INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';

Enable Semi-Sync on Master:

rpl_semi_sync_master_timeout — the master will wait for the acknowledgment for up to this value, given in milliseconds.

SET GLOBAL rpl_semi_sync_master_enabled = 1;
SET GLOBAL rpl_semi_sync_master_timeout = 5000;

Enable Semi-Sync on Slave (replica-03):

SET GLOBAL rpl_semi_sync_slave_enabled = 1;

STOP SLAVE IO_THREAD;
START SLAVE IO_THREAD;

We need to add this parameter in my.cnf as well.

On the master:
[mysqld]
rpl_semi_sync_master_enabled =1
rpl_semi_sync_master_timeout = 5000
On each slave:
[mysqld]
rpl_semi_sync_slave_enabled = 1

Sometimes it will complain that the parameter is invalid; in that case, use the below lines instead.

loose-rpl_semi_sync_master_enabled = 1
loose-rpl_semi_sync_slave_enabled = 1

The replication part is done. Now it's time to play with Orchestrator.

Install Orchestrator:

Orchestrator VM’s IP address: 10.142.0.4

wget https://github.com/github/orchestrator/releases/download/v3.0.13/orchestrator_3.0.13_amd64.deb
dpkg -i orchestrator_3.0.13_amd64.deb

It’s installed on /usr/local/orchestrator

Configuring Orchestrator:

We have a sample conf file in the orchestrator home location. We need to copy that file as the main config file.

cd /usr/local/orchestrator
cp orchestrator-sample.conf.json orchestrator.conf.json

MySQL Backend:

Orchestrator needs a backend database, either SQLite or MySQL. I prefer MySQL. To make sure this will be highly available, we are going to use CloudSQL with failover. But for this PoC I have installed MySQL on the server where I have Orchestrator.

So install MySQL and create a database and user for orchestrator.

apt-get install -y mysql-server

mysql -u root -p

CREATE DATABASE IF NOT EXISTS orchestrator;
CREATE USER 'orchestrator'@'127.0.0.1' IDENTIFIED BY '0rcP@sss';
GRANT ALL PRIVILEGES ON `orchestrator`.* TO 'orchestrator'@'127.0.0.1';

Orchestrator needs to log in to all of your nodes to detect the topology, perform seamless failover, etc. So we need to create a user for Orchestrator on all the servers. Run the below query on mysql-01 and it will replicate to all other slaves.

We are running Orchestrator with autoscaling, so while creating the user use the subnet range for the host.

CREATE USER 'orchestrator'@'10.142.0.%' IDENTIFIED BY '0rcTopology';
GRANT SUPER, PROCESS, REPLICATION SLAVE, RELOAD ON *.* TO 'orchestrator'@'10.142.0.%';
GRANT SELECT ON mysql.slave_master_info TO 'orchestrator'@'10.142.0.%';

Edit the Conf file:

Now we need to make the below changes on /usr/local/orchestrator/orchestrator.conf.json

Orchestrator backend details:

...
"MySQLOrchestratorHost": "127.0.0.1",
"MySQLOrchestratorHost": "127.0.0.1",
"MySQLOrchestratorPort": 3306,
"MySQLOrchestratorDatabase": "orchestrator",
"MySQLOrchestratorUser": "orchestrator",
"MySQLOrchestratorPassword": "0rcP@sss",
...

MySQL Topology User:

"MySQLTopologyUser": "orchestrator",
"MySQLTopologyPassword": "0rcTopology",

Promotion Node filters:

We want to promote replica-03 when mysql-01 goes down. Orchestrator should not promote any other replica, so we need to tell it not to promote these nodes.

"PromotionIgnoreHostnameFilters": ["replica-01","replica-02","report-01","report-ha"],

Other Parameters for failover:

"DetachLostSlavesAfterMasterFailover": true,
"ApplyMySQLPromotionAfterMasterFailover": true,
"MasterFailoverDetachSlaveMasterHost": false,
"MasterFailoverLostInstancesDowntimeMinutes": 0,

Then start the Orchestrator.

service orchestrator start

The Web UI will run on port 3000.

To read about the exact meaning of all the parameters, see the link below.

github/orchestrator

Add the Topology to Orchestrator:

Open the web UI. In Clusters, select Discovery. In the IP address field provide the mysql-01 IP address and click Submit. You'll get a notification that it's detected.

To view your topology, click on Clusters -> mysql-01:3306
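
Discovery can also be done from the command line on the Orchestrator host, which is handy for scripting. A sketch, assuming the binary picks up orchestrator.conf.json from its home directory:

cd /usr/local/orchestrator
./orchestrator -c discover -i 10.142.0.13:3306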

ProxySQL Setup:

Now we can move on to the ProxySQL setup. Let's install and configure it.

Install:

wget https://github.com/sysown/proxysql/releases/download/v1.4.12/proxysql_1.4.12-ubuntu16_amd64.deb
dpkg -i proxysql_1.4.12-ubuntu16_amd64.deb
service proxysql start

Connect to ProxySQL:

mysql -h 127.0.0.1 -uadmin -p -P6032 --prompt='ProxySQL> '
Enter Password: admin -- This is the default password; you can change it.

Add MySQL servers to ProxySQL:

INSERT INTO mysql_servers(hostgroup_id, hostname, port) VALUES (10, '10.142.0.13', 3306);
INSERT INTO mysql_servers(hostgroup_id, hostname, port) VALUES (20, '10.142.0.16', 3306);
INSERT INTO mysql_servers(hostgroup_id, hostname, port) VALUES (20, '10.142.0.17', 3306);
INSERT INTO mysql_servers(hostgroup_id, hostname, port) VALUES (20, '10.142.0.20', 3306);

LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;

Create ProxySQL Monitor User:

ProxySQL needs a user to check the read_only flag on all the MySQL servers. So we need to create this user on mysql-01; it'll then replicate to all the servers.

Create user 'monitor'@'10.142.0.%' identified by 'moniP@ss';
Grant REPLICATION CLIENT on *.* to 'monitor'@'10.142.0.%';
Flush privileges;

Update ProxySQL with the monitor user's credentials.

UPDATE global_variables SET variable_value='monitor' WHERE variable_name='mysql-monitor_username';
UPDATE global_variables SET variable_value='moniP@ss' WHERE variable_name='mysql-monitor_password';

LOAD MYSQL VARIABLES TO RUNTIME;
SAVE MYSQL VARIABLES TO DISK;

Add Read/Write host groups:

For us HostGroup ID 10 is writer and 20 is reader.

INSERT INTO mysql_replication_hostgroups VALUES (10,20,'mysql-01');
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;
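
As a side note, Part 1 mentions that ProxySQL also splits read/write traffic, but the query rules aren't shown in this walkthrough. A minimal sketch of what they could look like (rule IDs and regex patterns are my own assumptions, not the actual production rules):

INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply)
VALUES (1, 1, '^SELECT.*FOR UPDATE$', 10, 1),
       (2, 1, '^SELECT', 20, 1);
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;

With rules like these, SELECT ... FOR UPDATE stays on the writer hostgroup (10), other SELECTs go to the readers (20), and everything else follows the user's default_hostgroup.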

Once the replication hostgroup row is added, ProxySQL will continually check the read_only flag on all the servers, and those checks will be logged into monitor.mysql_server_read_only_log.

mysql> select hostname, success_time_us, read_only from monitor.mysql_server_read_only_log ORDER BY time_start_us DESC limit 10;
+-------------+-----------------+-----------+
| hostname    | success_time_us | read_only |
+-------------+-----------------+-----------+
| 10.142.0.20 |             644 |         1 |
| 10.142.0.17 |             596 |         1 |
| 10.142.0.13 |             468 |         0 |
| 10.142.0.16 |             470 |         1 |
| 10.142.0.20 |             474 |         1 |
| 10.142.0.17 |             486 |         1 |
| 10.142.0.16 |             569 |         1 |
| 10.142.0.13 |             676 |         0 |
| 10.142.0.17 |             463 |         1 |
| 10.142.0.13 |             473 |         0 |
+-------------+-----------------+-----------+

MySQL Users for benchmark test:

Create a user for the sysbench test on mysql-01. It'll replicate the user to all the nodes.

Create user 'sysdba'@'10.142.0.%' identified by 'DB@dmin';
Grant all privileges on *.* to 'sysdba'@'10.142.0.%';
Flush privileges;

But since we are connecting to the DB via ProxySQL, we need to add this user to ProxySQL as well. And this user should connect to the writer hostgroup (hostgroup ID 10).

INSERT INTO mysql_users(username, password, default_hostgroup) VALUES ('sysdba', 'DB@dmin', 10);

LOAD MYSQL USERS TO RUNTIME;
SAVE MYSQL USERS TO DISK;

Sysbench for HA test:

We are going to use sysbench. To install sysbench, run the below command.

sudo apt-get install sysbench

Create a database for the benchmark on mysql-01.

create database sbtest;

Prepare the sysbench database:

Before running the sysbench we need to create tables. Sysbench will do that for us.

sysbench --test=/usr/share/sysbench/oltp_read_write.lua \
--mysql-host=127.0.0.1 \
--mysql-port=6033 \
--mysql-user=sysdba \
--mysql-password=DB@dmin \
prepare

WARNING: the --test option is deprecated. You can pass a script name or path on the command line without any options.
sysbench 1.0.15 (using bundled LuaJIT 2.1.0-beta2)

Creating table 'sbtest1'...
Inserting 10000 records into 'sbtest1'
Creating a secondary index on 'sbtest1'...

Test HA for mysql-01

Now run the read/write workload for 1 minute, and about 10 seconds in, stop the mysql service on mysql-01.
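
To simulate the crash while the run below is in progress, stopping the service on mysql-01 is enough; on a stock Ubuntu install that's simply (service name assumed):

sudo service mysql stop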

sysbench --test=/usr/share/sysbench/oltp_read_write.lua \
--time=60 \
--mysql-host=127.0.0.1 \
--mysql-port=6033 \
--mysql-user=sysdba \
--mysql-password=DB@dmin \
--report-interval=1 \
run

[ 9s ] thds: 1 tps: 89.01 qps: 1780.18 (r/w/o: 1246.13/356.04/178.02) lat (ms,95%): 12.98 err/s: 0.00 reconn/s: 0.00
[ 10s ] thds: 1 tps: 17.00 qps: 320.96 (r/w/o: 223.98/63.99/33.00) lat (ms,95%): 13.70 err/s: 0.00 reconn/s: 0.00
[ 11s ] thds: 1 tps: 0.00 qps: 0.00 (r/w/o: 0.00/0.00/0.00) lat (ms,95%): 0.00 err/s: 0.00 reconn/s: 0.00
[ 12s ] thds: 1 tps: 0.00 qps: 0.00 (r/w/o: 0.00/0.00/0.00) lat (ms,95%): 0.00 err/s: 0.00 reconn/s: 0.00
[ 13s ] thds: 1 tps: 0.00 qps: 0.00 (r/w/o: 0.00/0.00/0.00) lat (ms,95%): 0.00 err/s: 0.00 reconn/s: 0.00
[ 14s ] thds: 1 tps: 0.00 qps: 0.00 (r/w/o: 0.00/0.00/0.00) lat (ms,95%): 0.00 err/s: 0.00 reconn/s: 0.00
[ 15s ] thds: 1 tps: 0.00 qps: 0.00 (r/w/o: 0.00/0.00/0.00) lat (ms,95%): 0.00 err/s: 0.00 reconn/s: 1.00
[ 16s ] thds: 1 tps: 0.00 qps: 0.00 (r/w/o: 0.00/0.00/0.00) lat (ms,95%): 0.00 err/s: 0.00 reconn/s: 0.00
[ 17s ] thds: 1 tps: 0.00 qps: 0.00 (r/w/o: 0.00/0.00/0.00) lat (ms,95%): 0.00 err/s: 0.00 reconn/s: 0.00
[ 18s ] thds: 1 tps: 0.00 qps: 0.00 (r/w/o: 0.00/0.00/0.00) lat (ms,95%): 0.00 err/s: 0.00 reconn/s: 0.00
[ 19s ] thds: 1 tps: 0.00 qps: 0.00 (r/w/o: 0.00/0.00/0.00) lat (ms,95%): 0.00 err/s: 0.00 reconn/s: 0.00
[ 20s ] thds: 1 tps: 52.00 qps: 1058.96 (r/w/o: 741.97/211.99/105.00) lat (ms,95%): 11.45 err/s: 0.00 reconn/s: 0.00
[ 21s ] thds: 1 tps: 95.00 qps: 1888.91 (r/w/o: 1322.94/375.98/189.99) lat (ms,95%): 11.45 err/s: 0.00 reconn/s: 0.00
[ 22s ] thds: 1 tps: 95.01 qps: 1900.11 (r/w/o: 1330.08/380.02/190.01) lat (ms,95%): 11.87 err/s: 0.00 reconn/s: 0.00

Within about 9 seconds writes resumed. ProxySQL detected that the read_only flag had changed on mysql-ha and immediately moved it to hostgroup ID 10.

mysql> select hostgroup,srv_host,status from stats_mysql_connection_pool;
+-----------+-------------+--------+
| hostgroup | srv_host    | status |
+-----------+-------------+--------+
|        10 | 10.142.0.20 | ONLINE |
|        20 | 10.142.0.16 | ONLINE |
|        20 | 10.142.0.17 | ONLINE |
|        20 | 10.142.0.20 | ONLINE |
|        20 | 10.142.0.13 | ONLINE |
+-----------+-------------+--------+

HA for Report-01:

The main master HA part is done. Now we can work on report-01 HA. For this we need to use a VIP (Alias IP). Your application will talk to that VIP.

I'm going to add the IP 10.142.0.142 to the report-01 node.
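
For reference, the initial attach is the same gcloud call that the failover hook below uses, just pointed at report-01 (zone as in the rest of this setup):

gcloud compute instances network-interfaces update report-01 \
  --zone us-east1-b \
  --aliases "10.142.0.142/32"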

VIP for report-01

During report-01 downtime, Orchestrator will trigger a hook to remove the alias IP from the failed node and attach that IP to the failover node. In our case, the VIP will switch from report-01 to report-ha.

Create the hook in /opt/report-hook.sh

#!/bin/bash
echo "Removing VIP"
gcloud compute instances network-interfaces update report-01 \
  --zone us-east1-b \
  --aliases ""
echo "Done"

echo "Attaching IP"
gcloud compute instances network-interfaces update report-ha \
  --zone us-east1-b \
  --aliases "10.142.0.142/32"
echo "Done"
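
Since Orchestrator executes this hook directly, it must be executable by the user running the orchestrator service (and that user needs working gcloud credentials):

chmod +x /opt/report-hook.sh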

Add this hook to Orchestrator’s conf file under PostIntermediateMasterFailoverProcesses

"PostIntermediateMasterFailoverProcesses": [
"/opt/report-hook.sh >> /tmp/recovery.log",
"echo 'Recovered from {failureType} on {failureCluster}. Failed: {failedHost}:{failedPort}; Successor: {successorHost}:{successorPort}' >> /tmp/recovery.log"
],
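
For completeness: in this design the main master failover is handled by ProxySQL's read_only detection rather than a VIP, but if a hook were wanted for that event too, it would go under the analogous PostMasterFailoverProcesses list. A sketch reusing the same placeholders Orchestrator provides:

"PostMasterFailoverProcesses": [
  "echo 'Master failover. Failed: {failedHost}:{failedPort}; Successor: {successorHost}:{successorPort}' >> /tmp/recovery.log"
],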

Now test the failover.

sysbench \
--test=/usr/share/sysbench/oltp_read_write.lua \
--time=60 \
--mysql-host=10.142.0.142 \
--mysql-user=admin \
--mysql-password=admin \
--report-interval=1 \
--mysql-ignore-errors=all \
run

[ 3s ] thds: 1 tps: 71.00 qps: 1439.08 (r/w/o: 1008.05/288.02/143.01) lat (ms,95%): 15.27 err/s: 0.00 reconn/s: 0.00
[ 4s ] thds: 1 tps: 79.00 qps: 1580.00 (r/w/o: 1106.00/316.00/158.00) lat (ms,95%): 14.73 err/s: 0.00 reconn/s: 0.00
[ 5s ] thds: 1 tps: 16.00 qps: 309.00 (r/w/o: 217.00/60.00/32.00) lat (ms,95%): 15.00 err/s: 0.00 reconn/s: 0.00
[ 6s ] thds: 1 tps: 0.00 qps: 0.00 (r/w/o: 0.00/0.00/0.00) lat (ms,95%): 0.00 err/s: 0.00 reconn/s: 0.00
[ 7s ] thds: 1 tps: 0.00 qps: 0.00 (r/w/o: 0.00/0.00/0.00) lat (ms,95%): 0.00 err/s: 0.00 reconn/s: 0.00
........
........
[ 30s ] thds: 1 tps: 0.00 qps: 0.00 (r/w/o: 0.00/0.00/0.00) lat (ms,95%): 0.00 err/s: 0.00 reconn/s: 0.00
[ 31s ] thds: 1 tps: 0.00 qps: 0.00 (r/w/o: 0.00/0.00/0.00) lat (ms,95%): 0.00 err/s: 0.00 reconn/s: 0.00
[ 32s ] thds: 1 tps: 0.00 qps: 0.00 (r/w/o: 0.00/0.00/0.00) lat (ms,95%): 0.00 err/s: 0.00 reconn/s: 0.00
[ 33s ] thds: 1 tps: 70.00 qps: 1406.93 (r/w/o: 985.95/279.99/140.99) lat (ms,95%): 12.98 err/s: 0.00 reconn/s: 1.00
[ 34s ] thds: 1 tps: 84.00 qps: 1685.98 (r/w/o: 1181.99/336.00/168.00) lat (ms,95%): 13.46 err/s: 0.00 reconn/s: 0.00
[ 35s ] thds: 1 tps: 82.01 qps: 1646.19 (r/w/o: 1150.13/332.04/164.02) lat (ms,95%): 13.22 err/s: 0.00 reconn/s: 0.00

This time it took about 25 seconds. I have reproduced this many times and got different values each time, but the average is around 40 seconds.

report-ha got the VIP

Conclusion:

Finally, we achieved what we set out to do. This solution still has a few bugs, but it is a production-ready HA setup. In the next part I'll explain the bugs in this solution and possible workarounds for them.

Design A Highly Available MySQL Clusters With Orchestrator And ProxySQL In GCP — Part 2 was originally published in Searce Engineering on Medium.


Design A Highly Available MySQL Clusters With Orchestrator And ProxySQL In GCP — Part 1

Planet MySQL - Sun, 11/18/2018 - 23:26

Recently we migrated one of our customers' infrastructure to GCP, and post-migration we published some adventures with ProxySQL, which we implemented for them.

  1. Reduce MySQL Memory Utilization With ProxySQL Multiplexing
  2. How max_prepared_stmt_count can bring down production

Now we are going to implement an HA solution with custom filters for failover. We have done a PoC, and this blog covers the PoC configuration. Again, the whole setup has been implemented in GCP; you can follow the same steps for AWS and Azure.

Credits: A big thanks to Shlomi (developer of Orchestrator), who helped a lot with setting up Orchestrator, and to René Cannaò (author of ProxySQL), who cleared our doubts while merging the Orchestrator setup with ProxySQL.

Background of the DB setup:
  • We are using MySQL 5.6.
  • GTID Replication Enabled.
  • ProxySQL is configured on top of MySQL Layer.
  • ProxySQL implemented to split read/write workloads.
  • One Master and 4 slaves (and one more slave for an existing slave)
  • Slave 1,2 will handle all the read workloads and data science/analytic queries.
  • The 4th replica has a separate replica of its own. This server is used for internal applications and other services, but it needs 90% of the production tables with real-time data. Writes are enabled on this replica, but the writable tables don't exist on production, so replication never gets affected by these writes.
HA Requirements:
  • Slaves 1 and 2 split the read load, so we still have availability even if one node goes down. As of now, we didn't set up any additional HA for the read-group servers.
  • Slave 4 has additional writable tables, and internal applications are using this. So HA is mandatory for this.
  • Finally, the master node. Yeah, of course, it should be in HA; in other words, auto failover must be in place.
  • During the main master failover, there should not be any data loss between the failed node and the new master.
Our approach for HA:
  1. For the read group, we already have 2 instances, so it's not a big problem for now.
  2. Slave 4 already has a replica, so we'll fail over automatically to that replica. We can use a virtual IP and swap that IP to the new master during the failover.
  3. Still, we have one more replica (Replica 3), so we can use this as the failover target for the master.
Overall Replication Topology

The Risks/Restrictions in Failover:

We can achieve the above solution with some HA software for MySQL. But we have some restrictions.

  • If the master goes down, the HA tool should fail over to Replica 3. It shouldn't promote any other node.
  • Once the master failover has been performed then all other slaves will start replicating from the newly promoted Master.
  • The report server is an intermediate master. So if it goes down, its responsibilities move to its own replica: the replica of the report server will continue replicating from some node (the master or any slave), and the report application will talk to that replica. In simple words, the replica will be promoted like a master.
  • I have illustrated various failover scenarios in the below image.
Top Layer - Reader (Blue)
Middle Layer - Writer (Yellow)
Low Layer - Report (Pink)

Problems with Other HA tools:
  • They may promote any replica as the master. In our case the finance DB is also a replica of the production DB, so there's a good chance the finance DB would become the master for production.
  • If there's a network outage between the HA tool and the master node while connectivity between the app servers and the slave nodes is fine, the tool will still consider the master to be down and start promoting a slave.
  • There is no guarantee that the HA tool itself is not a single point of failure.
  • During the planned maintenance, graceful master failover (manual failover) is not possible.
The Orchestrator:

To achieve auto failover with the filters (don't promote the report server, and prefer Replica 3 as the new master), we decided to use Orchestrator. A couple of impressive Orchestrator features made us try it.

  • If the master looks down, Orchestrator will ask all the slave nodes whether the master is really down. If the slaves agree that it's down, the failover will happen.
  • We can define which node can become the new master.
  • Graceful failover helps us during maintenance windows, so we can perform maintenance on all the nodes without breaking replication or causing major downtime.
  • We can set up HA for the Orchestrator tool itself, so the Orchestrator service will not be a single point of failure.
  • Once the failover has been done, Orchestrator will repoint the slaves to replicate from the new master.
  • Web UI is also available.
  • HOOKS are there to perform/run/automate any scripts during the detection of failure/pre-failover/post-failover.
  • You can read more about Orchestrator here.
ProxySQL — An alternate for Virtual IP:

After the promotion has been done, Orchestrator will set read_only=OFF on the promoted replica. In general, a VIP is used here: once the failover has been performed, a hook or script switches the VIP from the dead node to the new master. But in many cases this is not a really quick task.

Here ProxySQL will help us and act as an alternative to a VIP. How does this magic happen? In ProxySQL there is a table called mysql_replication_hostgroups, where we initially define which hostgroup IDs are the current reader and writer groups. Then, every few seconds, ProxySQL checks the read_only flag on every backend. Whenever it sees this flag change while the writer node is down, it understands that a failover has been performed and that a reader has become the master, so it moves that server into the writer hostgroup. All connections therefore keep going to the same writer hostgroup ID.
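
The polling interval for this read_only check is a monitor variable on the ProxySQL admin interface; for example (the value below is illustrative, not the one used in this setup):

SELECT * FROM global_variables WHERE variable_name LIKE 'mysql-monitor_read_only%';
UPDATE global_variables SET variable_value='1500' WHERE variable_name='mysql-monitor_read_only_interval';
LOAD MYSQL VARIABLES TO RUNTIME;
SAVE MYSQL VARIABLES TO DISK;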

Semi-Sync Replication:

This is a great feature to prevent data loss during the failover. With semi-sync replication the master waits for an acknowledgement from at least one slave before confirming the commit to the client. So we enabled semi-sync between the master and Replica 3.

Alias IP in GCP:

This is the point where we got stuck for some time, because for the report server we are not using ProxySQL. So once the failover is done, there is no way to tell the application that the replica of the report server is the new master. The report server and its replica are in the same subnet but in different zones.

So we decided to use a VIP for this. In GCP we can use an alias IP address, and we can swap this IP address to any node available in the same subnet, so we don't need to worry about the zone. (AWS and Azure don't provide a subnet across multiple zones, so there your report server and its replica must be in the same subnet.)

Final Solution:
  1. We run the ProxySQL servers in a managed instance group; with autoscaling, at least 2 instances must be running at any point in time.
  2. We have also deployed Orchestrator in a managed instance group and keep 3 nodes running at any time.
  3. Orchestrator needs a MySQL backend to work, but it isn't demanding about this database, so we used a tiny Cloud SQL instance with failover.
The MySQL HA Solution with Orchestrator and ProxySQL

In the next part, we explain how to configure this setup in GCP. So stay tuned and hit the below link.

Design A Highly Available MySQL Clusters With Orchestrator And ProxySQL In GCP - Part 2

Design A Highly Available MySQL Clusters With Orchestrator And ProxySQL In GCP — Part 1 was originally published in Searce Engineering on Medium.

