
Rethinking Unit Test Assertions

Echo JS - Wed, 10/10/2018 - 03:27
Categories: Web Technologies

How to best use Sinon with Chai

Echo JS - Wed, 10/10/2018 - 03:27
Categories: Web Technologies

Intro to Vuex and Accessing State

Echo JS - Wed, 10/10/2018 - 03:27
Categories: Web Technologies


Planet MySQL - Wed, 10/10/2018 - 01:26

The FOSDEM organization has just confirmed that again this year the ecosystem of your favorite database will have its Devroom!

More info to come soon, but save the date: 2 & 3 February 2019 in Brussels!

It seems the MySQL & Friends Devroom (MariaDB, Percona, Oracle, and all tools in the ecosystem) will be held on Saturday (to be confirmed).

Stay tuned!

Categories: Web Technologies

Globalizing Player Accounts at Riot Games While Maintaining Availability

Planet MySQL - Tue, 10/09/2018 - 10:25

The Player Accounts team at Riot Games needed to consolidate the player account infrastructure and provide a single, global accounts system for the League of Legends player base. To do this, they migrated hundreds of millions of player accounts into a consolidated, globally replicated composite database cluster in AWS. This provided higher fault tolerance and lower latency access to account data. In this talk by Tyler Turk (Infrastructure Engineer, Riot Games), we discuss this effort to migrate eight disparate database clusters into AWS as a single composite database cluster replicated in four different AWS regions, provisioned with terraform, and managed and operated by Ansible.

Join us Tuesday, Nov 27, 1:45 – 2:45 PM

Categories: Web Technologies

MySQL Books - 2018 has been a very good year

Planet MySQL - Tue, 10/09/2018 - 09:53
Someone once told me you can tell how healthy a software project is by the number of new books each year. For the past few years the MySQL community has been blessed with one or two books each year. Part of that was the major shift that came with MySQL 8, but part of it was that the vast majority of the changes were fairly minor and did not need detailed explanations. But this year we have been blessed with four new books. Four very good books on new facets of MySQL.

Introducing the MySQL 8 Document Store is the latest book from Dr. Charles Bell on MySQL. If you have read any of Dr. Chuck's other books, you know they are well written with lots of examples. This is more than a simple introduction, with many intermediate and advanced concepts covered in detail.

MySQL & JSON - A Practical Programming Guide by yours truly is a guide for developers who want to get the most out of the JSON data type introduced in MySQL 5.7 and improved in MySQL 8. While I love MySQL's documentation, I wanted to provide detailed examples on how to use the various functions and features of the JSON data type.

MySQL and JSON A Practical Programming Guide
Jesper Wisborg Krogh is a busy man at work and somehow found the time to author and co-author two books.  The newest is MySQL Connector/Python Revealed: SQL and NoSQL Data Storage Using MySQL for Python Programmers which I have only just received.  If you are a Python Programmer (or want to be) then you need to order your copy today.  A few chapters in and I am already finding it a great, informative read.
MySQL Connector/Python Revealed
Jesper and Mikiya Okuno produced a definitive guide to the MySQL NDB cluster with Pro MySQL NDB Cluster.  NDB cluster is often confusing and just different enough from 'regular' MySQL to make you want to have a clear, concise guidebook by your side.  And this is that book.

Pro MySQL NDB Cluster
Recommendation

Each of these books has its own primary MySQL niche (Docstore, JSON, Python & Docstore, and NDB Cluster) but also has greater depth, in that they cover material you either will not find in the documentation or would have to distill for yourself. They not only provide valuable tools for learning their primary facets of the technology but also do double service as reference guides.

Categories: Web Technologies

Build a Custom Toggle Switch with React

Planet MySQL - Tue, 10/09/2018 - 08:31

Building web applications usually involves making provisions for user interactions. One of the major ways of making provision for user interactions is through forms. Different form components exist for taking different kinds of input from the user. For example, a password component takes sensitive information from a user and masks the information so that it is not visible.

Most times, the information you need to get from a user is boolean-like - for example: yes or no, true or false, enable or disable, on or off, etc. Traditionally, the checkbox form component is used for getting these kinds of input. However, in modern interface designs, toggle switches are commonly used as checkbox replacements, although there are some accessibility concerns.

In this tutorial, we will see how to build a custom toggle switch component with React. By the end of the tutorial, we will have built a very simple demo React app that uses our custom toggle switch component.

Here is a demo of the final application we will be building in this tutorial.


Before getting started, you need to ensure that you have Node already installed on your machine. I also recommend installing the Yarn package manager, since we will be using it for package management instead of npm, which ships with Node. You can follow this Yarn installation guide to install Yarn on your machine.

We will create the boilerplate code for our React app using the create-react-app command-line package. You also need to ensure that it is installed globally on your machine. If you are using npm >= 5.2 then you may not need to install create-react-app as a global dependency since we can use the npx command.

Finally, this tutorial assumes that you are already familiar with React. If that is not the case, you can check the React Documentation to learn more about React.

Getting Started Create new Application

Start a new React application using the following command. You can name the application however you desire.

create-react-app react-toggle-switch

npm >= 5.2

If you are using npm version 5.2 or higher, it ships with an additional npx binary. Using the npx binary, you don't need to install create-react-app globally on your machine. You can start a new React application with this simple command:

npx create-react-app react-toggle-switch

Install Dependencies

Next, we will install the dependencies we need for our application. Run the following command to install the required dependencies.

yarn add lodash bootstrap prop-types classnames
yarn add -D npm-run-all node-sass-chokidar

We have installed node-sass-chokidar as a development dependency for our application to enable us to use Sass. For more information about this, see this guide.

Modify the npm Scripts

Edit the package.json file and modify the scripts section to look like the following:

"scripts": { "start:js": "react-scripts start", "build:js": "react-scripts build", "start": "npm-run-all -p watch:css start:js", "build": "npm-run-all build:css build:js", "test": "react-scripts test --env=jsdom", "eject": "react-scripts eject", "build:css": "node-sass-chokidar --include-path ./src --include-path ./node_modules src/ -o src/", "watch:css": "npm run build:css && node-sass-chokidar --include-path ./src --include-path ./node_modules src/ -o src/ --watch --recursive" } Include Bootstrap CSS

We installed the bootstrap package as a dependency for our application since we will be needing some default styling. To include Bootstrap in the application, edit the src/index.js file and add the following line before every other import statement.

import "bootstrap/dist/css/bootstrap.min.css"; Start the Application

Start the application by running the following command with yarn:

yarn start

The application is now started and development can begin. Notice that a browser tab has been opened for you with live reloading functionality to keep in sync with changes in the application as you develop.

At this point, the application view should look like the following screenshot:

The ToggleSwitch Component

Create a new directory named components inside the src directory of your project. Next, create another new directory named ToggleSwitch inside the components directory. Next, create two new files inside src/components/ToggleSwitch, namely: index.js and index.scss.

Add the following content into the src/components/ToggleSwitch/index.js file.

/* src/components/ToggleSwitch/index.js */

import PropTypes from 'prop-types';
import classnames from 'classnames';
import isString from 'lodash/isString';
import React, { Component } from 'react';
import isBoolean from 'lodash/isBoolean';
import isFunction from 'lodash/isFunction';
import './index.css';

class ToggleSwitch extends Component {}

ToggleSwitch.propTypes = {
  theme: PropTypes.string,

  enabled: PropTypes.oneOfType([
    PropTypes.bool,
    PropTypes.func
  ]),

  onStateChanged: PropTypes.func
}

export default ToggleSwitch;

In this code snippet, we created the ToggleSwitch component and added typechecks for some of its props.

  • theme - is a string indicating the style and color to be used for the toggle switch.

  • enabled - can be either a boolean or a function that returns a boolean, and it determines the state of the toggle switch when it is rendered.

  • onStateChanged - is a callback function that will be called when the state of the toggle switch changes. This is useful for triggering actions on the parent component when the switch is toggled.

Initializing the ToggleSwitch State

In the following code snippet, we initialize the state of the ToggleSwitch component and define some component methods for getting the state of the toggle switch.

/* src/components/ToggleSwitch/index.js */

class ToggleSwitch extends Component {

  state = { enabled: this.enabledFromProps() }

  isEnabled = () => this.state.enabled

  enabledFromProps() {
    let { enabled } = this.props;

    // If enabled is a function, invoke the function
    enabled = isFunction(enabled) ? enabled() : enabled;

    // Return enabled if it is a boolean, otherwise false
    return isBoolean(enabled) && enabled;
  }

}

Here, the enabledFromProps() method resolves the enabled prop that was passed and returns a boolean indicating whether the switch should be enabled when it is rendered. If the enabled prop is a boolean, it returns the boolean value. If it is a function, it first invokes the function before determining whether the returned value is a boolean. Otherwise, it returns false.

Notice that we used the return value from enabledFromProps() to set the initial enabled state. Also, we have added the isEnabled() method to get the current enabled state.
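
Pulled out of the component, the resolution logic behaves like plain JavaScript. Here is a standalone sketch (isFunction and isBoolean stand in for the lodash helpers the component imports):

```javascript
// Standalone sketch of the enabledFromProps() resolution logic.
const isFunction = v => typeof v === 'function';
const isBoolean = v => typeof v === 'boolean';

function resolveEnabled(enabled) {
  // If enabled is a function, invoke it first
  const value = isFunction(enabled) ? enabled() : enabled;
  // Return the value only if it is a boolean, otherwise false
  return isBoolean(value) && value;
}

console.log(resolveEnabled(true));       // true
console.log(resolveEnabled(() => true)); // true
console.log(resolveEnabled('yes'));      // false (not a boolean)
console.log(resolveEnabled(undefined));  // false
```

Anything that does not resolve to a boolean falls through to false, so the switch always starts in a well-defined state.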

Toggling the ToggleSwitch

Let's go ahead and add the method that actually toggles the switch when it is clicked.

/* src/components/ToggleSwitch/index.js */

class ToggleSwitch extends Component {

  // ...other class members here

  toggleSwitch = evt => {
    evt.persist();
    evt.preventDefault();

    const { onClick, onStateChanged } = this.props;

    this.setState({ enabled: !this.state.enabled }, () => {
      const state = this.state;

      // Augment the event object with SWITCH_STATE
      const switchEvent = Object.assign(evt, { SWITCH_STATE: state });

      // Execute the callback functions
      isFunction(onClick) && onClick(switchEvent);
      isFunction(onStateChanged) && onStateChanged(state);
    });
  }

}

Since this method will be triggered as a click event listener, we have declared it with the evt parameter. First, this method toggles the current enabled state using the logical NOT (!) operator. When the state has been updated, it triggers the callback functions passed to the onClick and onStateChanged props.

Notice that since onClick requires an event as its first argument, we augmented the event with an additional SWITCH_STATE property containing the new state object. The onStateChanged callback, on the other hand, is called with just the new state object.
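
The augmentation step can be seen in isolation with plain objects (a sketch; the real evt is a persisted React synthetic event):

```javascript
// Sketch of the event augmentation: SWITCH_STATE is attached to the same
// event object that the onClick callback then receives.
const state = { enabled: true };
const evt = { type: 'click' }; // stand-in for the persisted synthetic event

const switchEvent = Object.assign(evt, { SWITCH_STATE: state });

console.log(switchEvent.SWITCH_STATE.enabled); // true
console.log(switchEvent === evt);              // true: Object.assign mutates its first argument
```

Because Object.assign mutates its first argument, switchEvent and evt are the same object, which is why onClick sees the extra property.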

Rendering the ToggleSwitch

Finally, let's implement the render() method of the ToggleSwitch component.

/* src/components/ToggleSwitch/index.js */

class ToggleSwitch extends Component {

  // ...other class members here

  render() {
    const { enabled } = this.state;

    // Isolate special props and store the remaining as restProps
    const { enabled: _enabled, theme, onClick, className, onStateChanged, ...restProps } = this.props;

    // Use default as a fallback theme if valid theme is not passed
    const switchTheme = (theme && isString(theme)) ? theme : 'default';

    const switchClasses = classnames(
      `switch switch--${switchTheme}`,
      className
    )

    const togglerClasses = classnames(
      'switch-toggle',
      `switch-toggle--${enabled ? 'on' : 'off'}`
    )

    return (
      <div className={switchClasses} onClick={this.toggleSwitch} {...restProps}>
        <div className={togglerClasses}></div>
      </div>
    )
  }

}

A lot is going on in this render() method - so let's try to break it down.

  1. First, the enabled state is destructured from the component state.

  2. Next, we destructure the component props and extract the restProps that will be passed down to the switch. This enables us to intercept and isolate the special props of the component.

  3. Next, we use classnames to construct the classes for the switch and the inner toggler, based on the theme and the enabled state of the component.

  4. Finally, we render the DOM elements with the appropriate props and classes. Notice that we passed in this.toggleSwitch as the click event listener on the switch.

Styling the ToggleSwitch

Now that we have the ToggleSwitch component and its required functionality, we will go ahead and write the styles for the toggle switch.

Add the following code snippet to the src/components/ToggleSwitch/index.scss file you created earlier:

/* src/components/ToggleSwitch/index.scss */

// DEFAULT COLOR VARIABLES
$ball-color: #ffffff;
$active-color: #62c28e;
$inactive-color: #cccccc;

// DEFAULT SIZING VARIABLES
$switch-size: 32px;
$ball-spacing: 2px;
$stretch-factor: 1.625;

// DEFAULT CLASS VARIABLE
$switch-class: 'switch-toggle';

/* SWITCH MIXIN */
@mixin switch($size: $switch-size, $spacing: $ball-spacing, $stretch: $stretch-factor, $color: $active-color, $class: $switch-class) {}

Here, we defined some default variables and created a switch mixin. In the next section, we will implement the mixin, but first, let's examine the parameters of the switch mixin:

  • $size - The height of the switch element. It must have a length unit. It defaults to 32px.

  • $spacing - The space between the circular ball and the switch container. It must have a length unit. It defaults to 2px.

  • $stretch - A factor used to determine the extent to which the width of the switch element should be stretched. It must be a unitless number. It defaults to 1.625.

  • $color - The color of the switch when in active state. This must be a valid color value. Note that the circular ball is always white irrespective of this color.

  • $class - The base class for identifying the switch. This is used to dynamically create the state classes of the switch. It defaults to 'switch-toggle'. Hence, the default state classes are .switch-toggle--on and .switch-toggle--off.

Implementing the Switch Mixin

Here is the implementation of the switch mixin:

/* src/components/ToggleSwitch/index.scss */

@mixin switch($size: $switch-size, $spacing: $ball-spacing, $stretch: $stretch-factor, $color: $active-color, $class: $switch-class) {

  // SELECTOR VARIABLES
  $self: '.' + $class;
  $on: #{$self}--on;
  $off: #{$self}--off;

  // SWITCH VARIABLES
  $active-color: $color;
  $switch-size: $size;
  $ball-spacing: $spacing;
  $stretch-factor: $stretch;
  $ball-size: $switch-size - ($ball-spacing * 2);
  $ball-slide-size: ($switch-size * ($stretch-factor - 1) + $ball-spacing);

  // SWITCH STYLES
  height: $switch-size;
  width: $switch-size * $stretch-factor;
  cursor: pointer !important;
  user-select: none !important;
  position: relative !important;
  display: inline-block;

  &#{$on},
  &#{$off} {
    &::before,
    &::after {
      content: '';
      left: 0;
      position: absolute !important;
    }

    &::before {
      height: inherit;
      width: inherit;
      border-radius: $switch-size / 2;
      will-change: background;
      transition: background .4s .3s ease-out;
    }

    &::after {
      top: $ball-spacing;
      height: $ball-size;
      width: $ball-size;
      border-radius: $ball-size / 2;
      background: $ball-color !important;
      will-change: transform;
      transition: transform .4s ease-out;
    }
  }

  &#{$on} {
    &::before {
      background: $active-color !important;
    }
    &::after {
      transform: translateX($ball-slide-size);
    }
  }

  &#{$off} {
    &::before {
      background: $inactive-color !important;
    }
    &::after {
      transform: translateX($ball-spacing);
    }
  }
}

In this mixin, we start by setting some variables based on the parameters passed to the mixin. Then we go ahead and create the styles. Notice that we are using the ::after and ::before pseudo-elements to dynamically create the components of the switch. ::before creates the switch container while ::after creates the circular ball.

Also notice how we constructed the state classes from the base class and assigned them to variables. The $on variable maps to the selector for the enabled state, while the $off variable maps to the selector for the disabled state.

We also ensured that the base class (.switch-toggle) must be used together with a state class (.switch-toggle--on or .switch-toggle--off) for the styles to be available. Hence, we used the &#{$on} and &#{$off} selectors.

Creating Themed Switches

Now that we have our switch mixin, we will continue to create some themed styles for the toggle switch. We will create two themes, namely: default and graphite-small.

Append the following code snippet to the src/components/ToggleSwitch/index.scss file.

/* src/components/ToggleSwitch/index.scss */

@function get-switch-class($selector) {
  // First parse the selector using `selector-parse`
  // Extract the first selector in the first list using `nth` twice
  // Extract the first simple selector using `simple-selectors` and `nth`
  // Extract the class name using `str-slice`
  @return str-slice(nth(simple-selectors(nth(nth(selector-parse($selector), 1), 1)), 1), 2);
}

.switch {
  $self: &;
  $toggle: #{$self}-toggle;
  $class: get-switch-class($toggle);

  // default theme
  &#{$self}--default > #{$toggle} {
    // Always pass the $class to the mixin
    @include switch($class: $class);
  }

  // graphite-small theme
  &#{$self}--graphite-small > #{$toggle} {
    // A smaller switch with a `gray` active color
    // Always pass the $class to the mixin
    @include switch($color: gray, $size: 20px, $class: $class);
  }
}

Here we first create a Sass function named get-switch-class that takes a $selector as a parameter. It runs the $selector through a chain of Sass functions and tries to extract the first class name. For example, if it receives:

  • .class-1 .class-2, .class-3 .class-4, it returns class-1.

  • .class-5.class-6 > .class-7.class-8, it returns class-5.

Next, we define styles for the .switch class. We dynamically set the toggle class to .switch-toggle and assign it to the $toggle variable. Notice that we assign the class name returned from the get-switch-class() function call to the $class variable. Finally, we include the switch mixin with the necessary parameters to create the theme classes.

Notice that the structure of the selector for the themed switch looks like this: &#{$self}--default > #{$toggle} (using the default theme as an example). Putting everything together, this means that the elements hierarchy should look like the following in order for the styles to be applied:

<!-- Use the default theme: switch--default -->
<element class="switch switch--default">
  <!-- The switch is in enabled state: switch-toggle--on -->
  <element class="switch-toggle switch-toggle--on"></element>
</element>

Here is a simple demo showing what the toggle switch themes look like:

Building the Sample App

Now that we have the ToggleSwitch React component with the required styling, let's go ahead and start creating the sample app we saw at the beginning.

Modify the src/App.js file to look like the following code snippet:

/* src/App.js */

import classnames from 'classnames';
import snakeCase from 'lodash/snakeCase';
import React, { Component } from 'react';
import Switch from './components/ToggleSwitch';
import './App.css';

// List of activities that can trigger notifications
const ACTIVITIES = [
  'News Feeds', 'Likes and Comments', 'Live Stream', 'Upcoming Events',
  'Friend Requests', 'Nearby Friends', 'Birthdays', 'Account Sign-In'
];

class App extends Component {

  // Initialize app state, all activities are enabled by default
  state = {
    enabled: false,
    only: ACTIVITIES.map(snakeCase)
  }

  toggleNotifications = ({ enabled }) => {
    const { only } = this.state;
    this.setState({ enabled, only: enabled ? only : ACTIVITIES.map(snakeCase) });
  }

  render() {
    const { enabled } = this.state;

    const headingClasses = classnames(
      'font-weight-light h2 mb-0 pl-4',
      enabled ? 'text-dark' : 'text-secondary'
    );

    return (
      <div className="App position-absolute text-left d-flex justify-content-center align-items-start pt-5 h-100 w-100">
        <div className="d-flex flex-wrap mt-5" style={{width: 600}}>

          <div className="d-flex p-4 border rounded align-items-center w-100">
            <Switch theme="default"
              className="d-flex"
              enabled={enabled}
              onStateChanged={this.toggleNotifications}
            />
            <span className={headingClasses}>Notifications</span>
          </div>

          {/* ...Notification options here... */}

        </div>
      </div>
    );
  }

}

export default App;

Here we initialize the ACTIVITIES constant with an array of activities that can trigger notifications. Next, we initialize the app state with two properties:

  • enabled - a boolean that indicates whether notifications are enabled.

  • only - an array that contains all the activities that are enabled to trigger notifications.

Notice that we used the snakeCase utility from Lodash to convert the activities to snake case before updating the state. Hence, 'News Feeds' becomes 'news_feeds'.
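
For labels like the ones in ACTIVITIES, what snakeCase does can be approximated with a one-liner (a simplification for illustration; lodash's real implementation handles many more edge cases):

```javascript
// Rough approximation of lodash's snakeCase for simple labels like the
// ones in ACTIVITIES (not a general replacement for the lodash utility).
const toSnakeCase = label =>
  label.trim().toLowerCase().replace(/[^a-z0-9]+/g, '_');

console.log(toSnakeCase('News Feeds'));      // news_feeds
console.log(toSnakeCase('Account Sign-In')); // account_sign_in
```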

Next, we defined the toggleNotifications() method that updates the app state based on the state it receives from the notification switch. This is used as the callback function passed to the onStateChanged prop of the toggle switch. Notice that when the app is enabled, all activities will be enabled by default, since the only state property is populated with all the activities.

Finally, we rendered the DOM elements for the app and left a slot for the notification options which will be added soon. At this point, the app should look like the following screenshot:

Next go ahead and look for the line that has this comment:

{/* ...Notification options here... */}

and replace it with the following content in order to render the notification options:

{ enabled && (
  <div className="w-100 mt-5">
    <div className="container-fluid px-0">

      <div className="pt-5">
        <div className="d-flex justify-content-between align-items-center">
          <span className="d-block font-weight-bold text-secondary small">Email Address</span>
          <span className="text-secondary small mb-1 d-block">
            <small>Provide a valid email address with which to receive notifications.</small>
          </span>
        </div>

        <div className="mt-2">
          <input type="text" placeholder="mail@domain.com" className="form-control" style={{ fontSize: 14 }} />
        </div>
      </div>

      <div className="pt-5 mt-4">
        <div className="d-flex justify-content-between align-items-center border-bottom pb-2">
          <span className="d-block font-weight-bold text-secondary small">Filter Notifications</span>
          <span className="text-secondary small mb-1 d-block">
            <small>Select the account activities for which to receive notifications.</small>
          </span>
        </div>

        <div className="mt-5">
          <div className="row flex-column align-content-start" style={{ maxHeight: 180 }}>
            { this.renderNotifiableActivities() }
          </div>
        </div>
      </div>

    </div>
  </div>
) }

Notice here that we made a call to this.renderNotifiableActivities() to render the activities. Let's go ahead and implement this method and the other remaining methods.

Add the following methods to the App component.

/* src/App.js */

class App extends Component {

  toggleActivityEnabled = activity => ({ enabled }) => {
    let { only } = this.state;

    if (enabled && !only.includes(activity)) {
      only.push(activity);
      return this.setState({ only });
    }

    if (!enabled && only.includes(activity)) {
      only = only.filter(item => item !== activity);
      return this.setState({ only });
    }
  }

  renderNotifiableActivities() {
    const { only } = this.state;

    return ACTIVITIES.map((activity, index) => {
      const key = snakeCase(activity);
      const enabled = only.includes(key);

      const activityClasses = classnames(
        'small mb-0 pl-3',
        enabled ? 'text-dark' : 'text-secondary'
      );

      return (
        <div key={index} className="col-5 d-flex mb-3">
          <Switch theme="graphite-small"
            className="d-flex"
            enabled={enabled}
            onStateChanged={ this.toggleActivityEnabled(key) }
          />
          <span className={activityClasses}>{ activity }</span>
        </div>
      );
    })
  }

}

Here, we have implemented the renderNotifiableActivities method. We iterate through all the activities using ACTIVITIES.map() and render each with a toggle switch for it. Notice that the toggle switch uses the graphite-small theme. We also detect the enabled state of each activity by checking whether it already exists in the only state variable.

Finally, we defined the toggleActivityEnabled method which was used to provide the callback function for the onStateChanged prop of each activity's toggle switch. We defined it as a higher-order function so that we can pass the activity as argument and return the callback function. It checks if an activity is already enabled and updates the state accordingly.
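
Stripped of React, the higher-order shape of toggleActivityEnabled looks like this (a pure-function sketch for illustration; the real method reads this.state and calls setState instead of taking and returning state):

```javascript
// Higher-order callback: the outer call captures the activity key, and the
// returned function is what each switch's onStateChanged actually invokes.
const toggleActivityEnabled = activity => (state, { enabled }) => {
  const { only } = state;
  if (enabled && !only.includes(activity)) {
    return { only: [...only, activity] };      // add the activity
  }
  if (!enabled && only.includes(activity)) {
    return { only: only.filter(item => item !== activity) }; // remove it
  }
  return { only };                             // no change needed
};

const onBirthdays = toggleActivityEnabled('birthdays');
console.log(onBirthdays({ only: ['news_feeds'] }, { enabled: true }).only);
// [ 'news_feeds', 'birthdays' ]
```

Capturing the key in a closure this way avoids having to create a separate named handler for each of the eight activities.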

Now the app should look like the following screenshot:

If you prefer to have all the activities disabled by default instead of enabled as shown in the initial screenshot, then you could make the following changes to the App component:

/* src/App.js */

class App extends Component {

  // Initialize app state, all activities are disabled by default
  state = {
    enabled: false,
    only: []
  }

  toggleNotifications = ({ enabled }) => {
    const { only } = this.state;
    this.setState({ enabled, only: enabled ? only : [] });
  }

}

Accessibility Concerns

Using toggle switches in our applications instead of traditional checkboxes enables us to create neater interfaces, especially considering the fact that it is difficult to style a traditional checkbox however we want.

However, using toggle switches instead of checkboxes has some accessibility issues, since the user-agent may not be able to interpret the component's function correctly.

A few things can be done to improve the accessibility of the toggle switch and enable user-agents to understand the role correctly. For example, you can use the following ARIA attributes:

<switch-element tabindex="0" role="switch" aria-checked="true" aria-labelledby="#label-element"></switch-element>

You can also listen to more events on the toggle switch to create more ways the user can interact with the component.
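
As one illustration of listening to more events, a keydown handler could mirror the click behavior for Enter and Space, the keys a native checkbox responds to (this is an assumption about how you might extend the component; the tutorial's version only wires up onClick):

```javascript
// Hypothetical keyboard support for the switch: toggle on Enter or Space,
// and ignore every other key.
const makeKeyDownHandler = toggleSwitch => evt => {
  if (evt.key === 'Enter' || evt.key === ' ') {
    evt.preventDefault();
    toggleSwitch(evt);
  }
};

let toggles = 0;
const onKeyDown = makeKeyDownHandler(() => { toggles += 1; });

onKeyDown({ key: 'Enter', preventDefault() {} }); // toggles the switch
onKeyDown({ key: 'a', preventDefault() {} });     // ignored

console.log(toggles); // 1
```

Combined with tabindex="0" and role="switch" from the snippet above, this would let keyboard users operate the switch the way they operate a checkbox.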


In this tutorial, we created a custom toggle switch for React applications with proper styling that supports different themes. We also saw how to use it in our applications in place of traditional checkboxes, and the accessibility concerns involved.

For the complete source code of this tutorial, check out the react-toggle-switch-demo repository on GitHub. You can also get a live demo of this tutorial on Code Sandbox.

Categories: Web Technologies

304 Not Modified - Evert Pot

Planet PHP - Tue, 10/09/2018 - 08:00

304 Not Modified is used in response to a conditional GET or HEAD request. A request can be made conditional with one of the following headers:

  • If-Match
  • If-None-Match
  • If-Modified-Since
  • If-Unmodified-Since
  • If-Range

If-Modified-Since and If-None-Match are used specifically to allow a client to cache results and ask the server to only send a new representation if it has changed.

If-Modified-Since does this based on a Last-Modified header, and If-None-Match with an ETag.


A client does an initial request:

GET /foo HTTP/1.1
Accept: text/html

A server responds with an ETag:

HTTP/1.1 200 Ok
Content-Type: text/html
ETag: "some-string"

The next time a client makes a request, it can include the ETag:

GET /foo HTTP/1.1
Accept: text/html
If-None-Match: "some-string"

If the resource didn’t change on the server, it can return a 304.

HTTP/1.1 304 Not Modified
ETag: "some-string"
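
The server side of this exchange can be sketched in a few lines of Node.js (the handler shape and the fixed ETag are illustrative assumptions, not code from the article):

```javascript
// Minimal conditional-GET handler: reply 304 when the client's
// If-None-Match header matches the current ETag, otherwise send the body.
const ETAG = '"some-string"';

function handleFoo(req, res) {
  if (req.headers['if-none-match'] === ETAG) {
    res.writeHead(304, { ETag: ETAG });
    return res.end(); // 304 responses carry no body
  }
  res.writeHead(200, { 'Content-Type': 'text/html', ETag: ETAG });
  res.end('<h1>foo</h1>');
}
```

Plugged into http.createServer(handleFoo), the first request would get the 200 response above, and a repeat request carrying If-None-Match: "some-string" would get the 304.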
Categories: Web Technologies

What are Durable Functions?

CSS-Tricks - Tue, 10/09/2018 - 06:55

Oh no! Not more jargon! What exactly does the term Durable Functions mean? Durable Functions have to do with serverless architectures. It’s an extension of Azure Functions that allows you to write stateful executions in a serverless environment.

Think of it this way. There are a few big benefits that people tend to focus on when they talk about Serverless Functions:

  • They’re cheap
  • They scale with your needs (not necessarily, but that’s the default for many services)
  • They allow you to write event-driven code

Let’s talk about that last one for a minute. When you can write event-driven code, you can break your operational needs down into smaller functions that essentially say: when this request comes in, run this code. You don’t mess around with infrastructure, that’s taken care of for you. It’s a pretty compelling concept.
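
A minimal illustration of that "when this request comes in, run this code" shape (a hypothetical handler; each provider has its own exact signatures and bindings):

```javascript
// A hypothetical HTTP-triggered serverless function: the platform invokes
// it once per incoming event, and no server or routing code is written by us.
async function handleRequest(context, req) {
  const name = (req.query && req.query.name) || 'world';
  return { status: 200, body: `Hello, ${name}!` };
}

module.exports = handleRequest;
```

Everything outside the function body, including provisioning, scaling, and dispatch, is the platform's job.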

In this paradigm, you can break your workflow down into smaller, reusable pieces which, in turn, can make them easier to maintain. This also allows you to focus on your business logic because you’re boiling things down to the simplest code you need to run on your server.

So, here’s where Durable Functions come in. You can probably guess that you’re going to need more than one function to run as your application grows in size and has to maintain more states. And, in many cases, you’ll need to coordinate them and specify the order in which they should be run for them to be effective. It's worth mentioning at this point that Durable Functions are a pattern available only in Azure. Other services have variations on this theme. For example, the AWS version is called Step Functions. So, while we're talking about something specific to Azure, it applies more broadly as well.

Durable in action, some examples

Let’s say you’re selling airline tickets. You can imagine that as a person buys a ticket, we need to:

  1. check for the availability of the ticket
  2. make a request to get the seat map
  3. get their mileage points if they’re a loyalty member
  4. give them a mobile notification if the payment comes through and they have an app installed/have requested notifications

(There’s typically more, but we’re using this as a base example)

Sometimes these will all be run concurrently, sometimes not. For instance, let’s say they want to purchase the ticket with their mileage rewards. Then you’d have to first check the awards, and then the availability of the ticket. And then do some dark magic to make sure no customers, even data scientists, can actually understand the algorithm behind your rewards program.

Orchestrator functions

Whether you’re running these functions at the same moment, running them in order, or running them according to whether or not a condition is met, you probably want to use what’s called an orchestrator function. This is a special type of function that defines your workflows, doing, as you might expect, orchestrating the other functions. They automatically checkpoint their progress whenever a function awaits, which is extremely helpful for managing complex asynchronous code.

Without Durable Functions, you run into a problem of disorganization. Let’s say one function relies on another to fire. You could call the other function directly from the first, but whoever is maintaining the code would have to step into each individual function and keep in their mind how it’s being called while maintaining them separately if they need changes. It's pretty easy to get into something that resembles callback hell, and debugging can get really tricky.

Orchestrator functions, on the other hand, manage the state and timing of all the other functions. The orchestrator function will be kicked off by an orchestration trigger and supports both inputs and outputs. You can see how this would be quite handy! You’re managing the state in a comprehensive way all in one place. Plus, the serverless functions themselves can keep their jobs limited to what they need to execute, allowing them to be more reusable and less brittle.

Let’s go over some possible patterns. We’ll move beyond just chaining and talk about some other possibilities.

Pattern 1: Function chaining

This is the most straightforward implementation of all the patterns. It's literally one orchestrator controlling a few different steps. The orchestrator triggers a function, the function finishes, the orchestrator registers it, then the next one fires, and so on. Here's a visualization of that in action:

See the Pen Durable Functions: Pattern #1- Chaining by Sarah Drasner (@sdras) on CodePen.

Here's a simple example of that pattern with a generator.

const df = require("durable-functions")

module.exports = df(function*(ctx) {
  const x = yield ctx.df.callActivityAsync('fn1')
  const y = yield ctx.df.callActivityAsync('fn2', x)
  const z = yield ctx.df.callActivityAsync('fn3', y)
  return yield ctx.df.callActivityAsync('fn4', z)
})

I love generators! If you're not familiar with them, check out this great talk by Bodil on the subject.
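If generators are new to you, the core idea is that a function can pause at each yield and be resumed later with a value supplied from the outside, which is exactly what the orchestrator relies on to checkpoint between activity calls. Here's a tiny, standalone sketch (no Durable Functions required):

```javascript
// A generator pauses at each `yield`; the caller decides when to resume
// it and what value the `yield` expression produces.
function* workflow() {
  const x = yield 'fn1'          // pause until someone sends a result back
  const y = yield `fn2(${x})`    // pause again, using the earlier result
  return y
}

const it = workflow()
console.log(it.next().value)    // 'fn1' — first pause point
console.log(it.next(10).value)  // 'fn2(10)' — resumed with x = 10
console.log(it.next(20))        // { value: 20, done: true }
```

Calling next(value) resumes the generator and makes value the result of the paused yield; the orchestrator does the same thing with the results of your activity functions.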

Pattern 2: Fan-out/fan-in

If you have to execute multiple functions in parallel and need to fire one more function based on the results, a fan-out/fan-in pattern might be your jam. We'll accumulate results returned from the functions from the first group of functions to be used in the last function.

See the Pen Durable Functions: Pattern #2, Fan Out, Fan In by Sarah Drasner (@sdras) on CodePen.

const df = require('durable-functions')

module.exports = df(function*(ctx) {
  const tasks = []

  // items to process concurrently, added to an array
  const taskItems = yield ctx.df.callActivityAsync('fn1')
  taskItems.forEach(item => tasks.push(ctx.df.callActivityAsync('fn2', item)))
  yield ctx.df.Task.all(tasks)

  // send results to last function for processing
  yield ctx.df.callActivityAsync('fn3', tasks)
})

Pattern 3: Async HTTP APIs

It's also pretty common that you'll need to make a request to an API for an unknown amount of time. Many things like the distance and amount of requests processed can make the amount of time unknowable. There are situations that require some of this work to be done first, asynchronously, but in tandem, and then another function to be fired when the first few API calls are completed. Async/await is perfect for this task.

See the Pen Durable Functions: Pattern #3, Async HTTP APIs by Sarah Drasner (@sdras) on CodePen.

const df = require('durable-functions')

module.exports = df(async ctx => {
  const fn1 = ctx.df.callActivityAsync('fn1')
  const fn2 = ctx.df.callActivityAsync('fn2')

  // the responses come in and we wait for both to be resolved
  await fn1
  await fn2

  // then this one is called
  await ctx.df.callActivityAsync('fn3')
})

You can check out more patterns here! (Minus animations. 😉)

Getting started

If you'd like to play around with Durable Functions and learn more, there's a great tutorial here, with corresponding repos to fork and work with. I'm also working with a coworker on another post that will dive into one of these patterns that will be out soon!

Alternative patterns

Azure offers a pretty unique thing in Logic Apps, which gives you the ability to design workflows visually. I'm usually a code-only-no-WYSIWYG lady myself, but one of the compelling things about Logic Apps is that they have readymade connectors with services like Twilio and SendGrid, so you don't have to write that slightly annoying, mostly boilerplate code. It can also integrate with your existing functions, so you can abstract away just the parts that connect to middle-tier systems and write the rest by hand, which can really help with productivity.

The post What are Durable Functions? appeared first on CSS-Tricks.

Categories: Web Technologies

MySQL Replication Notes

Planet MySQL - Tue, 10/09/2018 - 04:45
MySQL Replication was my first project as a Database Administrator (DBA). I have been working with replication technologies for the last few years, and I am glad to contribute my little part to the development of this technology. MySQL supports different replication topologies, and a better understanding of the basic concepts will help you build and manage various complex topologies. I am writing here some of the key points to take care of when you are building MySQL replication. Consider this post a starting point for building high-performance and consistent MySQL servers. Let me start with the key points below:
  • Hardware
  • MySQL Server Version
  • MySQL Server Configuration
  • Primary Key
  • Storage Engine

I will update this post with relevant points whenever I get time. I am trying to provide generic concepts that are applicable to all versions of MySQL; however, some of the concepts are new and apply only to the latest versions (> 5.0).
Hardware: The slave must be resourced on par with (or better than) the master for it to keep up with the master. The slave's resources include the following:
  • Disk IO
  • Computation (vCPU)
  • InnoDB Buffer Pool (RAM)
MySQL 5.7 supports multi-threaded replication, but it is limited to one thread per database. With heavy writes (multiple threads) on the master's databases, there is a chance that the slave will lag behind the master, since only one thread per database applies the binlog on the slave and its writes are all serialized.

MySQL Server Version: It is highly recommended that master and slave servers run the same version. A different version of MySQL on the slave can affect SQL execution timings; for example, MySQL 8.0 is considerably faster than 5.5. It is also worth considering feature additions, deletions, and modifications between versions.

MySQL Server Configuration: The MySQL server configuration should be identical. We may have identical hardware resources and the same MySQL version, but if MySQL is not configured to utilize the available resources in a similar way, there will be differences in execution plans. For example, the InnoDB buffer pool size should be configured so MySQL utilizes the available memory: even with identical hardware, the buffer pool must be configured on each MySQL instance.

Primary Key: The primary key plays an important role in row-based replication (when binlog_format is either ROW or MIXED). Most often, a slave lagging behind the master while applying an RBR event is due to the lack of a primary key on the table involved. When no primary key is defined, for each affected row on the master, the entire row image has to be compared on a row-by-row basis against the matching table’s data on the slave. This can be explained by how a transaction is performed on the master and slave based on the availability of a primary key:

With a primary key:
  • On master: uniquely identifies the row.
  • On slave: uniquely identifies each row, and changes can be quickly applied to the appropriate row images on the slave.

Without a primary key:
  • On master: makes use of any available key or performs a full table scan.
  • On slave: the entire row image is compared on a row-by-row basis against the matching table’s data on the slave.

A row-by-row scan can be very expensive and time consuming and causes the slave to lag behind the master. When there is no primary key defined on a table, InnoDB internally generates a hidden clustered index named GEN_CLUST_INDEX containing row ID values. MySQL replication cannot use this hidden primary key for sort operations, because these hidden row IDs are unique to each MySQL instance and are not consistent between a master and a slave. The best solution is to ensure all tables have a primary key. When no unique, not-null key is available on a table, at least create an auto-incrementing integer column (surrogate key) as the primary key. If it is not immediately possible to create a primary key on all such tables, there is a workaround to bridge the gap for a short period of time by changing the slave rows search algorithm. That is beyond the scope of this post; I will write a future post on the topic.

Mixing of Storage Engines: MySQL Replication supports different storage engines on master and slave servers, but there are a few important configurations to take care of when mixing storage engines. It should be noted that InnoDB is a transactional storage engine and MyISAM is non-transactional.

On Rollback: If binlog_format is STATEMENT and a transaction updates both InnoDB and MyISAM tables and then performs a ROLLBACK, only the InnoDB tables' data is removed. When the statement is written to the binlog it is sent to the slave; a slave where both tables are MyISAM will not perform the ROLLBACK, since MyISAM does not support transactions. This leaves the table inconsistent with the master.
Auto-Increment column: It should be noted that auto-increment is implemented differently in MyISAM and InnoDB. MyISAM locks the entire table to generate the auto-increment value, and when the auto-increment column is part of a composite key, insert operations on a MyISAM table are marked as unsafe. Refer to this page for a better understanding: https://dev.mysql.com/doc/refman/8.0/en/replication-features-auto-increment.html

Referential Integrity Constraints: InnoDB supports foreign keys and MyISAM does not. Cascading update and delete operations on InnoDB tables on the master replicate to the slave only if the tables are InnoDB on both master and slave. This is true for both STATEMENT- and ROW-based replication. Refer to this page for an explanation: https://dev.mysql.com/doc/refman/5.7/en/innodb-and-mysql-replication.html

Locking: InnoDB performs row-level locking, MyISAM performs table-level locking, and all transactions on the slave are executed in a serialized manner. This negatively impacts slave performance and ends up with the slave lagging behind the master.

Logging: MyISAM is a non-transactional storage engine, and its statements are logged into the binary log by the client thread immediately after execution, but before the locks are released. If the query is part of a transaction that also involves an InnoDB table, and the InnoDB statement is executed before the MyISAM query, the InnoDB statement is not written to the binlog immediately after execution; it waits for either commit or rollback. This is done to ensure the order of execution is the same on the slave as on the master. Transactions on InnoDB tables are written to the binary log only when the transaction is committed.

It is highly advisable to use a transactional storage engine in MySQL Replication. Mixing storage engines may lead to inconsistency and performance issues between master and slave servers. Though MySQL does not produce any warnings, this should be noted and taken care of on our end.
Also, MySQL making InnoDB the default storage engine (from 5.6 through 8.0) and deprecating the older ISAM features indicates the future direction of the MySQL database: it is going to be completely transactional, and it is recommended to use the InnoDB storage engine. There is discussion online about the removal of other storage engines and Oracle focusing development on the InnoDB engine. Though that is not in the scope of this article, as a Database Administrator I prefer having different storage engines for different use cases; it has been a unique feature of MySQL. I hope this post is useful. Please share your thoughts and feedback in the comments section.
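To make the surrogate-key advice above concrete, here is a sketch of adding an auto-incrementing integer primary key to a table that lacks one (the table name app_events is hypothetical):

```sql
-- Hypothetical table with no unique, not-null key:
-- add a surrogate auto-increment primary key.
ALTER TABLE app_events
  ADD COLUMN id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY FIRST;

-- Confirm the new PRIMARY KEY is in place.
SHOW INDEX FROM app_events;
```

With the primary key in place, the slave can locate each row directly instead of scanning for a match on every RBR event.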
Categories: Web Technologies

MySQL 8: Performance Schema Digests Improvements

Planet MySQL - Tue, 10/09/2018 - 02:04

Since MySQL 5.6, the digest feature of the MySQL Performance Schema has provided a convenient and effective way to obtain statistics of queries based on their normalized form. The feature works so well that it has almost completely (from my experience) replaced the connector extensions and proxy for collecting query statistics for the Query Analyzer (Quan) in MySQL Enterprise Monitor (MEM).

MySQL 8 adds further improvements to the digest feature in the Performance Schema including a sample query with statistics for each digest, percentile information, and a histogram summary. This blog will explore these new features.

MySQL Enterprise Monitor is one of the main users of the Performance Schema digests for its Query Analyzer.

Let’s start out looking at the good old summary by digest table.

Query Sample

The base table for digest summary information is the events_statements_summary_by_digest table. This has been around since MySQL 5.6. In MySQL 8.0 it has been extended with six new columns, three of which contain data related to a sample query; those three will be examined in this section.

The three sample columns are:

  • QUERY_SAMPLE_TEXT: An actual example of a query.
  • QUERY_SAMPLE_SEEN: When the sample query was seen.
  • QUERY_SAMPLE_TIMER_WAIT: How long time the sample query took to execute (in picoseconds).

As an example consider the query SELECT * FROM world.city WHERE id = <value>. The sample information for that query as well as the digest and digest text (normalized query) may look like:

mysql> SELECT DIGEST, DIGEST_TEXT, QUERY_SAMPLE_TEXT, QUERY_SAMPLE_SEEN,
              sys.format_time(QUERY_SAMPLE_TIMER_WAIT) AS SampleTimerWait
         FROM performance_schema.events_statements_summary_by_digest
        WHERE DIGEST_TEXT LIKE '%`world` . `city`%'\G
*************************** 1. row ***************************
           DIGEST: 9431aed9635923565d7bc92cc36d6411c3abb9f52d2c22715be21b5472e3c366
      DIGEST_TEXT: SELECT * FROM `world` . `city` WHERE `ID` = ?
QUERY_SAMPLE_TEXT: SELECT * FROM world.city WHERE ID = 130
QUERY_SAMPLE_SEEN: 2018-10-09 17:19:20.500944
  SampleTimerWait: 17.34 ms
1 row in set (0.00 sec)

There are a few things to note here:

  • The digest in MySQL 8 is a SHA-256 hash, whereas in 5.6 and 5.7 it was an MD5 hash.
  • The digest text is similar to the normalized query that the mysqldumpslow script can generate for queries in the slow query log; just that the Performance Schema uses a question mark as a placeholder.
  • The QUERY_SAMPLE_SEEN value is in the system time zone.
  • The sys.format_time() function is used in the query to convert the picoseconds to a human readable value.

The maximum length of the sample text is set with the performance_schema_max_sql_text_length option. The default is 1024 bytes. It is the same option that is used for the SQL_TEXT columns in the statement events tables. It requires a restart of MySQL to change the value. Since the query texts are stored in several contexts and some of the Performance Schema tables can have thousands of rows, do take care not to increase it beyond what you have memory for.

How is the sample query chosen? The sample is the slowest example of a query with the given digest. If the performance_schema_max_digest_sample_age option is set to a non-zero value (the default is 60 seconds) and the existing sample is older than the specified value, it will always be replaced.
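Both options can be inspected from a client, and as far as I can tell the sample-age option is dynamic in MySQL 8.0, so a sketch like this (the value 120 is purely illustrative) lets you tune how often samples are refreshed:

```sql
-- Inspect the current settings.
SHOW GLOBAL VARIABLES LIKE 'performance_schema_max_digest_sample_age';
SHOW GLOBAL VARIABLES LIKE 'performance_schema_max_sql_text_length';

-- Replace a sample once it is older than 120 seconds
-- (performance_schema_max_sql_text_length still requires a restart).
SET GLOBAL performance_schema_max_digest_sample_age = 120;
```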

The events_statements_summary_by_digest also has another set of new columns: percentile information.

Percentile Information

Since the beginning, the events_statements_summary_by_digest table has included some statistical information about the query times for a given digest: the minimum, average, maximum, and total query time. In MySQL 8 this has been extended to include information about the 95th, 99th, and 99.9th percentile. The information is available in the QUANTILE_95, QUANTILE_99, and QUANTILE_999 column respectively. All of the values are in picoseconds.

What do the new columns mean? Based on the histogram information for the query (see the next section), MySQL calculates a high estimate of the query time. For a given digest, 95% of the executed queries are expected to be faster than the query time given by QUANTILE_95. The same applies for the two other columns.

As an example consider the same digest as before:

mysql> SELECT DIGEST, DIGEST_TEXT, QUERY_SAMPLE_TEXT,
              sys.format_time(SUM_TIMER_WAIT) AS SumTimerWait,
              sys.format_time(MIN_TIMER_WAIT) AS MinTimerWait,
              sys.format_time(AVG_TIMER_WAIT) AS AvgTimerWait,
              sys.format_time(MAX_TIMER_WAIT) AS MaxTimerWait,
              sys.format_time(QUANTILE_95) AS Quantile95,
              sys.format_time(QUANTILE_99) AS Quantile99,
              sys.format_time(QUANTILE_999) AS Quantile999
         FROM performance_schema.events_statements_summary_by_digest
        WHERE DIGEST_TEXT LIKE '%`world` . `city`%'\G
*************************** 1. row ***************************
           DIGEST: 9431aed9635923565d7bc92cc36d6411c3abb9f52d2c22715be21b5472e3c366
      DIGEST_TEXT: SELECT * FROM `world` . `city` WHERE `ID` = ?
QUERY_SAMPLE_TEXT: SELECT * FROM world.city WHERE ID = 130
     SumTimerWait: 692.77 ms
     MinTimerWait: 50.32 us
     AvgTimerWait: 68.92 us
     MaxTimerWait: 17.34 ms
       Quantile95: 104.71 us
       Quantile99: 165.96 us
      Quantile999: 363.08 us
1 row in set (0.00 sec)

Having the 95th, 99th, and 99.9th percentile helps predict the performance of a query and show the spread of the query times. Even more information about the spread can be found using the new family member: histograms.


Histograms

Histograms are a way to put the query execution times into buckets, so it is possible to see how the query execution times are spread. This can, for example, be useful to see how consistent the query times are. The average query time may be fine, but if that is based on some queries executing super fast and others very slow, it will still result in unhappy users and customers.

The MAX_TIMER_WAIT column of the events_statements_summary_by_digest table discussed this far shows the high watermark, but it does not say whether it is a single outlier or a result of general varying query times. The histograms give the answer to this.

Using the query digest from earlier in the blog, the histogram information for the query can be found in the events_statements_histogram_by_digest table like:

mysql> SELECT BUCKET_NUMBER,
              sys.format_time(BUCKET_TIMER_LOW) AS TimerLow,
              sys.format_time(BUCKET_TIMER_HIGH) AS TimerHigh,
              COUNT_BUCKET, COUNT_BUCKET_AND_LOWER, BUCKET_QUANTILE
         FROM performance_schema.events_statements_histogram_by_digest
        WHERE DIGEST = '9431aed9635923565d7bc92cc36d6411c3abb9f52d2c22715be21b5472e3c366'
              AND COUNT_BUCKET > 0
        ORDER BY BUCKET_NUMBER;
+---------------+-----------+-----------+--------------+------------------------+-----------------+
| BUCKET_NUMBER | TimerLow  | TimerHigh | COUNT_BUCKET | COUNT_BUCKET_AND_LOWER | BUCKET_QUANTILE |
+---------------+-----------+-----------+--------------+------------------------+-----------------+
|            36 | 50.12 us  | 52.48 us  |          524 |                    524 |        0.052400 |
|            37 | 52.48 us  | 54.95 us  |         2641 |                   3165 |        0.316500 |
|            38 | 54.95 us  | 57.54 us  |          310 |                   3475 |        0.347500 |
|            39 | 57.54 us  | 60.26 us  |          105 |                   3580 |        0.358000 |
|            40 | 60.26 us  | 63.10 us  |           48 |                   3628 |        0.362800 |
|            41 | 63.10 us  | 66.07 us  |         3694 |                   7322 |        0.732200 |
|            42 | 66.07 us  | 69.18 us  |          611 |                   7933 |        0.793300 |
|            43 | 69.18 us  | 72.44 us  |          236 |                   8169 |        0.816900 |
|            44 | 72.44 us  | 75.86 us  |          207 |                   8376 |        0.837600 |
|            45 | 75.86 us  | 79.43 us  |          177 |                   8553 |        0.855300 |
|            46 | 79.43 us  | 83.18 us  |          236 |                   8789 |        0.878900 |
|            47 | 83.18 us  | 87.10 us  |          186 |                   8975 |        0.897500 |
|            48 | 87.10 us  | 91.20 us  |          203 |                   9178 |        0.917800 |
|            49 | 91.20 us  | 95.50 us  |          116 |                   9294 |        0.929400 |
|            50 | 95.50 us  | 100.00 us |          135 |                   9429 |        0.942900 |
|            51 | 100.00 us | 104.71 us |          105 |                   9534 |        0.953400 |
|            52 | 104.71 us | 109.65 us |           65 |                   9599 |        0.959900 |
|            53 | 109.65 us | 114.82 us |           65 |                   9664 |        0.966400 |
|            54 | 114.82 us | 120.23 us |           59 |                   9723 |        0.972300 |
|            55 | 120.23 us | 125.89 us |           40 |                   9763 |        0.976300 |
|            56 | 125.89 us | 131.83 us |           34 |                   9797 |        0.979700 |
|            57 | 131.83 us | 138.04 us |           33 |                   9830 |        0.983000 |
|            58 | 138.04 us | 144.54 us |           27 |                   9857 |        0.985700 |
|            59 | 144.54 us | 151.36 us |           16 |                   9873 |        0.987300 |
|            60 | 151.36 us | 158.49 us |           25 |                   9898 |        0.989800 |
|            61 | 158.49 us | 165.96 us |           20 |                   9918 |        0.991800 |
|            62 | 165.96 us | 173.78 us |            9 |                   9927 |        0.992700 |
|            63 | 173.78 us | 181.97 us |           11 |                   9938 |        0.993800 |
|            64 | 181.97 us | 190.55 us |           11 |                   9949 |        0.994900 |
|            65 | 190.55 us | 199.53 us |            4 |                   9953 |        0.995300 |
|            66 | 199.53 us | 208.93 us |            6 |                   9959 |        0.995900 |
|            67 | 208.93 us | 218.78 us |            6 |                   9965 |        0.996500 |
|            68 | 218.78 us | 229.09 us |            6 |                   9971 |        0.997100 |
|            69 | 229.09 us | 239.88 us |            3 |                   9974 |        0.997400 |
|            70 | 239.88 us | 251.19 us |            2 |                   9976 |        0.997600 |
|            71 | 251.19 us | 263.03 us |            2 |                   9978 |        0.997800 |
|            72 | 263.03 us | 275.42 us |            2 |                   9980 |        0.998000 |
|            73 | 275.42 us | 288.40 us |            4 |                   9984 |        0.998400 |
|            74 | 288.40 us | 302.00 us |            2 |                   9986 |        0.998600 |
|            75 | 302.00 us | 316.23 us |            2 |                   9988 |        0.998800 |
|            76 | 316.23 us | 331.13 us |            1 |                   9989 |        0.998900 |
|            78 | 346.74 us | 363.08 us |            3 |                   9992 |        0.999200 |
|            79 | 363.08 us | 380.19 us |            2 |                   9994 |        0.999400 |
|            80 | 380.19 us | 398.11 us |            1 |                   9995 |        0.999500 |
|            83 | 436.52 us | 457.09 us |            1 |                   9996 |        0.999600 |
|           100 | 954.99 us | 1.00 ms   |            1 |                   9997 |        0.999700 |
|           101 | 1.00 ms   | 1.05 ms   |            1 |                   9998 |        0.999800 |
|           121 | 2.51 ms   | 2.63 ms   |            1 |                   9999 |        0.999900 |
|           162 | 16.60 ms  | 17.38 ms  |            1 |                  10000 |        1.000000 |
+---------------+-----------+-----------+--------------+------------------------+-----------------+
49 rows in set (0.02 sec)

In this example, for 3694 of the query's executions (the COUNT_BUCKET column), the query time was between 63.10 microseconds and 66.07 microseconds, so the execution time matched the interval of bucket number 41. There has been a total of 7322 executions (the COUNT_BUCKET_AND_LOWER column) of the query with a query time of 66.07 microseconds or less. This means that 73.22% (the BUCKET_QUANTILE column) of the queries have a query time of 66.07 microseconds or less.
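The arithmetic behind those three columns can be sketched in a few lines of Python, using the per-bucket counts from the output above:

```python
# Recompute COUNT_BUCKET_AND_LOWER and BUCKET_QUANTILE for bucket 41
# from the per-bucket counts shown above (buckets below 36 are empty).
counts = {36: 524, 37: 2641, 38: 310, 39: 105, 40: 48, 41: 3694}
total_executions = 10000  # total executions of the digest

count_bucket_and_lower = sum(counts.values())
bucket_quantile = count_bucket_and_lower / total_executions

print(count_bucket_and_lower)  # 7322
print(bucket_quantile)         # 0.7322
```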

In addition to the shown columns, there is SCHEMA_NAME and DIGEST (which together with BUCKET_NUMBER form a unique key). For each digest there are 450 buckets with the width of the bucket (in terms of difference between the low and high timers) gradually becoming larger and larger. The first, middle, and last five buckets are:

mysql> SELECT BUCKET_NUMBER,
              sys.format_time(BUCKET_TIMER_LOW) AS TimerLow,
              sys.format_time(BUCKET_TIMER_HIGH) AS TimerHigh
         FROM performance_schema.events_statements_histogram_by_digest
        WHERE DIGEST = '9431aed9635923565d7bc92cc36d6411c3abb9f52d2c22715be21b5472e3c366'
              AND (BUCKET_NUMBER < 5 OR BUCKET_NUMBER > 444 OR BUCKET_NUMBER BETWEEN 223 AND 227);
+---------------+-----------+-----------+
| BUCKET_NUMBER | TimerLow  | TimerHigh |
+---------------+-----------+-----------+
|             0 | 0 ps      | 10.00 us  |
|             1 | 10.00 us  | 10.47 us  |
|             2 | 10.47 us  | 10.96 us  |
|             3 | 10.96 us  | 11.48 us  |
|             4 | 11.48 us  | 12.02 us  |
|           223 | 275.42 ms | 288.40 ms |
|           224 | 288.40 ms | 302.00 ms |
|           225 | 302.00 ms | 316.23 ms |
|           226 | 316.23 ms | 331.13 ms |
|           227 | 331.13 ms | 346.74 ms |
|           445 | 2.11 h    | 2.21 h    |
|           446 | 2.21 h    | 2.31 h    |
|           447 | 2.31 h    | 2.42 h    |
|           448 | 2.42 h    | 2.53 h    |
|           449 | 2.53 h    | 30.50 w   |
+---------------+-----------+-----------+
15 rows in set (0.02 sec)

The bucket thresholds are fixed and thus the same for all digests. There is also a global histogram in the events_statements_histogram_global table.
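Looking at the output, the thresholds appear to be log-spaced with 50 buckets per decade above 10 microseconds. That is an inference from the numbers above rather than documented behavior, but a quick check under that assumption matches the table:

```python
# Assumed bucket layout: BUCKET_TIMER_HIGH(n) ~= 10 us * 10**(n / 50) for n >= 1,
# i.e. 50 logarithmically spaced buckets per decade above 10 microseconds.
def bucket_high_us(n):
    return 10 * 10 ** (n / 50)

print(round(bucket_high_us(1), 2))    # bucket 1 upper bound in us (~10.47)
print(round(bucket_high_us(41), 2))   # bucket 41 upper bound in us (~66.07)
print(round(bucket_high_us(448) / 1e6 / 3600, 2))  # bucket 448 in hours (~2.53)
```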

This concludes the introduction to the new Performance Schema digest features. As monitoring tools start to use this information, it will help create a better monitoring experience. Particularly the histograms will benefit from being shown as graphs.

Categories: Web Technologies

Announcement: Second Alpha Build of Percona XtraBackup 8.0 Is Available

Planet MySQL - Mon, 10/08/2018 - 23:33

The second alpha build of Percona XtraBackup 8.0.2 is now available in the Percona experimental software repositories.

Note that, due to the new MySQL redo log and data dictionary formats, the Percona XtraBackup 8.0.x versions will only be compatible with MySQL 8.0.x and Percona Server for MySQL 8.0.x. This release supports backing up Percona Server 8.0 Alpha.

For experimental migrations from earlier database server versions, you will need to back up and restore using XtraBackup 2.4 and then run mysql_upgrade from MySQL 8.0.x.

PXB 8.0.2 alpha is available for the following platforms:

  • RHEL/Centos 6.x
  • RHEL/Centos 7.x
  • Ubuntu 14.04 Trusty*
  • Ubuntu 16.04 Xenial
  • Ubuntu 18.04 Bionic
  • Debian 8 Jessie*
  • Debian 9 Stretch

Information on how to configure the Percona repositories for apt and yum systems and access the Percona experimental software is here.

* We might drop these platforms before GA release.

Improvements

  • PXB-1658: Import keyring vault plugin from Percona Server 8
  • PXB-1609: Make version_check optional at build time
  • PXB-1626: Support encrypted redo logs
  • PXB-1627: Support obtaining binary log coordinates from performance_schema.log_status
Fixed Bugs
  • PXB-1634: The CREATE TABLE statement could fail with the DUPLICATE KEY error
  • PXB-1643: Memory issues reported by ASAN in PXB 8
  • PXB-1651: Buffer pool dump could create a (null) file during the prepare stage of MySQL 8.0.12 data
  • PXB-1671: A backup could fail when the MySQL user was not specified
  • PXB-1660: InnoDB: Log block N at lsn M has valid header, but checksum field contains Q, should be P

Other bugs fixed: PXB-1623, PXB-1648, PXB-1669, PXB-1639, and PXB-1661.

Categories: Web Technologies

Unbuttoning Buttons

CSS-Tricks - Mon, 10/08/2018 - 13:21

We dug into overriding default button styles not long ago here on CSS-Tricks. With garden-variety, fully cross-browser-supported styles, you're looking at 6-10 CSS rules to tear down everything you need off a button and then put your own styles in place. Hardly a big deal if you ask me, especially since it's extremely likely you'll be styling buttons anyway.
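For reference, the kind of teardown being talked about looks something like this. Treat it as a sketch rather than a definitive reset; the exact rules you need depend on the browsers and the design you're targeting:

```css
/* A typical cross-browser button teardown before applying custom styles. */
button {
  appearance: none;         /* strip native styling */
  -webkit-appearance: none; /* ...including in WebKit */
  border: none;
  margin: 0;
  padding: 0;
  background: transparent;
  color: inherit;
  font: inherit;            /* buttons don't inherit font by default */
  line-height: normal;
  cursor: pointer;
}
```

That lands you at roughly the rule count mentioned above, before any of your own styling goes in.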

Scott O'Hara has taken a look as well. I think the solution offered to use a <span role="button" tabindex="0" onClick="..."> is a little bizarre, since you need to bring your own keyboard handling, which is non-trivial and requires JavaScript. But there are a couple of other interesting CSS explorations, neither of which stacked up, for different reasons:

  • display: contents; - some semantics-based accessibility problems.
  • all: unset; - doesn't reset display value, not good enough browser support.

Direct Link to ArticlePermalink

The post Unbuttoning Buttons appeared first on CSS-Tricks.

Categories: Web Technologies

You’re not storing sensitive data in your database. Seriously?

Planet MySQL - Mon, 10/08/2018 - 12:11

At technology events, I often ask attendees if they’re storing sensitive data in MySQL. Only a few hands go up. Then, I rephrase and ask, “how many of you would be comfortable if your database tables were exposed on the Internet?” Imagine how it would be perceived by your customers, your manager, your employees or your board of directors. Once again, “how many of you are storing sensitive data in MySQL?” Everyone.


1.) You are storing sensitive data.

Even if it’s truly meaningless data, you can’t afford for your company to be perceived as loose with data security. If you look closely at your data, however, you’ll likely realize that it could be exploited. Does it include any employee info, server IP addresses or internal routing information?

A recent article by Lisa Vaas from Naked Security highlights a spate of data leaks from poorly configured MongoDB instances.

Here we Mongo again! Millions of records exposed by insecure database

What’s striking is that these leaks didn’t include credit cards, social security numbers or so-called sensitive data. Nevertheless, companies are vulnerable to ransomware and diminished customer trust.

2). Your data will be misplaced, eventually.

Employees quit and servers get decommissioned, but database tables persist. Your tables are passed among developers, DBAs, and support engineers. They are moved between bare metal, VMs, and public cloud providers. Given enough time, your data will end up in a place it shouldn’t be.

Often people don’t realize that their binary data is easily exposed. Take any binary file, for example, and run the Linux strings command against it. On a Linux command line, just type “strings filename”. You’ll see your data scroll across the screen in readable text.
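If you want to see the effect without hunting down a real binary, here is a rough Python emulation of what strings does: pull out any run of four or more printable ASCII characters. The sample blob and its contents are made up for illustration:

```python
import re

# Emulate `strings`: find runs of >= 4 printable ASCII bytes in binary data.
def extract_strings(data: bytes, min_len: int = 4):
    return [m.decode('ascii') for m in re.findall(rb'[ -~]{%d,}' % min_len, data)]

# A fake "binary" blob with readable data buried inside it.
blob = b'\x00\x01\x89PNG\x04jane.doe@example.com\x00\xffcard=4111111111111111\x02'
print(extract_strings(blob))
# ['jane.doe@example.com', 'card=4111111111111111']
```

Anything readable in your tables, backups, or dumps falls out just as easily.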


Two years ago, MySQL developers had to change their application to encrypt data. Now, transparent data encryption in MySQL 5.7 and 8.0 require no application changes. With Oracle’s version of MySQL, there’s little performance overhead after the data is encrypted.

Below are a few simple steps to encrypt your data in MySQL 8.0. This process relies on a keyring file. This won’t meet compliance requirements (see KEY MANAGEMENT SYSTEMS below), but it’s a good first step.

  1. Check your version of MySQL. It should be MySQL 5.7 or 8.0.
  2. Pre-load the plugin in your my.cnf: early-plugin-load = keyring_file.so
  3. Execute the following queries:
  • INSTALL PLUGIN keyring_udf SONAME 'keyring_udf.so';
  • CREATE FUNCTION keyring_key_generate RETURNS INTEGER SONAME 'keyring_udf.so';
  • SELECT keyring_key_generate('alongpassword', 'DSA', 256);

Per documentation warning: The keyring_file and keyring_encrypted_file plugins are not intended as regulatory compliance solutions. Security standards such as PCI, FIPS, and others require use of key management systems to secure, manage, and protect encryption keys in key vaults or hardware security modules (HSMs).


KEY MANAGEMENT SYSTEMS

Credit card and data privacy regulations require that keys are restricted and rotated. If your company collects payment information, it’s likely that your organization already has a key management system (KMS). These systems are usually software or hardware appliances used strictly for managing your corporate encryption keys. MySQL Enterprise Edition includes a plugin for communicating directly with the KMS. MySQL is compatible with Oracle Key Vault, SafeNet KeySecure, Thales Vormetric Key Management, and Fornetix Key Orchestration.

Introduction to Oracle Key Vault

In summary, reconsider if you believe that you’re not storing sensitive data. If using MySQL, capabilities in the latest releases make it possible to encrypt data without changing your application. At the very least, encrypt your data with the key file method (above). Ideally, however; investigate a key management system to also meet regulatory requirements.

Categories: Web Technologies