
Web Technologies

Frogger in Hyperapp

Echo JS - Wed, 04/25/2018 - 10:08

Try MariaDB Server 10.3 in Docker

Planet MySQL - Wed, 04/25/2018 - 10:07
By rasmusjohansson - Wed, 04/25/2018 - 13:07

There are times when you may want to test specific software or a specific version of software. In my case, I wanted to play with MariaDB Server 10.3.6 Release Candidate and some of the new, upcoming features. I didn’t want to have a permanent installation of it on my laptop so I chose to put it in a Docker container that I can easily copy to another place or remove. These are the steps I had to take to get it done.

I won’t go through how to install Docker itself. There is good documentation for it, which can be found here: https://docs.docker.com/install/

After the installation is completed, make sure Docker is up and running by typing the following in a terminal window:

docker info

There are plenty of other ways to verify that Docker is up and running, but “info” also provides useful information about your Docker environment.

After Docker is set up, it’s time to create a container running MariaDB Server. The easy way to do it is to use the official MariaDB images available on Docker Hub. These images are updated fairly quickly when a new MariaDB release is made. Getting MariaDB Server 10.3 RC up and running this way is as easy as:

docker pull mariadb:10.3
docker run --name mariadbtest -e MYSQL_ROOT_PASSWORD=mypass -d mariadb:10.3

Check that MariaDB started correctly by looking at the logs:

docker logs mariadbtest

The last row in the log will also tell you what version of MariaDB is running.
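
For example, to show just that last row (using the container name chosen above):

docker logs mariadbtest | tail -n 1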

For documentation on this, refer to Installing and using MariaDB via Docker in the MariaDB documentation.

In my case, I wanted to test out the latest version of MariaDB, which at the time of writing wasn’t yet available as an image on Docker Hub. I will next go through the steps to create and populate a container without using a pre-built image.

To get going we’ll need a new container. We need the container to be based on an operating system that is supported for MariaDB. I’ll base it on Ubuntu Xenial (16.04).

docker run -i -t ubuntu:xenial /bin/bash

When running that command, Docker will download the Ubuntu Xenial Docker image and use it as the base for the container. The /bin/bash at the end will take us into the shell of the container.

Inside the container I want to install MariaDB 10.3. I used the repository configuration tool for MariaDB to get the right configuration to add to the clean Xenial installation I now have. The tool gave me the following three commands to run.

add-apt-repository 'deb [arch=amd64,i386,ppc64el] http://mirror.netinch.com/pub/mariadb/repo/10.3/ubuntu xenial main'
apt update
apt install mariadb-server

The last command will start installing MariaDB, which will ask for a root password for MariaDB to be defined. Once that is done and the installation finishes, we can exit from the container and save the configuration that we’ve done. The container id, which is needed as an argument for the commit command, is easily fetched from the shell prompt: root@[container id].

exit
docker commit [container id] rasmus/mariadb103

It’s pretty useful to be able to have the database data stored outside the container. This is easily done by first defining a place for the data on the host machine. In my case, I chose to put it in /dbdata in my home directory. We want to expose it as the /data directory inside the container. We start the container with this command.

docker run -v="$HOME/dbdata":"/data" -i -t -p 3306 rasmus/mariadb103 /bin/bash

Inside the container, let’s start the MariaDB server and run the normal installation and configuration scripts.

/usr/bin/mysqld_safe &
mysql_install_db
mysql_secure_installation

After this we can test connecting to MariaDB 10.3 and hopefully everything works.

mysql -p

Welcome to the MariaDB monitor.  Commands end with ; or \g.

Your MariaDB connection id is 16

Server version: 10.3.6-MariaDB-1:10.3.6+maria~xenial-log mariadb.org binary distribution

Now I want to save the configuration so far, to easily be able to start from this state whenever needed. First, I exit the MariaDB monitor and then shut down MariaDB.

exit
mysqladmin -p shutdown

Then another exit will get us out of the container, and we can save the new version of the container by running the docker commit command below in the host terminal. Again, take the container id from the container’s shell prompt.

exit
docker commit -m "mariadb 10.3.6" --author "Rasmus" [container id] rasmus/mariadb103:basic_configuration

Tadaa, done! MariaDB 10.3.6 is now available in a Docker container and I can start playing with the cool new features of MariaDB Server 10.3 like System Versioned Tables. To start the container, I just run:

docker run -v="$HOME/dbdata":"/data" -i -t -p 3306 rasmus/mariadb103:”basic configuration” /bin/bash

 


Understanding React `setState`

CSS-Tricks - Wed, 04/25/2018 - 06:36

React components can, and often do, have state. State can be anything, but think of things like whether a user is logged in or not and displaying the correct username based on which account is active. Or an array of blog posts. Or if a modal is open or not and which tab within it is active.

React components with state render UI based on that state. When the state of components changes, so does the component UI.

That makes understanding when and how to change the state of your component important. At the end of this tutorial, you should know how setState works, and be able to avoid common pitfalls that many of us hit when learning React.

Workings of `setState()`

setState() is the only legitimate way to update state after the initial state setup. Let’s say we have a search component and want to display the search term a user submits.

Here’s the setup:

import React, { Component } from 'react'

class Search extends Component {
  constructor(props) {
    super(props)

    this.state = {
      searchTerm: ''
    }
  }
}

We’re passing an empty string as a value and, to update the state of searchTerm, we have to call setState().

this.setState({ searchTerm: event.target.value })

Here, we’re passing an object to setState(). The object contains the part of the state we want to update which, in this case, is the value of searchTerm. React takes this value and merges it into the object that needs it. It’s sort of like the Search component asks what it should use for the value of searchTerm and setState() responds with an answer.
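
To make the merge concrete, here’s a small sketch, assuming the component’s state also tracked a hypothetical results array:

// assume: this.state = { searchTerm: '', results: [] }
this.setState({ searchTerm: 'hyperapp' })
// after the update: { searchTerm: 'hyperapp', results: [] }
// only searchTerm was replaced; results was merged through untouched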

This is basically kicking off a process that React calls reconciliation. The reconciliation process is the way React updates the DOM, by making changes to the component based on the change in state. When the request to setState() is triggered, React creates a new tree containing the reactive elements in the component (along with the updated state). This tree is used to figure out how the Search component’s UI should change in response to the state change by comparing it with the elements of the previous tree. React knows which changes to implement and will only update the parts of the DOM where necessary. This is why React is fast.

That sounds like a lot, but to sum up the flow:

  • We have a search component that displays a search term
  • That search term is currently empty
  • The user submits a search term
  • That term is captured and stored by setState as a value
  • Reconciliation takes place and React notices the change in value
  • React instructs the search component to update the value and the search term is merged in

The reconciliation process does not necessarily change the entire tree, except in a situation where the root of the tree is changed like this:

// old
<div>
  <Search />
</div>

// new
<span>
  <Search />
</span>

All <div> tags become <span> tags and the whole component tree will be updated as a result.

The rule of thumb is to never mutate state directly. Always use setState() to change state. Modifying state directly, like the snippet below, will not cause the component to re-render.

// do not do this
this.state = {
  searchTerm: event.target.value
}

Passing a Function to `setState()`

To demonstrate this idea further, let's create a simple counter that increments and decrements on click.

See the Pen setState Pen by Kingsley Silas Chijioke (@kinsomicrote) on CodePen.

Let’s register the component and define the markup for the UI:

class App extends React.Component {
  state = {
    count: 0
  }

  handleIncrement = () => {
    this.setState({ count: this.state.count + 1 })
  }

  handleDecrement = () => {
    this.setState({ count: this.state.count - 1 })
  }

  render() {
    return (
      <div>
        <div>{this.state.count}</div>
        <button onClick={this.handleIncrement}>Increment by 1</button>
        <button onClick={this.handleDecrement}>Decrement by 1</button>
      </div>
    )
  }
}

At this point, the counter simply increments or decrements the count by 1 on each click.

But what if we wanted to increment or decrement by 3 instead? We could try to call setState() three times in the handleDecrement and handleIncrement functions like this:

handleIncrement = () => {
  this.setState({ count: this.state.count + 1 })
  this.setState({ count: this.state.count + 1 })
  this.setState({ count: this.state.count + 1 })
}

handleDecrement = () => {
  this.setState({ count: this.state.count - 1 })
  this.setState({ count: this.state.count - 1 })
  this.setState({ count: this.state.count - 1 })
}

If you are coding along at home, you might be surprised to find that this doesn’t work.

The above code snippet is equivalent to:

Object.assign(
  {},
  { count: this.state.count + 1 },
  { count: this.state.count + 1 },
  { count: this.state.count + 1 },
)

Object.assign() is used to copy data from a source object to a target object. If the objects being copied all have the same keys, as in our example, the last object wins. Here’s a simpler version of how Object.assign() works:

let count = 3

const object = Object.assign(
  {},
  { count: count + 1 },
  { count: count + 2 },
  { count: count + 3 }
)

console.log(object)
// output: Object { count: 6 }

So instead of the call happening three times, it happens just once. This can be fixed by passing a function to setState(). Just as you pass objects to setState(), you can also pass functions, and that is the way out of the situation above.

If we edit the handleIncrement function to look like this:

handleIncrement = () => {
  this.setState((prevState) => ({ count: prevState.count + 1 }))
  this.setState((prevState) => ({ count: prevState.count + 1 }))
  this.setState((prevState) => ({ count: prevState.count + 1 }))
}

...we can now increment count three times with one click.

In this case, instead of merging, React queues the function calls in the order they are made and updates the entire state once it is done. This updates the state of count to 3 instead of 1.

Access Previous State Using Updater

When building React applications, there are times when you’ll want to calculate state based on the component’s previous state. You cannot always trust this.state to hold the correct state immediately after calling setState(), as it reflects the state currently rendered on the screen, not the pending update.

Let's go back to our counter example to see how this works. Let's say we have a function that decrements our count by 1. This function looks like this:

changeCount = () => {
  this.setState({ count: this.state.count - 1 })
}

What we want is the ability to decrement by 3. The changeCount() function is called three times in a function that handles the click event, like this:

handleDecrement = () => {
  this.changeCount()
  this.changeCount()
  this.changeCount()
}

Each time the button to decrement is clicked, the count will decrement by 1 instead of 3. This is because this.state.count does not get updated until the component has been re-rendered. The solution is to use an updater. An updater allows you to access the current state and put it to use immediately to update other items. So the changeCount() function will look like this:

changeCount = () => {
  this.setState((prevState) => {
    return { count: prevState.count - 1 }
  })
}

Now we are not depending on the result of this.state. Each update to count builds on the previous one, so we always access the correct state, which changes with each call to changeCount().

setState() should be treated as asynchronous — in other words, do not always expect that the state has changed immediately after calling setState().
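
If you need to act only after the update has actually been applied, setState() also accepts an optional callback as its second argument. A small sketch:

this.setState({ searchTerm: event.target.value }, () => {
  // runs after the state update has been applied and the component re-rendered
  console.log(this.state.searchTerm)
})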

Wrapping Up

When working with setState(), these are the major things you should know:

  • Updates to a component’s state should be done using setState()
  • You can pass an object or a function to setState()
  • Pass a function when you need to update state multiple times in succession
  • Do not depend on this.state immediately after calling setState(); make use of the updater function instead

The post Understanding React `setState` appeared first on CSS-Tricks.


a critical piece is missing for Oracle MySQL 8 (GA) …

Planet MySQL - Wed, 04/25/2018 - 04:57

Oracle MySQL 8.0 has been declared GA but a critical piece is missing … MySQL 8 is a fantastic release embedding the work of brilliant Oracle engineering. I will not detail all the great features of MySQL 8 as there are a lot of great presentations around it. https://mysqlserverteam.com/whats-new-in-mysql-8-0-generally-available/

One of my main concerns regarding [...]


What’s new in the Node.js 10 JavaScript runtime

InfoWorld JavaScript - Wed, 04/25/2018 - 03:00

Node.js 10.0.0 has been released, and will become the platform’s Long Term Support (LTS) line in October 2018. As the LTS line, it will be supported for three years.

Version 10.0.0 adds support for the OpenSSL 1.1.0 security toolkit but focuses mainly on incremental improvements. Also, while Node.js 10.0.0 ships with NPM 5.7, the 10.x line will be upgraded to NPM Version 6 later on; NPM 6 will offer performance, stability, and security improvements.

New features in Node.js 10

In addition to OpenSSL 1.1.0 support, the Node.js 10.0.0 release includes a number of other incremental improvements.



Improve MariaDB Performance using Query Profiling

Planet MySQL - Wed, 04/25/2018 - 02:22

Query profiling is a useful technique for analyzing the overall performance of a database. Considering that a single mid-to-large sized application can execute numerous queries each and every second, query profiling is an important part of database tuning, both as a proactive measure and in diagnosing problems.  In fact, it can become difficult to determine the exact sources and causes of bottlenecks and sluggish performance without employing some sort of query profiling techniques. This blog will present a few useful query profiling techniques that exploit MariaDB server’s own built-in tools: the Slow Query Log and the Performance Schema.

MariaDB vs. MySQL

Needless to say, the techniques that we’ll be covering here today are likely to be equally applicable to MySQL, due to the close relationship between the two products.

The day that Oracle announced the purchase of Sun back in 2009, Michael “Monty” Widenius forked MySQL and launched MariaDB, taking a swath of MySQL developers with him in the process.  His goal was for the relational database management system (RDBMS) to remain free under the GNU GPL.

Today, MariaDB is a drop-in replacement for MySQL, one with more features and better performance.

MariaDB used to be based on the corresponding version of MySQL, where one existed. For example, MariaDB 5.1.53 was based on MySQL 5.1.53, with some added bug fixes, additional storage engines, new features, and performance improvements.  As of this writing, the latest version of MariaDB is 10.2.x. Meanwhile, MySQL 8 is still in RC (Release Candidate) mode.

The Slow Query Log

One feature shared by both MariaDB and MySQL is the slow query log.  Queries that are deemed to be slow and potentially problematic are recorded in the log.  A “slow” query is defined as a query that takes longer than the long_query_time global system variable value (10 seconds by default) to run. Microsecond resolution is supported for file logging, but not for table logging.

Configuring the Slow Query Log via Global System Variables

Besides the long_query_time global system variable mentioned above, there are a few other variables that determine the behavior of the slow query log.

The slow query log is disabled by default. To enable it, set the slow_query_log system variable to 1. The log_output server system variable determines how the output will be written, and can also disable it. By default, the log is written to file, but it can also be written to table.  

Valid values for the log_output server system variable are TABLE, FILE or NONE.  The default name of the file is host_name-slow.log, but it can also be set using the --slow_query_log_file=file_name option. The table used is the slow_log table in the mysql system database.

These variables are best set in the my.cnf or mariadb.cnf configuration files, typically stored in the /etc/mysql/ directory on Linux and in the Windows System Directory, usually C:\Windows\System, on Windows.  (See Configuring MariaDB with my.cnf for all of the possible locations.)  The following settings, appended in the [mysqld] section, will:

  1. Enable the slow query log.
  2. Set time in seconds/microseconds defining a slow query.
  3. Provide the name of the slow query log file.
  4. Log queries that don’t use indexes.

slow_query_log = 1
long_query_time = 5
slow_query_log_file = /var/log/mysql/slow-query.log
log_queries_not_using_indexes

Settings will take effect after a server restart.
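
As an aside, the same variables can generally also be changed at runtime with SET GLOBAL, in which case they last only until the next restart unless they are also added to the configuration file. A quick sketch:

-- check the current values, then apply the settings without a restart
SHOW GLOBAL VARIABLES LIKE 'slow_query%';
SET GLOBAL slow_query_log = 1;
SET GLOBAL long_query_time = 5;
SET GLOBAL slow_query_log_file = '/var/log/mysql/slow-query.log';
SET GLOBAL log_queries_not_using_indexes = ON;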

Viewing the Slow Query Log

Slow query logs written to file can be viewed with any text editor.  Here are some sample contents:

# Time: 150109 11:38:55
# User@Host: root[root] @ localhost []
# Thread_id: 40  Schema: world  Last_errno: 0  Killed: 0
# Query_time: 0.012989  Lock_time: 0.000033  Rows_sent: 4079  Rows_examined: 4079  Rows_affected: 0  Rows_read: 4079
# Bytes_sent: 161085
# Stored routine: world.improved_sp_log
SET timestamp=1420803535;
SELECT * FROM City;
# User@Host: root[root] @ localhost []
# Thread_id: 40  Schema: world  Last_errno: 0  Killed: 0
# Query_time: 0.001413  Lock_time: 0.000017  Rows_sent: 4318  Rows_examined: 4318  Rows_affected: 0  Rows_read: 4318
# Bytes_sent: 194601
# Stored routine: world.improved_sp_log
SET timestamp=1420803535;

The only drawback to viewing the slow query log with a text editor is that it could (and does!) soon grow to such a size that it becomes increasingly difficult to parse through all the data.  Hence, there is a risk that problematic queries will get lost in the log contents. MariaDB offers the mysqldumpslow tool to simplify the process by summarizing the information.  The executable is bundled with MariaDB. To use it, simply run the command and pass in the log path. The resulting rows are more readable as well as grouped by query:

mysqldumpslow /tmp/slow_query.log

Reading mysql slow query log from /tmp/slow_query.log
Count: 1  Time=23.99s (24s)  Lock=0.00s (0s)  Rows_sent=1.0 (1), Rows_examined=0.0 (0), Rows_affected=0.0 (0), root[root](#)@localhost
  SELECT * from large_table

Count: 6  Time=6.83s (41s)  Lock=0.00s (0s)  Rows_sent=1.0 (6), Rows_examined=0.0 (0), Rows_affected=0.0 (0), root[root](#)@localhost
  SELECT * from another_large_table

There are various parameters that can be used with the command to help customize the output. In the next example, the top 5 queries sorted by the average query time will be displayed:

mysqldumpslow -t 5 -s at /var/log/mysql/localhost-slow.log

Working with the slow_log Table

Slow query logs written to table can be viewed by querying the slow_log table.

It contains the following fields:

Field           Type                 Default                Description
start_time      timestamp(6)         CURRENT_TIMESTAMP(6)   Time the query began.
user_host       mediumtext           NULL                   User and host combination.
query_time      time(6)              NULL                   Total time the query took to execute.
lock_time       time(6)              NULL                   Total time the query was locked.
rows_sent       int(11)              NULL                   Number of rows sent.
rows_examined   int(11)              NULL                   Number of rows examined.
db              varchar(512)         NULL                   Default database.
last_insert_id  int(11)              NULL                   last_insert_id.
insert_id       int(11)              NULL                   Insert id.
server_id       int(10) unsigned     NULL                   The server’s id.
sql_text        mediumtext           NULL                   Full query.
thread_id       bigint(21) unsigned  NULL                   Thread id.
rows_affected   int(11)              NULL                   Number of rows affected by an UPDATE or DELETE (as of MariaDB 10.1.2).

Here are some sample results of a SELECT ALL against the slow_log table:

SELECT * FROM mysql.slow_log\G
*************************** 2. row ***************************
    start_time: 2014-11-11 07:56:28.721519
     user_host: root[root] @ localhost []
    query_time: 00:00:12.000215
     lock_time: 00:00:00.000000
     rows_sent: 1
 rows_examined: 0
            db: test
last_insert_id: 0
     insert_id: 0
     server_id: 1
      sql_text: SELECT * FROM large_table
     thread_id: 74
 rows_affected: 0
...

Ordering Slow Query Log Rows

If you want to emulate the Linux “tail -100 log-slow.log” command with the slow_log table, which lists the latest queries in the end, you can issue the following query:

SELECT * FROM (SELECT * FROM slow_log ORDER BY start_time DESC LIMIT 100) sl ORDER BY start_time;

That will list the last 100 queries in the table.
Rather than typing the same SELECT statement every time you want to list the latest queries last, I would recommend creating a stored procedure, something like SHOW_LATEST_SLOW_QUERIES, and using it instead. The number of queries to show can be passed to your proc as an input parameter, as in the sketch below.
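
A minimal sketch of such a procedure; the name comes from the suggestion above and the parameter name is illustrative:

DELIMITER //
CREATE PROCEDURE SHOW_LATEST_SLOW_QUERIES(IN how_many INT)
BEGIN
  -- same query as above, with the row count passed in as a parameter
  SELECT * FROM
    (SELECT * FROM mysql.slow_log ORDER BY start_time DESC LIMIT how_many) sl
  ORDER BY start_time;
END //
DELIMITER ;

-- usage: list the latest 100 slow queries, latest last
CALL SHOW_LATEST_SLOW_QUERIES(100);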

Testing the Slow Query Log

Before attempting to scour through the slow query log in a production environment, it’s a good idea to test its operation by executing a few test queries; perhaps some that should trigger logging and others that should not.

As mentioned previously, when logging is enabled, queries that take longer than the long_query_time global system variable value to run are recorded in the slow log file or the slow_log table, based on the log_output variable’s value.

You can certainly work with your data to construct SELECT queries that take variable amounts of time to execute, but perhaps an easier approach is to employ the sleep() function:

SLEEP(duration)

The sleep() function pauses query execution for the number of seconds given by the duration argument, then returns 0.  If SLEEP() is interrupted, it returns 1. The duration may have a fractional part given in microseconds, but there is really no need for that when testing the slow log.  Here’s an example:

SELECT sleep(5);
+----------+
| sleep(5) |
+----------+
|        0 |
+----------+
1 row in set (5.0 sec)

Suppose that the long_query_time global system variable has not been explicitly assigned a value. In that instance it would have the default value of 10 seconds. Therefore, the following SELECT statement would be recorded to the slow log:
SELECT SLEEP(11);

Query Profiling with Performance Schema

Another tool that we can use to monitor server performance is the Performance Schema. Introduced in MariaDB 5.5, the Performance Schema is implemented as a storage engine, and so will appear in the list of storage engines available:

SHOW ENGINES;
+--------------------+---------+--------------------+--------------+------+------------+
| Engine             | Support | Comment            | Transactions | XA   | Savepoints |
+--------------------+---------+--------------------+--------------+------+------------+
| InnoDB             | DEFAULT | Default engine     | YES          | YES  | YES        |
| PERFORMANCE_SCHEMA | YES     | Performance Schema | NO           | NO   | NO         |
| ...                |         |                    |              |      |            |
+--------------------+---------+--------------------+--------------+------+------------+

It is disabled by default for performance reasons, but it can easily be enabled as follows:
First, add the following line in your my.cnf or my.ini file, in the [mysqld] section:
performance_schema=on

The performance schema cannot be activated at runtime – it must be set when the server starts, via the configuration file.
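
A quick way to verify that the setting took effect after the restart:

SHOW VARIABLES LIKE 'performance_schema';
-- the Value column should show ON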

The Performance Schema storage engine contains a database called performance_schema, which in turn consists of a number of tables that can be queried with regular SQL statements for a wide range of performance information.

In order to collect data, you need to set up all consumers (starting the collection of data) and instrumentations (what data to collect).  These may be set either on server startup or at runtime.

The following statements set up consumers and instrumentations at runtime:

UPDATE performance_schema.setup_consumers SET ENABLED = 'YES';

UPDATE performance_schema.setup_instruments SET ENABLED = 'YES', TIMED = 'YES';

You can decide what to enable/disable with WHERE NAME LIKE '%what_to_enable'; conversely, you can disable instrumentations by setting ENABLED to 'NO'.
The following enables all instrumentation of all stages (computation units) in the configuration file:

[mysqld]
performance_schema=ON
performance-schema-instrument='stage/%=ON'
performance-schema-consumer-events-stages-current=ON
performance-schema-consumer-events-stages-history=ON
performance-schema-consumer-events-stages-history-long=ON

With regard to query profiling:

  1. Ensure that statement and stage instrumentation is enabled by updating the setup_instruments table as follows:

UPDATE performance_schema.setup_instruments SET ENABLED = 'YES', TIMED = 'YES'
WHERE NAME LIKE '%statement/%';

UPDATE performance_schema.setup_instruments SET ENABLED = 'YES', TIMED = 'YES'
WHERE NAME LIKE '%stage/%';

  2. Enable the events_statements_* and events_stages_* consumers:

UPDATE performance_schema.setup_consumers SET ENABLED = 'YES'
WHERE NAME LIKE '%events_statements_%';

UPDATE performance_schema.setup_consumers SET ENABLED = 'YES'
WHERE NAME LIKE '%events_stages_%';

Once you’ve narrowed down what you are interested in, there are two ways to start monitoring:

  1. View raw data in the summary views.
    This gives you an overall picture of usage on the instance.
  2. Snapshot data, and compute deltas over time.
    This gives you an idea of the rates of changes for events.

Let’s start with viewing raw summary data:

  1. Run the statement(s) that you want to profile. For example:
SELECT * FROM acme.employees WHERE emp_no = 99;
+--------+------------+------------+-----------+--------+------------+
| emp_no | birth_date | first_name | last_name | gender | hire_date  |
+--------+------------+------------+-----------+--------+------------+
|     99 | 1979-01-29 | Bill       | Bixby     | M      | 2006-06-05 |
+--------+------------+------------+-----------+--------+------------+

  2. Identify the EVENT_ID of the statement by querying the events_statements_history_long table. This step is similar to running SHOW PROFILES to identify the Query_ID. The following query produces output similar to SHOW PROFILES:

SELECT EVENT_ID, TRUNCATE(TIMER_WAIT/1000000000000,6) as Duration, SQL_TEXT
FROM performance_schema.events_statements_history_long
WHERE SQL_TEXT like '%99%';
+----------+----------+------------------------------------------------+
| event_id | duration | sql_text                                       |
+----------+----------+------------------------------------------------+
|       22 | 0.021470 | SELECT * FROM acme.employees WHERE emp_no = 99 |
+----------+----------+------------------------------------------------+

  3. Query the events_stages_history_long table to retrieve the statement’s stage events. Stages are linked to statements using event nesting. Each stage event record has a NESTING_EVENT_ID column that contains the EVENT_ID of the parent statement.

SELECT event_name AS Stage, TRUNCATE(TIMER_WAIT/1000000000000,6) AS Duration
FROM performance_schema.events_stages_history_long
WHERE NESTING_EVENT_ID=22;
+--------------------------------+----------+
| Stage                          | Duration |
+--------------------------------+----------+
| stage/sql/starting             | 0.000080 |
| stage/sql/checking permissions | 0.000005 |
| stage/sql/Opening tables       | 0.027759 |
| stage/sql/init                 | 0.000052 |
| stage/sql/System lock          | 0.000009 |
| stage/sql/optimizing           | 0.000006 |
| stage/sql/statistics           | 0.000082 |
| stage/sql/preparing            | 0.000008 |
| stage/sql/executing            | 0.000000 |
| stage/sql/Sending data         | 0.000017 |
| stage/sql/end                  | 0.000001 |
| stage/sql/query end            | 0.000004 |
| stage/sql/closing tables       | 0.000006 |
| stage/sql/freeing items        | 0.000272 |
| stage/sql/cleaning up          | 0.000001 |
+--------------------------------+----------+

Conclusion

This blog presented a few useful query profiling techniques that employ a couple of MariaDB server’s built-in tools: the Slow Query Log and the Performance Schema.
The Slow Query Log records queries that are deemed to be slow and potentially problematic, that is, queries that take longer than the long_query_time global system variable value to run.
The slow query log may be viewed with any text editor. Alternatively, MariaDB’s mysqldumpslow tool can simplify the process by summarizing the information. The resulting rows are more readable as well as grouped by query.
The Performance Schema is a storage engine that contains a database called performance_schema, which in turn consists of a number of tables that can be queried with regular SQL statements for a wide range of performance information. It may be utilized to view raw data in the summary views as well as review performance over time.

The post Improve MariaDB Performance using Query Profiling appeared first on Monyog Blog.


Flow Typecheck React Starter App

Echo JS - Wed, 04/25/2018 - 01:04

List of Conferences & Events w/ MySQL, April - June 2018! - continued

Planet MySQL - Wed, 04/25/2018 - 00:34

As an update to the blog posted on April 4, 2018, we would like to update the list of events where you can find MySQL. Please see the four new conferences below:

  • DevTalks, Cluj-Napoca, Romania, May 16, 2018
    • MySQL became a customized sponsor of this show. We will have a MySQL keynote given by Georgi Kodinov, the MySQL Senior SW Development Manager. We are still working on the topic; please watch the organizers’ website for further updates.
  • SyntaxCon, Charleston, SC, US, June 6-8, 2018
    • The MySQL Community team is going to be a Bronze sponsor of the SyntaxCon conference. This time we are going without a booth, but with an already approved MySQL talk. Please find the talk in the schedule: David Stokes, the MySQL Community Manager, will be talking about “MySQL 8 - A New Beginning”. The talk is scheduled for Thursday, June 7 @1:15pm.
  • PyCon Thailand, Bangkok, Thailand, June 16-17, 2018
    • MySQL is going to support & attend this conference. This time we are going without a booth, but with “on site” staffing by Ronen Baram, the Principal Sales Consultant. Ronen also submitted a MySQL talk and we hope it will be approved. Please watch the organizers’ website for further updates.
  • DataOps Barcelona, Spain, June 21-22, 2018
    • We are happy to announce that the MySQL Community team is going to be a Community sponsor of DataOps Barcelona. The MySQL Community Manager, Fred Descamps, will be talking about MySQL 8.0, Cluster & Document Store. Please do not miss his talk and those of other well-known MySQL speakers, which will be announced in the schedule section of the conference website.

Please be aware that this list is not necessarily final; over time, more events could be added or some of them removed. We will keep you informed!

 

 


Percona Live 2018 Sessions: Ghostferry – the Swiss Army Knife of Live Data Migrations with Minimum Downtime

Planet MySQL - Tue, 04/24/2018 - 18:11

In this blog post on Percona Live 2018 sessions, we’ll talk with Shuhao Wu, Software Developer at Shopify, Inc., about how Ghostferry is the Swiss Army knife of live data migrations.

Existing tools like mysqldump and replication cannot migrate data between GTID-enabled MySQL and non-GTID-enabled MySQL – a common configuration across multiple cloud providers that cannot be changed. These tools are also cumbersome to operate and error-prone, thus requiring a DBA’s attention for each data migration. Shopify’s team introduced a tool that allows for easy migration of data between MySQL databases with downtime on the order of seconds.

Inspired by gh-ost, their tool is named Ghostferry and allows application developers at Shopify to migrate data without assistance from DBAs. It has been used to rebalance sharded data across databases. They open sourced Ghostferry at the Percona Live 2018 conference so that anyone can migrate their own data with minimal hassle and downtime. Since Shopify wrote Ghostferry as a library, you can use it to build specialized data movers that move arbitrary subsets of data from one database to another.

Shuhao walked through what data migration is, how it works, and how Ghostferry works to make this process simpler and standard across platforms – especially in systems (like cloud providers such as AWS or Google) where you don’t have control of the instances. Ghostferry also simplifies the replication process and allows someone to copy across instances with a single Ghostferry command, rather than having to understand both the source and target instances.

After the Percona Live 2018 sessions talk, I had a chance to speak with Shuhao about Ghostferry. Check it out below.

The post Percona Live 2018 Sessions: Ghostferry – the Swiss Army Knife of Live Data Migrations with Minimum Downtime appeared first on Percona Database Performance Blog.


Percona Live 2018 Sessions: Microsoft Built MySQL, PostgreSQL and MariaDB for the Cloud

Planet MySQL - Tue, 04/24/2018 - 17:53

In this blog post on Percona Live 2018 sessions, we’ll talk with Jun Su, Principal Engineering Manager at Microsoft about how Microsoft built MySQL, PostgreSQL and MariaDB for the cloud.

Offering MySQL, PostgreSQL and MariaDB database services in the cloud is different from doing so on-premises. Latency, connection redirection, and optimal performance configuration are just a few of the challenges. In this session, Jun Su walked us through Microsoft’s journey to not only offer these popular OSS RDBMSs in Microsoft Azure, but also how they are implemented in Azure as a true DBaaS. We learned about Microsoft’s Azure Database Services platform architecture, and how these services are built to scale.

In Azure, database engine instances are services managed by the Azure Service Fabric, which is a platform for reliable, hyperscale, microservice-based applications. So each database engine gets treated as a microservice. When coupled with Azure’s clustering — a set of machines that the Service Fabric stitches together — you can scale up to 1000+ machines. This provides some pretty impressive scaling opportunities. Jun also walked through some of the issues with multi-tenancy, and how different levels of multi-tenancy have different trade-offs in cost, capacity and density.

After the talk, I spoke briefly with Jun about Microsoft’s efforts to provide the different open source databases on the Azure platform.

The post Percona Live 2018 Sessions: Microsoft Built MySQL, PostgreSQL and MariaDB for the Cloud appeared first on Percona Database Performance Blog.


One Giant Leap For SQL: MySQL 8.0 Released

Planet MySQL - Tue, 04/24/2018 - 17:00

“Still using SQL-92?” is the opening question of my “Modern SQL” presentation. When I ask this question, an astonishingly large portion of the audience openly admits to using 25 years old technology. If I ask who is still using Windows 3.1, which was also released in 1992, only a few raise their hand…but they’re joking, of course.

Clearly this comparison is not entirely fair. It nevertheless demonstrates that the know-how surrounding newer SQL standards is pretty lacking. There were actually five updates since SQL-92—many developers have never heard of them. The latest version is SQL:2016.

As a consequence, many developers don’t know that SQL hasn’t been limited to the relational algebra or the relational model since 1999. SQL:1999 introduced operations that don't exist in relational algebra (with recursive, lateral) and types (arrays!) that break the traditional interpretation of the first normal form.

Since then, for 19 years now, whether or not a SQL feature fits the relational idea isn’t important anymore. What is important is that a feature has well-defined semantics and solves a real problem. The academic approach has given way to a pragmatic one. Today, the SQL standard has a practical solution for almost every data processing problem. Some of them stay within the relational domain, while others do not.

Resolution

Don’t say relational database when referring to SQL databases. SQL is really more than just relational.

It’s really too bad that many developers still use SQL in the same way it was being used 25 years ago. I believe the main reasons are a lack of knowledge and interest among developers along with poor support for modern SQL in database products.

Let’s have a look at this argument in the context of MySQL. Considering its market share, I think that MySQL’s lack of modern SQL has contributed more than its fair share to this unfortunate situation. I once touched on that argument in my 2013 blog post “MySQL is as Bad for SQL as MongoDB is to NoSQL”. The key message was that “MongoDB is a popular, yet poor representative of its species—just like MySQL is”. Joe Celko has expressed his opinion about MySQL differently: “MySQL is not SQL, it merely borrows the keywords from SQL”.

You can see some examples of the questionable interpretation of SQL in the MySQL WAT talk on YouTube. Note that this video is from 2012 and uses MySQL 5.5 (the current GA version at that time). Since then, MySQL 5.6 and 5.7 came out, which improved the situation substantially. The default settings on a fresh installation are much better now.

It is particularly nice that they were really thinking about how to mitigate the effects of changing defaults. When they enabled ONLY_FULL_GROUP_BY by default, for example, they went the extra mile to implement the most complete functional dependencies checking among the major SQL databases.

About the same time MySQL 5.7 was released, I stopped bashing MySQL. Of course I'm kidding. I'm still bashing MySQL occasionally…but it has become a bit harder since then.

By the way, did you know MySQL still doesn’t support check constraints? Just as in previous versions, you can use check constraints in the create table statement but they are silently ignored. Yes—ignored without warning. Even MariaDB fixed that a year ago.
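
To see what that means in practice, here is a sketch of the behavior described above (the table is illustrative):

-- the check clause is parsed, then silently ignored
CREATE TABLE t (a INT, CHECK (a > 0));

-- succeeds, without error or warning
INSERT INTO t VALUES (-1);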

Uhm, I’m bashing again! Sorry—old habits die hard.

Nevertheless, the development philosophy of MySQL has visibly changed over the last few releases. What happened? You know the answer already: MySQL is under new management since Oracle bought it through Sun. I must admit: it might have been the best thing that happened to SQL in the past 10 years, and I really mean SQL—not MySQL.

The reason I think a single database release has a dramatic effect on the entire SQL ecosystem is simple: MySQL is the weakest link in the chain. If you strengthen that link, the entire chain becomes stronger. Let me elaborate.

MySQL is very popular. According to db-engines.com, it’s the second most popular SQL database overall. More importantly: it is, by a huge margin, the most popular free SQL database. This has a big effect on anyone who has to cope with more than one specific SQL database. These are often software vendors that make products like content management systems (CMSs), e-commerce software, or object-relational mappers (ORMs). Due to its immense popularity, such vendors often need to support MySQL. Only a few of them bite the bullet and truly support multiple databases—Java Object Oriented Querying (jOOQ) really stands out in this regard. Many vendors just limit themselves to the commonly supported SQL dialect, i.e. MySQL.

Another important group affected by MySQL’s omnipresence are people learning SQL. They can reasonably assume that the most popular free SQL database is a good foundation for learning. What they don't know is that MySQL limits their SQL-foo to the weakest SQL dialect among those being widely used. Based loosely on Joe Celko’s statement: these people know the keywords, but don’t understand their real meaning. Worse still, they have not heard anything about modern SQL features.

Last week, that all changed when Oracle finally published a generally available (GA) release of MySQL 8.0. This is a landmark release as MySQL eventually evolved beyond SQL-92 and the purely relational dogma. Among a few other standard SQL features, MySQL now supports window functions (over) and common table expressions (with). Without a doubt, these are the two most important post-SQL-92 features.

The days are numbered in which software vendors claim they cannot use these features because MySQL doesn't support them. Window functions and CTEs are now in the documentation of the most popular free SQL database. Let me therefore boldly claim: MySQL 8.0 is one small step for a database, one giant leap for SQL.

It gets even better and the future is bright! As a consequence of Oracle getting its hands on MySQL, some of the original MySQL team (among them the original creator) created the MySQL fork MariaDB. Apparently, their strategy is to add many new features to convince MySQL users to consider their competing product. Personally I think they sacrifice quality—very much like they did before with MySQL—but that’s another story. Here it is more relevant that MariaDB has been validating check constraints for a year now. That raises a question: how much longer can MySQL afford to ignore check constraints? Or to put it another way, how much longer can they endure my bashing ;)

Besides check constraints, MariaDB 10.2 also introduced window functions and common table expressions (CTEs). At that time, MySQL had a beta with CTEs but no window functions. MariaDB is moving faster.

In 10.3, MariaDB is set to release “system versioned tables”. In a nutshell: once activated for a table, system versioning keeps old versions for updated and deleted rows. By default, queries return the current version as usual, but you can use a special syntax (as of) to get older versions. You can read more about this in MariaDB’s announcement.
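
Based on that announcement, the syntax looks roughly like this (the accounts table is a made-up example):

-- keep old row versions automatically
CREATE TABLE accounts (
  id INT PRIMARY KEY,
  balance DECIMAL(10,2)
) WITH SYSTEM VERSIONING;

-- current data, as usual
SELECT * FROM accounts;

-- the state as of an earlier point in time
SELECT * FROM accounts FOR SYSTEM_TIME AS OF TIMESTAMP '2018-04-01 00:00:00';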

System versioning was introduced into the SQL standard in 2011. As it looks now, MariaDB will be the first free SQL database supporting it. I hope this is an incentive for other vendors—and also for users asking their vendors to support more modern SQL features!

Now that the adoption of modern SQL has finally gained some traction, there is only one problem left: the gory details. The features defined by the standard have many subfeatures, and due to their sheer number, it is common practice to support only some of them. That means it is not enough to say that a database supports window functions. Which window functions does it actually support? Which frame units (rows, range, groups)? The answers to these questions make all the difference between a marketing gag and a powerful feature.

In my mission to make modern SQL more accessible to developers, I’m testing these details so I can highlight the differences between products. The results of these tests are shown in matrices like the ones above. The rest of this article will thus briefly go through the new standard SQL features introduced with MySQL 8.0 and discuss some implementation differences. As you will see, MySQL 8.0 is pretty good in this regard. The notable exception is its JSON functionality.

Window Functions

There is SQL before window functions and SQL after window functions. Without exaggeration, window functions are a game changer. Once you understood window functions, you cannot imagine how you could ever have lived without them. The most common use cases, for example finding the best N rows per group, building running totals or moving averages, and grouping consecutive events, are just the tip of the iceberg. Window functions are one of the most important tools to avoid self-joins. That alone makes many queries less redundant and much faster. Window functions are so powerful that even newcomers like several Apache SQL implementations (Hive, Impala, Spark), NuoDB and Google BigQuery introduced them years ago. It’s really fair to say that MySQL is pretty late to this party.
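
To make that concrete, here is a small sketch of a running total and a moving average, assuming a hypothetical table sales(sale_day, amount):

SELECT sale_day,
       amount,
       SUM(amount) OVER (ORDER BY sale_day) AS running_total,
       AVG(amount) OVER (ORDER BY sale_day
                         ROWS BETWEEN 6 PRECEDING AND CURRENT ROW) AS moving_avg_7d
FROM sales;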

The following matrix shows the support of the over clause for some major SQL databases. As you can see, MySQL’s implementation actually exceeds the capabilities of “the world’s most advanced open source relational database”, as PostgreSQL claims on its new homepage. However, PostgreSQL 11 is set to recapture the leader position in this area.

The actual set of window functions offered by MySQL 8.0 is also pretty close to the state of the art.

Common Table Expressions (with [recursive])

The next major enhancement for MySQL 8.0 are common table expressions or the with [recursive] clause. Important use cases are traversing graphs with a single query, generating an arbitrary number of rows, converting CSV strings to rows (reversed listagg / group_concat) or just literate SQL.
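
For example, generating an arbitrary number of rows now takes a single recursive query (a sketch):

WITH RECURSIVE seq (n) AS (
  SELECT 1            -- anchor row
  UNION ALL
  SELECT n + 1        -- recursive step
  FROM seq
  WHERE n < 10
)
SELECT n FROM seq;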

Again, MySQL’s first implementation closes the gap.

Other Standard SQL Features

Besides window functions and the with clause, MySQL 8.0 also introduces some other standard SQL features. However, compared to the previous two, these are by no means killer features.

As you can see, Oracle pushes standard SQL JSON support. The Oracle database and MySQL are currently the leaders in this area (and both are from the same vendor!). The json_objectagg and json_arrayagg functions were even backported to MySQL 5.7.22. However, it’s also notable that MySQL doesn’t follow the standard syntax for these two functions. Modifiers defined in the standard (e.g. an order by clause) are generally not supported. Json_objectagg neither recognizes the keywords key and value nor accepts the colon (:) to separate attribute names and values. It looks like MySQL parses these as regular function calls—as opposed to syntax described by the standard.

It’s also interesting to see that json_arrayagg handles null values incorrectly, very much like the Oracle database (they don’t default to absent on null). Seeing the same issue in two supposedly unrelated products is always interesting. Adding the fact that both products come from the same vendor adds another twist.

The two last features in the list, grouping function (related to rollup) and column names in the from clause are solutions to pretty specific problems. Their MySQL 8.0 implementation is basically on par with that of other databases.

Furthermore, MySQL 8.0 also introduced standard SQL roles. The reason this is not listed in the matrix above is simple: the matrices are based on actual tests I run against all these databases. My homegrown testing framework does not yet support test cases that require multiple users—currently all tests are run with a default user, so I cannot test access rights yet. However, the time for that will come—stay tuned.

Other Notable Enhancements

I'd like to close this article with MySQL 8.0 fixes and improvements that are not related to the SQL standard.

One of them is about using the desc modifier in index declarations:

CREATE INDEX … ON … (<column> [ASC|DESC], …)

Most—if not all—databases use the same logic in the index creation as for the order by clause, i.e. by default, the order of column values is ascending. Sometimes it is needed to sort some index columns in the opposite direction. That’s when you specify desc in an index. Here’s what the MySQL 5.7 documentation said about this:

An index_col_name specification can end with ASC or DESC. These keywords are permitted for future extensions for specifying ascending or descending index value storage. Currently, they are parsed but ignored; index values are always stored in ascending order.

“They are parsed but ignored”? To be more specific: they are parsed but ignored without warning very much like check constraints mentioned above.

However, this has been fixed with MySQL 8.0. Now there is a warning. Just kidding! Desc is honored now.
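
An index that serves latest-first queries can therefore be declared the way it will actually be stored. A sketch against a made-up events table:

-- created_at is now truly stored descending,
-- matching ORDER BY user_id ASC, created_at DESC
CREATE INDEX idx_user_latest ON events (user_id ASC, created_at DESC);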

There are many other improvements in MySQL 8.0. Please refer to “What’s New in MySQL 8.0?” for a great overview.

One Giant Leap For SQL: MySQL 8.0 Released” by Markus Winand was originally published at modern SQL.

