emGee Software Solutions Custom Database Applications

Web Technologies

Domain Module Postmortem

Echo JS - 18 hours 55 min ago

All Things React

Echo JS - 18 hours 55 min ago

PHP 7.1.18 Released - PHP: Hypertext Preprocessor

Planet PHP - Thu, 05/24/2018 - 17:00
The PHP development team announces the immediate availability of PHP 7.1.18. All PHP 7.1 users are encouraged to upgrade to this version. For source downloads of PHP 7.1.18, please visit our downloads page; Windows source and binaries can be found on windows.php.net/download/. The list of changes is recorded in the ChangeLog.

Using dbdeployer to manage MySQL, Percona Server and MariaDB sandboxes

Planet MySQL - Thu, 05/24/2018 - 14:56

Some years ago, Peter Z wrote a blog post about using MySQL Sandbox to deploy multiple server versions. Last February, Giuseppe introduced us to its successor: dbdeployer. In this blog post we will demonstrate how to use it. There is a lot of information in Giuseppe's post, so head there if you want a deeper dive.

The first step is to install it, which is really easy to do now since it's developed in Go and standalone executables are provided. You can get the latest version from the project's GitHub releases page.

shell> wget https://github.com/datacharmer/dbdeployer/releases/download/1.5.0/dbdeployer-1.5.0.linux.tar.gz
shell> tar xzf dbdeployer-1.5.0.linux.tar.gz
shell> mv dbdeployer-1.5.0.linux ~/bin/dbdeployer

If you have your ~/bin/ directory in the path, you should now be able to run dbdeployer commands. Let's start by deploying a vanilla MySQL sandbox with the latest version.

In the Support Team, we extensively use MySQL Sandbox (the predecessor to dbdeployer) to easily run different flavours and versions of MySQL so that we can test with the same versions our customers present us with. We store MySQL binaries in /opt/, so we can all share them and avoid wasting disk space on duplicated binaries.

The first step to using dbdeployer is getting the binary we want to run, and then unpacking it into the binaries directory.

shell> wget https://dev.mysql.com/get/Downloads/MySQL-8.0/mysql-8.0.11-linux-glibc2.12-x86_64.tar.gz
shell> dbdeployer --sandbox-binary=/opt/mysql/ unpack mysql-8.0.11-linux-glibc2.12-x86_64.tar.gz

This command will extract and move the files to the appropriate directory, which in this case is under /opt/mysql/ as overridden with the --sandbox-binary argument, so we can use them with the deploy command.

Standalone

To create a new standalone MySQL sandbox with the newly extracted binary, we can use the following command.

shell> dbdeployer --sandbox-binary=/opt/mysql/ deploy single 8.0.11
Creating directory /home/vagrant/sandboxes
Database installed in $HOME/sandboxes/msb_8_0_11
run 'dbdeployer usage single' for basic instructions'
.. sandbox server started

You can read the dbdeployer usage output to have even more information on how the tool works. Next, let’s connect to it.

shell> cd sandboxes/msb_8_0_11/
shell> ./use
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 9
Server version: 8.0.11 MySQL Community Server - GPL

Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql [localhost] {msandbox} ((none)) > select @@version, @@port;
+-----------+--------+
| @@version | @@port |
+-----------+--------+
| 8.0.11    |   8011 |
+-----------+--------+
1 row in set (0.00 sec)

And that was it! When creating a new instance, dbdeployer will try to use a port derived from the version number with the dots removed (8.0.11 becomes 8011). If that port is in use, it will try another one, or we can manually override it with the --port argument.
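
For example, if you want to pin the sandbox to a port of your choosing, the --port flag mentioned above does the trick (the port number here is just an arbitrary example):

shell> dbdeployer --sandbox-binary=/opt/mysql/ deploy single 8.0.11 --port 5811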

Replication

We can also easily setup a replication environment with just one command.

shell> dbdeployer --sandbox-binary=/opt/mariadb/ deploy replication 10.2.15
Installing and starting master
. sandbox server started
Installing and starting slave1
. sandbox server started
Installing and starting slave2
. sandbox server started
$HOME/sandboxes/rsandbox_10_2_15/initialize_slaves
initializing slave 1
initializing slave 2
Replication directory installed in $HOME/sandboxes/rsandbox_10_2_15
run 'dbdeployer usage multiple' for basic instructions'

Again, you should run the recommended command to get more insight into what can be done. We can use the ./m script to connect to the master, and ./s1 to connect to the first slave. The ./use_all* scripts can come in handy for running commands on many servers at a time.
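
For instance, to quickly check that all three nodes are up and see which ports they listen on, something like this should work (the SQL statement is just an example; ./use_all passes it to every node in the sandbox):

shell> cd sandboxes/rsandbox_10_2_15/
shell> ./use_all "SELECT @@server_id, @@port"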

Multiple sandboxes

Finally, we will see how to create multiple sandboxes with the same version at the same time.

shell> dbdeployer --sandbox-binary=/opt/percona_server/ deploy multiple 5.7.21
Installing and starting node 1
. sandbox server started
Installing and starting node 2
. sandbox server started
Installing and starting node 3
. sandbox server started
multiple directory installed in $HOME/sandboxes/multi_msb_5_7_21
run 'dbdeployer usage multiple' for basic instructions'

This could be useful for setting up environments that are not already covered by the tool, like Galera clusters or semi-sync replication. With this approach, we will at least have a base to start from, and we can then use our own custom scripts. dbdeployer now has templates, which would allow extending functionality to support this, if needed. I have not yet tried to do so, but it sounds like an interesting project for the future! Let me know if you would be interested in reading more about it.

The post Using dbdeployer to manage MySQL, Percona Server and MariaDB sandboxes appeared first on Percona Database Performance Blog.

Setting up PMM on Google Compute Engine in 15 minutes or less

Planet MySQL - Thu, 05/24/2018 - 12:32

In this blog post, I will show you how easy it is to set up a Percona Monitoring and Management server on Google Compute Engine from the command line.

First off you will need to have a Google account and install the Cloud SDK tool. You need to create a GCP (Google Cloud Platform) project and enable billing to proceed. This blog assumes you are able to authenticate and SSH into instances from the command line.

Here are the steps to install PMM server in Google Cloud Platform.

1) Create the Compute engine instance with the following command. The example creates an Ubuntu Xenial 16.04 LTS compute instance in the us-west1-b zone with a 100GB persistent disk. For production systems it would be best to use a 500GB disk instead (size=500GB). This should be enough for default data retention settings, although your needs may vary.

jerichorivera@percona-support:~/GCE$ gcloud compute instances create pmm-server --tags pmmserver --image-family ubuntu-1604-lts --image-project ubuntu-os-cloud --machine-type n1-standard-4 --zone us-west1-b --create-disk=size=100GB,type=pd-ssd,device-name=sdb --description "PMM Server on GCP" --metadata-from-file startup-script=deploy-pmm-xenial64.sh
Created [https://www.googleapis.com/compute/v1/projects/thematic-acumen-204008/zones/us-west1-b/instances/pmm-server].
NAME        ZONE        MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS
pmm-server  us-west1-b  n1-standard-4               10.138.0.2   35.233.216.225  RUNNING

Notice that we've used:

--metadata-from-file startup-script=deploy-pmm-xenial64.sh

The file has the following contents:

jerichorivera@percona-support:~$ cat GCE/deploy-pmm-xenial64.sh
#!/bin/bash
set -v
sudo apt-get update
sudo apt-get upgrade -y
sudo apt-get install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
# Format the persistent disk, mount it then add to /etc/fstab
sudo mkfs.ext4 -m 0 -F -E lazy_itable_init=0,lazy_journal_init=0,discard /dev/sdb
sudo mkdir -p /mnt/disks/pdssd
sudo mount -o discard,defaults /dev/sdb /mnt/disks/pdssd/
sudo chmod a+w /mnt/disks/pdssd/
sudo cp /etc/fstab /etc/fstab.backup
echo UUID=`sudo blkid -s UUID -o value /dev/sdb` /mnt/disks/pdssd ext4 discard,defaults,nofail 0 2 | sudo tee -a /etc/fstab
# Change docker's root directory before installing Docker
sudo mkdir /etc/systemd/system/docker.service.d/
cat << EOF > /etc/systemd/system/docker.service.d/docker.root.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// -g /mnt/disks/pdssd/docker/
EOF
sudo apt-get install -y docker-ce
# Creates the deploy.sh script
cat << EOF > /tmp/deploy.sh
#!/bin/bash
set -v
docker pull percona/pmm-server:latest
docker create -v /opt/prometheus/data -v /opt/consul-data -v /var/lib/mysql -v /var/lib/grafana --name pmm-data percona/pmm-server:latest /bin/true
docker run -d -p 80:80 --volumes-from pmm-data --name pmm-server --restart always percona/pmm-server:latest
EOF

This startup script is executed right after the compute instance is created. The script formats the persistent disk and mounts the file system; creates a custom Docker unit file to change Docker's root directory from /var/lib/docker to /mnt/disks/pdssd/docker; installs the Docker package; and creates the deploy.sh script.

2) Once the compute engine instance is created, SSH into the instance and check that Docker is running and that its root directory points to the desired folder.

jerichorivera@pmm-server:~$ sudo systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/docker.service.d
           └─docker.root.conf
   Active: active (running) since Wed 2018-05-16 12:53:30 UTC; 45s ago
     Docs: https://docs.docker.com
 Main PID: 4744 (dockerd)
   CGroup: /system.slice/docker.service
           ├─4744 /usr/bin/dockerd -H fd:// -g /mnt/disks/pdssd/docker/
           └─4764 docker-containerd --config /var/run/docker/containerd/containerd.toml
May 16 12:53:30 pmm-server dockerd[4744]: time="2018-05-16T12:53:30.391566708Z" level=warning msg="Your kernel does not support swap memory limit"
May 16 12:53:30 pmm-server dockerd[4744]: time="2018-05-16T12:53:30.391638253Z" level=warning msg="Your kernel does not support cgroup rt period"
May 16 12:53:30 pmm-server dockerd[4744]: time="2018-05-16T12:53:30.391680203Z" level=warning msg="Your kernel does not support cgroup rt runtime"
May 16 12:53:30 pmm-server dockerd[4744]: time="2018-05-16T12:53:30.392913043Z" level=info msg="Loading containers: start."
May 16 12:53:30 pmm-server dockerd[4744]: time="2018-05-16T12:53:30.767048674Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
May 16 12:53:30 pmm-server dockerd[4744]: time="2018-05-16T12:53:30.847907241Z" level=info msg="Loading containers: done."
May 16 12:53:30 pmm-server dockerd[4744]: time="2018-05-16T12:53:30.875129963Z" level=info msg="Docker daemon" commit=9ee9f40 graphdriver(s)=overlay2 version=18.03.1-ce
May 16 12:53:30 pmm-server dockerd[4744]: time="2018-05-16T12:53:30.875285809Z" level=info msg="Daemon has completed initialization"
May 16 12:53:30 pmm-server dockerd[4744]: time="2018-05-16T12:53:30.884566419Z" level=info msg="API listen on /var/run/docker.sock"
May 16 12:53:30 pmm-server systemd[1]: Started Docker Application Container Engine.
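
If you prefer a one-liner, docker info also reports the daemon's root directory, so something like this should confirm the change took effect (output trimmed to the relevant line):

jerichorivera@pmm-server:~$ sudo docker info | grep "Docker Root Dir"
Docker Root Dir: /mnt/disks/pdssd/docker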

3) Add your user to the docker group as shown below and make the deploy.sh script executable.

jerichorivera@pmm-server:~$ sudo usermod -aG docker $USER
jerichorivera@pmm-server:~$ sudo chmod +x /tmp/deploy.sh

4) Log off from the instance, log back in, and then execute the deploy.sh script.

jerichorivera@pmm-server:~$ cd /tmp/
jerichorivera@pmm-server:/tmp$ ./deploy.sh
docker pull percona/pmm-server:latest
latest: Pulling from percona/pmm-server
697841bfe295: Pull complete
fa45d21b9629: Pull complete
Digest: sha256:98d2717b4f0ae83fbca63330c39590d69a7fca7ae6788f52906253ac75db6838
Status: Downloaded newer image for percona/pmm-server:latest
docker create -v /opt/prometheus/data -v /opt/consul-data -v /var/lib/mysql -v /var/lib/grafana --name pmm-data percona/pmm-server:latest /bin/true
8977102d419cf8955fd8bbd0ed2c663c75a39f9fbc635238d56b480ecca8e749
docker run -d -p 80:80 --volumes-from pmm-data --name pmm-server --restart always percona/pmm-server:latest
83c2e6db2efc752a6beeff0559b472f012062d3f163c042e5e0d41cda6481d33

5) Finally, create a firewall rule to allow HTTP port 80 to access the PMM Server. For security reasons, we recommend that you secure your PMM Server by adding a password, or limit access to it with a stricter firewall rule that specifies which IP addresses can access port 80.

jerichorivera@percona-support:~$ gcloud compute firewall-rules create allow-http-pmm-server --allow tcp:80 --target-tags pmmserver --description "Allow HTTP traffic to PMM Server"
Creating firewall...-Created [https://www.googleapis.com/compute/v1/projects/thematic-acumen-204008/global/firewalls/allow-http-pmm-server].
Creating firewall...done.
NAME                   NETWORK  DIRECTION  PRIORITY  ALLOW   DENY
allow-http-pmm-server  default  INGRESS    1000      tcp:80
jerichorivera@percona-support:~/GCE$ gcloud compute firewall-rules list
NAME                    NETWORK  DIRECTION  PRIORITY  ALLOW                         DENY
allow-http-pmm-server   default  INGRESS    1000      tcp:80
default-allow-icmp      default  INGRESS    65534     icmp
default-allow-internal  default  INGRESS    65534     tcp:0-65535,udp:0-65535,icmp
default-allow-rdp       default  INGRESS    65534     tcp:3389
default-allow-ssh       default  INGRESS    65534     tcp:22

At this point you should have a PMM Server in GCP running on a Compute Engine instance.

The next step is to install pmm-client on the database hosts and add services for monitoring.

Here I’ve launched a single standalone Percona Server 5.6 on another Compute Engine instance in the same project (thematic-acumen-204008).

jerichorivera@percona-support:~/GCE$ gcloud compute instances create mysql1 --tags mysql1 --image-family centos-7 --image-project centos-cloud --machine-type n1-standard-2 --zone us-west1-b --create-disk=size=50GB,type=pd-standard,device-name=sdb --description "MySQL1 on GCP" --metadata-from-file startup-script=compute-instance-deploy.sh
Created [https://www.googleapis.com/compute/v1/projects/thematic-acumen-204008/zones/us-west1-b/instances/mysql1].
NAME    ZONE        MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS
mysql1  us-west1-b  n1-standard-2               10.138.0.3   35.233.187.253  RUNNING

I installed Percona Server 5.6 and pmm-client, then added the services. Take note that since the PMM Server and the MySQL server are in the same project and the same VPC network, we can connect directly through the INTERNAL_IP 10.138.0.2; otherwise, use the EXTERNAL_IP 35.233.216.225.

[root@mysql1 jerichorivera]# pmm-admin config --server 10.138.0.2
OK, PMM server is alive.

PMM Server      | 10.138.0.2
Client Name     | mysql1
Client Address  | 10.138.0.3

[root@mysql1 jerichorivera]# pmm-admin check-network
PMM Network Status

Server Address | 10.138.0.2
Client Address | 10.138.0.3

* System Time
NTP Server (0.pool.ntp.org)         | 2018-05-22 06:45:47 +0000 UTC
PMM Server                          | 2018-05-22 06:45:47 +0000 GMT
PMM Client                          | 2018-05-22 06:45:47 +0000 UTC
PMM Server Time Drift               | OK
PMM Client Time Drift               | OK
PMM Client to PMM Server Time Drift | OK

* Connection: Client --> Server
-------------------- -------
SERVER SERVICE       STATUS
-------------------- -------
Consul API           OK
Prometheus API       OK
Query Analytics API  OK

Connection duration | 408.185µs
Request duration    | 6.810709ms
Full round trip     | 7.218894ms

No monitoring registered for this node identified as 'mysql1'.

[root@mysql1 jerichorivera]# pmm-admin add mysql --create-user
[linux:metrics] OK, now monitoring this system.
[mysql:metrics] OK, now monitoring MySQL metrics using DSN pmm:***@unix(/mnt/disks/disk1/data/mysql.sock)
[mysql:queries] OK, now monitoring MySQL queries from slowlog using DSN pmm:***@unix(/mnt/disks/disk1/data/mysql.sock)

[root@mysql1 jerichorivera]# pmm-admin list
pmm-admin 1.10.0

PMM Server      | 10.138.0.2
Client Name     | mysql1
Client Address  | 10.138.0.3
Service Manager | linux-systemd

-------------- ------- ----------- -------- ----------------------------------------------- ------------------------------------------
SERVICE TYPE   NAME    LOCAL PORT  RUNNING  DATA SOURCE                                      OPTIONS
-------------- ------- ----------- -------- ----------------------------------------------- ------------------------------------------
mysql:queries  mysql1  -           YES      pmm:***@unix(/mnt/disks/disk1/data/mysql.sock)   query_source=slowlog, query_examples=true
linux:metrics  mysql1  42000       YES      -
mysql:metrics  mysql1  42002       YES      pmm:***@unix(/mnt/disks/disk1/data/mysql.sock)

Lastly, in case you need to delete the PMM Server instance, just execute the delete command below to completely remove the instance and the attached disk. Be aware that you may remove the boot disk and retain the attached persistent disk if you prefer.

jerichorivera@percona-support:~/GCE$ gcloud compute instances delete pmm-server
The following instances will be deleted. Any attached disks configured to be auto-deleted will be deleted unless they are attached to any other instances or the `--keep-disks` flag is given and specifies them for keeping. Deleting a disk is irreversible and any data on the disk will be lost.
 - [pmm-server] in [us-west1-b]
Do you want to continue (Y/n)? y
Deleted [https://www.googleapis.com/compute/v1/projects/thematic-acumen-204008/zones/us-west1-b/instances/pmm-server].

The other option is to install PMM on Google Container Engine, which Manjot Singh explained in his blog post.

The post Setting up PMM on Google Compute Engine in 15 minutes or less appeared first on Percona Database Performance Blog.

MariaDB TX 3.0 – First to Deliver on the Promise of Enterprise Open Source

Planet MySQL - Thu, 05/24/2018 - 12:28
By Shane Johnson, Thu, 05/24/2018 - 15:28

It’s one thing to be open source. It’s another to be enterprise open source.

That begs the question: What does it mean to be enterprise open source?

You have to be 100% committed to the open source community – collaboration, transparency and innovation. You have to be 100% committed to customer success – providing the enterprise features and reliability needed to support mission-critical applications.

However, being committed is not enough. You have to be a leader. You have to challenge proprietary vendors, and that includes vendors who limit open source projects with proprietary extensions and/or plugins.

MariaDB TX 3.0 sets the standard for enterprise open source databases, and as the leader, we’re challenging Oracle, Microsoft and IBM with it. Here’s how.

Oracle Database compatibility

MariaDB TX 3.0 is the first enterprise open source database with Oracle Database compatibility, including support for stored procedures written in PL/SQL. Until now, if you needed Oracle Database compatibility, you needed a proprietary database (IBM DB2 or EnterpriseDB). Today, you can run those Oracle PL/SQL stored procedures on MariaDB TX!
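
As a rough illustration of what that looks like in practice (the table and procedure here are hypothetical, and this is a sketch of the Oracle compatibility mode rather than a migration recipe):

SET SQL_MODE=ORACLE;

DELIMITER //
CREATE OR REPLACE PROCEDURE raise_salary(p_id IN INT, p_pct IN NUMBER) AS
BEGIN
  -- Oracle-style parameter syntax (name IN type) and AS ... BEGIN ... END body
  UPDATE employees SET salary = salary * (1 + p_pct / 100) WHERE id = p_id;
END;
//
DELIMITER ;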

Temporal features

MariaDB TX 3.0 is the first enterprise open source database with temporal features, including built-in system-versioned tables and standard temporal query syntax. Until now, if you needed the functional equivalent of Oracle Flashback queries or Microsoft SQL Server temporal tables, you needed a proprietary database. Today, you can run those temporal queries on MariaDB TX.
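To make that concrete, the standard syntax looks roughly like this (the accounts table is just an illustration):

CREATE TABLE accounts (
  id INT PRIMARY KEY,
  balance DECIMAL(10,2)
) WITH SYSTEM VERSIONING;

-- Query the table as it looked at a given point in time
SELECT * FROM accounts
FOR SYSTEM_TIME AS OF TIMESTAMP '2018-05-01 00:00:00';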

Faster schema changes

MariaDB TX 3.0 is the first enterprise open source database to support invisible columns (like Oracle Database), compressed columns and the ability to add columns (with or without default values) to a table without causing all of the rows to be updated (i.e., a table rebuild) – something you can’t do in MySQL or Postgres. Simply said, life is easier with MariaDB TX.
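A quick sketch of those schema changes (table and column names are hypothetical):

-- Invisible column: hidden from SELECT * but available when named explicitly
ALTER TABLE orders ADD COLUMN audit_note VARCHAR(255) INVISIBLE;

-- Compressed column for large, rarely read values
ALTER TABLE orders ADD COLUMN payload TEXT COMPRESSED;

-- Add a column with a default value without rebuilding the table
ALTER TABLE orders ADD COLUMN status VARCHAR(20) DEFAULT 'new', ALGORITHM=INSTANT;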

Purpose-built storage

MariaDB TX 3.0 is the first enterprise open source database to support a variety of workloads, all with the same level of performance, by leveraging multiple, purpose-built storage engines: the default storage engine for mixed or read-mostly workloads (InnoDB), an SSD-optimized storage engine for write-intensive workloads (MyRocks) and a distributed storage engine for workloads requiring extreme scalability and/or concurrency (Spider).
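Picking an engine is a per-table decision. A minimal sketch, assuming the MyRocks and Spider plugins have been loaded (for example with INSTALL SONAME 'ha_rocksdb' and INSTALL SONAME 'ha_spider'):

-- Default engine for mixed or read-mostly workloads
CREATE TABLE sessions (id BIGINT PRIMARY KEY, data TEXT) ENGINE=InnoDB;

-- SSD-optimized engine for write-intensive workloads
CREATE TABLE clickstream (id BIGINT AUTO_INCREMENT PRIMARY KEY, payload TEXT) ENGINE=ROCKSDB;

-- Distributed engine for scale-out; the remote data node connection details
-- (normally supplied via CREATE SERVER and a table COMMENT) are omitted here
CREATE TABLE orders_shard (id BIGINT PRIMARY KEY, total DECIMAL(10,2)) ENGINE=SPIDER;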

While general-purpose databases are limited to supporting one workload really well, MariaDB TX can support a variety of workloads very well – and at the same time. Would you need a NoSQL database if your relational database supported JSON and distributed storage (i.e., scale out)?

You could deploy multiple specialized databases, but wouldn’t you rather standardize on a single database? Well, you can with MariaDB TX.

Data protection

MariaDB TX 3.0 is the first enterprise open source database to support anonymization via complete data obfuscation and pseudonymization via full or partial data masking – features you will need if you want to comply with the EU GDPR and don't want your company featured in tomorrow's headlines as the security breach of the month. If you're using Oracle Database, these features are part of Oracle Data Redaction and require Oracle Advanced Security – an extra $7,500 per core. MariaDB TX database administrators sleep well at night.

Conclusion

We created MariaDB TX 3.0 so you can migrate from Oracle/Microsoft/IBM to the enterprise open source database you want without sacrificing the enterprise features you need. Ready?

Comparing MySQL to Vertica Replication under MemCloud, AWS and Bare Metal

Planet MySQL - Thu, 05/24/2018 - 11:23

Back in December, I did a detailed analysis for getting data into Vertica from MySQL using Tungsten Replicator, all within the Kodiak MemCloud.

I got some good numbers towards the end – 1.9 million rows/minute into Vertica. I did this using a standard replicator deployment, plus some tweaks to the Vertica environment. In particular:

  • Integer hash for a partition for both the staging and base tables
  • Some tweaks to the queries to ensure that we used the partitions in the most efficient manner
  • Optimized the batching within the applier to hit the right numbers for the transaction counts

That last one is a bit of a cheat because in a real-world situation it’s much harder to be able to identify those transaction sizes and row counts, but for testing, we’re trying to get the best performance!

Next what I wanted to do was set up some bare metal and AWS servers that were of an equivalent configuration and see what I could do to repeat and emulate the tests and see what comparable performance we could get.

How I Load Masses of Data

Before I dip into that, however, I thought it would be worth seeing how I generate the information in the first place. With big data testing (mainly when trying to simulate the data that ultimately gets written into your analytics target) the primary concern is one of reproducing the quantity as well as the variety of the data.

It’s application dependent, but for some analytics tasks the inserts are quite high and the updates/deletes relatively low. So I’ve written a test script that generates up to a million rows of data, split to be around 65% inserts, 25% updates and 10% deletes.

I can tweak that of course, but I’ve found it gives a good spread of data. I can also configure whether that happens in one transaction or each row is a transaction of its own. That all gets dumped into an SQL file. A separate wrapper script and tool then load that information into MySQL, either using redirection within the MySQL command line tool or through a different lightweight C++ client I wrote.

The data itself is light, two columns, an auto-incrementing integer ID and a random string. I’m checking for row inserts here, not data sizes.
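
My generator script isn't reproduced in this post, but a minimal bash sketch of the idea looks something like this (the test.t1 table, the 65/25/10 split and the random string are all stand-ins):

#!/bin/bash
# Emit roughly 65% INSERTs, 25% UPDATEs and 10% DELETEs as a SQL file
ROWS=${1:-1000000}
for ((i = 1; i <= ROWS; i++)); do
  r=$((RANDOM % 100))
  if [ "$r" -lt 65 ]; then
    echo "INSERT INTO test.t1 (msg) VALUES ('$(head -c 12 /dev/urandom | base64)');"
  elif [ "$r" -lt 90 ]; then
    echo "UPDATE test.t1 SET msg = 'updated' WHERE id = $((RANDOM % i + 1));"
  else
    echo "DELETE FROM test.t1 WHERE id = $((RANDOM % i + 1));"
  fi
done > load.sql

The resulting load.sql can then be redirected into the MySQL command line tool, or loaded through whatever lightweight client you prefer.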

So, to summarise:

  • Up to 1 million rows (although this is configurable)
  • Single or multiple transactions
  • Single schema/table or numerous schemas/tables
  • Concurrent, multi-threaded inserts

The fundamental result here is that I can predict the number of transactions and rows, which is really important when you are trying to measure rows-per-time period to use as benchmarks with replication because I can also start and stop replication on the transaction count boundaries to get precise performance.

For the main testing that I use for the performance results, what I do is run a multi-threaded, simultaneous insert into 20 schemas/tables and repeat it 40 times with a transaction/row count size of 10,000. That results in 8,000,000 rows of data, first being inserted/updated/deleted into MySQL, then extracted, replicated, and applied to (in this case) Vertica.

For the testing, I then use the start/stop of sequence number controls in the replicator and then monitor the time I start and stop from those numbers.

This gives me stats within about 2 seconds of the probably true result, but over a period of 15-20 minutes, that’s tolerable.

It also means I can do testing in two ways:

  • Start the load test into MySQL and test for completion into Vertica

or

  • Load the data into THL, and test just the target applier (from network transfer to target DB)

For the real-world performance figures I use the full end-to-end (MySQL insert and target apply) testing.

Test Environments

I tested three separate environments – the original MemCloud hosted servers, some bare metal hosts and AWS EC2 hosts:

             MemCloud   Bare Metal   AWS
Cores        4          12           16
Threads      4          12           16
RAM (GB)     64         192          122
Disk         SSD        SSD          SSD
Networking   10GB       10GB         25GB

It’s always difficult to perfectly match the environments across virtual and bare metal, particularly in AWS, but I did my best.

Results

I could go into all sorts of detailed results here, but I think it’s better to simply look at the final numbers because that is what really matters:

             Rows Per Minute
MemCloud     1900000
Bare Metal   678222
AWS          492893

Now what's interesting here is that MemCloud is significantly faster, even though it has fewer CPUs and less RAM. It's perhaps even more surprising to note that MemCloud is almost 4x faster than AWS, even on I/O-optimized hosts (I/O is probably the limiting factor in the Vertica applies).

 

Even against fairly hefty bare metal hosts, MemCloud is almost 3x faster!

I've checked in with the engineers about the bare metal numbers, which seem striking, especially considering these are really beefy hosts, but it may simply be that the SSD interface and I/O become the limiting factor. Within Vertica, when writing data with the replicator, a few things are happening: we write THL to disk, write CSV to disk, read the CSV from disk into a staging table, then merge the base and staging tables, which involves shuffling a lot of blocks in memory (and ultimately disk) around. It may simply be that the high-memory focused environment of MemCloud allows for very much faster performance all round.

I also looked at the performance as I started to increase the number of MySQL sources feeding into the systems; these feed separate schemas, rather than the single, unified schema/table within Vertica.

Sources          1        1        2         3         4         5
Target Schemas   20       40       40        60        80        100
Rows Written     8000000  8000000  16000000  24000000  32000000  40000000
MemCloud         1900057  1972000  3617042   5531460   7353982   9056410
Bare Metal       678222   635753   1051790   1874454   2309055   3168275
AWS              492893   402047   615856    –         –         –

What is significant here is that with MemCloud I noticed a much more linear ramp-up in performance that I didn't see to the same degree on bare metal or AWS. In fact, with AWS I couldn't even remotely achieve the same levels, and by the time I got to three simultaneous sources I got such wildly random results between executions that I gave up trying to test. From experience, I suspect this is due to the networking and IOPS environment, even on a storage-optimized host.

The graph version shows the differences more clearly:

 

Bottom line, MemCloud seems really quick, and the statement I made in the original testing still seems to be valid:

The whole thing went so quick I thought it hadn’t executed at all!

Learning Gutenberg: Setting up a Custom webpack Config

CSS-Tricks - Thu, 05/24/2018 - 07:22

Gutenberg introduces the modern JavaScript stack into the WordPress ecosystem, which means some new tooling should be learned. Although tools like create-guten-block are incredibly useful, it’s also handy to know what’s going on under the hood.

Article Series:
  1. Series Introduction
  2. What is Gutenberg, Anyway?
  3. A Primer with create-guten-block
  4. Modern JavaScript Syntax
  5. React 101
  6. Setting up a Custom webpack (This Post)
  7. A Custom "Card" Block (Coming Soon!)
The files we will be configuring here should be familiar from what we covered in the Part 2 Primer with create-guten-block. If you’re like me (before reading Andy’s tutorial, that is!) and would rather not dive into the configuration part just yet, the scaffold created by create-guten-block matches what we are about to create here, so you can certainly use that as well.

Let’s jump in!

Getting started

Webpack takes the small, modular aspects of your front-end codebase and smooshes them down into one efficient file. It's highly extendable and configurable and works as the beating heart of some of the most popular products and projects on the web. It's very much a JavaScript tool, although it can be used for pretty much whatever you want. For this tutorial, its sole focus is JavaScript, though.

What we’re going to get webpack doing is watch for our changes on some custom block files and compile them with Babel to generate JavaScript files that can be read by most browsers. It’ll also merge any dependencies that we import.

But first, we need a place to store our actual webpack setup and front-end files. In Part 2, when we poked around the files generated by create-guten-block, we saw that it created an infrastructure for a WordPress plugin that enqueued our front-end files in a WordPress way, and enabled us to activate the plugin through WordPress. I'm going to take this next section to walk us through setting up the infrastructure for a WordPress plugin for our custom block.

Setting up a plugin

Hopefully you still have a local WordPress instance running from our primer in Part 2, but if not, you’ll need to have one installed to continue with what we’re about to do. In that install, navigate to wp-content/plugins and create a fresh directory called card-block (spoiler alert: we’re going to make a card block... who doesn’t like cards?).

Then, inside card-block, create a file called card-block.php. This will be the equivalent to plugin.php from create-guten-block. Next, drop in this chunk of comments to tell WordPress to acknowledge this directory as a plugin and display it in the Plugins page of the Dashboard:

<?php
/*
Plugin Name: Card Block
*/

Don’t forget the opening PHP tag, but you can leave the closing one off since we’ll be adding more to this file soon enough.

WordPress looks for these comments to register a plugin in the same way it looks for comments at the top of style.css in a theme. This is an abbreviated version of what you’ll find at the top of other plugins’ main files. If you were planning to release it on the WordPress plugin repository, you’d want to add a description and version number as well as license and author information.

Go ahead and activate the plugin through the WordPress Dashboard, and I'll take the steering wheel back to take us through setting up our webpack config!

Getting started with webpack

The first thing we’re going to do is initialize npm. Run the following at the root of your plugin folder (wp-content/plugins/card-block):

npm init

This will ask you a few questions about your project and ultimately generate you a package.json file, which lists dependencies and stores core information about your project.

Next, let’s install webpack:

npm install webpack --save-dev

You might have noticed that we’re installing webpack locally to our project. This is a good practice to get into with crucial packages that are prone to large, breaking changes. It also means you and your team are all singing off the same song sheet.

Then run this:

npm install extract-text-webpack-plugin@next --save-dev

Then these Sass and CSS dependencies:

npm install node-sass sass-loader css-loader --save-dev

Now, install npx globally, which allows us to use our local dependencies instead of any global ones:

npm install npx -g

Lastly, run this:

npm install webpack-cli --save-dev

That will install the webpack CLI for you.

Now that we have this installed, we should create our config file. Still in the root of your plugin, create a file called webpack.config.js and open that file so we can get coding.

With this webpack file, we’re going to be working with traditional ES5 code for maximum compatibility, because it runs with Node JS. You can use ES6 and Babel, but we’ll keep things as simple as possible for this tutorial.

Alright, let's add some constants and imports. Add the following, right at the top of your webpack.config.js file:

var ExtractText = require('extract-text-webpack-plugin');
var debug = process.env.NODE_ENV !== 'production';
var webpack = require('webpack');

The debug var is declaring whether or not we’re in debug mode. This is our default mode, but it can be overridden by prepending our webpack commands with NODE_ENV=production. The debug boolean flag will determine whether webpack generates sourcemaps and minifies the code.

As you can see, we’re requiring some dependencies. We know that webpack is needed, so we’ll skip that. Let’s instead focus our attention on ExtractText. Essentially, ExtractText enables us to pull in files other than JavaScript into the mix. We’re going to need this for our Sass files. By default, webpack assumes everything is JavaScript, so ExtractText kind of *translates* the other types of files.

Let’s add some config now. Add the following after the webpack definition:

var extractEditorSCSS = new ExtractText({
  filename: './blocks.editor.build.css'
});

var extractBlockSCSS = new ExtractText({
  filename: './blocks.style.build.css'
});

What we’ve done there is instantiate two instances of ExtractText by passing a config object. All that we’ve set is the output of our two block stylesheets. We’ll come to these in the next series, but right now all you need to know is that this is the first step in getting our Sass compiling.

OK, after that last bit of code, add the following:

var plugins = [ extractEditorSCSS, extractBlockSCSS ];

Here we've got our plugins array. Our ExtractText instances live in this core plugins set, and any production-only optimization plugins only get smooshed into that set if we're not in debug mode; we run that logic right at the end.

Next up, add this SCSS config object:

var scssConfig = {
  use: [
    { loader: 'css-loader' },
    {
      loader: 'sass-loader',
      options: { outputStyle: 'compressed' }
    }
  ]
};

This object is going to tell our webpack instance how to behave when it comes across scss files. We’ve got it in a config object to keep things as DRY as possible.

Last up, the meat and taters of the config:

module.exports = {
  context: __dirname,
  devtool: debug ? 'inline-sourcemap' : null,
  mode: debug ? 'development' : 'production',
  entry: './blocks/src/blocks.js',
  output: {
    path: __dirname + '/blocks/dist/',
    filename: 'blocks.build.js'
  },
  module: {
    rules: [
      {
        test: /\.js$/,
        exclude: /node_modules/,
        use: [
          { loader: 'babel-loader' }
        ]
      },
      {
        test: /editor\.scss$/,
        exclude: /node_modules/,
        use: extractEditorSCSS.extract(scssConfig)
      },
      {
        test: /style\.scss$/,
        exclude: /node_modules/,
        use: extractBlockSCSS.extract(scssConfig)
      }
    ]
  },
  plugins: plugins
};

That’s our entire config, so let’s break it down.

The script starts with module.exports, which is essentially saying, "when you import or require me, this is what you're getting." We can determine what we expose to whatever imports our code, which means we could run code above module.exports that does some heavy calculations, for example.

Next, let’s look at some of these following properties:

  • context is our base where paths will resolve from. We’ve passed __dirname, which is the current working directory.
  • devtool is where we define what sort of sourcemap we may or may not want. If we’re not in debug mode, we pass null with a ternary operator.
  • entry is where we tell webpack to start its journey of pack-ery. In our instance, this is the path to our blocks.js file.
  • output is what it says on the tin. We’re passing an object that defines the output path and the filename that we want to call it.

Next is module, which we'll go into in a touch more detail. The module section can contain multiple rules. In our instance, the rules look for JavaScript and SCSS files, searching with the regular expressions defined by each test property. The end goal of each rule is to find the right sort of files and pass them into a loader, which for our JavaScript is babel-loader. As we learned in a previous tutorial in this series, Babel is what converts our modern ES6 code into more supported ES5 code.

Lastly is our plugins section. Here, we pass our array of plugin instances. In our project, we have plugins that minify code, remove duplicate code and one that reduces the length of commonly used IDs. This is all to make sure our production code is optimized.

For reference, this is how your full config file should look:

var ExtractText = require('extract-text-webpack-plugin');
var debug = process.env.NODE_ENV !== 'production';
var webpack = require('webpack');

var extractEditorSCSS = new ExtractText({
  filename: './blocks.editor.build.css'
});

var extractBlockSCSS = new ExtractText({
  filename: './blocks.style.build.css'
});

var plugins = [extractEditorSCSS, extractBlockSCSS];

var scssConfig = {
  use: [
    { loader: 'css-loader' },
    {
      loader: 'sass-loader',
      options: { outputStyle: 'compressed' }
    }
  ]
};

module.exports = {
  context: __dirname,
  devtool: debug ? 'inline-sourcemap' : null,
  mode: debug ? 'development' : 'production',
  entry: './blocks/src/blocks.js',
  output: {
    path: __dirname + '/blocks/dist/',
    filename: 'blocks.build.js'
  },
  module: {
    rules: [
      {
        test: /\.js$/,
        exclude: /node_modules/,
        use: [
          { loader: 'babel-loader' }
        ]
      },
      {
        test: /editor\.scss$/,
        exclude: /node_modules/,
        use: extractEditorSCSS.extract(scssConfig)
      },
      {
        test: /style\.scss$/,
        exclude: /node_modules/,
        use: extractBlockSCSS.extract(scssConfig)
      }
    ]
  },
  plugins: plugins
};

That's webpack configured. I'll take the mic back to show us how to officially register our block with WordPress. This should feel pretty familiar to those of you who are used to wrangling with actions and filters in WordPress.

Registering our block

Back in card-block.php, our main task now is to enqueue the JavaScript and CSS files we will be building with webpack. In a theme, we would do this by calling wp_enqueue_script and wp_enqueue_style inside an action added to wp_enqueue_scripts. We essentially do the same thing here, except we enqueue the scripts and styles with a function specific to blocks.

Drop this code below the opening comment in card-block.php:

function my_register_gutenberg_card_block() {
  // Register our block script with WordPress
  wp_register_script(
    'gutenberg-card-block',
    plugins_url('/blocks/dist/blocks.build.js', __FILE__),
    array('wp-blocks', 'wp-element')
  );

  // Register our block's base CSS
  wp_register_style(
    'gutenberg-card-block-style',
    plugins_url('/blocks/dist/blocks.style.build.css', __FILE__),
    array('wp-blocks')
  );

  // Register our block's editor-specific CSS
  wp_register_style(
    'gutenberg-card-block-edit-style',
    plugins_url('/blocks/dist/blocks.editor.build.css', __FILE__),
    array('wp-edit-blocks')
  );

  // Enqueue the script and styles in the editor and on the front end
  register_block_type('card-block/main', array(
    'editor_script' => 'gutenberg-card-block',
    'editor_style'  => 'gutenberg-card-block-edit-style',
    'style'         => 'gutenberg-card-block-style'
  ));
}
add_action('init', 'my_register_gutenberg_card_block');

As the above comments indicate, we first register our block script with WordPress using the handle gutenberg-card-block with two dependencies: wp-blocks and wp-element. This function only registers a script; it does not enqueue it. We do something similar for our edit and main stylesheets.

Our final function, register_block_type, does the enqueuing for us. It also gives the block a name, card-block/main, which identifies this block as the main block within the namespace card-block, then identifies the script and styles we just registered as the main editor script, editor stylesheet, and primary stylesheet for the block.

If you are familiar with theme development, you've probably used get_template_directory() to handle file paths in hooks like the ones above. For plugin development, we use the function plugins_url(), which does pretty much the same thing, except instead of concatenating a path like this: get_template_directory() . '/script.js', plugins_url() accepts a string path as a parameter and does the concatenation for us. The second parameter, __FILE__, is one of PHP's magic constants that equates to the full file path of the current file.

Finally, if you want to add more blocks to this plugin, you'd need a version of this function for each block. You could work out what's variable and generate some sort of loop to keep it nice and DRY, further down the line. Now, I'll walk us through getting Babel up and running.

Getting Babel running

Babel turns our ES6 code into better-supported ES5 code, so we need to install some dependencies. In the root of your plugin (wp-content/plugins/card-block), run the following:

npm install babel-core babel-loader babel-plugin-add-module-exports babel-plugin-transform-react-jsx babel-preset-env --save-dev

That big ol’ npm install adds all the Babel dependencies. Now we can add our .babelrc file which stores some settings for us. It prevents us from having to repeat them over and over in the command line.

While still in your plugin folder, add the following file: .babelrc.

Now open it up and paste the following:

{
  "presets": ["env"],
  "plugins": [
    ["transform-react-jsx", {
      "pragma": "wp.element.createElement"
    }]
  ]
}

So, what we’ve got there are two things:

"presets": ["env"] is basically magic. It automatically determines which ES features to use to generate your ES5 code. We used to have to add different presets for all the different ES versions (e.g. ES2015), but it’s been simplified.

In the plugins, you’ll notice that there’s a React JSX transformer. That’s sorting out our JSX and turning it into proper JavaScript, but what we’re doing is telling it to generate WordPress elements, rather than React elements, which JSX is more commonly associated with.
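
To illustrate with a hypothetical snippet, the pragma means JSX like this:

var card = <p className="card">Hello</p>;

compiles down to roughly this, instead of a React.createElement() call:

var card = wp.element.createElement('p', { className: 'card' }, 'Hello');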

Generate stub files

The last thing we’re going to do is generate some stub files and test that our webpack and WordPress setup is all good.

Go into your plugin directory and create a folder called blocks and, within that, create two folders: one called src and one called dist.

Inside the src folder, create the following files. We’ve added the paths, too, so you put them in the right place:

  • blocks.js
  • common.scss
  • block/block.js
  • block/editor.scss
  • block/style.scss

OK, so now we've generated the minimum amount of things, let's run webpack. Open up your terminal and move into your current plugin folder, then we can run the following, which will fire up webpack:

npx webpack

Pretty dang straightforward, huh? If you go ahead and look in your dist folder, you should see some compiled goodies in there!

Wrapping up

A lot of setup has been done, but all of our ducks are in a row. We've set up webpack, Babel and WordPress to all work as a team to build out our custom Gutenberg block (and future blocks). Hopefully now you feel more comfortable working with webpack and feel like you could dive in and make customizations to fit your projects.

The post Learning Gutenberg: Setting up a Custom webpack Config appeared first on CSS-Tricks.

High Performance Hosting with No Billing Surprises

CSS-Tricks - Thu, 05/24/2018 - 07:13

(This is a sponsored post.)

With DigitalOcean, you can spin up Droplet cloud servers with industry-leading price-performance and predictable costs. Our flexible configurations are sized for any application, and we save you up to 55% when compared to other cloud providers.

Get started today. Receive a free $100/60-day account credit good towards any DigitalOcean services.

Direct Link to ArticlePermalink

The post High Performance Hosting with No Billing Surprises appeared first on CSS-Tricks.

Proxy MySQL :: HAproxy || ProxySQL & KeepAlived

Planet MySQL - Wed, 05/23/2018 - 17:05
So when it comes to routing your MySQL traffic, several options exist.

Now, I have seen HAproxy used more often with clients; it is pretty straightforward to set up, and Percona has a worked example for those interested.
Personally, I like ProxySQL. Percona also has a few blogs on this, and Percona ships its own ProxySQL version as well.
I was thinking I would write up some examples, but overall Percona has explained it all very well. I do not want to take anything away from those posts; instead, I want to point out that a lot of good information is available via those URLs. So rather than rewriting what has already been written, I will create a collection of information for those interested.
First, compare and decide for yourself what you need and want. The comparison is of course going to be biased towards ProxySQL, but it gives you an overall scope to consider. If you have a cluster or a master-to-master setup and you do not care which server the writes vs. reads go to, just as long as you have a connection, then HAproxy is likely a simple, fast setup for you.
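
As a rough illustration, a minimal haproxy.cfg block for a three-node cluster might look like this (IPs, ports and the check user are placeholders, and the haproxy_check user has to exist in MySQL for the health check to work):

listen mysql-cluster
    bind 0.0.0.0:3306
    mode tcp
    option mysql-check user haproxy_check
    balance roundrobin
    server node1 10.0.0.11:3306 check
    server node2 10.0.0.12:3306 check
    server node3 10.0.0.13:3306 check
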
The bonus with ProxySQL is the ability to sort traffic in a weighted fashion, EASY. So you can have writes go to node 1, and selects pull from node 2 and node 3. Documentation on this is available in the ProxySQL docs. Yes, it can be done with HAproxy, but there you have to instruct the application accordingly; in ProxySQL it is handled by your query rules.
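
For example, a read/write split along those lines is set up through ProxySQL's admin interface (port 6032 by default) with something like the following; the hostgroup numbers, hostnames and rule patterns are placeholders to adapt to your own setup:

-- Writes go to hostgroup 10 (node 1), reads to hostgroup 20 (nodes 2 and 3)
INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES (10, 'node1', 3306);
INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES (20, 'node2', 3306);
INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES (20, 'node3', 3306);

INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply)
VALUES (1, 1, '^SELECT.*FOR UPDATE', 10, 1),
       (2, 1, '^SELECT', 20, 1);

LOAD MYSQL SERVERS TO RUNTIME; SAVE MYSQL SERVERS TO DISK;
LOAD MYSQL QUERY RULES TO RUNTIME; SAVE MYSQL QUERY RULES TO DISK;

Anything that does not match a rule falls through to the user's default hostgroup, which you would point at the writer.
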
Now the obvious question here: OK so how do you keep ProxySQL from becoming the single point of failure?  
You can invest in a robust load balancer and toss hardware at it... or make it easy on yourself, support open source, and use Keepalived. This is VERY easy to set up, and all of it is well documented. If you ever dealt with Lua and mysql-proxy, ProxySQL and Keepalived should be very simple for you. If you still want mysql-proxy for some reason: https://launchpad.net/mysql-proxy
Regardless of whether you choose HAproxy, ProxySQL or another solution, you need to ensure you do not replace one single point of failure with another, and Keepalived is great for that. So there is little reason not to do this if you are using a proxy.
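
A minimal Keepalived sketch for floating a virtual IP between two ProxySQL hosts could look like the following (interface, VIP, router ID and priority are placeholders; the second node would use state BACKUP and a lower priority):

vrrp_script chk_proxysql {
    # Succeeds only while a proxysql process is running
    script "killall -0 proxysql"
    interval 2
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 101
    track_script {
        chk_proxysql
    }
    virtual_ipaddress {
        10.0.0.100/24
    }
}
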
So a few more things on ProxySQL. 



