

Web Technologies

Webinar Wednesday, August 1, 2018: Migrating to AWS Aurora, Monitoring AWS Aurora with PMM

Planet MySQL - Tue, 07/31/2018 - 07:56

Please join Autodesk’s Senior DevOps Engineer, Sanjeet Deshpande, Autodesk’s Senior Database Engineer, Vineet Khanna, and Percona’s Sr. MySQL DBA, Tate McDaniel as they present Migrating to AWS Aurora, Monitoring AWS Aurora with PMM on Wednesday, August 1st, 2018, at 5:00 PM PDT (UTC-7) / 8:00 PM EDT (UTC-4).

Amazon Web Services (AWS) Aurora is one of the most popular cloud-based RDBMS solutions. The main reason for Aurora’s success is that it’s based on the InnoDB storage engine.

In this session, we will talk about how you can efficiently plan for migrating to AWS Aurora using Terraform and Percona products and solutions. We will share our Terraform code for launching AWS Aurora clusters, look at tricks for checking data consistency, verify migration paths and effectively monitor the environment using Percona Monitoring and Management (PMM).

The topics in this session include:

  • Why AWS Aurora? What is the future of AWS Aurora?
  • Build Aurora infrastructure
  • Using Terraform (without data)
  • Restore using Terraform and Percona XtraBackup (using AWS S3 bucket)
  • Verify data consistency
  • Aurora migration
  • 1:1 migration
  • Many:1 migration using Percona Server for MySQL multi-source replication
  • Show benchmarks and PMM dashboard
  • Demo

Register for the webinar.

Sanjeet Deshpande, Senior DevOps Engineer

Sanjeet is a Senior DevOps Engineer with 10+ years of experience, currently working with Autodesk in Singapore. He is experienced in architecting, deploying, and managing cloud infrastructures and applications, with extensive automation experience. Sanjeet has worked extensively on AWS services like Lambda, SQS, RDS, and SNS, to name a few. He has also worked closely with engineering teams for different applications and suggested changes to improve their application uptime.

Tate McDaniel, Sr. MySQL DBA

Tate joined Percona in June 2017 as a Remote MySQL DBA. He holds a Bachelor's degree in Information Systems and Decision Strategies from LSU. He has 10+ years of experience working with MySQL and operations management. His great love is application query tuning. In his off time, he races sailboats, travels the Caribbean by sailboat, and drives all over in an RV.

Vineet Khanna, Senior Database Engineer

Vineet Khanna, Senior Database Engineer at Autodesk, has 10+ years of experience as a MySQL DBA. His main professional interests are managing complex database environments, improving database performance, and architecting high availability solutions for MySQL. He has handled database environments for organizations like Chegg, Zendesk, and Adobe.

The post Webinar Wednesday, August 1, 2018: Migrating to AWS Aurora, Monitoring AWS Aurora with PMM appeared first on Percona Database Performance Blog.

Categories: Web Technologies

The trick to viewport units on mobile

CSS-Tricks - Tue, 07/31/2018 - 06:58

Viewport units have always been controversial and some of that is because of how mobile browsers have made things more complicated by having their own opinions about how to implement them.

Case in point: should the scrollbar be taken into account for the vw unit? What about a site's navigation or page controls — should those count in the calculation? Then there are physical attributes of the devices themselves (hello, notch!) that can't be overlooked.

First, a little context

The spec is pretty vague about how viewport units should be calculated. With mobile devices, we're often concerned with the vertical height, so let's look specifically at viewport height (vh):

vh unit
Equal to 1% of the height of the initial containing block.

So yeah, no clear guidance there when it comes to handling device and browser-specific differentiations.

vh was initially calculated from the current viewport of your browser. If you opened your browser and started to load a website, 1vh was equal to 1% of your screen height, minus the browser interface.

But! If you start scrolling, it's a different story. Once you get past a piece of the browser interface, like the address bar, the vh value would update, resulting in an awkward jump in the content.

Safari for iOS was one of the first mobile browsers to update their implementation by choosing to define a fixed value for the vh based on the maximum height of the screen. By doing so, the user would not experience jumps on the page once the address bar went out of view. Chrome's mobile browser followed suit around a year ago.

As of this writing, there is a ticket to address this in Firefox Android.

While using a fixed value is nice, it also means that you cannot have a full-height element if the address bar is in view. The bottom of your element will be cropped.

An element gets cropped at the bottom when the address bar is in view (left) but what we want is the full thing (right).

CSS Custom Properties: The trick to correct sizing

The idea struck me that CSS Custom Properties and a few lines of JavaScript might be the perfect way to get the consistent and correct sizing I needed.

In JavaScript, you can always get the value of the current viewport by using the global variable window.innerHeight. This value takes the browser's interface into account and is updated when its visibility changes. The trick is to store the viewport value in a CSS variable and apply that to the element instead of the vh unit.

Let's say our CSS custom property is --vh for this example. That means we will want to apply it in our CSS like this:

.my-element {
  height: 100vh; /* Fallback for browsers that do not support Custom Properties */
  height: calc(var(--vh, 1vh) * 100);
}

OK, that sets us up. Now let's get the inner height of the viewport in JavaScript:

// First we get the viewport height and we multiply it by 1% to get a value for a vh unit
let vh = window.innerHeight * 0.01;
// Then we set the value in the --vh custom property to the root of the document
document.documentElement.style.setProperty('--vh', `${vh}px`);

So, we told JS to grab the height of the viewport and then drilled it down into 1/100th of that total so we have a value to assign as our viewport height unit value. Then we politely asked JS to create the CSS variable (--vh) at the :root.

As a result, we can now use --vh as our height value like we would any other vh unit, multiply it by 100 and we have the full height we want.

See the Pen Viewport Height on Mobile (no resize on update) by CSS-Tricks (@css-tricks) on CodePen.

Whoa, there! One more little detail.

While our work might look done at this point, those of you with an astute eye for detail may have caught that the JavaScript fires but never updates the size of our element when the viewport's height changes. Go ahead and try resizing the demo above.

We can update the value of --vh by listening to the window resize event. This is handy in case the user rotates the device screen, like from landscape to portrait, or the navigation moves out of view on scroll.

// We listen to the resize event
window.addEventListener('resize', () => {
  // We execute the same script as before
  let vh = window.innerHeight * 0.01;
  document.documentElement.style.setProperty('--vh', `${vh}px`);
});

⚠️ Updating the value of --vh will trigger a repaint of the page and the user may experience a jump as a result. Because of this, I’m not advising that this trick should be used for every project or to replace all usage of the vh unit but only when you may need your users to have an exact viewport unit value.

Also, you may want to implement a debounce method for the resize event to avoid triggering too many events while the user is resizing their browser's window. You can learn more about it with this article: Debouncing and Throttling Explained Through Examples
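For example, a minimal debounce sketch (the 150ms delay is an arbitrary choice of mine, not from the article) could look like this:

let resizeTimeout;
window.addEventListener('resize', () => {
  // Wait until the user has stopped resizing before recalculating --vh
  clearTimeout(resizeTimeout);
  resizeTimeout = setTimeout(() => {
    let vh = window.innerHeight * 0.01;
    document.documentElement.style.setProperty('--vh', `${vh}px`);
  }, 150);
});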

See the Pen Correct Viewport Height on Mobile by CSS-Tricks (@css-tricks) on CodePen.

You can now resize the demo above and notice that the CSS variable is updated accordingly.

While I recently used this technique on a project and it really helped, you should always think twice when replacing the browser's default behaviors. (For example, this comes up a lot with :focus.) Also, browsers tend to update very fast these days, so beware that today's solution may not work tomorrow.

In the meantime, I hope this article helps! 👋

The post The trick to viewport units on mobile appeared first on CSS-Tricks.

Categories: Web Technologies

Atlas ORM Integration with Symfony - Paul M. Jones

Planet PHP - Tue, 07/31/2018 - 05:34

Are you using Symfony 4? Do you want to use Atlas with it? We now have a Symfony bundle and Flex recipe that makes installation and integration a breeze. Two commands and one .env file edit, and you’re ready to go:

composer config extra.symfony.allow-contrib true
composer require atlas/symfony ~1.0

Build out all your mapper files from your database tables with a single command:

php bin/console atlas:skeleton

Then let Symfony inject the Atlas ORM object in your controller or application service constructors automatically (no further configuration needed):

<?php
namespace App;

use Atlas\Orm\Atlas;
use App\DataSource\Thread\Thread;
use App\DataSource\Thread\ThreadRecord;

class ApplicationService
{
    public function __construct(Atlas $atlas)
    {
        $this->atlas = $atlas;
    }

    public function fetchThreadById($thread_id) : ThreadRecord
    {
        return $this->atlas->fetchRecord(Thread::class, $thread_id);
    }
}

That’s it – you can now use Atlas for all the heavy lifting of your database work.
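For example, a controller can lean on that service through Symfony's autowiring. This is a hedged sketch (the controller class, the route wiring, and the getArrayCopy() call are my assumptions, not from the post):

<?php
namespace App\Controller;

use App\ApplicationService;
use Symfony\Component\HttpFoundation\JsonResponse;

class ThreadController
{
    private $service;

    public function __construct(ApplicationService $service)
    {
        // Symfony injects the ApplicationService, which already received Atlas
        $this->service = $service;
    }

    public function show($thread_id): JsonResponse
    {
        $thread = $this->service->fetchThreadById($thread_id);

        // Render the record data as JSON; getArrayCopy() is assumed here
        return new JsonResponse($thread->getArrayCopy());
    }
}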

If you’re looking for a good persistence model data mapper, give Atlas a try!

Categories: Web Technologies

A beginner’s guide to database multitenancy

Planet MySQL - Tue, 07/31/2018 - 00:30

Introduction: In software terminology, multitenancy is an architectural pattern which allows you to isolate customers even if they are using the same hardware or software components. Multitenancy has become even more attractive with the widespread adoption of cloud computing. A relational database system provides a hierarchical structure of objects which, typically, looks like this: catalog … Continue reading A beginner’s guide to database multitenancy →

The post A beginner’s guide to database multitenancy appeared first on Vlad Mihalcea.

Categories: Web Technologies

MySQL to Amazon Redshift Replication.

Planet MySQL - Mon, 07/30/2018 - 19:30

In our work, we often get requirements for replicating data from one data source to another. Our team has provided solutions to replicate data from MySQL to Vertica, Amazon Redshift, and Hadoop. Of these, Amazon Redshift replication is a bit complicated, as Amazon Redshift is a Database as a Service (DBaaS) and the process is not straightforward.

So, I will take this opportunity to show how to replicate a specific set of tables from MySQL to AWS Redshift using Tungsten Replicator.

1.0. Tungsten Replicator:

Tungsten Replicator is an open source replication engine that supports data extraction from MySQL and MySQL variants such as RDS, Percona Server, and MariaDB, as well as Oracle, and allows the extracted data to be applied to other data sources such as Vertica, Cassandra, and Redshift.

Tungsten Replicator includes support for parallel replication, and advanced topologies such as fan-in and multi-master, and can be used efficiently in cross-site deployments.

1.1.0. General Architecture:

There are three major components in Tungsten Replicator:
1. Extractor / Master Service
2. Transaction History Log (THL)
3. Applier / Slave Service

1.1.1. Extractor / Master Service:

The extractor component reads data from MySQL’s binary log and writes that information into the Transaction History Log (THL).

1.1.2. Transaction History Log (THL):

The Transaction History Log (THL) acts as a translator between two different data sources. It stores transactional data from different data servers in a universal format, using the replicator service acting as a master, which can then be processed by the Applier / Slave service.

1.1.3. Applier / Slave Service:

All the raw row-data recorded on the THL logs is re-assembled or constructed into another format such as JSON or BSON, or external CSV formats that enable the data to be loaded in bulk batches into a variety of different targets.

Therefore, statement-based binary log information is not supported for heterogeneous deployments, so it's mandatory that the binary log format on MySQL is ROW.
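You can quickly confirm the current setting on the source MySQL server (a simple check added here, not part of the original walkthrough):

mysql> SHOW GLOBAL VARIABLES LIKE 'binlog_format';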

1.2.0. Pre Requisites:

1.2.1. Server Packages:
  • JDK 7 or higher
  • Ant 1.8 or higher
  • Ruby
  • Net-SSH
1.2.2. MySQL:
  • All the tables to be replicated must have a primary key.
  • Following MySQL configuration should be enabled on MySQL
    binlog-format             = row
    binlog-row-image      = full
    collation-server          = utf8_general_ci
    character-set-server  = utf8
1.2.3. Redshift:
  • The Redshift database name and schema name should be the same as the MySQL database name of the tables to be replicated.

1.2.4. S3 Bucket:

  •  Read & write access to an AWS S3 Bucket. (Access key, Secret key is required)
2.0. Requirement:
  • Consider that the servers with the details below are used for this demo.

AWS EC2 MySQL Server  – 172.19.12.234
AWS Redshift                     – 172.19.12.116 (Database as a Service)
AWS S3 bucket                  – s3://mydbops-migration

As Redshift is a database as a service, we just have an endpoint to connect to. Therefore, we will be installing both the Tungsten Master and Slave services on the MySQL server itself.

  • We need to replicate the tables emp and emp_records from the new_year database on the MySQL server to Redshift. The structures of the tables are given below.
CREATE TABLE `emp` (
  `no` int(11) NOT NULL,
  `city` varchar(50) DEFAULT NULL,
  `state` varchar(50) DEFAULT NULL,
  PRIMARY KEY (`no`)
) ENGINE=InnoDB;

CREATE TABLE `emp_records` (
  `no` int(11) NOT NULL,
  `name` varchar(50) DEFAULT NULL,
  `address` varchar(50) DEFAULT NULL,
  PRIMARY KEY (`no`)
) ENGINE=InnoDB;

 

3.0. Implementation:

The implementation consists of the following steps.

  1. Installation / Building tungsten from source
  2. Preparing equivalent schema for Redshift
  3. Configuring Master service
  4. Configuring Slave service
  5. Generating worker tables (temp tables used by tungsten) for replication to be created on redshift
  6. Start the replication

3.1. Installation / Building From Source:
  • Download the source package from the GIT.
#git clone https://github.com/continuent/tungsten-replicator.git
  • Compile the package; it will generate the tungsten-replicator.tar file.
#sh tungsten-replicator/builder/build.sh
#mkdir -p tungsten
  • Once the tar file is generated, extract it to the tungsten folder created above.
#tar --strip-components 1 -zxvf tungsten-replicator/builder/build/tungsten-replicator-5.2.1.tar.gz -C tungsten/
  • Now that we have the Tungsten binaries, clean up the source packages unless they are still required.
#rm -rf tungsten-replicator

 

3.2. Preparing equivalent schema for Redshift:
  • Create database new_year on Redshift.
dev=# create database new_year;
CREATE DATABASE
  • The new database was created. Now I am going to create a new schema.
  • Before creating the schema, you first have to switch to the new_year database.
dev=# \c new_year
psql (9.2.24, server 8.0.2)
  • Then create tables in new_year schema.
new_year=# create table new_year.emp(no int primary key, city varchar(50), state varchar(50));
CREATE TABLE
new_year=# create table new_year.emp_records(no int primary key, name varchar(50), address varchar(50));
CREATE TABLE

Note:

  • If you do not mention the schema name while creating the table, the table will be created inside the public schema.
  • To check that the tables are created inside the correct new_year schema:
new_year=# \dt new_year.*;
           List of relations
  schema   |    name     | type  |    owner
-----------+-------------+-------+--------------
 new_year  | emp         | table | redshift-usr
 new_year  | emp_records | table | redshift-usr
(2 rows)

 

3.3. Configuring Master Service:
  • Create a replication user on MySQL with Replication Slave privilege to stream binlog from MySQL to Tungsten Master service.
mysql> grant replication slave on *.* to 'tungsten'@'localhost' identified by 'secret';
  • Switch to tungsten directory and Reset the defaults configuration file.
#cd ~/tungsten
#./tools/tpm configure defaults --reset
  • Configure the Master service on the directory of your choice, We have used /opt/master
  • Following commands will prepare the configuration file for Master service.
#./tools/tpm configure master \
--install-directory=/opt/master \
--enable-heterogeneous-service=true \
--members=mysql-db-master \
--master=mysql-db-master

#./tools/tpm configure master --hosts=mysql-db-master \
--replication-user=tungsten \
--replication-password=tungsten \
--skip-validation-check=MySQLUnsupportedDataTypesCheck \
--property=replicator.filter.pkey.addColumnsToDeletes=true \
--property=replicator.filter.pkey.addPkeyToInserts=true
  • Once the configuration is prepared, Then we can install it using tpm.
#./tools/tpm install

Configuration is now complete.  For further information, please consult
Tungsten documentation, which is available at docs.continuent.com.

NOTE  >> Command successfully completed
  • Now Master service will be configured under /opt/master/
  • Start the tungsten Master service.
#/opt/master/tungsten/cluster-home/bin/startall
Starting Tungsten Replicator Service...
Waiting for Tungsten Replicator Service.
running: PID:22291
  • Verify it’s working by checking the master status.
#/opt/master/tungsten/tungsten-replicator/bin/trepctl services
Processing services command...
NAME              VALUE
----              -----
appliedLastSeqno: 0
appliedLatency  : 1.667
role            : master
serviceName     : master
serviceType     : local
started         : true
state           : ONLINE
Finished services command...

#/opt/master/tungsten/tungsten-replicator/bin/trepctl status
Processing status command...
NAME                     VALUE
----                     -----
appliedLastEventId     : mysql-bin.000011:0000000000000510;-1
appliedLastSeqno       : 0
appliedLatency         : 1.667
autoRecoveryEnabled    : false
autoRecoveryTotal      : 0
channels               : 1
clusterName            : master
currentEventId         : mysql-bin.000011:0000000000000510
currentTimeMillis      : 1525355498784
dataServerHost         : mysql-db-master
extensions             :
host                   : mysql-db-master
latestEpochNumber      : 0
masterConnectUri       : thl://localhost:/
masterListenUri        : thl://mysql-db-master:2112/
maximumStoredSeqNo     : 0
minimumStoredSeqNo     : 0
offlineRequests        : NONE
pendingError           : NONE
pendingErrorCode       : NONE
pendingErrorEventId    : NONE
pendingErrorSeqno      : -1
pendingExceptionMessage: NONE
pipelineSource         : jdbc:mysql:thin://mysql-db-master:3306/tungsten_master?noPrepStmtCache=true
relativeLatency        : 21.784
resourcePrecedence     : 99
rmiPort                : 10000
role                   : master
seqnoType              : java.lang.Long
serviceName            : master
serviceType            : local
simpleServiceName      : master
siteName               : default
sourceId               : mysql-db-master
state                  : ONLINE
timeInStateSeconds     : 21.219
timezone               : GMT
transitioningTo        :
uptimeSeconds          : 21.741
useSSLConnection       : false
version                : Tungsten Replicator 5.2.1
Finished status command...
  • If the master does not start properly, refer to the error log at /opt/master/service_logs/trepsvc.log.
3.4. Configuring Slave service:
  • Switch to tungsten directory and Reset the defaults configuration file.
#cd ~/tungsten
#./tools/tpm configure defaults --reset
  • Create a JSON file named s3-config-slave.json in the format below, filling in your AWS S3 bucket details such as the access key, secret key, and S3 bucket path.
{ "awsS3Path" : "s3://mydbops-migration", "awsAccessKey" : "XXXXXX", "awsSecretKey" : "YYYYYYY", "gzipS3Files" : "false", "cleanUpS3Files" : "true" }
  • Configure the Slave service on the directory of your choice, We have used /opt/slave
  • Following commands will prepare the configuration file for Slave service.
#./tools/tpm configure slave \
--install-directory=/opt/slave \
--enable-heterogeneous-service=true \
--members=mysql-db-master
  • Add the replication filter to only replicate those two tables. Use Redshift host, user, password to configure the slave service.
#./tools/tpm configure slave --hosts=mysql-db-master \
--replication-host=172.19.12.116 \
--replication-user=redshift-usr \
--replication-password='redshift-pass' \
--datasource-type=redshift \
--batch-enabled=true \
--batch-load-template=redshift \
--redshift-dbname=new_year \
--svc-applier-filters=dropstatementdata,replicate \
--property=replicator.filter.replicate.do=new_year.emp,new_year.emp_records \
--svc-applier-block-commit-interval=10s \
--svc-applier-block-commit-size=5 \
--rmi-port=10002 \
--thl-port=2113 \
--master-thl-port=2112 \
--master-thl-host=mysql-db-master
  • Once the configuration is prepared, Then we can install it using tpm.
#./tools/tpm install

Configuration is now complete.  For further information, please consult
Tungsten documentation, which is available at docs.continuent.com.

NOTE  >> Command successfully completed

Once it completes, copy the s3-config-slave.json file to the slave's share directory.

#cp s3-config-slave.json /opt/slave/share/
  • Now the slave is configured. Before starting it, we need to create the worker/stage tables used by Tungsten to replicate data into Redshift.
3.5. Generating Worker / Stage tables To Be Created On Redshift:
  • Tungsten provides a utility named ddlscan to generate the Worker  / Stage tables required for the replication functionality to work.
#/opt/slave/tungsten/tungsten-replicator/bin/ddlscan -db new_year -template ddl-mysql-redshift-staging.vm > staging_ddl
  • Apply the schema generated from the above operation on Redshift.
  • Now we have Worker / Stage tables created on redshift.
new_year=# \dt new_year.*;
                   List of relations
  schema   |           name             | type  |    owner
-----------+----------------------------+-------+--------------
 new_year  | emp                        | table | redshift-usr
 new_year  | emp_records                | table | redshift-usr
 new_year  | stage_xxx_emp              | table | redshift-usr
 new_year  | stage_xxx_emp_pkey         | table | redshift-usr
 new_year  | stage_xxx_emp_records      | table | redshift-usr
 new_year  | stage_xxx_emp_records_pkey | table | redshift-usr
(6 rows)

 

3.6. Starting Replication:
  • Once the slave is configured and the stage tables are created in Redshift, start the slave.
#/opt/slave/tungsten/cluster-home/bin/startall
Starting Tungsten Replicator Service...
Waiting for Tungsten Replicator Service.
running: PID:23968
  • Verify it’s working by checking the slave status.
#/opt/slave/tungsten/tungsten-replicator/bin/trepctl services
NAME              VALUE
----              -----
appliedLastSeqno: -1
appliedLatency  : -1.0
role            : slave
serviceName     : slave
serviceType     : local
started         : true
state           : ONLINE
Finished services command...

# /opt/slave/tungsten/tungsten-replicator/bin/trepctl status
Processing status command...
NAME                     VALUE
----                     -----
appliedLastEventId     : mysql-bin.000011:0000000000000510;-1
appliedLastSeqno       : 0
appliedLatency         : 251.018
autoRecoveryEnabled    : false
autoRecoveryTotal      : 0
channels               : 1
clusterName            : slave
currentEventId         : NONE
currentTimeMillis      : 1525355728202
dataServerHost         : 172.19.12.116
extensions             :
host                   : 172.19.12.116
latestEpochNumber      : 0
masterConnectUri       : thl://mysql-db-master:2112/
masterListenUri        : null
maximumStoredSeqNo     : 0
minimumStoredSeqNo     : 0
offlineRequests        : NONE
pendingError           : NONE
pendingErrorCode       : NONE
pendingErrorEventId    : NONE
pendingErrorSeqno      : -1
pendingExceptionMessage: NONE
pipelineSource         : thl://mysql-db-master:2112/
relativeLatency        : 251.202
resourcePrecedence     : 99
rmiPort                : 10002
role                   : slave
seqnoType              : java.lang.Long
serviceName            : slave
serviceType            : local
simpleServiceName      : slave
siteName               : default
sourceId               : 172.19.12.116
state                  : ONLINE
timeInStateSeconds     : 12.558
timezone               : GMT
transitioningTo        :
uptimeSeconds          : 24.407
useSSLConnection       : false
version                : Tungsten Replicator 5.2.1
Finished status command...
  • If the slave does not start properly, refer to the error log at /opt/slave/service_logs/trepsvc.log.
4.0. Testing:
  • Now both master and slave are in sync. Let's insert a few records on the MySQL server into the emp and emp_records tables.
insert into emp values(1,'chennai','tamilnadu');
insert into emp values(2,'Banglore','Karnataka');
insert into emp_records values(1,'suresh','Noth car street');
insert into emp_records values(2,'John','South car street');
  • The records above were inserted on the master server. At the same time, I checked Redshift to see whether these records had been replicated.
new_year=# select * from new_year.emp;
 no |   city    |   state
----+-----------+-----------
  1 | chennai   | tamilnadu
  2 | Banglore  | Karnataka
(2 rows)

new_year=# select * from new_year.emp_records;
 no |  name  |     address
----+--------+------------------
  1 | suresh | Noth car street
  2 | John   | South car street
(2 rows)

 

5.0. Troubleshooting:

Replication can be broken due to incorrect data types. In such scenarios, we should analyze the issue, fix the data type, and resume replication.

Sample Error :

# /opt/slave/tungsten/tungsten-replicator/bin/trepctl status
Processing status command...
NAME                     VALUE
----                     -----
appliedLastEventId     : NONE
appliedLastSeqno       : -1
appliedLatency         : -1.0
autoRecoveryEnabled    : false
autoRecoveryTotal      : 0
channels               : -1
clusterName            : slave
currentEventId         : NONE
currentTimeMillis      : 1526577299571
dataServerHost         : 172.25.12.119
extensions             :
host                   : 172.25.12.119
latestEpochNumber      : -1
masterConnectUri       : thl://mysql-db-master:2112/
masterListenUri        : null
maximumStoredSeqNo     : -1
minimumStoredSeqNo     : -1
offlineRequests        : NONE
pendingError           : Stage task failed: stage=q-to-dbms seqno=75 fragno=0
pendingErrorCode       : NONE
pendingErrorEventId    : mysql-bin.000027:0000000000072461;-1
pendingErrorSeqno      : 75
pendingExceptionMessage: CSV loading failed: schema=new table=doc_content
                         CSV file=/opt/slave/tmp/staging/slave/staging0/yp-yp_user_doc_content-69.csv
                         message=Wrapped org.postgresql.util.PSQLException: ERROR: Value too long for character type
                           Detail:
                           -----------------------------------------------
                           error:     Value too long for character type
                           code:      8001
                           context:   Value too long for type character varying(256)
                           query:     1566568
                           location:  funcs_string.hpp:395
                           process:   query0_75_1566568 [pid=10475]
                           -----------------------------------------------
                           (/opt/slave/tungsten/tungsten-replicator/appliers/batch/redshift.js#256)
pipelineSource         : UNKNOWN
relativeLatency        : -1.0

This error explains that a value is too long for the character data type of the table doc_content in the new database on Redshift.

  • In MySQL, the table doc_content consists of column “context” with TEXT data type.
  • Even in Redshift, context is a TEXT column.
  • Here's the catch: in Redshift, the TEXT data type is equivalent to varchar(256).
  • So writing anything above 256 on MySQL will break replication.

So the solution is to increase the varchar length (here, from 256 to a larger varchar such as 2000). In Redshift, simply changing the data type of an existing column will not work.

yp=# alter table new.doc_content ALTER COLUMN content TYPE varchar(2000);
ERROR:  ALTER COLUMN TYPE is not supported
  • We can’t increase the column size in Redshift without recreating the table.
  • The alternate solution is to add a new column with the required changes and move the data and then the old column can be dropped.
ALTER TABLE yp.yp_user_doc_content ADD COLUMN content_new VARCHAR(2000);
UPDATE yp.yp_user_doc_content SET content_new = content;
ALTER TABLE yp.yp_user_doc_content DROP COLUMN content;
ALTER TABLE yp.yp_user_doc_content RENAME COLUMN content_new TO content;
  • Now we’re good to restart the replication again.
6.0. Conclusion:

Tungsten Replicator is a great tool when it comes to replicating data between heterogeneous data sources. Once we understand how it works, it's easy to configure and operate.

Categories: Web Technologies

abc to SVG

CSS-Tricks - Mon, 07/30/2018 - 14:56

Here's a little example that Jeremy Keith used to use in his talks. It's stuck with me as one of the coolest examples of progressive enhancement and Technology Being Cool™ around.

There is this musical notation format called abc. They don't capitalize it. Kinda like npm, which I guess makes sense as it isn't an acronym. But it is the name of something so it's super awkward. Ughakdhk. Anyway.

The format itself is really cool. It's super simple and text-based. Here's the example from their homepage:

X:1
T:Speed the Plough
M:4/4
C:Trad.
K:G
|:GABc dedB|dedB dedB|c2ec B2dB|c2A2 A2BA|
GABc dedB|dedB dedB|c2ec B2dB|A2F2 G4:|
|:g2gf gdBd|g2f2 e2d2|c2ec B2dB|c2A2 A2df|
g2gf g2Bd|g2f2 e2d2|c2ec B2dB|A2F2 G4:|

A little like YAML, I suppose. That's the music for that whole song. It's not loaded with all the possibilities of sheet music; it's just notes and timing. Enough to communicate a folk/traditional song, which is primarily what it's used for.

You could probably get used to reading it as-is, but I don't think that's what it's really for. My guess is that the format is more about being:

  1. Standardized (it is, and there are 570,000 tunes in it, so I guess that worked)
  2. Text (I bet that entire database of songs is surprisingly tiny)
  3. Ready to be converted into whatever

The conversion part being the fun part! You can see it happening right on the homepage for the abc format.

Lonesome Moonlight Waltz, stored in abc format on the site, is being displayed as sheet music in PNG format and being converted into an audio format playable on the web. Basically MIDI barfed out as MP3.

PNG is fine, but wouldn't that be so much nicer as SVG? Of course it would.

All of this is relevant to Jeremy because he has a website called The Session, which is dedicated to Irish music. He stores tunes on the site, and thus appropriately uses the abc format. On The Session, you can click a Sheet Music button and it will convert abc into SVG and display that.

That sheet music looks great in the crisp SVG format.

The SVG conversion is made possible entirely in JavaScript by an open source library. That's the progressive enhancement part. Store and ship the basic format, and let the browser enhance the experience, if it can (it can).
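As a rough sketch of that enhancement (the element ids and the script filename are made up here; ABCJS.renderAbc is abcjs's rendering entry point), the abc text ships in the markup and gets upgraded to SVG when the script runs:

<pre id="abc-source">
X:1
T:Speed the Plough
M:4/4
K:G
GABc dedB|dedB dedB|c2ec B2dB|c2A2 A2BA|
</pre>
<div id="notation"></div>

<script src="abcjs-basic-min.js"></script>
<script>
  // If the library loaded, enhance the plain abc text into SVG sheet music;
  // without JavaScript, the readable abc source is still there.
  var abc = document.getElementById('abc-source').textContent;
  ABCJS.renderAbc('notation', abc);
</script>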

That's all. Just cool.

The JavaScript library that The Session actually uses is abcjs by Paul Rosen and Gregory Dyke. The other one linked to above is Jef Moine's similar abc2svg.

The post abc to SVG appeared first on CSS-Tricks.

Categories: Web Technologies

MySQL Connector/ODBC 5.3.11 has been released

Planet MySQL - Mon, 07/30/2018 - 11:57

Dear MySQL users,

MySQL Connector/ODBC 5.3.11, a new version of the ODBC driver for the
MySQL database management system, has been released.

The available downloads include both a Unicode driver and an ANSI
driver based on the same modern codebase. Please select the driver
type you need based on the type of your application – Unicode or ANSI.
Server-side prepared statements are enabled by default. It is suitable
for use with any MySQL version from 5.5.

This is the fifth release of the MySQL ODBC driver conforming to the
ODBC 3.8 specification. It contains implementations of key 3.8
features, including self-identification as an ODBC 3.8 driver,
streaming of output parameters (supported for binary types only), and
support of the SQL_ATTR_RESET_CONNECTION connection attribute (for the
Unicode driver only).

The release is now available in source and binary form for a number of
platforms from our download pages at

http://dev.mysql.com/downloads/connector/odbc/5.3.html

For information on installing, please see the documentation at

http://dev.mysql.com/doc/connector-odbc/en/connector-odbc-installation.html

Changes in MySQL Connector/ODBC 5.3.11 (2018-07-30, General Availability)

Functionality Added or Changed

  * Based on MySQL Client library 5.7.23, it has the support for the new caching_sha2_password authentication method, which allows connecting to MySQL Server 8.0.

Bugs Fixed

  * The Unicode version now uses the UTF8MB4 character set as a transport character set between the MySQL server and the ODBC driver. Because text conversions requested by the ODBC user with the CHARSET connection option are done inside the ODBC Driver after the data is received from the server, it now converts text data as UTF8MB4 to USER_CHARSET instead of UTF8 to USER_CHARSET. (Bug #28204756)

  * Connections to MySQL Server 8.0 could yield "Source character set not supported by client" errors during sorting and case-sensitive operations. Now known UTF8MB4 collations are used for these operations, such as utf8mb4_general_ci (known to libmysqlclient 5.7) instead of utf8mb4_tolower_ci. (Bug #28116892, Bug #90984)

  * MySQL Server 8.0 connections could fail with unsupported character sets and collations as utf8mb4_xxxxxx collations reported by MySQL Server 8.0 with numbers greater than 250 were unknown to libmysqlclient 5.7. Because collations affect sorting but not the data conversion, they can be substituted with known collations such as utf8mb4_general_ci (45). (Bug #28116892, Bug #90984)

  * Connector/ODBC is now built with MySQL client library 5.7.23, and commercial builds also use an updated OpenSSL version (v1.0.2o). Issues fixed in the new OpenSSL version are described at http://www.openssl.org/news/vulnerabilities.html. (Bug #27829777)

  * Reported errors that occurred while executing multiple statements with a single query were generic and without context. For example, SQLMoreResults might return "unhandled error from mysql_next_result()" instead of the error reported by MySQL Server. (Bug #11757423, Bug #49466)

On Behalf of Oracle/MySQL Release Engineering Team,

Kent Boortz

Categories: Web Technologies

Upcoming Webinar Tuesday, 7/31: Using MySQL for Distributed Database Architectures

Planet MySQL - Mon, 07/30/2018 - 09:13

Please join Percona’s CEO, Peter Zaitsev as he presents Using MySQL for Distributed Database Architectures on Tuesday, July 31st, 2018 at 7:00 AM PDT (UTC-7) / 10:00 AM EDT (UTC-4).

Register Now

 

In modern data architectures, we’re increasingly moving from single-node design systems to distributed architectures using multiple nodes – often spread across multiple databases and multiple continents. Such architectures bring many benefits (such as scalability and resiliency), but can also bring a lot of pain if incorrectly designed and executed.

In this presentation, we will look at how we can use MySQL to engineer distributed multi-node systems.

Register for the webinar.

Peter Zaitsev, CEO and Co-Founder

Peter Zaitsev co-founded Percona and assumed the role of CEO in 2006. As one of the foremost experts on MySQL strategy and optimization, Peter leveraged both his technical vision and entrepreneurial skills to grow Percona from a two-person shop to one of the most respected open source companies in the business. With over 140 professionals in 30 plus countries, Peter’s venture now serves over 3000 customers – including the “who’s who” of internet giants, large enterprises and many exciting startups. Inc. 5000 named Percona to their list in 2013, 2014, 2015 and 2016. Peter was an early employee at MySQL AB, eventually leading the company’s High-Performance Group. A serial entrepreneur, Peter co-founded his first startup while attending Moscow State University where he majored in Computer Science. Peter is a co-author of High-Performance MySQL: Optimization, Backups, and Replication, one of the most popular books on MySQL performance. Peter frequently speaks as an expert lecturer at MySQL and related conferences, and regularly posts on the Percona Database Performance Blog. He has also been tapped as a contributor to Fortune and DZone, and his ebook Practical MySQL Performance Optimization is one of Percona’s most popular downloads.

 

The post Upcoming Webinar Tuesday, 7/31: Using MySQL for Distributed Database Architectures appeared first on Percona Database Performance Blog.

Categories: Web Technologies

Mastering Continuent Clustering Series: Tungsten and SELinux, a Case Study

Planet MySQL - Mon, 07/30/2018 - 08:23

In this blog post, we talk about what happened during an installation of the Tungsten Cluster into an environment with SELinux running and misconfigured.

An attempt to execute `tpm install` on v5.3.2 recently failed with the below error:

ERROR >> node3_production_customer_com >> Unable to run 'sudo systemctl status mysqld.service' or the database server is not running (DatasourceBootScriptCheck) Update the /etc/sudoers file or disable sudo by adding --enable-sudo-access=false

Worse, this customer reported that this appeared as a WARNING only in Dev and Staging tests. So we checked, and it seemed we were able to access systemctl properly:

shell> sudo systemctl status mysqld.service
● mysqld.service - MySQL Percona Server
   Loaded: loaded (/usr/lib/systemd/system/mysqld.service; enabled; vendor preset: disabled)
   Active: activating (start-post) since Tue 2018-06-19 17:46:19 BST; 1min 15s ago
  Process: 448996 ExecStart=/usr/bin/mysqld_safe (code=exited, status=0/SUCCESS)
  Process: 448964 ExecStartPre=/usr/bin/mysql-systemd pre (code=exited, status=0/SUCCESS)
 Main PID: 448996 (code=exited, status=0/SUCCESS); : 448997 (mysql-systemd)
   CGroup: /system.slice/mysqld.service
           └─control
             ├─448997 /bin/bash /usr/bin/mysql-systemd post
             └─450505 sleep 1

Jun 19 17:46:19 node3.production.customer.com systemd[1]: Starting MySQL Percona Server...
Jun 19 17:46:19 node3.production.customer.com mysqld_safe[448996]: 180619 17:46:19 mysqld_safe Logging to '/var/log/mysqld.log'.
Jun 19 17:46:19 node3.production.customer.com mysqld_safe[448996]: 180619 17:46:19 mysqld_safe Starting mysqld daemon with databases from /dbdata/data01
Jun 19 17:46:20 node3.production.customer.com mysqld_safe[448996]: 180619 17:46:20 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended

The problem with the systemctl start-up for MySQL turned out to be related to SELinux, with the /data filesystem and the non-standard port having the wrong contexts.

The solution was to inform SELinux about the additional disk and port resources:

  • Ensure the file contexts are set correctly for SELinux, in this case allowing MySQL data to be stored in a non-standard location (/data):

    semanage fcontext -a -t etc_runtime_t /data
    restorecon -Rv /data/

    semanage fcontext -a -t mysqld_db_t "/data(/.*)?"
    restorecon -Rv /data/*

  • By default, MySQL uses port 3306, but our best practice is to change it to 13306 to prevent accidental connection directly to the database, bypassing the required Connector. If using a non-standard port for MySQL and SELinux is enabled, you must also change the port context:

    semanage port -a -t mysqld_port_t -p tcp 13306
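To confirm the new contexts actually took effect, a quick verification can be run (a sketch added here, not from the original case study):

    ls -dZ /data /data/*
    semanage port -l | grep mysqld_port_t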

In future articles, we will continue to cover more advanced subjects of interest!

Questions? Contact Continuent

Categories: Web Technologies

On xlink:href being deprecated in SVG

CSS-Tricks - Mon, 07/30/2018 - 07:08

A reader wrote in to tell me we should update our articles about SVG <use> elements. The attribute we always use for them, xlink:href, is deprecated. Indeed, MDN says:

That's pretty strong language, hence the reader's warning. This is a bit surprising to me, as the SVG 2 thing got a little weird. It looks like it did become a Candidate Recommendation though.

So...

<!-- This is old --> <svg> <use xlink:href="#whatever" /> </svg> <!-- This is new --> <svg> <use href="#whatever" /> </svg>

I like it. But does it actually work? Lemme fork my little old demo and change all the references. In a quick run through of what I have easy access to:

Chrome 67 | Firefox 61 | Safari 11 | Edge 17 | IE 11 | iOS 11
✅        | ✅         | 🙅‍♂️        | ✅      | ✅    | 🙅‍♂️

Better than I thought! But enough 🙅‍♂️ no-go's there that up and switching everything seems far too dangerous, especially when xlink:href has support across the board.
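One hedged middle ground (my suggestion, not from the article) is to ship both attributes on the same <use> element, so newer browsers take href and the rest fall back to xlink:href:

<svg>
  <use href="#whatever" xlink:href="#whatever" />
</svg>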

Will browsers actually pull support? I'll bet ya fifty bucks none of them ever do. There are tons of websites that use the attribute in the old format that will never update, and we know that browsers take breaking old sites very seriously (yay).

It feels like the same thing with, for example, :before and ::before. Yeah, ::before is the new and more correct syntax. But no browser will ever pull support for :before (I'll bet ya another 50 bucks) because it just needlessly breaks sites. And because of that, it almost makes the most sense to keep using :before as you get the widest swath of support that way.

I probably won't be updating older articles using <use xlink:href="" /> unless something bizarre happens and some browser actually does pull support.

The post On xlink:href being deprecated in SVG appeared first on CSS-Tricks.

Categories: Web Technologies

How to Manage Multiple MySQL Binary Installations with SYSTEMD

Planet MySQL - Mon, 07/30/2018 - 06:23

This blog will go into how to manage multiple MySQL binary installations with SYSTEMD using the systemctl command.  With package installations of MySQL using YUM or APT, it’s quick and easy to manage your server’s state by executing systemctl commands to stop, start, restart, and status.  But what do you do when you want to install MySQL using the binary installation with a single or with multiple MySQL instances? You can still use SYSTEMD to easily manage the MySQL instances. All commands and testing have been done on Debian, and some details may change in other distro’s.

MySQL preparation

These are the steps to set up MySQL with multiple instances. If you currently have a MySQL server package installation using YUM or APT, it will need to be removed first. Make sure you keep your client. I also had to install some base packages for MySQL on Debian

apt install libaio1 libaio-dev numactl

Download MySQL binary installation

Download the compressed tar file binary installation and extract to /usr/local, and create a soft link for mysql to the extracted binaries.

Example :

wget https://dev.mysql.com/get/Downloads/MySQL-5.7/mysql-5.7.22-linux-glibc2.12-x86_64.tar.gz
tar zxvf mysql-5.7.22-linux-glibc2.12-x86_64.tar.gz -C /usr/local
ln -s /usr/local/mysql-5.7.22-linux-glibc2.12-x86_64/ /usr/local/mysql

Example result :

root@binary:/usr/local# ls -al
total 44
drwxrwsr-x 11 root staff 4096 Jun 19 17:53 .
drwxr-xr-x 10 root root  4096 Apr 17 18:09 ..
drwxrwsr-x  2 root staff 4096 Apr 17 18:09 bin
drwxrwsr-x  2 root staff 4096 Apr 17 18:09 etc
drwxrwsr-x  2 root staff 4096 Apr 17 18:09 games
drwxrwsr-x  2 root staff 4096 Apr 17 18:09 include
drwxrwsr-x  4 root staff 4096 Apr 17 18:22 lib
lrwxrwxrwx  1 root staff    9 Apr 17 18:09 man -> share/man
lrwxrwxrwx  1 root staff   47 Jun 19 17:53 mysql -> /usr/local/mysql-5.7.22-linux-glibc2.12-x86_64/
drwxr-sr-x  9 root staff 4096 Jun 19 17:52 mysql-5.7.22-linux-glibc2.12-x86_64
drwxrwsr-x  2 root staff 4096 Apr 17 18:09 sbin
drwxrwsr-x  7 root staff 4096 Apr 17 18:22 share
drwxrwsr-x  2 root staff 4096 Apr 17 18:09 src

Export path and aliases

Create an export of the MySQL path and aliases to log in to the MySQL instances using pre-made client config files. The password doesn’t matter right now as it will get updated in a couple of steps. Update the socket for each config file so they are unique because this needs to be different for each MySQL instance. Reboot your server to ensure that the configuration is loaded during boot time correctly. Run “echo $PATH” after reboot and validate that the new path is configured to include /usr/local/mysql:/usr/local/mysql/bin.

Example :

echo "export PATH=$PATH:/usr/local/mysql:/usr/local/mysql/bin" >> /etc/profile.d/mysql.sh echo "alias mysql1='mysql --defaults-file=/etc/instance1_client.cnf'" >> /etc/profile.d/mysql.sh echo "alias mysql2='mysql --defaults-file=/etc/instance2_client.cnf'" >> /etc/profile.d/mysql.sh

Example client config : /etc/instance1_client.cnf

[client]
user=root
password='mysqlpass'
socket=/var/run/mysql/mysqld_instance1.sock

Example path :

root@binary:~# echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/mysql:/usr/local/mysql/bin

Create user/group, paths, and MySQL permissions

Next, create the user and group that will be used by the MySQL services. Then create the paths and set the proper permissions.

Example :

groupadd mysql
useradd -r -g mysql -s /bin/false mysql
mkdir -p /mysql/data/instance1
mkdir -p /mysql/data/instance2
mkdir -p /mysql/logs/instance1
mkdir -p /mysql/logs/instance2
mkdir /var/run/mysql/
chown mysql:mysql /var/run/mysql
chown -R mysql:mysql /mysql

Create MySQL configuration for each instance

Below is an example of the first instance I placed in /etc/my.instance1.cnf. My naming convention is instanceX. As an example, my first instance is instance1, and my second instance is instance2. I then place that naming convention in the configuration filename my.instance1.cnf. I could have done my.cnf.instance1 or instance1.my.cnf.

Having the naming convention in the configuration files is very important as it will come into effect with the configuration of SYSTEMD. I also set my naming convention in the PID file because this will also be used by configuration of SYSTEMD. Make sure the socket you have configured in your configuration files matches what was in your client configuration files in the previous step.

Example :

[mysqld]
## Server
basedir             = /usr/local/mysql
datadir             = /mysql/data/instance1
binlog_format       = MIXED
log_slave_updates   = 1
log-bin             = /mysql/logs/instance1/mysql-bin
relay-log           = /mysql/logs/instance1/relay-bin
log_error           = /mysql/logs/instance1/mysql_error.log
slow_query_log_file = /mysql/logs/instance1/slow_query.log
socket              = /var/run/mysql/mysqld_instance1.sock
pid-file            = /var/run/mysql/mysqld_instance1.pid
port                = 3306
user                = mysql
server-id           = 1
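The second instance follows the same pattern. Here is a sketch of /etc/my.instance2.cnf (the port and server-id values are my assumptions; they just need to differ from instance1, and everything else mirrors the paths created earlier):

[mysqld]
## Server
basedir             = /usr/local/mysql
datadir             = /mysql/data/instance2
binlog_format       = MIXED
log_slave_updates   = 1
log-bin             = /mysql/logs/instance2/mysql-bin
relay-log           = /mysql/logs/instance2/relay-bin
log_error           = /mysql/logs/instance2/mysql_error.log
slow_query_log_file = /mysql/logs/instance2/slow_query.log
socket              = /var/run/mysql/mysqld_instance2.sock
pid-file            = /var/run/mysql/mysqld_instance2.pid
port                = 3307
user                = mysql
server-id           = 2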

Initialize MySQL

Initialize your database and get the temporary password for the database from the error log file so you can log in and update the passwords after the MySQL instances are started. Next, update the MySQL client configuration files (/etc/instance1_client.cnf and /etc/instance2_client.cnf in my example) with the temporary password. This will make it simpler to log in and change the initial password. Repeat this for each instance.

Example :

root@binary:/usr/local# /usr/local/mysql/bin/mysqld --defaults-file=/etc/my.instance1.cnf --initialize

Database files are present in the data directory:

root@binary:/usr/local# ls -al /mysql/data/instance1
total 110628
drwxr-xr-x 5 mysql mysql     4096 Jun 22 13:19 .
drwxr-xr-x 4 mysql mysql     4096 Jun 19 18:04 ..
-rw-r----- 1 mysql mysql       56 Jun 22 13:18 auto.cnf
-rw-r----- 1 mysql mysql      417 Jun 22 13:19 ib_buffer_pool
-rw-r----- 1 mysql mysql 12582912 Jun 22 13:19 ibdata1
-rw-r----- 1 mysql mysql 50331648 Jun 22 13:19 ib_logfile0
-rw-r----- 1 mysql mysql 50331648 Jun 22 13:18 ib_logfile1
drwxr-x--- 2 mysql mysql     4096 Jun 22 13:18 mysql
drwxr-x--- 2 mysql mysql     4096 Jun 22 13:18 performance_schema
drwxr-x--- 2 mysql mysql    12288 Jun 22 13:19 sys

Capture the temporary root password:

root@binary:/usr/local# cat /mysql/logs/instance1/mysql_error.log
2018-06-22T17:18:50.464555Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2018-06-22T17:18:50.978714Z 0 [Warning] InnoDB: New log files created, LSN=45790
2018-06-22T17:18:51.040350Z 0 [Warning] InnoDB: Creating foreign key constraint system tables.
2018-06-22T17:18:51.129954Z 0 [Warning] No existing UUID has been found, so we assume that this is the first time that this server has been started. Generating a new UUID: 5506e36e-7640-11e8-9b0f-0800276bf3cb.
2018-06-22T17:18:51.132700Z 0 [Warning] Gtid table is not ready to be used. Table 'mysql.gtid_executed' cannot be opened.
2018-06-22T17:18:51.315917Z 1 [Note] A temporary password is generated for root@localhost: ptraRbBy<6Wm

SYSTEMD configuration

Now that the MySQL instances are prepared and ready to be started, we will configure SYSTEMD so that systemctl can manage them.

SYSTEMD MySQL service

Create the SYSTEMD base configuration at /etc/systemd/system/mysql@.service and place the following contents inside. This is where the naming convention of the MySQL instances comes into effect. In the SYSTEMD configuration file, %I will be replaced with the naming convention that you use. You want to make sure that the PIDfile and the MySQL configuration file in the ExecStart will match up with your previous configurations. You only need to create one SYSTEMD configuration file. As you enable each service in the next step, SYSTEMD will make copies of the configuration for you and replace the %I accordingly with your naming convention.

Example /etc/systemd/system/mysql@.service :

[Unit]
Description=Oracle MySQL
After=network.target

[Service]
Type=forking
User=mysql
Group=mysql
PIDFile=/var/run/mysql/mysqld_%I.pid
ExecStart=
ExecStart=/usr/local/mysql/bin/mysqld --defaults-file=/etc/my.%I.cnf --daemonize
Restart=on-failure
RestartPreventExitStatus=1

[Install]
WantedBy=multi-user.target

Enable and start the MySQL instances

Enable the service, placing the naming convention after the @ symbol using the systemctl command. SYSTEMD will make a copy of the configuration file in the previous step and replace the %I with the text after the @. When viewing the status of the service, you will see that the process is using the correct configuration file based upon the naming convention. Repeat for each instance.

Example :

systemctl enable mysql@instance1
systemctl start mysql@instance1

root@binary:~# systemctl status mysql@instance1
● mysql@instance1.service - Oracle MySQL
   Loaded: loaded (/etc/systemd/system/mysql@.service; enabled; vendor preset: enabled)
   Active: active (running) since Fri 2018-06-22 14:51:48 EDT; 10min ago
  Process: 11372 ExecStart=/usr/local/mysql/bin/mysqld --defaults-file=/etc/my.instance1.cnf --daemonize (code=exited, status=0/SUCCESS)
 Main PID: 11374 (mysqld)
    Tasks: 28 (limit: 4915)
   CGroup: /system.slice/system-mysql.slice/mysql@instance1.service
           └─11374 /usr/local/mysql/bin/mysqld --defaults-file=/etc/my.instance1.cnf --daemonize

Jun 22 14:51:48 binary systemd[1]: Starting Oracle MySQL...
Jun 22 14:51:48 binary systemd[1]: Started Oracle MySQL.

Example PID and Socket files :

root@binary:/var/log# ls -al /var/run/mysql
total 16
drwxr-xr-x  2 mysql mysql 160 Jul 20 10:33 .
drwxr-xr-x 19 root  root  640 Jul 20 10:33 ..
-rw-r-----  1 mysql mysql   6 Jul 20 10:33 mysqld_instance1.pid
srwxrwxrwx  1 mysql mysql   0 Jul 20 10:33 mysqld_instance1.sock
-rw-------  1 mysql mysql   6 Jul 20 10:33 mysqld_instance1.sock.lock
-rw-r-----  1 mysql mysql   6 Jul 20 10:33 mysqld_instance2.pid
srwxrwxrwx  1 mysql mysql   0 Jul 20 10:33 mysqld_instance2.sock
-rw-------  1 mysql mysql   6 Jul 20 10:33 mysqld_instance2.sock.lock

Managing MySQL

Now that we have started the two MySQL instances, we can log in to them using the aliases that we created pointing to the client configuration files that we updated to use the temporary root password. Next, we can log in and change the initial root password, and then update the configuration files accordingly with the new credentials.

Change root password

Log in to MySQL using the alias mysql1 and mysql2 which we configured previously and change the root password. Repeat for each instance.

Example :

mysql1

mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'MyNewPass';
mysql> exit

Update MySQL client configuration

Update the MySQL client configuration files (/etc/instance1_client.cnf and /etc/instance2_client.cnf in my example) with the new passwords. Repeat for each instance.

Example client config /etc/instance1_client.cnf :

[client]
user=root
password='MyNewPass'
socket=/var/run/mysql/mysqld_instance1.sock

Conclusion

Configuring MySQL to be controlled by systemctl makes it much easier to manage your MySQL instances. This process also allows for easy configuration of multiple instances, even beyond two. But keep in mind that when configuring multiple MySQL instances on a single server, you should allocate memory for each instance accordingly, allowing for overhead.
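For example, a hedged starting point is to cap each instance's InnoDB buffer pool explicitly in its configuration file; the size below is a placeholder and both instances together must fit comfortably in the server's RAM:

# In /etc/my.instance1.cnf and /etc/my.instance2.cnf, under the [mysqld] section
innodb_buffer_pool_size = 1G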

Categories: Web Technologies

5 Reasons and 101 Bugfixes – Why You Should Use Hibernate 5.3

Planet MySQL - Sun, 07/29/2018 - 23:00

The post 5 Reasons and 101 Bugfixes – Why You Should Use Hibernate 5.3 appeared first on Thoughts on Java.


Hibernate 5.3 has been available for a little more than 3 months now, and last week, the team released the 3rd maintenance release. So, it’s about time to take a closer look at the new version.

In addition to more than 100 bug fixes, Hibernate 5.3 includes a few features and improvements that might motivate you to update your current project.

 

Improved Memory consumption

Let’s start with probably the best reason to update to Hibernate 5.3: It consumes less memory than the previous versions.

The improvement was triggered by an interesting discussion in the Hibernate forum. A user reported that he tried to migrate his application from Hibernate 3.6 to 5.3. During the migration, he recognized that the memory consumption of Hibernate’s SessionFactory went up to 950MB.

The issue was caused by the size and number of EntityLoaders that Hibernate instantiated. It was fixed in 2 steps:

  1. HHH-12556 – Similar EntityLoaders now share some internal data structures. This reduced the memory consumption of the application by ~50%.
    The fix was backported to Hibernate 5.2.18. So, if you’re using Hibernate 5.2 and don’t want to upgrade to 5.3, you should at least update to 5.2.18.
  2. HHH-12558 – Hibernate supports a bunch of different lock modes with specific loaders. In the past, it instantiated all loaders eagerly. Hibernate 5.3 only instantiates the 2 most common ones and loads all others lazily.

At the end of the discussion, the user who reported the issue wrote that the improved Hibernate version only used ~ 250MB. So, for his application, these two changes reduced the memory consumption by ~70%.

I obviously can’t promise that it will be equally effective for your project. However, the reported improvement is so enormous that you should at least give it a try.

 

JPA 2.2 compliance

Hibernate 5.3.0 is the first version that’s fully compliant with JPA 2.2. However, because the support for all the interesting features was already added in Hibernate 5.0, 5.1 and 5.2, and I already wrote extensive tutorials about all of them, I will not dive any deeper into this topic.

If you’re not already familiar with JPA 2.2, you can read more about it in the following articles:

 

Small Improvements

Implicit ID generator definitions

If you generate your primary key values, you’re probably aware of Hibernate’s @SequenceGenerator and @TableGenerator annotations. You can use them to define which sequence or table Hibernate shall use to generate the primary key value.

Here is a typical example of a primary key mapping that tells Hibernate to use the database sequence author_seq to generate the unique primary key values.

@Entity
public class Author {

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "author_seq")
    @SequenceGenerator(name = "author_seq", sequenceName = "author_seq")
    private Long id;

    ...
}

The mapping definition consists of 2 parts:

  1. The @GeneratedValue annotation defines which generation strategy Hibernate shall use and references a custom generator.
  2. The @SequenceGenerator annotation specifies the generator and tells Hibernate the name of the sequence you want to use.

As you can see in the code snippet, I use the same name for the generator and the database sequence. That’s a pretty common and verbose mapping.

If your mapping looks the same, I have good news for you: you no longer need to define the generator if your database sequence or table has the same name as the generator referenced in the @GeneratedValue annotation.

That enables me to shorten the previous mapping and to remove the @SequenceGenerator definition.

@Entity
public class Author {

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "author_seq")
    private Long id;

    ...
}

 

Support for MySQL 8 SKIP LOCKED and NOWAIT

MySQL 8.0.1 added the SKIP LOCKED and NOWAIT features to provide different options for handling locked rows during read operations. The MySQL Server team explained them in great detail on their blog: MySQL 8.0.1: Using SKIP LOCKED and NOWAIT to handle hot rows.

Here’s a quick description of both features:

  • SKIP LOCKED enables you to perform a non-deterministic read that skips all locked rows. That means that locked rows are missing from your result set.
  • If you require a deterministic read but don’t want to wait for the locks to be released, you can use the NOWAIT feature. It causes your query to fail immediately if any records in your result set are locked.

The MySQL8Dialect included in Hibernate 5.3 supports both these features so that you can use them in your queries.
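
To make this concrete, here is a minimal sketch of what these options look like in plain SQL. The seat table and its seat_no and booked columns are made-up names for illustration; the FOR UPDATE ... SKIP LOCKED and FOR UPDATE ... NOWAIT clauses themselves are standard MySQL 8 syntax:

-- session 1: lock a row and keep the transaction open
START TRANSACTION;
SELECT * FROM seat WHERE seat_no = 42 FOR UPDATE;

-- session 2: read unbooked seats, silently skipping the row locked above
SELECT * FROM seat WHERE booked = 0 FOR UPDATE SKIP LOCKED;

-- session 2 (alternative): fail immediately instead of waiting for the lock
SELECT * FROM seat WHERE seat_no = 42 FOR UPDATE NOWAIT;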

 

Apply AttributeConverter when calling a function

AttributeConverters provide a standardized, easy way to define the mapping of a Java type. You can either use them to add support for unsupported classes or to customize the mapping of an already supported Java type.

The following code snippet shows an example from my JPA Tip: How to map a Duration attribute. JPA 2.2 doesn’t provide a mapping for java.time.Duration objects. If you want to map such an object, you can use this AttributeConverter to map it to a Long.

@Converter(autoApply = true)
public class DurationConverter implements AttributeConverter<Duration, Long> {

    Logger log = Logger.getLogger(DurationConverter.class.getSimpleName());

    @Override
    public Long convertToDatabaseColumn(Duration attribute) {
        log.info("Convert to Long");
        return attribute.toNanos();
    }

    @Override
    public Duration convertToEntityAttribute(Long duration) {
        log.info("Convert to Duration");
        return Duration.of(duration, ChronoUnit.NANOS);
    }
}

After you apply the AttributeConverter to an attribute, or automatically apply it to all attributes of a specific type, it gets used transparently:

  • during all lifecycle state transitions,
  • to map the result of a query and
  • when used in path expressions in a JPQL or CriteriaQuery.

However, for whatever reason, the usage of converted attribute values as function parameters is explicitly undefined.

Previous Hibernate versions didn’t convert the attribute value if you referenced it as a function parameter. This changed with Hibernate 5.3.2. It now converts the attribute value before it provides it as a function parameter.

 

Support for Java 9 and preparations for Java 11

Beginning with Hibernate 5.3.0, all Hibernate modules specify a Java 9 module name following the pattern org.hibernate.orm.${module-name}, e.g., the hibernate-core module defines the name org.hibernate.orm.core.

The Hibernate team also updated the dependencies, prepared the build process and ran their test suite with the latest JDK 11 build. So, we can hope for a smooth transition to JDK 11.

 

Conclusion

Should you update to Hibernate 5.3?

You probably already know my answer. It’s a clear yes! The improved memory consumption in particular is a great reason to do the update.

However, please keep in mind that every update has its risks and that you obviously shouldn’t update any project dependency without testing it carefully.

So, what about you? Did you already update your application to use Hibernate 5.3 or will you update it in the near future?

The post 5 Reasons and 101 Bugfixes – Why You Should Use Hibernate 5.3 appeared first on Thoughts on Java.

Categories: Web Technologies

32-bit ODBC Driver 8.0 for Windows is back!

Planet MySQL - Sun, 07/29/2018 - 22:41

The previous release of MySQL Connector 8.0.11 caused confusion for many users because of its lack of support for the 32-bit Windows platform, which is still very popular.

The main reason for it was a change in the design of the MySQL Server 8.0 and MySQL client 8.0 libraries, which regarded the 32-bit Windows platform as obsolete. Unfortunately, this had a far-reaching adverse impact on the family of MySQL Connectors based on libmysqlclient, including MySQL Connector/ODBC.

We listened to the voice of the community and did our best to bring back the 32-bit ODBC Driver for Windows so that it can again be used in ASP, VB, 32-bit MS Office applications, etc.

Now we are happy to announce the return of the fully featured 32-bit version of ODBC Driver 8.0 for Windows with full support for MySQL Server 8.0. The community version of the driver will soon be available to the public at the following address:

https://dev.mysql.com/downloads/connector/odbc/

On behalf of the entire MySQL Team I would like to thank everyone who expressed their concerns in the forums and blogs.

Categories: Web Technologies

A Kind Introduction MySQL Windowing Functions Part I

Planet MySQL - Sun, 07/29/2018 - 08:31

Windowing functions are a critical tool for grouping rows of data that are related to other rows. But they go far beyond the regular aggregate functions found in MySQL 5.7 and earlier. In MySQL 8 you do not have to collapse all the information down into a single output row. Each row can retain its individual identity but the server can analyze the data as a unit.
Statistics and Damned Lies

Finding the total population of the district of Texas from the world.city table is simple.

SQL> select District, sum(Population)  
from city where district = 'Texas';
+----------+-----------------+
| District | sum(Population) |
+----------+-----------------+
| Texas    |         9208281 |
+----------+-----------------+
1 row in set (0.0068 sec)





 Simple.  But try to expand it to the entire USA and you get problems.

SQL> select District, sum(Population)  
from city where CountryCode = 'USA';
+----------+-----------------+
| District | sum(Population) |
+----------+-----------------+
| New York |        78625774 |
+----------+-----------------+
1 row in set (0.0046 sec)

The results lump the populations of all the cities under New York, which is not the desired answer. By the way, the only time New York (and New York City in particular) has 78 million people is when they are all on the road in front of me while I am trying to take a taxi to an airport.

With a windowing function it is easy to iterate over a subset of the entire data.  Imagine reading the data through a page with a section cut out to form a window that is just the right size to read only the group of rows desired!

SQL> select name, District, Population, sum(Population) over() as p0, 
     District, sum(population) over( partition by District) as p1
     from city where CountryCode = 'USA' limit 10;
+------------+----------+------------+----------+----------+---------+
| name       | District | Population | p0       | District | p1      |
+------------+----------+------------+----------+----------+---------+
| Birmingham | Alabama  |     242820 | 78625774 | Alabama  |  801519 |
| Montgomery | Alabama  |     201568 | 78625774 | Alabama  |  801519 |
| Mobile     | Alabama  |     198915 | 78625774 | Alabama  |  801519 |
| Huntsville | Alabama  |     158216 | 78625774 | Alabama  |  801519 |
| Anchorage  | Alaska   |     260283 | 78625774 | Alaska   |  260283 |
| Phoenix    | Arizona  |    1321045 | 78625774 | Arizona  | 3178903 |
| Tucson     | Arizona  |     486699 | 78625774 | Arizona  | 3178903 |
| Mesa       | Arizona  |     396375 | 78625774 | Arizona  | 3178903 |
| Glendale   | Arizona  |     218812 | 78625774 | Arizona  | 3178903 |
| Scottsdale | Arizona  |     202705 | 78625774 | Arizona  | 3178903 |
+------------+----------+------------+----------+----------+---------+
10 rows in set (0.0075 sec)

The above query has two windows. The keyword to notice is OVER(). The first window is defined as OVER() with nothing within the parentheses, which should be understood to mean 'the window is open wide enough to see all the data'. So sum(Population) over() as p0 gives us the sum of all the Population values and names the column p0.

The second window is defined as sum(population) over (partition by District) as p1, which provides the Population of each District summed in a column named p1.

The Indianapolis 500-winning Lotus driven by Jimmy Clark. Before this car, all other winners of the race had the engine in front of the driver. What does this have to do with windowing functions, databases, SQL, or anything else in this blog post? Well, nothing, but while at Detroit PHP I stopped by the Henry Ford Museum and took this picture of a very iconic car that changed Indy forever. Windowing functions can change your SQL skills.
Different Types of Windows

The OVER clause has two forms – window_spec and window_name. The window_spec option declares the specified window within the parentheses, while window_name refers to a window defined elsewhere in the query. The second window in the above query (where the output is named p1) is a window_spec.

So here is a window_name example:

SQL> SELECT District, Sum(Population) OVER w 
    FROM city where CountryCode ='USA' 
    WINDOW w AS (partition by District) limit 10;
+----------+------------------------+
| District | Sum(Population) OVER w |
+----------+------------------------+
| Alabama  |                 801519 |
| Alabama  |                 801519 |
| Alabama  |                 801519 |
| Alabama  |                 801519 |
| Alaska   |                 260283 |
| Arizona  |                3178903 |
| Arizona  |                3178903 |
| Arizona  |                3178903 |
| Arizona  |                3178903 |
| Arizona  |                3178903 |
+----------+------------------------+
10 rows in set (0.0032 sec)

The window was given the name w and then defined as WINDOW w AS (partition by District). By the way, the declaration of the window_name is itself a window_spec.

So what is a window_spec? The definition from the manual (https://dev.mysql.com/doc/refman/8.0/en/window-functions-usage.html) states that a window_spec is defined as:

window_spec: [window_name] [partition_clause] [order_clause] [frame_clause]

The window_name is an alias that can be used elsewhere in the query.

The partition_clause is how the data is to be divided up into groups. If this is unspecified it makes one big group.  

The order_clause provides the sort order. Remember the ORDER BY from regular SQL queries? This is how you can order the rows within each group.

And the frame_clause determines sub groups within the big group.

And as it so often happens, there is a lot of material to be covered under the frame_clause and that will be in a future blog.
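
Before leaving the topic, here is a quick sketch of the order_clause against the same world.city data (output omitted). Adding ORDER BY inside the window turns the per-district sum into a running total that accumulates row by row within each District partition:

SQL> select name, District, Population,
     sum(Population) over (partition by District order by Population desc) as running_total
     from city where CountryCode = 'USA' limit 10;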





Categories: Web Technologies

MySQL 8.0.12: Instant ALTER TABLE

Planet MySQL - Sun, 07/29/2018 - 01:26

There are many nice changes included in the MySQL 8.0.12 release that was published a couple of days ago. One of the most exciting is the ability to make instant schema changes to tables. This blog will look into why I think that is a stand-out change.

I will not go into details with the implementation other than noting that the new data dictionary in MySQL 8.0 has allowed for a nice implementation of the patch that was contributed by the Tencent Games DBA Team. If you are interested in learning more about the implementation, I will recommend you to read the blog by Bin Su (MySQL 8.0: InnoDB now supports Instant ADD COLUMN) and the worklog implementing the feature (WL#11250: Support Instant Add Column).

Thanks to the Tencent Games DBA Team who contributed the patch for this feature.

Why Do We Need Instant ALTER TABLE?

As you may know, InnoDB has supported online schema changes since version 5.6. So, a fair question is why that is no longer good enough. The thing is that while online schema changes are very nice and allow you to make changes to the data (DML statements) while the schema change is made, there are still some problems:

  • Online ALTER TABLE still requires a metadata lock for the duration of the operation. That is, it is not possible to make other schema changes until the ALTER TABLE has completed.
  • In replication setups, the SQL thread handling the online schema change will still block other changes coming through the replication channel from occurring. This means that an online schema change that takes an hour suddenly makes the replication lag an hour.
  • This is compounded in chained replication setups where the delay on the last instances in the topology is multiplied by the number of times the change has been replicated. Suddenly this schema change that did not really affect the replication master becomes a major pain and delays the deployment of the application upgrade that relies on the schema changes.
  • Even though the change is online, it still is heavy on resources: disk for storing the extra copy of the table when the change cannot be made in-place, CPU and disk activity that may cause other queries to become slow, etc.
  • There is a limit to the amount of DML changes that can be made during the schema change. If you make too many changes, the schema change will fail.

So, in short, online schema changes are only the first stop. Instant schema changes are the future.

Which Changes Can Be Made Instantly?

While it would be great if all schema changes could be made instantly, unfortunately that is not the case. Just as not all schema changes can be made online, there are limitations to the new instant feature. In short, the changes that can be made instantly must only affect the metadata of the table. The metadata is stored in the data dictionary. The changes that can be made with the instant ALTER TABLE feature as per 8.0.12 are:

  • Adding a new column as the last column in the table.
  • Adding a generated virtual column.
  • Dropping a generated virtual column.
  • Setting a default value for an existing column (see the quick sketch after this list).
  • Dropping the default value for an existing column.
  • Changing the list of values allowed for a column with the ENUM or SET data types. A requirement is that the storage size does not change for the column.
  • Changing whether the index type is set explicitly for an existing index.
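
As a quick sketch of one of these metadata-only operations, consider changing a column default. The table and column names below (testtbl2 and note) are made up for illustration; the point is that requesting ALGORITHM=INSTANT explicitly makes MySQL return an error rather than silently falling back to a slower algorithm if the change cannot be made instantly:

ALTER TABLE testtbl2
      ALTER COLUMN note SET DEFAULT 'n/a',
      ALGORITHM=INSTANT;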

And who knows, maybe later the feature can be extended to cover more changes. There are also a few limitations that are good to be aware of:

  • The row format cannot be COMPRESSED.
  • The table cannot have a fulltext index.
  • Tables in the data dictionary cannot use the instant algorithm.
  • Temporary tables are not supported.
How to Ensure You are Using the Expected Algorithm?

One problem with schema changes is that there are different algorithms depending on the schema change. Currently there are three different algorithms:

  • INSTANT: the change completes very quickly (yes not quite instantly) as only the metadata in the data dictionary needs to be updated.
  • INPLACE: the changes are made within the existing table, i.e. a complete table copy is avoided.
  • COPY: the table is copied into a new copy with the new definition.

By default, MySQL chooses the algorithm doing the least work. That is, INSTANT if that is supported, otherwise INPLACE if that is supported, and finally COPY. Additionally, there is the concept of locking which can be set to either NONE, SHARED, or EXCLUSIVE.

So, how do you ensure you are not ending up with a copying ALTER TABLE taking exclusive locks when you thought the operation was going to be instant? The answer is to explicitly set the algorithm and lock type. That way, MySQL will throw an error if your schema change is not compatible with the requested algorithm. The same principle can be used to force a full rebuild of the table if, for example, you have a small table and don’t see a reason to worry about instantly added columns (more on that later).

I will give some examples of specifying the ALGORITHM and LOCK options to ALTER TABLE later. However, we first need an example table to play with.

Tip: Always specify the ALGORITHM and LOCK options explicitly to avoid unpleasant surprises.

Creating an Example Table

For the purpose of the upcoming examples of instant schema changes, it is necessary to have a table to play with. The table will be the testtbl table in the my_schema schema. The table can be created and populated with sample data by using MySQL Shell – this allows us to take advantage of the support for scripting. The table will have one million rows.

Note: To limit the amount of scrolling, the prompt in MySQL Shell has been changed to just show the mode (JS, PY, SQL).

JS> \connect root@localhost
Creating a session to 'root@localhost'
Please provide the password for 'root@localhost': ********
Save password for 'root@localhost'? [Y]es/[N]o/Ne[v]er (default No):
Fetching schema names for autocompletion... Press ^C to stop.
Your MySQL connection id is 17 (X protocol)
Server version: 8.0.12 MySQL Community Server - GPL
No default schema selected; type \use <schema> to set one.
JS> \sql
Switching to SQL mode... Commands end with ;
SQL> CREATE SCHEMA my_schema;
Query OK, 1 row affected (0.0509 sec)
SQL> \use my_schema
Default schema set to `my_schema`.
Fetching table and column names from `my_schema` for auto-completion... Press ^C to stop.
SQL> CREATE TABLE testtbl (
 ...     id int unsigned NOT NULL auto_increment,
 ...     val varchar(36) NOT NULL,
 ...     PRIMARY KEY (id)
 ... ) ENGINE=InnoDB;
Query OK, 0 rows affected (0.0497 sec)
SQL> \py
Switching to Python mode...
Py> \use my_schema
Default schema `my_schema` accessible through db.
Py> db.testtbl
<Table:testtbl>
Py> import uuid
Py> for i in range(1000):
 ...     session.start_transaction()
 ...     stmt = db.testtbl.insert("val")
 ...     for j in range(1000):
 ...         stmt = stmt.values(uuid.uuid1().hex)
 ...     stmt.execute()
 ...     session.commit()
 ...
Query OK, 0 rows affected (0.0029 sec)
Py> db.testtbl.select("COUNT(*)")
+----------+
| COUNT(*) |
+----------+
|  1000000 |
+----------+
1 row in set (0.2476 sec)

Note how this takes advantage of the feature in MySQL Shell where the db object in JavaScript and Python mode has the tables as properties. Thus it is possible to refer to the testtbl table in the my_schema schema as db.testtbl after using \use my_schema to read the database objects. The data is inserted in 1000 batches of 1000 rows using a double loop.

Now it is time to play with schema changes.

Schema Changes Examples

For comparison, let’s first add a column using the old algorithms. First, add the column val2 using the copying algorithm – note this is not supported online, so a shared lock is taken:

SQL> ALTER TABLE testtbl
 ...       ADD COLUMN val2 varchar(36) DEFAULT NULL,
 ...       ALGORITHM=COPY, LOCK=SHARED;
Query OK, 1000000 rows affected (5.3952 sec)

Note: the test is made on a laptop – the timings themselves are not relevant, but they can be used to compare the time taken for the three algorithms.

So that took around 5.4 seconds. That is not too bad, but we can do much better than that. Let’s try the in-place algorithm. Not only will that require less work (though still a fair bit), it is also possible to perform the change online.

SQL> ALTER TABLE testtbl
 ...       ADD COLUMN val3 varchar(36) DEFAULT NULL,
 ...       ALGORITHM=INPLACE, LOCK=NONE;
Query OK, 0 rows affected (1.7511 sec)

So, this is a factor of three faster despite the table having an extra column compared to the original table. And it is possible to execute DML statements while the change is made. That is a big win. But let’s move on to the finale: instant ALTER TABLE:

SQL> ALTER TABLE testtbl
 ...       ADD COLUMN val4 varchar(36) DEFAULT NULL,
 ...       ALGORITHM=INSTANT;
Query OK, 0 rows affected (0.0490 sec)

Wow – that’s less than 5/100 of a second – that’s instant enough for me. Note how the LOCK option is not specified here. The instant algorithm is always as lock-free as possible, so the LOCK option is not allowed.

There is a little more to it than this. As mentioned by Bin Su, there are some trade-offs, such as the row size not being checked upfront. So, in some cases it can be useful to know whether a table contains instantly added columns. This can be seen through the innodb_tables view in the Information Schema on the data dictionary:

Py> schema = session.get_schema("information_schema")
Query OK, 1000 items affected (0.0091 sec)
Py> inno_tables = schema.get_table("INNODB_TABLES")
Query OK, 0 rows affected (0.0002 sec)
Py> inno_tables.select("NAME", "INSTANT_COLS").where("INSTANT_COLS > 0")
+-------------------+--------------+
| NAME              | INSTANT_COLS |
+-------------------+--------------+
| my_schema/testtbl |            4 |
+-------------------+--------------+
1 row in set (0.0405 sec)

Now what is that? 4 columns? But we only added one of the columns using the instant algorithm. What the INSTANT_COLS column shows is how many columns existed before the first instant column was added. In the example the columns id, val, val2, and val3 existed, then val4 was added using the instant algorithm. For tables that have never had any columns added instantly, the value of INSTANT_COLS is 0.

Want to Read More?

This blog has just been an introduction to the new MySQL 8.0.12 feature of instant schema changes. I recommend that you read Bin Su’s blog as well as the documentation in the MySQL reference manual to fully understand the feature:

Conclusion

The new feature allowing instant schema changes is a great way to avoid time-consuming operations. Particularly in replication setups where the replicated statement will block while it applies, the difference between an online and an instant schema change is like night and day.

MySQL 8.0.12 can be downloaded from MySQL Community Downloads or you can install it through one of the “native installers” such as MySQL Installer for Microsoft Windows or one of our Linux repositories (Yum, APT, SUSE). If you are a customer and need the commercial version of MySQL Server, I will recommend you to download it from Patches & Updates in My Oracle Support (MOS); there is also a 30 days trial version available from Oracle Software Delivery Cloud.

Please do not hesitate to provide feedback. Bugs and feature requests can be logged in the MySQL Bugs database.

Categories: Web Technologies

js-libp2p 0.23 has been released

Echo JS - Sat, 07/28/2018 - 18:32
Categories: Web Technologies
