emGee Software Solutions Custom Database Applications


Web Technologies

Identifying High Load Spots in MySQL Using Slow Query Log and pt-query-digest

Planet MySQL - Mon, 10/15/2018 - 06:24

pt-query-digest is one of the most commonly used tools for query auditing in MySQL®. By default, pt-query-digest reports the top ten queries consuming the most time inside MySQL. A query that takes longer than the set threshold to complete is considered slow, but it's not always true that tuning such queries makes them faster. Sometimes, when resources on the server are busy, every other operation on the server is affected, including queries. In such cases, you will see the proportion of slow queries go up, and that can include queries that perform fine in general.

This article explains a small trick to identify such spots using pt-query-digest and the slow query log. pt-query-digest is a component of Percona Toolkit, open source software that is free to download and use.

Some sample data

Let's have a look at sample data in Percona Server 5.7. The slow query log is configured to capture queries that take longer than ten seconds, with no limit on the rate of logging (rate limiting is generally used to throttle the I/O incurred by writing slow queries to the log file).

mysql> show variables like 'log_slow_rate%';
+---------------------+---------+
| Variable_name       | Value   |
+---------------------+---------+
| log_slow_rate_limit | 1       |   --> Log all queries
| log_slow_rate_type  | session |
+---------------------+---------+
2 rows in set (0.00 sec)

mysql> show variables like 'long_query_time';
+-----------------+-----------+
| Variable_name   | Value     |
+-----------------+-----------+
| long_query_time | 10.000000 |   --> 10 seconds
+-----------------+-----------+
1 row in set (0.01 sec)
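
These settings are dynamic, so here is a hedged sketch of how they could be turned on at runtime (the variable names are the Percona Server ones shown above; the values are only an example, and production changes should also go into my.cnf):

SET GLOBAL slow_query_log = 1;        -- enable the slow query log
SET GLOBAL long_query_time = 10;      -- log queries that run longer than 10 seconds
SET GLOBAL log_slow_rate_limit = 1;   -- log every matching query (no sampling)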

When I run pt-query-digest, I see in the summary report that roughly 80% of the total response time comes from just three query patterns.

# Profile
# Rank Query ID                      Response time    Calls R/Call   V/M
# ==== ============================= ================ ===== ======== =====
#    1 0x7B92A64478A4499516F46891... 13446.3083 56.1%   102 131.8266  3.83 SELECT performance_schema.events_statements_history
#    2 0x752E6264A9E73B741D3DC04F...  4185.0857 17.5%    30 139.5029  0.00 SELECT table1
#    3 0xAFB5110D2C576F3700EE3F7B...  1688.7549  7.0%    13 129.9042  8.20 SELECT table2
#    4 0x6CE1C4E763245AF56911E983...  1401.7309  5.8%    12 116.8109 13.45 SELECT table4
#    5 0x85325FDF75CD6F1C91DFBB85...   989.5446  4.1%    15  65.9696 55.42 SELECT tbl1 tbl2 tbl3 tbl4
#    6 0xB30E9CB844F2F14648B182D0...   420.2127  1.8%     4 105.0532 12.91 SELECT tbl5
#    7 0x7F7C6EE1D23493B5D6234382...   382.1407  1.6%    12  31.8451 70.36 INSERT UPDATE tbl6
#    8 0xBC1EE70ABAE1D17CD8F177D7...   320.5010  1.3%     6  53.4168 67.01 REPLACE tbl7
#   10 0xA2A385D3A76D492144DD219B...   183.9891  0.8%    18  10.2216  0.00 UPDATE tbl8
# MISC 0xMISC                          948.6902  4.0%    14  67.7636  0.0  <10 ITEMS>

Query #1 is generated by the qan-agent from PMM and runs approximately once a minute; its results are handed over to PMM Server. Similarly, queries #2 and #3 are pretty simple: they scan just one row, return either zero or one rows, and use indexes. That makes me think this is not caused by something within MySQL itself. I wanted to know if I could find any common aspect of all these occurrences.

Let’s take a closer look at the queries recorded in slow query log.

# grep -B3 DIGEST mysql-slow_Oct2nd_4th.log
....
....
# User@Host: ztrend[ztrend] @ localhost []  Id: 6431601021
# Query_time: 139.279651  Lock_time: 64.502959  Rows_sent: 0  Rows_examined: 0
SET timestamp=1538524947;
SELECT DIGEST, CURRENT_SCHEMA, SQL_TEXT FROM performance_schema.events_statements_history;
# User@Host: ztrend[ztrend] @ localhost []  Id: 6431601029
# Query_time: 139.282594  Lock_time: 83.140413  Rows_sent: 0  Rows_examined: 0
SET timestamp=1538524947;
SELECT DIGEST, CURRENT_SCHEMA, SQL_TEXT FROM performance_schema.events_statements_history;
# User@Host: ztrend[ztrend] @ localhost []  Id: 6431601031
# Query_time: 139.314228  Lock_time: 96.679563  Rows_sent: 0  Rows_examined: 0
SET timestamp=1538524947;
SELECT DIGEST, CURRENT_SCHEMA, SQL_TEXT FROM performance_schema.events_statements_history;
....
....

Now you can see two things.

  • All of them have the same Unix timestamp.
  • All of them were spending more than 70% of their execution time waiting for some lock.
Analyzing the data from pt-query-digest

Now I want to check if I can group the count of queries based on their time of execution. If multiple queries are captured into the slow query log at a given time, the time header is printed only for the first query, not for all of them. Fortunately, I can rely on the Unix timestamp to compute the counts, since the timestamp gets captured for every query. Luckily, without a long struggle, a combination of the grep and awk utilities displayed what I wanted.

# grep -A1 Query_time mysql-slow_Oct2nd_4th.log | grep SET | awk -F "=" '{ print $2 }' | uniq -c
      2 1538450797;
      1 1538524822;
      3 1538524846;
      7 1538524857;
    167 1538524947;   ---> 72% of queries have happened at this timestamp.
      1 1538551813;
      3 1538551815;
      6 1538602215;
      1 1538617599;
     33 1538631015;
      1 1538631016;
      1 1538631017;

You can use the command below to convert a given timestamp to the regular date and time format. So, Oct 3, 05:32 is when something was wrong on the server:

# date -d @1538524947
Wed Oct 3 05:32:27 IST 2018
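
If you prefer to stay inside MySQL, FROM_UNIXTIME() gives the same conversion; in an IST session, matching the output above, it returns:

mysql> SELECT FROM_UNIXTIME(1538524947);
+---------------------------+
| FROM_UNIXTIME(1538524947) |
+---------------------------+
| 2018-10-03 05:32:27       |
+---------------------------+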

Query tuning can be carried out alongside this, but identifying such spots helps you avoid spending time on query tuning where badly written queries are not the problem. Having said that, from this point further troubleshooting may take different sub-paths, such as checking log files from that particular time, looking at CPU reports, reviewing past pt-stalk reports if it is set up to run in the background, checking dmesg, and so on. This approach is useful for identifying the time (or time range) when MySQL was under more stress using just the slow query log, when no robust monitoring tool, such as Percona Monitoring and Management (PMM), is deployed.

Using PMM to monitor queries

If you have PMM, you can review Query Analytics to see the topmost slow queries, along with details like execution counts, load, etc. Below is a sample screenshot for your reference:

NOTE: If you use Percona Server for MySQL, the slow query log can report query time in microseconds. It also supports extended logging of other statistics about query execution, which provide deeper insight into query processing. You can see more information about these options here.
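
For example, on Percona Server the amount of extra detail written per query is controlled by the log_slow_verbosity variable; a minimal sketch (check the documentation of your version for the supported values):

SET GLOBAL log_slow_verbosity = 'full';   -- adds microsecond timing, query plan and InnoDB statistics to the slow log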

Categories: Web Technologies

Invitation to meet Galera Cluster developers at Oracle OpenWorld San Francisco

Planet MySQL - Mon, 10/15/2018 - 06:10

We will have a kiosk at Moscone Center, south exhibition hall, in Oracle's Data Management area number 123, close to the high availability area and exits 16 and 18. Our kiosk number is DBA-P1.

Our CEO and Co-Founder Seppo Jaakola will host a presentation highlighting the main features of our upcoming Galera Cluster 4.0. The presentation will take place Monday, Oct 22, 1:00 p.m. – 1:20 p.m at the Exchange @ Moscone South – Theater 2. Seats are limited!

Come and meet us! Let's discuss your MySQL high availability plan or your Galera Cluster deployment. If you want to set up a meeting with us, please email info@galeracluster.com with a meeting request.

Categories: Web Technologies

POSTing an Indeterminate Checkbox Value

CSS-Tricks - Fri, 10/12/2018 - 12:02

There is such a thing as an indeterminate checkbox value. It's a checkbox (<input type="checkbox">) that isn't checked. Nor is it unchecked. It's indeterminate.

We can even select a checkbox in that state and style it with CSS!

Some curious points though:

  1. It's only possible to set via JavaScript. There is no HTML attribute or value for it.
  2. It doesn't POST (or GET or whatever else) or have a value. It's like being unchecked.

So, say you had a form like this:

<form action="" method="POST" id="form"> <input name="name" type="text" value="Chris" /> <input name="vegetarian" type="checkbox" class="veg"> <input type="submit" value="Submit"> </form>

And, for whatever reason, you make that checkbox indeterminate:

let veg = document.querySelector(".veg");
veg.indeterminate = true;

If you serialize that form and take a look at what will POST, you'll get "name=Chris". No value for the checkbox. Conversely, had you checked the checkbox in the HTML and didn't touch it in JavaScript, you'd get "name=Chris&vegetarian=on".

Apparently, this is by design. Checkboxes are meant to be boolean, and the indeterminate value is just an aesthetic thing meant to indicate that visual "child" checkboxes are in a mixed state (some checked, some not). That's fine. Can't change it now without serious breakage of websites.

But say you really need to know on the server if a checkbox is in that indeterminate state. The only way I can think of is to have a buddy hidden input that you keep in sync.

<input name="vegetarian" type="checkbox" class="veg"> <input name="vegetarian-value" type="hidden" class="veg-value"> let veg = document.querySelector(".veg"); let veg_value = document.querySelector(".veg-value"); veg.indeterminate = true; veg_value.value = "indeterminate";

I've set the indeterminate value of one input and set another, hidden, input's value to "indeterminate", which I can POST. Serialized, it looks like "name=Chris&vegetarian-value=indeterminate". Good enough.

See the Pen Can you POST an intermediate value? by Chris Coyier (@chriscoyier) on CodePen.

The post POSTing an Indeterminate Checkbox Value appeared first on CSS-Tricks.

Categories: Web Technologies

The Way We Talk About CSS

CSS-Tricks - Fri, 10/12/2018 - 12:01

There’s a ton of very quotable stuff from Rachel Andrew’s latest post all about CSS and how we talk about it in the community:

CSS has been seen as this fragile language that we stumble around, trying things out and seeing what works. In particular for layout, rather than using the system as specified, we have so often exploited things about the language in order to achieve far more complex layouts than it was ever designed for. We had to, or resign ourselves to very simple looking web pages.

Rachel goes on to argue that we probably shouldn’t disparage CSS for being so weird when there are very good reasons for why and how it works — not to mention that it’s getting exponentially more predictable and powerful as time goes by:

There is frequently talk about how developers whose main area of expertise is CSS feel that their skills are underrated. I do not think we help our cause by talking about CSS as this whacky, quirky language. CSS is unlike anything else, because it exists to serve an environment that is unlike anything else. However we can start to understand it as a designed language, with much consistency. It has codified rules and we can develop ways to explain and teach it, just as we can teach our teams to use Bootstrap, or the latest JavaScript framework.

I tend to feel the same way and I’ve been spending a lot of time thinking about how best to reply to folks that argue that “CSS is dumb and weird.” It can sometimes be a demoralizing challenge, attempting to explain why your career and area of expertise is a useful one.

I guess the best way to start doing that is to stand up and say, “No, CSS is not dumb and weird. CSS is awesome!”

Direct Link to ArticlePermalink

The post The Way We Talk About CSS appeared first on CSS-Tricks.

Categories: Web Technologies

MySQL Adventures: GTID Replication In AWS RDS

Planet MySQL - Fri, 10/12/2018 - 11:04

You have probably heard that AWS announced today that RDS has started to support GTID transactions. I'm a great fan of RDS, though not so much of GTID, since RDS already comes with good settings and configurations to perform well. Many of you will have read the AWS What's New page regarding GTID, but here we are going to talk about the actual benefits and drawbacks.

RDS supports GTID on MySQL 5.7.23 or later, and AWS released this version on Oct 10 (two days ago). So, for now, this is the only version which supports GTID.

NOTE: GTID is supported only on RDS; it's not available for Aurora (it may be supported in the future).

Before configuring GTID, let's have a look at what GTID is:

  • GTID stands for Global Transaction Identifier.
  • It generates a unique ID for each committed transaction.
  • A GTID is written as server_UUID:transaction_id.
  • GTID replication is a better solution in a multi-master environment.
  • To learn more about GTID, see the MySQL documentation.
GTID in RDS:
  1. You can use GTID only on RDS, not in Aurora.

2. There are 4 types of GTID modes in RDS.

From AWS Docs,

  • OFF: No GTID. Only anonymous transactions are replicated.
  • OFF_PERMISSIVE: New transactions are anonymous transactions, but all transactions can be replicated.
  • ON_PERMISSIVE: New transactions are GTID transactions, but all transactions can be replicated.
  • ON: New transactions are GTID transactions, and a transaction must be a GTID transaction to be replicated.

3. The default GTID mode in RDS is OFF_PERMISSIVE.

4. RDS supports 3 consistency levels for GTID.

  • OFF allows transactions to violate GTID consistency.
  • ON prevents transactions from violating GTID consistency.
  • WARN allows transactions to violate GTID consistency but generates a warning when a violation occurs.
Replication from RDS to EC2 with GTID:

I have launched an RDS instance and enabled the below parameters in the parameter group.

gtid-mode = ON
enforce_gtid_consistency = ON

# From RDS Console
Backup Retention Period = 2 Days (you can set this as you need)

Create a database with some data:

CREATE DATABASE searcedb;

USE searcedb;

CREATE TABLE dba_profile
(
id INT auto_increment PRIMARY KEY,
name VARCHAR(10),
fav_db VARCHAR(10)
);

INSERT INTO dba_profile (name, fav_db) VALUES ('sqladmin', 'MSSQL');
INSERT INTO dba_profile (name, fav_db) VALUES ('mac', 'MySQL');

Create the user for replication:

CREATE USER 'rep_user'@'%' IDENTIFIED BY 'rep_user';

GRANT REPLICATION SLAVE ON *.* TO 'rep_user'@'%' IDENTIFIED BY 'rep_user';
FLUSH PRIVILEGES;

Take DUMP on RDS:

mysqldump \
-h sqladmin-mysql-rds.xxxxx.rds.amazonaws.com \
-u sqladmin -p \
--routines \
--events \
--triggers \
--databases searcedb > dbdump.sql

The above command will dump searcedb along with its stored procedures, triggers, and events. If you have multiple databases, then use --databases db1 db2 db3. Generally, for setting up replication we use --master-data=2 to get the binlog file and position, but this is GTID replication, so the dump file instead contains the last executed GTID information.

$ grep PURGED dbdump.sql SET @@GLOBAL.GTID_PURGED='eac87cf0-cdfe-11e8-9275-0aecd3b2835c:1-13';

You may get this warning message during the dump. It is just saying that the dump file contains the SET @@GLOBAL.GTID_PURGED statement.

Warning: A partial dump from a server that has GTIDs will by default include the GTIDs of all transactions, even those that changed suppressed parts of the database. If you don't want to restore GTIDs, pass --set-gtid-purged=OFF. To make a complete dump, pass --all-databases --triggers --routines --events.

We can't restore the dump on a MySQL server where GTID is not enabled.
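
A quick sanity check on the target server before loading the dump (these are standard MySQL variables; both should report ON):

mysql> SHOW GLOBAL VARIABLES LIKE 'gtid_mode';
mysql> SHOW GLOBAL VARIABLES LIKE 'enforce_gtid_consistency';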

Restore the dump on EC2:

Enable GTID before restoring the DB. RDS will replicate all databases, and the mysql schema on RDS contains some RDS-specific tables that are not present on EC2 MySQL. For example, a heartbeat table keeps inserting data about RDS health, and those statements will also be replicated to the slave. replicate-ignore-db=mysql will not work here, because the statements mention the database name explicitly, so we need to ignore these tables on the slave with replicate-ignore-table.

Enable GTID on EC2:

vi /etc/mysql/mysql.conf.d/mysqld.cnf

server-id = 1234
gtid_mode = ON
enforce_gtid_consistency = ON
log-bin
log-slave-updates

# Ignore tables
replicate_ignore_table = mysql.rds_configuration,mysql.rds_global_status_history_old,mysql.rds_heartbeat2,mysql.rds_history,mysql.rds_replication_status,mysql.rds_sysinfo
#Restart MySQL
service mysql restart

Restore the DB:

mysql -u root -p < dbdump.sql

Establish the replication:

CHANGE MASTER TO
  MASTER_HOST="sqladmin-mysql-rds.xxxxx.rds.amazonaws.com",
  MASTER_USER="rep_user",
  MASTER_PASSWORD="rep_user",
  MASTER_PORT=3306,
  MASTER_AUTO_POSITION = 1;

START SLAVE;

Check the Replication:

show slave status\G
Slave_IO_State: Waiting for master to send event
Master_Host: sqladmin-mysql-rds.xxxx.rds.amazonaws.com
Master_Log_File: mysql-bin-changelog.000030
Read_Master_Log_Pos: 551
Relay_Log_File: ip-172-31-29-127-relay-bin.000002
Relay_Log_Pos: 444
Relay_Master_Log_File: mysql-bin-changelog.000030
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
...........
Executed_Gtid_Set: eac87cf0-cdfe-11e8-9275-0aecd3b2835c:1-22
Auto_Position: 1

Let's insert some rows:

On Master:

INSERT INTO dba_profile (name, fav_db) VALUES ('surface', 'PostgreSQL');

On Slave:

mysql -u root -p
Enter Password:

mysql> select * from searcedb.dba_profile;
+----+----------+------------+
| id | name     | fav_db     |
+----+----------+------------+
|  1 | sqladmin | MSSQL      |
|  2 | mac      | MySQL      |
|  3 | surface  | PostgreSQL |
+----+----------+------------+
3 rows in set (0.00 sec)

Enable GTID Replication on existing RDS master and slave/RDS Read Replica:

On Master RDS:
  1. Make sure we are running MySQL 5.7.23 or later.

2. Use the custom parameter group.

3. In the Parameter group,

gtid-mode = ON
enforce_gtid_consistency = ON

# From RDS Console
Backup Retention Period = 2 Days (you can set this as you need)

4. Reboot the RDS instance to apply these changes.

On Slave:
  1. Use a custom parameter group (it's good practice to have separate parameter groups for the master and the slave).
  2. If you are using RDS Read Replica, then in the Parameter group,
gtid-mode = ON
enforce_gtid_consistency = ON

3. If you are using EC2 as a Replica then in my.cnf

gtid_mode = ON
enforce_gtid_consistency = ON

4. Reboot the Read Replica.

5. Still, your read replica will use Binlog Position based replication. Run the below command to Start Replication with GTID.

CALL mysql.rds_set_master_auto_position(1);

How to Disable GTID in RDS:

Caution: You need to follow these steps exactly in order. Otherwise it may break your replication and you may lose some transactions.
  1. Disable GTID auto-positioning for replication.
CALL mysql.rds_set_master_auto_position(0);

2. In the parameter group, set gtid-mode = ON_PERMISSIVE

3. Reboot the Replica.

4. Again in the parameter group, set gtid-mode = OFF_PERMISSIVE

5. Make sure all GTID transactions are applied on the Replica. To check this follow the below steps.

  • On Master, get the current binlog file name and its position.
show master status\G
*************************** 1. row ***************************
             File: mysql-bin-changelog.000039
         Position: 827
     Binlog_Do_DB:
 Binlog_Ignore_DB:
Executed_Gtid_Set: 2f5e8f57-ce2a-11e8-8874-0a1b0ee9b48e:1-27
  • Make a note of the output.
  • On each Read Replica, Run the below command. (replace the binlog filename and position with your output)
SELECT MASTER_POS_WAIT('mysql-bin-changelog.000039', 827);
  • If the result is 0, then we are fine. Otherwise, we need to wait for some time and run the same command again.
+-----------------------------------------------------+
| MASTER_POS_WAIT('mysql-bin-changelog.000039', 827)  |
+-----------------------------------------------------+
|                                                   0 |
+-----------------------------------------------------+
  • Once we confirmed that all GTID replications are applied then, in the Read Replica Parameter group we can disable the GTID permanently.
gtid_mode = OFF
enforce_gtid_consistency = OFF
  • Then do the same on the Master RDS parameter group.
Disable GTID on EC2 Slave:
  1. Switch replication from auto position to binlog position.
STOP SLAVE;
change master to master_auto_position = 0;

2. Verify all the GTID transactions are applied. Run the below command. (replace the binlog filename and position with your output)

SELECT MASTER_POS_WAIT('mysql-bin-changelog.000040', 194);
  • If the result is 0, then we are fine. Otherwise, we need to wait for some time and run the same command again.
+-----------------------------------------------------+
| MASTER_POS_WAIT('mysql-bin-changelog.000040', 194)  |
+-----------------------------------------------------+
|                                                   0 |
+-----------------------------------------------------+
  • Now, remove the GTID parameters from my.cnf:
vi /etc/mysql/mysql.conf.d/mysqld.cnf

# Remove the below lines:
gtid_mode = ON
enforce_gtid_consistency = ON

Best Practice (from my personal thoughts):
  • On Master: use ON_PERMISSIVE GTID mode. Since this will replicate both GTID and anonymous transactions.
  • On Slave: Use GTID = ON, Because we need strong consistency.
  • Finally, use GTID only if it is necessary. When I tried to change the GTID mode frequently on the master node, it broke replication.
  • Don’t try to replicate MariaDB to MySQL. MariaDB has different GTID implementation.
  • A few months back I read a blog post written by Jean-François Gagné, who had done anonymous transaction replication using a patched version of MySQL.

MySQL Adventures: GTID Replication In AWS RDS was originally published in Searce Engineering on Medium, where people are continuing the conversation by highlighting and responding to this story.

Categories: Web Technologies

MySQL 2018 Community Reception

Planet MySQL - Fri, 10/12/2018 - 10:08
The 2018 MySQL Community Reception is October 23rd in a new venue at Samovar Tea, 730 Howard Street in San Francisco, at 7:00 PM, right in the heart of the Moscone Center activities for Oracle OpenWorld and Oracle Code One.

The MySQL Community Reception is not part of Oracle OpenWorld or Oracle Code One (you do not need a badge for either event) but you do need to RSVP.  Food, drinks, and a really amazing group of attendees!   And there will be more than tea to drink.

Plus we have a few new surprises this year! 
Categories: Web Technologies

Styling the Gutenberg Columns Block

CSS-Tricks - Fri, 10/12/2018 - 07:25

WordPress 5.0 is quickly approaching, and the new Gutenberg editor is coming with it. There’s been a lot of discussion in the WordPress community over what exactly that means for users, designers, and developers. And while Gutenberg is sure to improve the writing experience, it can cause a bit of a headache for developers who now need to ensure their plugins and themes are updated and compatible.

One of the clearest ways you can make sure your theme is compatible with WordPress 5.0 and Gutenberg is to add some basic styles for the new blocks Gutenberg introduces. Aside from the basic HTML blocks (like paragraphs, headings, lists, and images) that likely already have styles, you’ll now have some complex blocks that you probably haven’t accounted for, like pull quotes, cover images, buttons, and columns. In this article, we’re going to take a look at some styling conventions for Gutenberg blocks, and then add our own styles for Gutenberg’s Columns block.

Block naming conventions

First things first: how are Gutenberg blocks named? If you’re familiar with the code inspector, you can open that up on a page using the block you want to style, and check it for yourself:

The Gutenberg Pull Quote block has a class of wp-block-pullquote.

Now, it could get cumbersome to do that for each and every block you want to style, but luckily, there is a method to the madness. Gutenberg blocks use a form of the Block, Element, Modifier (BEM) naming convention. The main difference is that the top level for each of the blocks is wp. So, for our pull quote, the name is wp-block-pullquote. Columns would be wp-block-columns, and so on. You can read more about it in the WordPress Development Handbook.

Class name caveat

There is a small caveat here in that the block name may not be the only class name you’re dealing with. In the example above, we see that the class alignright is also applied. And Gutenberg comes with two new classes: alignfull and alignwide. You’ll see in our columns that there’s also a class to tell us how many there are. But we’ll get to that soon.

Applying your own class names

Gutenberg blocks also give us a way to apply our own classes:

The class added to the options panel in the Gutenberg editor (left). It gets applied to the element, as seen in DevTools (right).

This is great if you want to have a common set of classes for blocks across different themes, want to apply previously existing classes to blocks where it makes sense, or want to have variations on blocks.

Much like the current (or “Classic") WordPress post editor, Gutenberg makes as few choices as possible for the front end, leaving most of the heavy lifting to us. This includes the columns, which basically only include enough styles to make them form columns. So we need to add the padding, margins, and responsive styles.

Styling columns

Time to get to the crux of the matter: let’s style some columns! The first thing we’ll need to do is find a theme that we can use. There aren’t too many that have extensive Gutenberg support yet, but that’s actually good in our case. Instead, we’re going to use a theme that’s flexible enough to give us a good starting point: Astra.

Astra is available for free in the WordPress Theme Directory. (Source)

Astra is a free, fast, and flexible theme that has been designed to work with page builders. That means that it can give us a really good starting template for our columns. Speaking of which, we need some content. Here’s what we’ll be working with:

Our columns inside the Gutenberg editor.

We have a three-column layout with images, headings, and text. The image above is what the columns look like inside the Gutenberg editor. Here’s what they look like on the front end:

Our columns on the front end.

You can see there are a few differences between what we see in the editor and what we see on the front end. Most notably, there is no spacing between the columns on the front end. The left edge of the heading on the front end is also lined up with the left edge of the first column; in the editor it is not, because we're using the alignfull class.

Note: For the sake of this tutorial, we're going to treat .alignfull, .alignwide, and no alignment the same, since the Astra theme does not support the new classes yet.

How Gutenberg columns work

Now that we have a theme, we need to answer the question: "how do columns in Gutenberg work?"

Until recently, they were actually using CSS grid, but then switched to flexbox. (The reasoning was that flexbox offers wider browser support.) That said, the styles are super light:

.wp-block-columns {
  display: flex;
}

.wp-block-column {
  flex: 1;
}

We’ve got a pen with the final styles if you want to see the result we are aiming for. You can see in it that Gutenberg is only defining the flexbox and then stating each column should be the same length. But you’ll also notice a couple of other things:

  • The parent container is wp-block-columns.
  • There’s also the class has-3-columns, noting the number of columns for us. Gutenberg supports anywhere from two to six columns.
  • The individual columns have the class wp-block-column.

This information is enough for us to get started.

Styling the parents

Since we have flexbox applied by default, the best action to take is to make sure these columns look good on the front end in a larger screen context like we saw earlier.

First and foremost, let’s add some margins to these so they aren’t running into each other, or other elements:

/* Add vertical breathing room to the full row of columns. */
.wp-block-columns {
  margin: 20px 0;
}

/* Add horizontal breathing room between individual columns. */
.wp-block-column {
  margin: 0 20px;
}

Since it’s reasonable to assume the columns won’t be the only blocks on the page, we added top and bottom margins to the whole parent container so there’s some separation between the columns and other blocks on the page. Then, so the columns don’t run up against each other, we apply left and right margins to each individual column.

Columns with some margins added.

These are starting to look better already! If you want them to look more uniform, you can always throw text-align: justify; on the columns, too.

Making the columns responsive

The layout starts to fall apart when we move to smaller screen widths. Astra does a nice job with reducing font sizes as we shrink down, but when we start to get around 764px, things start to get a little cramped:

Our columns at 764px wide.

At this point, since we have three columns, we can explicitly style the columns using the .has-3-columns class. The simplest solution would be to remove flexbox altogether:

@media (max-width: 764px) {
  .wp-block-columns.has-3-columns {
    display: block;
  }
}

This would automatically convert our columns into blocks. All we’d need to do now is adjust the padding and we’re good to go — it’s not the prettiest solution, but it’s readable. I’d like to get a little more creative, though. Instead, we’ll make the first column the widest, and then the other two will remain columns under the first one.

This will only work depending on the content. I think here it’s forgivable to give Yoda priority as the most notable Jedi Master.

Let’s see what that looks like:

@media screen and (max-width: 764px) {
  .wp-block-columns.has-3-columns {
    flex-flow: row wrap;
  }

  .has-3-columns .wp-block-column:first-child {
    flex-basis: 100%;
  }
}

In the first few lines after the media query, we’re targeting .has-3-columns to change the flex-flow to row wrap. This will tell the browser to allow the columns to fill the container but wrap when needed.

Then, we target the first column with .wp-block-column:first-child and we tell the browser to make the flex-basis 100%. This says, “make the first column fill all available space." And since we’re wrapping columns, the other two will automatically move to the next line. Our result is this:

Our newly responsive columns.

The nice part about this layout is that with row wrap, the columns all become full-width on the smallest screens. Still, as they start to get hard to read before that, we should find a good breakpoint and set the styles ourselves. Around 478px should do nicely:

@media (max-width: 478px) {
  .wp-block-columns.has-3-columns {
    display: block;
  }

  .wp-block-column {
    margin: 20px 0;
  }
}

This removes the flex layout, and reverses the margins on the individual columns, maintaining the spacing between them as they move to a stacked layout.

Our small screen layout.

Again, you can see all these concepts come together in the following demo:

See the Pen Gutenberg Columns by Joe Casabona (@jcasabona) on CodePen.

If you want to see a different live example, you can find one here.

Wrapping up

So, there you have it! In this tutorial, we examined how Gutenberg's Columns block works and its class naming conventions, and then applied basic styles to make the columns look good at every screen size on the front end. From here, you can take this code and run with it — we've barely scratched the surface, and you can do tons more with the CSS alone. For example, I recently made this pricing table using only Gutenberg Columns:

(Live Demo)

And, of course, there are the other blocks. Gutenberg puts a lot of power into the hands of content editors, but even more into the hands of theme developers. We no longer need to build the infrastructure for doing more complex layouts in the WordPress editor, and we no longer need to instruct users to insert shortcodes or HTML to get what they need on a page. We can add a little CSS to our themes and let content creators do the rest.

If you want to get more in-depth into preparing your theme for Gutenberg, you can check out my course, Theming with Gutenberg. We go over how to style lots of different blocks, set custom color palettes, block templates, and more.

The post Styling the Gutenberg Columns Block appeared first on CSS-Tricks.

Categories: Web Technologies

Generating Identifiers – from AUTO_INCREMENT to Sequence

Planet MySQL - Fri, 10/12/2018 - 04:00

There are a number of options for generating ID values for your tables. In this post, Alexey Mikotkin of Devart explores your choices for generating identifiers with a look at auto_increment, triggers, UUID and sequences.

AUTO_INCREMENT

Frequently, we need to fill tables with unique identifiers. Naturally, the first example of such identifiers is PRIMARY KEY data. These are usually integer values hidden from the user since their specific values are unimportant.

When adding a row to a table, you need to take this new key value from somewhere. You can set up your own process for generating a new identifier, but MySQL comes to the aid of the user with the AUTO_INCREMENT column setting. It is set as a column attribute and allows you to generate unique integer identifiers. As an example, consider the users table, whose primary key includes an id column of type INT:

CREATE TABLE users (
  id int NOT NULL AUTO_INCREMENT,
  first_name varchar(100) NOT NULL,
  last_name varchar(100) NOT NULL,
  email varchar(254) NOT NULL,
  PRIMARY KEY (id)
);

Inserting a NULL value into the id field leads to the generation of a unique value; inserting a 0 value is also possible unless the NO_AUTO_VALUE_ON_ZERO server SQL mode is enabled:

INSERT INTO users(id, first_name, last_name, email) VALUES (NULL, 'Simon', 'Wood', 'simon@testhost.com');
INSERT INTO users(id, first_name, last_name, email) VALUES (0, 'Peter', 'Hopper', 'peter@testhost.com');

It is possible to omit the id column. The same result is obtained with:

INSERT INTO users(first_name, last_name, email) VALUES ('Simon', 'Wood', 'simon@testhost.com');
INSERT INTO users(first_name, last_name, email) VALUES ('Peter', 'Hopper', 'peter@testhost.com');

The selection will provide the following result:

Select from users table shown in dbForge Studio

You can get the automatically generated value using the LAST_INSERT_ID() session function. This value can be used to insert a new row into a related table.
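
As a minimal sketch, assume a hypothetical user_logins child table (not part of the schema above) that references users.id:

CREATE TABLE user_logins (
  id int NOT NULL AUTO_INCREMENT,
  user_id int NOT NULL,
  logged_in_at datetime NOT NULL,
  PRIMARY KEY (id),
  FOREIGN KEY (user_id) REFERENCES users (id)
);

INSERT INTO users (first_name, last_name, email)
VALUES ('Anna', 'Stone', 'anna@testhost.com');

-- LAST_INSERT_ID() returns the id generated by the INSERT above, per session
INSERT INTO user_logins (user_id, logged_in_at)
VALUES (LAST_INSERT_ID(), NOW());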

There are aspects to consider when using AUTO_INCREMENT, here are some:

  • In the case of a rollback of a data insertion transaction, no data will be added to the table. However, the AUTO_INCREMENT counter will still increase, so the next time you insert rows, holes will appear in the sequence of identifiers.
  • In the case of multiple data inserts with a single INSERT command, the LAST_INSERT_ID() function will return an automatically generated value for the first row.
  • The problem with the AUTO_INCREMENT counter value is described in Bug #199 – Innodb autoincrement stats los on restart.

For example, let’s consider several cases of using AUTO_INCREMENT for table1:

CREATE TABLE table1 (
  id int NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (id)
) ENGINE = INNODB; -- transactional table

-- Insert operations.
INSERT INTO table1 VALUES (NULL); -- 1
INSERT INTO table1 VALUES (NULL); -- 2
INSERT INTO table1 VALUES (NULL); -- 3
SELECT LAST_INSERT_ID() INTO @p1; -- 3

-- Insert operations within committed transaction.
START TRANSACTION;
INSERT INTO table1 VALUES (NULL); -- 4
INSERT INTO table1 VALUES (NULL); -- 5
INSERT INTO table1 VALUES (NULL); -- 6
COMMIT;
SELECT LAST_INSERT_ID() INTO @p3; -- 6

-- Insert operations within rolled back transaction.
START TRANSACTION;
INSERT INTO table1 VALUES (NULL); -- 7 won't be inserted (hole)
INSERT INTO table1 VALUES (NULL); -- 8 won't be inserted (hole)
INSERT INTO table1 VALUES (NULL); -- 9 won't be inserted (hole)
ROLLBACK;
SELECT LAST_INSERT_ID() INTO @p2; -- 9

-- Insert multiple rows operation.
INSERT INTO table1 VALUES (NULL), (NULL), (NULL); -- 10, 11, 12
SELECT LAST_INSERT_ID() INTO @p4; -- 10

-- Let's check which LAST_INSERT_ID() values were seen at different stages of the script execution:
SELECT @p1, @p2, @p3, @p4;
+------+------+------+------+
| @p1  | @p2  | @p3  | @p4  |
+------+------+------+------+
|    3 |    9 |    6 |   10 |
+------+------+------+------+

-- The data selection from the table shows that there are holes in the values of the identifiers:
SELECT * FROM table1;
+----+
| id |
+----+
|  1 |
|  2 |
|  3 |
|  4 |
|  5 |
|  6 |
| 10 |
| 11 |
| 12 |
+----+

Note: The next AUTO_INCREMENT value for the table can be parsed from the SHOW CREATE TABLE result or read from the AUTO_INCREMENT field of the INFORMATION_SCHEMA TABLES table.
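
For example, for the users table above (the exact number depends on what has already been inserted):

SELECT AUTO_INCREMENT
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = DATABASE()
  AND TABLE_NAME = 'users';
-- Note: on MySQL 8.0 this value may come from cached statistics;
-- SHOW CREATE TABLE users always reflects the current counter.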

The rarer case is when the primary key is composite, consisting of two columns. The MyISAM engine has an interesting solution that provides the possibility of generating values for such keys. Let's consider the example:

CREATE TABLE roomdetails (
  room char(30) NOT NULL,
  id int NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (room, id)
) ENGINE = MYISAM;

INSERT INTO roomdetails VALUES ('ManClothing', NULL);
INSERT INTO roomdetails VALUES ('WomanClothing', NULL);
INSERT INTO roomdetails VALUES ('WomanClothing', NULL);
INSERT INTO roomdetails VALUES ('WomanClothing', NULL);
INSERT INTO roomdetails VALUES ('Fitting', NULL);
INSERT INTO roomdetails VALUES ('ManClothing', NULL);

It is quite a convenient solution:

Special values auto generation

The possibilities of the AUTO_INCREMENT attribute are limited because it can be used only for generating simple integer values. But what about complex identifier values, for example ones that depend on the date/time, or values like [A0001, A0002, B0150…]? To be sure, such values should not be used in primary keys, but they might be used for some auxiliary identifiers.

The generation of such unique values can be automated, but it will be necessary to write code for such purposes. We can use the BEFORE INSERT trigger to perform the actions we need.

Let's consider a simple example. We have the sensors table for sensor registration. Each sensor in the table has its own name, location, and type: 1 – analog, 2 – discrete, 3 – valve. Moreover, each sensor should be marked with a unique label of the form [symbolic representation of the sensor type + a unique 4-digit number], where the symbolic representation corresponds to one of the values [AN, DS, VL].

In our case, it is necessary to form values like these [DS0001, DS0002…] and insert them into the label column.

When the trigger is executed, it is necessary to understand if any sensors of this type exist in the table. It is enough to assign number “1” to the first sensor of a certain type when it is added to the table.

In case such sensors already exist, it is necessary to find the maximum value of the identifier in this group and form a new one by incrementing the value by 1. Naturally, it is necessary to take into account that the label should start with the desired symbol and the number should be 4-digit.

So, here is the table and the trigger creation script:

CREATE TABLE sensors (
  id int NOT NULL AUTO_INCREMENT,
  type int NOT NULL,
  name varchar(255) DEFAULT NULL,
  `position` int DEFAULT NULL,
  label char(6) NOT NULL,
  PRIMARY KEY (id)
);

DELIMITER $$

CREATE TRIGGER trigger_sensors
BEFORE INSERT ON sensors
FOR EACH ROW
BEGIN
  IF (NEW.label IS NULL) THEN
    -- Find max existing label for the specified sensor type
    SELECT MAX(label) INTO @max_label FROM sensors WHERE type = NEW.type;
    IF (@max_label IS NULL) THEN
      SET @label = CASE NEW.type
        WHEN 1 THEN 'AN'
        WHEN 2 THEN 'DS'
        WHEN 3 THEN 'VL'
        ELSE 'UNKNOWN'
      END;
      -- Set first sensor label
      SET NEW.label = CONCAT(@label, '0001');
    ELSE
      -- Set next sensor label
      SET NEW.label = CONCAT(SUBSTR(@max_label, 1, 2), LPAD(SUBSTR(@max_label, 3) + 1, 4, '0'));
    END IF;
  END IF;
END$$

DELIMITER ;

The code for generating a new identifier can, of course, be more complex. In this case, it is desirable to implement some of the code as a stored procedure/function. Let’s try to add several sensors to the table and look at the result of the labels generation:

INSERT INTO sensors (id, type, name, `position`, label) VALUES (NULL, 1, 'temperature 1', 10, 'AN0025'); -- Set exact label value 'AN0025'
INSERT INTO sensors (id, type, name, `position`, label) VALUES (NULL, 1, 'temperature 2', 11, NULL);
INSERT INTO sensors (id, type, name, `position`, label) VALUES (NULL, 1, 'pressure 1', 15, NULL);
INSERT INTO sensors (id, type, name, `position`, label) VALUES (NULL, 2, 'door 1', 10, NULL);
INSERT INTO sensors (id, type, name, `position`, label) VALUES (NULL, 2, 'door 2', 11, NULL);
INSERT INTO sensors (id, type, name, `position`, label) VALUES (NULL, 3, 'valve 1', 20, NULL);
INSERT INTO sensors (id, type, name, `position`, label) VALUES (NULL, 3, 'valve 2', 21, NULL);

Using UUID

Another kind of identifier is worth mentioning – the Universally Unique Identifier (UUID), also known as GUID. This is a 128-bit number suitable for use in primary keys.

A UUID value can be represented as a string – CHAR(36)/VARCHAR(36) – or as a binary value – BINARY(16).
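
As a minimal sketch of the two storage options (the table names are illustrative; UUID_TO_BIN() requires MySQL 8.0):

CREATE TABLE uuid_as_text   (id CHAR(36)   PRIMARY KEY);
CREATE TABLE uuid_as_binary (id BINARY(16) PRIMARY KEY);

INSERT INTO uuid_as_text   VALUES (UUID());
INSERT INTO uuid_as_binary VALUES (UUID_TO_BIN(UUID()));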

Benefits:

  • Ability to generate values from the outside, for example from an application.
  • UUID values are unique across tables and databases since the standard assumes uniqueness in space and time.
  • There is a specification – A Universally Unique IDentifier (UUID) URN Namespace.

Disadvantages:

  • Possible performance problems.
  • Data increase.
  • More complex data analysis (debugging).

To generate this value, the MySQL function UUID() is used. New functions have been added to the Oracle MySQL 8.0 server to work with UUID values: UUID_TO_BIN, BIN_TO_UUID, and IS_UUID. Learn more about them at the Oracle MySQL website – UUID().

The code shows the use of UUID values:

CREATE TABLE table_uuid (id binary(16) PRIMARY KEY);

INSERT INTO table_uuid VALUES(UUID_TO_BIN(UUID()));
INSERT INTO table_uuid VALUES(UUID_TO_BIN(UUID()));
INSERT INTO table_uuid VALUES(UUID_TO_BIN(UUID()));

SELECT BIN_TO_UUID(id) FROM table_uuid;
+--------------------------------------+
| BIN_TO_UUID(id)                      |
+--------------------------------------+
| d9008d47-cdf4-11e8-8d6f-0242ac11001b |
| d900e2b2-cdf4-11e8-8d6f-0242ac11001b |
| d9015ce9-cdf4-11e8-8d6f-0242ac11001b |
+--------------------------------------+

You may also find useful the following article – Store UUID in an optimized way.

Using sequences

Some databases support an object type called Sequence that allows generating sequences of numbers. The Oracle MySQL server does not support this object type yet, but the MariaDB 10.3 server has the Sequence engine that allows working with Sequence objects.

The Sequence engine provides DDL commands for creating and modifying sequences as well as several auxiliary functions for working with the values. It is possible to specify the following parameters while creating a named sequence: START – a start value, INCREMENT – a step, MINVALUE/MAXVALUE – the minimum and maximum value; CACHE – the size of the cache values; CYCLE/NOCYCLE – the sequence cyclicity. For more information, see the CREATE SEQUENCE documentation.
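
A hedged sketch of a named sequence on MariaDB 10.3 using those parameters (the sequence name and values are only illustrative):

CREATE SEQUENCE invoice_seq
  START WITH 1000
  INCREMENT BY 10
  MINVALUE 1000
  MAXVALUE 999999
  CACHE 20
  NOCYCLE;

SELECT NEXT VALUE FOR invoice_seq;  -- 1000
SELECT NEXT VALUE FOR invoice_seq;  -- 1010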

Moreover, a sequence can be used to generate unique numeric values. This can be considered an alternative to AUTO_INCREMENT, but a sequence additionally provides the option of specifying a step between values. Let's take a look at this example using the users table. The sequence object users_seq will be used to fill the values of the primary key. It is enough to specify the NEXT VALUE FOR function in the DEFAULT property of the column:

CREATE SEQUENCE users_seq;

CREATE TABLE users (
  id int NOT NULL DEFAULT (NEXT VALUE FOR users_seq),
  first_name varchar(100) NOT NULL,
  last_name varchar(100) NOT NULL,
  email varchar(254) NOT NULL,
  PRIMARY KEY (id)
);

INSERT INTO users (first_name, last_name, email) VALUES ('Simon', 'Wood', 'simon@testhost.com');
INSERT INTO users (first_name, last_name, email) VALUES ('Peter', 'Hopper', 'peter@testhost.com');

Table content output:

Information

The images for this article were produced while using dbForge Studio for MySQL Express Edition, a download is available from https://www.devart.com/dbforge/mysql/studio/dbforgemysql80exp.exe

It’s free!

 

Thank you to community reviewer Jean-François Gagné for his review and suggestions for this post.

The content in this blog is provided in good faith by members of the open source community. The content is not edited or tested by Percona, and views expressed are the authors’ own. When using the advice from this or any other online resource test ideas before applying them to your production systems, and always secure a working back up.

The post Generating Identifiers – from AUTO_INCREMENT to Sequence appeared first on Percona Community Blog.

Categories: Web Technologies

What’s new in Angular: Version 7 release candidate arrives

InfoWorld JavaScript - Fri, 10/12/2018 - 03:00

The release candidate of Version 7 of Angular, Google’s popular JavaScript framework for building mobile and desktop applications, is now here. The production version’s release is slated for October 17, 2018.


Categories: Web Technologies

Rundeck Series: Install And Configure RunDeck 3.0 On CentOS 7

Planet MySQL - Fri, 10/12/2018 - 01:07

Credit: RunDeck. Rundeck is one of my favorite automation tools. Here we are going to see how we can install and configure Rundeck on a CentOS server with MySQL as a backend. Even though I like Jenkins, as a sysadmin I like Rundeck a lot. You may think both can do automation. But as …

The post Rundeck Series: Install And Configure RunDeck 3.0 On CentOS 7 appeared first on SQLgossip.

Categories: Web Technologies

Mydbops Delighted to be part of Open Source India -2018

Planet MySQL - Thu, 10/11/2018 - 23:20

Mydbops has partnered with OSI Days for the second consecutive year. OSI Days is one of Asia's leading open source conferences.

Presentations on MySQL 

Topic: Evolution of DBAs in Cloud

Presenters: Manosh Malai, Senior DevOps / DB Consultant, Mydbops

Kabilesh P R, Co-Founder / DB Consultant, Mydbops

 

As cloud is more widely adopted by the industry, DBAs should now focus on ramping up their skills in core optimisation and on designing more scalable databases. Our consultants emphasise the role of the DBA in a cloud environment and share their experience in handling large-scale systems.

Topic: MySQL 8.0 = NoSQL + SQL

Presenter: Tomas Ulin, Vice President, MySQL Engineering, Oracle

Topic: High Availability framework for MySQL with Semi-Synchronous replication

Presenter: Prasad Nagaraj, VP, Engineering, Scalegrid

 

 

Categories: Web Technologies

Essential Cluster Monitoring Using Nagios and NRPE

Planet MySQL - Thu, 10/11/2018 - 14:21

In a previous post we went into detail about how to implement Tungsten-specific checks. In this post we will focus on the other standard Nagios checks that would help keep your cluster nodes healthy.

Your database cluster contains your most business-critical data. The slave nodes must be online, healthy and in sync with the master in order to be viable failover candidates.

This means keeping a close watch on the health of the databases nodes from many perspectives, from ensuring sufficient disk space to testing that replication traffic is flowing.

A robust monitoring setup is essential for cluster health and viability – if your replicator goes offline and you do not know about it, then that slave becomes effectively useless because it has stale data.

Nagios Checks: The Power of Persistence

One of the best (and also the worst) things about Nagios is the built-in nagging – it just screams for attention until you pay attention to it.

The Nagios server uses services.cfg, which defines a service that calls the check_nrpe binary with at least one argument: the name of the check to execute on the remote host.

Once on the remote host, the NRPE daemon processes the request from the Nagios server, comparing the check name sent by the Nagios server request with the list of defined commands in the /etc/nagios/nrpe.cfg file. If a match is found, the command is executed by the nrpe user. If different privileges are needed, then sudo must be employed.

Prerequisites: Before you can use these examples

This is NOT a Nagios tutorial as such, although we present configuration examples for the Nagios framework. You will need to already have the following:

  • Nagios server installed and fully functional
  • NRPE installed and fully functional on each cluster node you wish to monitor

Please note that installing and configuring Nagios and NRPE in your environment is not covered in this article.

Teach the Targets: Tell NRPE on the Database Nodes What To Do

The NRPE commands are defined in the /etc/nagios/nrpe.cfg file on each monitored database node. We will discuss three NRPE plugins called by the defined commands: check_disk, check_mysql and check_mysql_query.

First, let’s ensure that we do not fill up our disk space using the check_disk plugin by defining two custom commands, each calling check_disk to monitor a different disk partition:

command[check_root]=/usr/lib64/nagios/plugins/check_disk -w 20 -c 10 -p /
command[check_disk_data]=/usr/lib64/nagios/plugins/check_disk -w 20 -c 10 -p /volumes/data

Next, let’s validate that we are able to login to mysql directly, bypassing the connector by using port 13306, and using the check_mysql plugin by defining a custom command also called check_mysql:

command[check_mysql]=/usr/lib64/nagios/plugins/check_mysql -H localhost -u nagios -p secret -P 13306

If there is a connector running on that node, you may run the same test to validate that we are able to login through the connector by using port 3306 and the check_mysql plugin by defining a custom command called check_mysql_connector:

command[check_mysql_connector]=/usr/lib64/nagios/plugins/check_mysql -H localhost -u nagios -p secret -P 3306

Finally, you may run any MySQL query you wish to validate further, normally via the local MySQL port 13306 to ensure that the check is testing the local host:

command[check_mysql_query]=/usr/lib64/nagios/plugins/check_mysql_query -q 'select mydatacolumn from nagios.test_data' -H localhost -u nagios -p secret -P 13306
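
For the MySQL checks above to work, the nagios user and the test table they reference must exist on each monitored node. A minimal sketch, reusing the user, password, database and column names from the commands above (the grants are an assumption; adjust them and the host part to your own security policy):

CREATE USER 'nagios'@'localhost' IDENTIFIED BY 'secret';

CREATE DATABASE IF NOT EXISTS nagios;
CREATE TABLE IF NOT EXISTS nagios.test_data (mydatacolumn int);
INSERT INTO nagios.test_data VALUES (1);

GRANT SELECT ON nagios.* TO 'nagios'@'localhost';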

Here are some other example commands you may define that are not Tungsten-specific:

command[check_total_procs]=/usr/lib64/nagios/plugins/check_procs -w 150 -c 200
command[check_users]=/usr/lib64/nagios/plugins/check_users -w 15 -c 25
command[check_load]=/usr/lib64/nagios/plugins/check_load -w 5,4,3 -c 6,5,4
command[check_procs]=/usr/lib64/nagios/plugins/check_procs -w 150 -c 200
command[check_zombie_procs]=/usr/lib64/nagios/plugins/check_procs -w 5 -c 10 -s Z

Additionally, there is no harm in defining commands that may not be called, which allows for simple administration – keep the master copy in one place and then just push updates to all nodes as needed then restart nrpe.

Big Brother Sees You: Tell the Nagios server to begin watching

Here are the service check definitions for the /opt/local/etc/nagios/objects/services.cfg file:

# Service definition
define service{
    service_description    Root partition - Continuent Clustering
    servicegroups          myclusters
    host_name              db1,db2,db3,db4,db5,db6,db7,db8,db9
    check_command          check_nrpe!check_root
    contact_groups         admin
    use                    generic-service
}

# Service definition
define service{
    service_description    Data partition - Continuent Clustering
    servicegroups          myclusters
    host_name              db1,db2,db3,db4,db5,db6,db7,db8,db9
    check_command          check_nrpe!check_disk_data
    contact_groups         admin
    use                    generic-service
}

# Service definition
define service{
    service_description    mysql local login - Continuent Clustering
    servicegroups          myclusters
    host_name              db1,db2,db3,db4,db5,db6,db7,db8,db9
    contact_groups         admin
    check_command          check_nrpe!check_mysql
    use                    generic-service
}

# Service definition
define service{
    service_description    mysql login via connector - Continuent Clustering
    servicegroups          myclusters
    host_name              db1,db2,db3,db4,db5,db6,db7,db8,db9
    contact_groups         admin
    check_command          check_nrpe!check_mysql_connector
    use                    generic-service
}

# Service definition
define service{
    service_description    mysql local query - Continuent Clustering
    servicegroups          myclusters
    host_name              db1,db2,db3,db4,db5,db6,db7,db8,db9
    contact_groups         admin
    check_command          check_nrpe!check_mysql_query
    use                    generic-service
}

NOTE: You must also add all of the hosts into the /opt/local/etc/nagios/objects/hosts.cfg file.

Let's Get Practical: How to test the remote NRPE calls from the command line

The best way to ensure things are working well is to divide and conquer. My favorite approach is to use the check_nrpe binary on the command line from the Nagios server to make sure that the call(s) to the remote monitored node(s) succeed long before I configure the Nagios server daemon and start getting those evil text messages and emails.

To test a remote NRPE client command from a nagios server via the command line, use the check_nrpe command:

shell> /opt/local/libexec/nagios/check_nrpe -H db1 -c check_disk_data
DISK OK - free space: /volumes/data 40234 MB (78% inode=99%);| /volumes/data=10955MB;51170;51180;0;51190

The above command calls the NRPE daemon running on host db1 and executes the NRPE command “check_disk_data” as defined in the db1:/etc/nagios/nrpe.cfg file.

The Wrap-Up: Put it all together and sleep better knowing your Continuent Cluster is under constant surveillance

Once your tests are working and your Nagios server config files have been updated, just restart the Nagios server daemon and you are on your way!

Tuning the values in the nrpe.cfg file may be required for optimal performance; as always, YMMV.

To learn about Continuent solutions in general, check out https://www.continuent.com/solutions

For more information about monitoring Continuent clusters, please visit https://docs.continuent.com/tungsten-clustering-6.0/ecosystem-nagios.html.

Continuent Clustering is the most flexible, performant global database layer available today – use it underlying your SaaS offering as a strong base upon which to grow your worldwide business!

For more information, please visit https://www.continuent.com/solutions

Want to learn more or run a POC? Contact us.

Categories: Web Technologies

Deploying MySQL on Kubernetes with a Percona-based Operator

Planet MySQL - Thu, 10/11/2018 - 10:03

In the context of providing managed WordPress hosting services, at Presslabs we operate with lots of small to medium-sized databases, in a DB-per-service model, as we call it. The workloads are mostly reads, so we need to efficiently scale that. The MySQL® asynchronous replication model fits the bill very well, allowing us to scale horizontally from one server—with the obvious availability pitfalls—to tens of nodes. The next release of the stack is going to be open-sourced.

As we were already using Kubernetes, we were looking for an operator that could automate our DB deployments and auto-scaling. Those available were doing synchronous replication using MySQL group replication or Galera-based replication. Therefore, we decided to write our own operator.

Solution architecture

The MySQL operator, released under Apache 2.0 license, is based on Percona Server for MySQL for its operational improvements —like utility user and backup locks—and relies on the tried and tested Orchestrator to do the automatic failovers. We’ve been using Percona Server in production for about four years, with very good results, thus encouraging us to continue implementing it in the operator as well.

The MySQL Operator-Orchestrator integration is highly important for topology, as well as for cluster healing and system failover. Orchestrator is a MySQL high availability and replication management tool that was coded and opened by GitHub.

As we’re writing this, the operator is undergoing a full rewrite to implement the operator using the Kubebuilder framework, which is a pretty logical step to simplify and standardize the operator to make it more readable to contributors and users.

Aims for the project

We’ve built the MySQL operator with several considerations in mind, generated by the needs that no other operator could satisfy at the time we started working on it, last year.

Here are some of them:

  • Easily deployable MySQL clusters in Kubernetes, following the cluster-per-service model
  • DevOps-friendly, critical to basic operations such as monitoring, availability, scalability, and backup stories
  • Out-of-the-box backups, scheduled or on-demand, and point-in-time recovery
  • Support for cloning, both inside a cluster and across clusters

It's good to know that the MySQL operator is now in beta and can be tested in production workloads. You can take it for a spin and decide for yourself; we're already successfully using it for a part of our production workloads at Presslabs, for our customer dashboard services.

Going further to some more practical info, we’ve successfully installed and tested the operator on AWS, Google Cloud Platform, and Microsoft Azure and covered the step by step process in three tutorials here.

Set up and configuration

It’s fairly simple to use the operator. Prerequisites would be the ubiquitous Helm and Kubectl.

The first step is to install the controller. Two commands are needed to make use of the Helm chart bundled with the operator:

$ helm repo add presslabs https://presslabs.github.io/charts
$ helm install presslabs/mysql-operator --name mysql-operator

These commands will deploy the controller together with an Orchestrator cluster.
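Before moving on, it’s worth confirming that everything came up. A quick, hedged check might look like the following sketch—pod names and CRD names will vary by chart version:

# Check the Helm release and the pods it created
$ helm status mysql-operator
$ kubectl get pods

# The operator should also have registered its custom resource definitions
$ kubectl get crd | grep mysql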

The configuration parameters of the Helm chart for the operator and their default values are as follows:

  • replicaCount: replicas for controller (default: 1)
  • image: controller container image (default: quay.io/presslabs/mysql-operator:v0.1.5)
  • imagePullPolicy: controller image pull policy (default: IfNotPresent)
  • helperImage: mysql helper image (default: quay.io/presslabs/mysql-helper:v0.1.5)
  • installCRDs: whether or not to install CRDs (default: true)
  • resources: controller pod resources (default: {})
  • nodeSelector: controller pod nodeSelector (default: {})
  • tolerations: controller pod tolerations (default: {})
  • affinity: controller pod affinity (default: {})
  • extraArgs: args that are passed to controller (default: [])
  • rbac.create: whether or not to create the rbac service account, role and roleBinding (default: true)
  • rbac.serviceAccountName: if rbac.create is false then this service account is used (default: default)
  • orchestrator.replicas: Orchestrator replicas (default: 3)
  • orchestrator.image: Orchestrator container image (default: quay.io/presslabs/orchestrator:latest)


Further Orchestrator values can be tuned by checking the values.yaml config file.

Cluster deployment

The next step is to deploy a cluster. For this, you need to create a Kubernetes secret containing the MySQL credentials (root password, database name, user name, user password) used to initialize the cluster, and a MysqlCluster custom resource, as you can see below:

An example of a secret (example-cluster-secret.yaml):

apiVersion: v1
kind: Secret
metadata:
  name: my-secret
type: Opaque
data:
  ROOT_PASSWORD: # root password, base_64 encoded

An example of simple cluster (example-cluster.yaml):

apiVersion: mysql.presslabs.org/v1alpha1
kind: MysqlCluster
metadata:
  name: my-cluster
spec:
  replicas: 2
  secretName: my-secret
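With both manifests saved, creating the cluster is a matter of applying them. A minimal sketch, assuming the file names used above:

# Create the credentials secret first, then the cluster itself
$ kubectl apply -f example-cluster-secret.yaml
$ kubectl apply -f example-cluster.yaml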

The usual kubectl commands can be used to do various operations, such as a basic listing:

$ kubectl get mysql

or detailed cluster information:

$ kubectl describe mysql my-cluster
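Since replicas is just a field in the MysqlCluster spec shown earlier, scaling out should simply be a matter of updating that value. This is a hedged sketch rather than a documented workflow, reusing the short resource name from the listing above:

# Bump the cluster from 2 to 3 replicas by merging a new spec value
$ kubectl patch mysql my-cluster --type=merge -p '{"spec":{"replicas":3}}'

Editing the resource interactively with kubectl edit mysql my-cluster would achieve the same thing.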

Backups

A further step could be setting up backups on an object storage service. Creating a backup is as simple as creating a MysqlBackup resource, as in this example (example-backup.yaml):

apiVersion: mysql.presslabs.org/v1alpha1
kind: MysqlBackup
metadata:
  name: my-cluster-backup
spec:
  clusterName: my-cluster
  backupUri: gs://bucket_name/path/to/backup.xtrabackup.gz
  backupSecretName: my-cluster-backup-secret

To provide credentials for a storage service, you have to create a secret with the credentials for your provider; AWS, GCS, or HTTP are currently supported, as in this example (example-backup-secret.yaml):

apiVersion: v1
kind: Secret
metadata:
  name: my-cluster-backup-secret
type: Opaque
data:
  # AWS
  AWS_ACCESS_KEY_ID: # add here your key, base_64 encoded
  AWS_SECRET_KEY: # and your secret, base_64 encoded
  # or Google Cloud, base_64 encoded
  # GCS_SERVICE_ACCOUNT_JSON_KEY: # your key, base_64 encoded
  # GCS_PROJECT_ID: # your ID, base_64 encoded
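Applying the backup secret and the backup resource follows the same kubectl pattern. A minimal sketch, assuming the file names used above; the mysqlbackup resource name in the listing command is an assumption, so check kubectl api-resources on your installation:

$ kubectl apply -f example-backup-secret.yaml
$ kubectl apply -f example-backup.yaml

# Then watch the backup resource until it reports completion
$ kubectl get mysqlbackup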

Recurrent cluster backups and cluster initialization from an existing backup are additional operations you can opt for. For more details, head to our documentation page.

Further operations and new usage information are kept up-to-date on the project homepage.

Our future plans include developing the MySQL operator further and integrating it with Percona Monitoring and Management (PMM) to better expose the internals of the Kubernetes DB cluster.

Open source community

Community contributions are highly appreciated; we should mention the pull requests from Platform9 so far, as well as the sharp questions on the Gitter channel we’ve opened, which we do our best to answer in detail, and the issue reports from early users of the operator.

Come and talk to us about the project

Along with my colleague Calin Don, I’ll be talking about this at Percona Live Europe in November. It would be great to have the chance to meet other enthusiasts and talk about what we’ve discovered so far!

The content in this blog is provided in good faith by members of the open source community. The content is not edited or tested by Percona, and views expressed are the authors’ own. When using the advice from this or any other online resource test ideas before applying them to your production systems, and always secure a working backup.

The post Deploying MySQL on Kubernetes with a Percona-based Operator appeared first on Percona Community Blog.

Categories: Web Technologies

How to Fix ProxySQL Configuration When it Won’t Start

Planet MySQL - Thu, 10/11/2018 - 09:07

With the exception of the three configuration variables described here, ProxySQL will only parse the configuration files the first time it is started, or if the proxysql.db file is missing for some other reason.

If we want to change any of this data, we need to do so via ProxySQL’s admin interface and then save it to disk. That’s fine if ProxySQL is running, but what if it won’t start because of these values?

For example, perhaps we accidentally configured ProxySQL to run on port 3306 and restarted it, but there’s already a production MySQL instance running on this port. ProxySQL won’t start, so we can’t edit the value that way:

2018-10-02 09:18:33 network.cpp:53:listen_on_port(): [ERROR] bind(): Address already in use

We could delete proxysql.db and have it reload the configuration files, but that would mean losing any changes we didn’t mirror into the configuration files.

Another option is to edit ProxySQL’s database file using sqlite3:

[root@centos7-pxc57-4 ~]# cd /var/lib/proxysql/
[root@centos7-pxc57-4 proxysql]# sqlite3 proxysql.db
sqlite> SELECT * FROM global_variables WHERE variable_name='mysql-interfaces';
mysql-interfaces|127.0.0.1:3306
sqlite> UPDATE global_variables SET variable_value='127.0.0.1:6033' WHERE variable_name='mysql-interfaces';
sqlite> SELECT * FROM global_variables WHERE variable_name='mysql-interfaces';
mysql-interfaces|127.0.0.1:6033
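If this single value is the only change needed, the same edit can be made non-interactively, since sqlite3 accepts a SQL statement as a command-line argument. A small sketch:

# One-shot update of the listening interface, no interactive shell needed
[root@centos7-pxc57-4 proxysql]# sqlite3 proxysql.db \
  "UPDATE global_variables SET variable_value='127.0.0.1:6033' WHERE variable_name='mysql-interfaces';"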

Or if we have a few edits to make we may prefer to do so with a text editor:

[root@centos7-pxc57-4 ~]# cd /var/lib/proxysql/
[root@centos7-pxc57-4 proxysql]# sqlite3 proxysql.db
sqlite> .output /tmp/global_variables
sqlite> .dump global_variables
sqlite> .exit

The above commands will dump the global_variables table into a file in SQL format, which we can then edit:

[root@centos7-pxc57-4 proxysql]# grep mysql-interfaces /tmp/global_variables
INSERT INTO "global_variables" VALUES('mysql-interfaces','127.0.0.1:3306');
[root@centos7-pxc57-4 proxysql]# vim /tmp/global_variables
[root@centos7-pxc57-4 proxysql]# grep mysql-interfaces /tmp/global_variables
INSERT INTO "global_variables" VALUES('mysql-interfaces','127.0.0.1:6033');

Now we need to restore this data. We’ll use the .restore command to empty the table (as we’re restoring from a missing backup), then read our edited dump back in:

[root@centos7-pxc57-4 proxysql]# sqlite3 proxysql.db
sqlite> .restore global_variables
sqlite> .read /tmp/global_variables
sqlite> .exit

Once we’ve made the change, we should be able to start ProxySQL again:

[root@centos7-pxc57-4 proxysql]# /etc/init.d/proxysql start
Starting ProxySQL: DONE!
[root@centos7-pxc57-4 proxysql]# lsof -i | grep proxysql
proxysql 15171 proxysql 19u IPv4 265881 0t0 TCP localhost:6033 (LISTEN)
proxysql 15171 proxysql 20u IPv4 265882 0t0 TCP localhost:6033 (LISTEN)
proxysql 15171 proxysql 21u IPv4 265883 0t0 TCP localhost:6033 (LISTEN)
proxysql 15171 proxysql 22u IPv4 265884 0t0 TCP localhost:6033 (LISTEN)
proxysql 15171 proxysql 23u IPv4 266635 0t0 TCP *:6032 (LISTEN)
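It’s also worth confirming the value through ProxySQL’s admin interface once it is running. A hedged sketch, assuming the default admin credentials (admin/admin) and the default admin port 6032:

# Query the runtime admin interface to confirm the new listener
[root@centos7-pxc57-4 proxysql]# mysql -u admin -padmin -h 127.0.0.1 -P 6032 \
  -e "SELECT * FROM global_variables WHERE variable_name='mysql-interfaces';"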

While you are here

You might enjoy my recent post Using ProxySQL to connect to IPV6-only databases over IPV4

You can download ProxySQL from Percona repositories, and you might also want to check out our recorded webinars that feature ProxySQL too.

Categories: Web Technologies

Ketting 2.3 release - Evert Pot

Planet PHP - Thu, 10/11/2018 - 09:00

I just released Ketting 2.3, the missing HATEOAS client for JavaScript.

I last blogged about this project in June, so I thought it was worth listing the most interesting recent changes.

Content-Type and Accept header improvements

In the past, the Ketting library used a configurable set of mime-types for the Accept header, and some heuristics for the Content-Type headers. This has been greatly improved.

If you’re following links in this format:

{ "_links": { "next": { "href": "/next-hop", "type": "application/vnd.some-vendor+json"} } } <link rel="next" href="/next-hop" type="application/vnd.some-vendor+json" /> HTTP/1.1 200 OK Link: </next-hop>; rel="next" type="application/vnd.some-vendor+json"

In each of those cases, the link embeds a hint for the content-type at this new location.

When running the following on a resource, the GET request will now automatically use that value for the Accept header:

const resource = "..."; // Pretend this is a Ketting resource.
const nextResource = await resource.follow('next');

// Will use an application/vnd.some-vendor+json Accept header.
console.log(await nextResource.get());

Support for OAuth2 client_credentials grant

The library supported some OAuth2, specifically:

  • Simply supplying a Bearer token.
  • Using the password grant_type.

Now the library also supports the client_credentials grant. In addition, it detects when no refresh_token was given and automatically re-authenticates using the original grant_type in that case.

No longer ships with a fetch() polyfill

When using the web-packed file, I noticed that a large part of the size of the Ketting library was attributable to a polyfill for the Fetch API.

Every modern browser ships the Fetch API, so this no longer seemed needed. If you do need to run Ketting on an older browser, you can simply provide your own polyfill, such as the whatwg-fetch package.

Updating

For well-behaved servers, these changes should not have a negative impact. Don’t forget to test.

To update, this should usually do it:

npm install ketting@2.3.0
Categories: Web Technologies

MySQL TDE: Online key store migration

Planet MySQL - Thu, 10/11/2018 - 08:57

So, if we’re applying GDPR to our system, and we’re already making use of MySQL Transparent Data Encryption / keyring, then here’s an example on how to migrate from filed-based keyring to the encrypted keyring. Online.

If you’re looking to go deeper into the TDE then I suggest reading the MySQL Server Team’s InnoDB Transparent Tablespace Encryption blog.

You’d already have your environment running, whereas I have to create one.. give me a minute please, 8.0.12 here we come:

mysqld --defaults-file=my_okv.cnf --initialize-insecure --user=khollman
mysqld --defaults-file=my_okv.cnf --user=khollman &
mysql --defaults-file=my_okv.cnf -uroot

show plugins;
show variables like 'keyring%';
alter user 'root'@'localhost' identified by 'oracle';
create database nexus;
create table nexus.replicant (
  id INT(11) NOT NULL AUTO_INCREMENT,
  `First name` varchar(40) not null default '',
  `Last name` varchar(40) not null default '',
  `Replicant` enum('Yes','No') not null default 'Yes',
  PRIMARY KEY (id)
) engine=InnoDB row_format=COMPACT ENCRYPTION = 'Y';
INSERT INTO nexus.`replicant` (`First name`,`Last name`,`Replicant`) VALUES
  ('Roy','Hauer','Yes'), ('Rutger','Batty','Yes'), ('Voight','Kampff','Yes'),
  ('Pris','Hannah','Yes'), ('Daryl','Stratton','Yes'), ('Rachael','Young','Yes'),
  ('Sean','Tyrell','Yes'), ('Rick','Ford','No'), ('Harrison','Deckard','Yes');
select * from nexus.replicant;

Now we have an environment using the keyring file-based TDE.

Before migrating the key store, there are a few things we need to be aware of, in addition to reading the manual on this topic:

  • mysqld. Yes, we start up another mysqld process, but it’s not a fully functioning server, far from it. It is just a means to migrate the keys from the old file-based keyring to the new encrypted file. So don’t worry about the defaults-file, the innodb_xxxx params nor anything else. We actually need to reuse the existing datadir.
  • datadir. As just mentioned, don’t try and use another datadir as it won’t find any files there to encrypt with the new key and the process won’t be successful. Use the existing online server datadir. (of course, I recommend this process be run in a non-production test environment first!)
  • -source & -destination. I think this is quite obvious.  The plugin we’re coming from, and going to.
  • keyring_file_data is the existing file-based keyring being used.
  • keyring_encrypted_file_data & _password is the new encrypted password being stored in its file in this location.
  • keyring-migration-* params. We need to connect to the existing instance with super user privileges. As it’s local to the instance, we can use -socket.


mysqld --basedir=/usr/local/mysql/mysql-commercial-8.0.12-linux-glibc2.12-x86_64 \
 --plugin-dir=/usr/local/mysql/mysql-commercial-8.0.12-linux-glibc2.12-x86_64/lib/plugin \
 --lc_messages_dir=/usr/local/mysql/mysql-commercial-8.0.12-linux-glibc2.12-x86_64/share \
 --datadir=/opt/mysql/okv/data \
 --keyring-migration-source=keyring_file.so \
 --keyring_file_data=/opt/mysql/okv/keyring \
 --keyring-migration-destination=keyring_encrypted_file.so \
 --keyring_encrypted_file_data=/opt/mysql/okv/keyring_enc \
 --keyring_encrypted_file_password=oracle2018 \
 --keyring-migration-socket=/opt/mysql/okv/mysql.socket \
 --keyring-migration-user=root \
 --keyring-migration-password=oracle

And if, and only if, the migration is successful, you should see output like the following. Anything else, i.e. if no output comes back, or some of the lines don’t appear in your scenario, double-check the parameters in the previous command, as something there is more than likely impeding a successful key migration:

2018-10-08T11:26:22.227161Z 0 [Note] [MY-010098] [Server] --secure-file-priv is set to NULL. Operations related to importing and exporting data are disabled
2018-10-08T11:26:22.227219Z 0 [Note] [MY-010949] [Server] Basedir set to /usr/local/mysql/mysql-commercial-8.0.12-linux-glibc2.12-x86_64/.
2018-10-08T11:26:22.227226Z 0 [System] [MY-010116] [Server] mysqld (mysqld 8.0.12-commercial) starting as process 13758
2018-10-08T11:26:22.254234Z 0 [Note] [MY-011085] [Server] Keyring migration successful.
2018-10-08T11:26:22.254381Z 0 [Note] [MY-010120] [Server] Binlog end
2018-10-08T11:26:22.254465Z 0 [Note] [MY-010733] [Server] Shutting down plugin 'keyring_encrypted_file'
2018-10-08T11:26:22.254642Z 0 [Note] [MY-010733] [Server] Shutting down plugin 'keyring_file'
2018-10-08T11:26:22.255757Z 0 [System] [MY-010910] [Server] mysqld: Shutdown complete (mysqld 8.0.12-commercial) MySQL Enterprise Server - Commercial.

Migrated.


To make sure the instance picks up the new parameters from the defaults file, and before running any risk of restarting the instance, we need to add the new ‘encrypted’ params to the my.cnf:

[mysqld]
plugin_dir                      =/usr/local/mysql/mysql-commercial-8.0.12-linux-glibc2.12-x86_64/lib/plugin
#early-plugin-load              =keyring_file.so
#keyring_file_data              =/opt/mysql/okv/keyring
early-plugin-load               =keyring_encrypted_file.so
keyring_encrypted_file_data     =/opt/mysql/okv/keyring_enc
keyring_encrypted_file_password =oracle2018
...


And upon the next most convenient / least inconvenient moment, restart the instance:

mysqladmin --defaults-file=/usr/local/mysql/mysql-commercial-8.0.12-linux-glibc2.12-x86_64/my_okv.cnf -uroot -poracle shutdown
mysqld --defaults-file=/usr/local/mysql/mysql-commercial-8.0.12-linux-glibc2.12-x86_64/my_okv.cnf --user=khollman &

And let’s double check which keyring plugin we’re using:

select * from information_schema.plugins where plugin_name like '%keyring%' \G
*************************** 1. row ***************************
           PLUGIN_NAME: keyring_encrypted_file
        PLUGIN_VERSION: 1.0
         PLUGIN_STATUS: ACTIVE
           PLUGIN_TYPE: KEYRING
   PLUGIN_TYPE_VERSION: 1.1
        PLUGIN_LIBRARY: keyring_encrypted_file.so
PLUGIN_LIBRARY_VERSION: 1.9
         PLUGIN_AUTHOR: Oracle Corporation
    PLUGIN_DESCRIPTION: store/fetch authentication data to/from an encrypted file
        PLUGIN_LICENSE: PROPRIETARY
           LOAD_OPTION: ON
1 row in set (0,00 sec)

And also that we can select the data from the encrypted tablespace:

select * from nexus.replicant;
+----+------------+-----------+-----------+
| id | First name | Last name | Replicant |
+----+------------+-----------+-----------+
|  1 | Roy        | Hauer     | Yes       |
|  2 | Rutger     | Batty     | Yes       |
|  3 | Voight     | Kampff    | Yes       |
|  4 | Pris       | Hannah    | Yes       |
|  5 | Daryl      | Stratton  | Yes       |
|  6 | Rachael    | Young     | Yes       |
|  7 | Sean       | Tyrell    | Yes       |
|  8 | Rick       | Ford      | No        |
|  9 | Harrison   | Deckard   | Yes       |
+----+------------+-----------+-----------+
9 rows in set (0,00 sec)
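To double-check which tables actually live in encrypted tablespaces after the migration, a quick query against INFORMATION_SCHEMA should do. This is a hedged sketch reusing the client options from the setup above:

# List tables created with the ENCRYPTION option
mysql --defaults-file=my_okv.cnf -uroot -poracle -e "
  SELECT TABLE_SCHEMA, TABLE_NAME, CREATE_OPTIONS
    FROM INFORMATION_SCHEMA.TABLES
   WHERE CREATE_OPTIONS LIKE '%ENCRYPTION%';"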


Seems quite straightforward.

Well, how about, in a test environment, changing the keyring_encrypted_file_password value to something different, restarting the instance, and running the same select on the same table?

Hey presto:

select * from nexus.replicant;
ERROR 3185 (HY000): Can't find master key from keyring, please check in the server log if a keyring plugin is loaded and initialized successfully.
Error (Code 3185): Can't find master key from keyring, please check in the server log if a keyring plugin is loaded and initialized successfully.
Error (Code 1877): Operation cannot be performed. The table 'nexus.replicant' is missing, corrupt or contains bad data.


Hope this helps someone out there. Enjoy encrypting!

Now we can run encrypted backups safely and not worry about moving those files around different systems.

Categories: Web Technologies

Valid CSS Content

CSS-Tricks - Thu, 10/11/2018 - 07:03

There is a content property in CSS that's made to be used in tandem with the ::before and ::after pseudo elements. It injects content into the element.

Here's an example:

<div data-done="&#x2705;" class="email">
  chriscoyier@gmail.com
</div>

.email::before {
  content: attr(data-done) " Email: ";
  /* This gets inserted before the email address */
}

The property generally takes anything you drop in there. However, there are some invalid values it won't accept. I heard from someone recently who was confused by this, so I had a little play with it myself and learned a few things.

This works fine:

/* Valid */ ::after { content: "1"; }

...but this does not:

/* Invalid, not a string */ ::after { content: 1; }

I'm not entirely sure why, but I imagine it's because 1 is a unit-less number (i.e. 1 vs. 1px) and not a string. You can't trick it either! I tried to be clever like this:

/* Invalid, no tricks */ ::after { content: "" 1; }

You can output numbers from attributes though, as you might suspect:

<div data-price="4">Coffee</div> /* This "works" */ div::after { content: " $" attr(data-price); }

But of course, you'd never use generated content for important information like a price, right?! (Please don't. It's not very accessible, nor is the text selectable.)

Even though you can get and display that number, it's just a string. You can't really do anything with it.

<div data-price="4" data-sale-modifier="0.9">Coffee</div> /* Not gonna happen */ div::after { content: " $" calc(attr(data-price) * attr(data-sale-modifier)); }

You can't use numbers, period:

/* Nope */ ::after { content: calc(2 + 2); }

Heads up! Don't try concatenating strings like you might in PHP or JavaScript:

/* These will break */
::after {
  content: "1" . "2" . "3";
  content: "1" + "2" + "3";

  /* Use spaces */
  content: "1" "2" "3";

  /* Or nothing */
  content: "1 2 3";

  /* The type of quote (single or double) doesn't matter,
     but content not coming back from attr() does need to be quoted. */
}

There is a thing in the spec for converting attributes into the actual type rather than treating them all like strings...

<wood length="12" />

wood {
  width: attr(length em); /* or other values like "number", "px", or "url" */
}

...but I'm fairly sure that isn't working anywhere yet. Plus, it doesn't help us with pseudo elements anyway, since strings already work and numbers don't.

The person who reached out to me over email was specifically confused why they were unable to use calc() on content. I'm not sure I can help you do math in this situation, but it's worth knowing that pseudo elements can be counters, and those counters can do their own limited form of math. For example, here's a counter that starts at 12 and increments by -2 for each element at that level in the DOM.

See the Pen Backwards Double Countdown by Chris Coyier (@chriscoyier) on CodePen.

The only other thing we haven't mentioned here is that a pseudo element can be an image. For example:

p:before { content: url(image.jpg); }

...but it's weirdly limited. You can't even resize the image. ¯\_(ツ)_/¯

Much more common is using an empty string for the value (content: "";) which can do things like clear floats but also be positioned, sized and have a background of its own.

The post Valid CSS Content appeared first on CSS-Tricks.

Categories: Web Technologies

Quick Tip: Debug iOS Safari on a true local emulator (or your actual iPhone/iPad)

CSS-Tricks - Thu, 10/11/2018 - 07:02

We've been able to do this for years, largely for free (ignoring the costs of the computer and devices), but I'm not sure as many people know about it as they should.

TL;DR: XCode comes with a "Simulator" program you can pop open to test in virtual iOS devices. If you then open Safari's Develop/Debug menu, you can use its DevTools to inspect right there — also true if you plug in your real iOS device.

Direct Link to ArticlePermalink

The post Quick Tip: Debug iOS Safari on a true local emulator (or your actual iPhone/iPad) appeared first on CSS-Tricks.

Categories: Web Technologies

Deliver exceptional customer experiences in your product

CSS-Tricks - Thu, 10/11/2018 - 06:58

(This is a sponsored post.)

​Pendo is a product cloud that helps create lovable products that customers can’t live without. Pendo enables product teams to understand product usage, collect user feedback, measure NPS, assist users in their apps and promote new features in product — all without requiring any engineering resources. This unique combination of capabilities is all built on a common infrastructure of product data and results in better onboarding, increased user engagement, improved customer satisfaction, reduced churn, and increased revenue.

Pendo is the proven choice of innovative product leaders at Salesforce, Marketo, Zendesk, Citrix, BMC and many more leading companies.

Request a demo of Pendo today.​

Direct Link to ArticlePermalink

The post Deliver exceptional customer experiences in your product appeared first on CSS-Tricks.

Categories: Web Technologies

Avoiding Setter Injection - Brandon Savage

Planet PHP - Thu, 10/11/2018 - 06:00

PHP more or less has two kinds of dependency injection available: constructor injection, and setter injection. Constructor injection is the process of injecting dependencies through the constructor arguments, like so: The dependencies are injected via the constructor, on object creation, and the object has them from the very beginning. Setter injection is different; instead of […]

The post Avoiding Setter Injection appeared first on BrandonSavage.net.

Categories: Web Technologies
