


What's New in MariaDB Server 10.3

Planet MySQL - 3 hours 40 min ago
By maxmether | Fri, 05/25/2018 - 18:52

We are happy to announce the general availability (GA) of MariaDB Server 10.3! This release is a big milestone for the development of MariaDB Server and is the result of a huge effort by the development team and contributors – thanks to everyone involved! With our previous major release of MariaDB Server 10.2 last year, we started a journey of adding more enterprise-grade features to better close the gap with proprietary databases. With MariaDB Server 10.3 we take a huge leap on that journey by being the first enterprise open source database to add features like temporal data processing (through system versioning) as well as compatibility with Oracle sequences and Oracle PL/SQL. At the same time, we want to stay true to our open source and innovative roots by adding support for new storage engines, so we can more easily adapt to the different workloads and hardware available to users. This path allows us to adapt quickly to an ever-changing landscape where new innovations are created at a constantly accelerating pace. This is our greatest release yet and, with this release, we want to plant our stake in the enterprise database category.

The key enhancements of MariaDB Server 10.3 can be put in the following categories:

  • Temporal data processing (system-versioned tables)

  • Oracle compatibility features

  • Purpose-built storage engines

 

Temporal Data Processing

Temporal data processing through our system versioning feature is one of the more exciting additions in the MariaDB Server 10.3 release. With system versioning, the database will keep track of all changes made to every row in the table. The old versions of the rows are not visible through normal query syntax, but by using a special syntax you can access all of the old versions of the row. This capability lends itself to a large number of use cases, anything from auditing and forensics (finding the exact point-in-time result set from a suspicious query executed some time ago) to things like analyzing changes in your data, comparing customer preferences year to year and a multitude of other possibilities. This feature can be turned on per table and the history can also be deleted periodically so that your table doesn’t grow indefinitely. The use cases are both exciting and endless! For more information on system versioning read our manual or this blog on automatic data versioning.
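To make this concrete, here is a minimal sketch in SQL of what system versioning looks like in practice (the table, timestamps and values are purely illustrative, not taken from the release notes):

-- Create a table that keeps its full change history
CREATE TABLE customer_prefs (
    id INT PRIMARY KEY,
    preference VARCHAR(100)
) WITH SYSTEM VERSIONING;

-- Normal queries see only the current rows
SELECT * FROM customer_prefs;

-- Query the table as it was at a given point in time
SELECT * FROM customer_prefs
FOR SYSTEM_TIME AS OF TIMESTAMP '2018-05-01 10:00:00';

-- See every version a particular row has gone through
SELECT * FROM customer_prefs FOR SYSTEM_TIME ALL WHERE id = 42;

-- Periodically purge old history so the table doesn't grow indefinitely
DELETE HISTORY FROM customer_prefs BEFORE SYSTEM_TIME '2017-01-01 00:00:00';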

 

Oracle Compatibility

As demand for MariaDB Server has increased in larger enterprises, we have also seen a need for features that are readily available in proprietary databases. To make MariaDB easier to use for DBAs and skilled database engineers coming from other products, we wanted to add as much compatibility as possible.

In MariaDB Server 10.3, we added a new stored routine syntax in addition to the already existing MariaDB SQL/PSM syntax. We now support MariaDB SQL/PL which is a syntax designed to be compatible with Oracle PL/SQL. This way, migrating existing applications is a lot easier and existing skills can be used without complex retraining. In the process we also added several new constructs to our stored procedure support like new ROW data types.
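As a rough, hedged sketch of what the new syntax enables (the procedure and table names here are invented for illustration), you switch the session into Oracle compatibility mode and write a PL/SQL-style procedure:

SET SQL_MODE = 'ORACLE';

DELIMITER //
-- Oracle-style parameter modes, AS keyword and NUMBER data type
CREATE PROCEDURE raise_salary(p_id IN INT, p_amount IN NUMBER) AS
BEGIN
  UPDATE employees SET salary = salary + p_amount WHERE id = p_id;
END;
//
DELIMITER ;

Because the parameter modes, the AS keyword and Oracle-style data types are accepted under sql_mode=ORACLE, existing procedure code needs far fewer changes when it is migrated.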

The new syntax isn’t the only new compatibility feature; we also added sequences to provide a more flexible way of creating unique primary keys than the existing auto_increment feature. This feature is fully compatible with Oracle sequences. Learn more about how to use sequences in this blog post. Together with features added previously (like window functions, common table expressions (CTEs), etc.), we now have a deep set of enterprise-grade features that can tackle any type of application need.
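A minimal sketch of the sequence syntax (object names are illustrative):

-- Define a sequence to generate primary key values
CREATE SEQUENCE order_seq START WITH 1000 INCREMENT BY 1;

-- Use the next value when inserting
INSERT INTO orders (id, note) VALUES (NEXT VALUE FOR order_seq, 'first order');

-- Check the most recently generated value in this session
SELECT PREVIOUS VALUE FOR order_seq;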

 

Purpose-Built Storage Engines

At MariaDB, we believe in using the right tool for the right trade. However, we don’t feel that you need to change everything in order to achieve that. We have a unique architecture with pluggable storage engines that allows the user to adapt the database to the use case and workload without changing the main characteristics and features. We believe that this flexibility serves the best interest of the user and we will work on further advancing this with future versions of MariaDB. This architecture will enable both the community and our team to innovate further by adding storage engines designed for new hardware and new use cases. In MariaDB Server 10.3, we introduce two new storage engines that are declared stable, MyRocks and Spider.

MyRocks comes from a collaboration with Facebook where the storage engine is built on top of RocksDB – an open source project mainly maintained by Facebook. The MyRocks storage engine is built using a log-structured merge tree (LSM tree) architecture and is well adapted to high write workloads. MyRocks also has a very high compression ratio and is built to optimize the lifetime of SSD disks.
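Assuming the MyRocks plugin package is installed on the server (a hedged sketch, not part of the announcement itself), enabling and using the engine looks roughly like this:

-- Load the MyRocks storage engine plugin
INSTALL SONAME 'ha_rocksdb';

-- Create a write-heavy table on MyRocks
CREATE TABLE page_views (
    id BIGINT AUTO_INCREMENT PRIMARY KEY,
    url VARCHAR(255),
    viewed_at DATETIME
) ENGINE=ROCKSDB;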

Spider is a storage engine designed for extreme scale. The Spider storage engine allows you to shard a specific table across multiple nodes. It uses the partitioning protocol to define how the table should be split up and each individual shard will then reside on a remote MariaDB Server that will only handle queries for that particular shard. With Spider you get almost linear scaling for INSERTS and key lookup read queries.
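As a hedged illustration of how that looks in practice (server names, credentials and the table are invented, and the Spider plugin must already be installed), you define the remote backends and then a partitioned Spider table that maps each partition to a backend:

-- Define the remote data nodes
CREATE SERVER backend1 FOREIGN DATA WRAPPER mysql
  OPTIONS (HOST '10.0.0.11', DATABASE 'shard', USER 'spider', PASSWORD 'secret', PORT 3306);
CREATE SERVER backend2 FOREIGN DATA WRAPPER mysql
  OPTIONS (HOST '10.0.0.12', DATABASE 'shard', USER 'spider', PASSWORD 'secret', PORT 3306);

-- A Spider table sharded across the two backends by key
CREATE TABLE orders (
    id BIGINT,
    amount DECIMAL(10,2),
    PRIMARY KEY (id)
) ENGINE=SPIDER COMMENT='wrapper "mysql", table "orders"'
PARTITION BY KEY (id) (
    PARTITION pt1 COMMENT = 'srv "backend1"',
    PARTITION pt2 COMMENT = 'srv "backend2"'
);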

  And there’s more ...

In addition, we have added a multitude of features to help speed up schema operations (like instant ADD COLUMN), along with other optimizations and compatibility improvements. The ADD COLUMN feature is another example of our collaboration with customers and partners, including Alibaba, Tencent and ServiceNow, and is just the beginning of making heavy DDL operations more usable.
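For example, adding a column at the end of a large InnoDB table in 10.3 is now an instant, metadata-only change rather than a full table rebuild (the table and column names below are illustrative):

-- Completes instantly in MariaDB 10.3, even on a very large table
ALTER TABLE big_events ADD COLUMN source VARCHAR(32) DEFAULT NULL;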

Want all the details? Get a full list of features in MariaDB Server 10.3.

Get MariaDB Server 10.3 as part of the MariaDB TX 3.0 download – now available.



Service Worker Cookbook

CSS-Tricks - 5 hours 38 min ago

I stumbled upon this site from Mozilla the other day: a collection of recipes to get started with a Service Worker. From caching strategies and notifications to providing an offline fallback to your users, this little cookbook has it all.

You can also check out our guide to making a simple site work offline and the offline site that resulted from it.

Direct Link to Article

The post Service Worker Cookbook appeared first on CSS-Tricks.


The Perfect Server - Ubuntu 18.04 (Bionic Beaver) with Apache, PHP, MySQL, PureFTPD, BIND, Postfix, Dovecot and ISPConfig 3.1

Planet MySQL - 9 hours 25 min ago
This tutorial shows how to install an Ubuntu 18.04 LTS (Bionic Beaver) server with Apache, BIND, Dovecot and ISPConfig 3.1. ISPConfig is a web hosting control panel that allows you to configure the following services through a web browser: Apache or nginx web server, Postfix mail server, Courier or Dovecot IMAP/POP3 server, MySQL, BIND or MyDNS nameserver, PureFTPd, SpamAssassin, ClamAV, and many more. This setup covers the installation of Apache (instead of nginx), BIND (instead of MyDNS), and Dovecot (instead of Courier).

Recommended fix for MySQL Checker Query Script for Tungsten Clustering

Planet MySQL - 11 hours 45 min ago

We have identified an issue in a script that is executed as part of the Tungsten Clustering solution. The script itself executes a small query against the internal tables used by the replicator in order to help identify the current status and latency of the replication process. We believe this script may occasionally fail under some very specific conditions, which would not normally be an issue, but the knock-on effect is to create Out of Memory errors and instability in the way the current replication state is identified.

The issue has been fixed in the upcoming 6.0.1 and 5.3.2 releases, but we have identified that customers on versions of the Tungsten Clustering solution from v4.0 upwards would benefit from using an updated version of the script.

If you are using Tungsten Clustering 4.x or higher, it is recommended that you apply this patch to improve the stability of your clustering installation. 

The script can be updated in a number of different ways, including using a script we have written that updates the installed versions of the file in place. Due to the nature of the script and the installation process, we recommend that even if you update manually or use the update script to fix existing installations, you should also update the version in the package or staging directories so that updates and new installations do not wipe out the changes.

Note: Updating the live version of the file in an installed directory saves you having to reinstall or restart any of the clustering components. When the file has been updated, the manager will automatically reload and use the new version.

Updating installed directories using a script

The easiest way to apply the fix is to use our script, which simply finds all files named *mysql_checker_query.sql* and swaps out just one line.
Since the standard location for staging extraction is /opt/continuent/software, we expect to be able to patch the files found in the staging directories along with those found in the installed directory.

To apply the patch to both the installed files and staging templates by using our update script:

  1. Download the script file we have written from fix_mysql_checker_query.sh
  2. Make the script executable:
chmod +x fix_mysql_checker_query.sh
  3. Run the script. By default it searches for files in /opt/continuent, but you may also specify a directory. For example:
./fix_mysql_checker_query.sh

or

./fix_mysql_checker_query.sh /opt/tungsten

The script will update both the installed files and any staging templates located under the target directory.

Updating via staging directory (requires tpm update)

To update a version of the software extracted from the distribution tarball:

  1. Download the mysql_checker_query.sql.tpl file.
  2. Change to the extracted software directory
  3. Copy the downloaded file into ./tungsten-manager/samples/conf/mysql_checker_query.sql.tpl

$ cp ~/mysql_checker_query.sql.tpl ./tungsten-manager/samples/conf/mysql_checker_query.sql.tpl

If you want to update your software now, bearing in mind that this will restart the managers and connectors, perform an update through tpm to update the distribution:

./tools/tpm update --replace-release

Updating an installed directory manually

To update the script by hand, you will need to update a version of the script for every service within your installation.

  1. Download the mysql_checker_query.sql.tpl file.
  2. Copy the file into a new file named mysql_checker_query.sql
  3. Edit the file and change each instance of the string @{MGR_REPL_SCHEMA} to the name of the tracking schema. For example, if your service is called ‘east’, the tracking schema is ‘tungsten_east’.
  4. Copy the temporary file into the installed configuration directory. For example, /opt/continuent/tungsten/tungsten-manager/conf/mysql_checker_query.sql

Learning Gutenberg: Building Our Custom Card Block

CSS-Tricks - 11 hours 50 min ago

We’ve got some base knowledge, we’ve played with some React and now we’ve got our project tools set up. Let’s dive into building our custom block.

Article Series:
  1. Series Introduction
  2. What is Gutenberg, Anyway?
  3. A Primer with create-guten-block
  4. Modern JavaScript Syntax
  5. React 101
  6. Setting up a Custom webpack
  7. A Custom "Card" Block (This Post)
What we’re building

We’re going to build a custom card block that features an image, a title and a summary. It’s a really common design pattern on the web and it also lets us look at some core Gutenberg components, along with core WordPress elements, such as the Media Library. We’ll also play with some display logic with JSX for the front-end markup.

Our glorious custom card block!

Our glorious custom card block!

We’re going to focus solely on the CMS aspect of this block in this tutorial. What it renders is some nice, clean markup on the front-end though. You could extend this block to feature front-end styles too, if you wanted.

Getting started

The first thing we’re going to do is open up the block.js file that we created in the previous section. In your active plugin folder, this is located at blocks/src/block/block.js.

If you are working from the file structure created with create-guten-block, you might want to start by deleting everything in block.js and writing your code from scratch along with the tutorial.

Right at the top of this file, add the following:

const { registerBlockType, RichText, MediaUpload, PlainText } = wp.blocks;
const { Button } = wp.components;

We covered destructuring assignments in Part 3. This is a great example of those, which you’ll see a lot in Gutenberg code. Here, wp.components features more than Button, but that’s all we want, so that’s all we’ll get. It’s neat because it prevents us from having to write stuff like wp.components.Button, which will be great for keeping our code nice and light.

We’ve got that cleared up, so let’s import our Sass files. This is so that webpack detects them.

import './style.scss';
import './editor.scss';

Now let’s start writing the component that powers our block. Right under those lines, add the following:

registerBlockType('card-block/main', {
  title: 'Card',
  icon: 'heart',
  category: 'common'
});

This code tells Gutenberg, "Hey, I’ve got a block for you to add to your collection. It’s called 'Card,’ it has a 'heart' icon and it should live in the 'common' category." This is our component’s basic definition, so let’s add some more code.

This should look familiar—remember our challenge in Part 2, create-guten-block? In case you need reminding, check it out here. The first six of those were relatively straightforward and involved replacing strings or bits of HTML. The seventh item, “Make the paragraph text editable," is much more complicated to implement and was intended to get you thinking a bit. The time has come though, and we will indeed make some text editable in Gutenberg!

You may also recognize this registerBlockType function from the PHP register_block_type function we used in the last article. While that function registers a block from the server-side of WordPress, this one registers our block into the React ecosystem on the client side. Both are necessary to have a block that uses React, and their registered names, card-block/main must match.

Add the following code, but make sure you put a comma after 'common', so it looks like this: 'common',.

Here’s the code:

attributes: {
  title: {
    source: 'text',
    selector: '.card__title'
  },
  body: {
    type: 'array',
    source: 'children',
    selector: '.card__body'
  },
  imageAlt: {
    attribute: 'alt',
    selector: '.card__image'
  },
  imageUrl: {
    attribute: 'src',
    selector: '.card__image'
  }
}

Here, we are defining the editable attributes of our block and the DOM selector that they are assigned to. This attribute object works in a very similar way to the React state object. It even has a very similar updating method called setAttributes. We’ll get to that later though.

At this point, it’s worth a brief overview of state and attributes because they represent a whole new way of thinking for WordPress developers. I’ll take over for a moment to go over them.

About Attributes and State

It may look like a simple JavaScript object, but that chunk of attributes introduces a whole swath of new concepts to a WordPress theme developer’s brain, not least of which is state. The concept of state has a long history in computer science, and life in general, really. Almost everything has a state. What state is your coffee cup in now? Empty, almost empty? How about your clothing? Are your shoes dirty or new? How about your body? Are you tired or wide awake?

At a high level, state simply refers to the present condition of a thing. In computer science, that thing is a computer program, and that program can be much, much simpler than what we create here on the web. Take a vending machine, for instance. The vending machine has a state that updates each time you put in a coin. When the state of the machine reaches a predefined amount, say $1.25, the machine knows to allow you to make your snack choice.

In Gutenberg, attributes track the present condition of data in a block. Attributes are the closest parallel we can draw to custom fields in Gutenberg, but they exist only in the context of Gutenberg and JavaScript. Let’s take the attribute above for title, for example:

title: {
  source: 'text',
  selector: '.card__title'
},

When Gutenberg fires up, it says, “I need to find some text inside a selector called .card__title, and populate the value for title with whatever I find."

Attributes in Gutenberg are not directly connected to the database like custom fields are connected to post_meta. The entries source and selector are instructions for Gutenberg to populate the state of each block. When we load up the editor, it follows these instructions and assigns a value to title based on the markup saved in the database between the HTML comments that indicate a block of this type. We don’t see the value of title in the attributes we register, but if I were to access props.attributes.title, I would get whatever text exists in .card__title.

We’ve set up some basics, so let’s dive in to our edit function. This is what’s called when the block is accessed from the Gutenberg editor in visual mode. The user will see the rich interface, rather than the HTML code that it generates. That’s what I’ll cover next.

Add our edit function

Let’s add some code in. Add the following after the closing } of the attributes object. Like before, make sure you add a trailing comma, so it looks like this },.

Add the following code after that:

edit({ attributes, className, setAttributes }) {
  return (
  );
}

So, we’re using another destructuring assignment to selectively pick our passed parameters to the edit function. The two most important are attributes and setAttributes. The attributes parameter is the same as the attributes block, but it’s the current, reactive state. This means if the setAttributes function updates one of the attributes values, it will automatically update anywhere that references it, which is similar to our React component from Part 3.

There’s a big ol’ return in this function. Can you guess what’s going in it? Yup! We’re going to stick some JSX in there. Add the following within the return parentheses:

<div className="container"> <MediaUpload onSelect={ media => { setAttributes({ imageAlt: media.alt, imageUrl: media.url }); } } type="image" value={ attributes.imageID } render={ ({ open }) => getImageButton(open) } /> <PlainText onChange={ content => setAttributes({ title: content }) } value={ attributes.title } placeholder="Your card title" className="heading" /> <RichText onChange={ content => setAttributes({ body: content }) } value={ attributes.body } multiline="p" placeholder="Your card text" /> </div>

OK, there’s a lot going on in here, but it’s all stuff we’ve covered in previous parts of this series. What we’ve got here is a container with three existing Gutenberg components. For each, we are setting the relevant attribute as its value, a relevant placeholder and an onChange/onSelect handler. We’re also passing a custom renderer to the <MediaUpload />, which we’ll cover shortly.

Each onChange handler is a handy little expression that passes the new content that triggered the onChange into the setAttributes function, where we set which attributes object to update. This update then cascades into any reference of that attribute, where the content will update like magic. The <MediaUpload /> element features an onSelect event which is fired when the user selects or uploads an item to the media library.

Speaking of the <MediaUpload /> element, you’ll notice there’s a custom render attribute, which references a getImageButton function. Let’s write that next. Above the return in the edit function add the following:

const getImageButton = (openEvent) => {
  if(attributes.imageUrl) {
    return (
      <img
        src={ attributes.imageUrl }
        onClick={ openEvent }
        className="image"
      />
    );
  }
  else {
    return (
      <div className="button-container">
        <Button
          onClick={ openEvent }
          className="button button-large"
        >
          Pick an image
        </Button>
      </div>
    );
  }
};

What this function does is detect if there’s an imageUrl in the attributes object. If there is, it’ll render that <img /> tag and let a user click it to select another. If there’s no image, it’ll render a WordPress <Button /> which prompts the user to pick an image. This calls the same openEvent that was passed into the function.

To keep things simple in this tutorial, we’ve bound a click to the <img /> element. You should consider building something fancy that leverages a <button /> for your production-ready blocks, for better accessibility support.

Right, that’s our edit function done. Not much code there, considering what it actually does, which is great!

Add our save function

We’ve got our Gutenberg editor-end of the block written now, which is the hard part. Now all we’ve got to do is tell Gutenberg what we want the block to do with the content. With the same reactive data from attributes, we can render out our front-end markup in real-time, too. That means when someone switches into HTML editing mode on the block, it’ll be up to date. If you edit it in HTML editing mode, the visual mode will also be kept up to date. Super useful.

Let’s dig in then. After our edit function, add a comma, so it looks like }, and then add the following on a new line:

save({ attributes }) {
  const cardImage = (src, alt) => {
    if(!src) return null;

    if(alt) {
      return (
        <img
          className="card__image"
          src={ src }
          alt={ alt }
        />
      );
    }

    // No alt set, so let's hide it from screen readers
    return (
      <img
        className="card__image"
        src={ src }
        alt=""
        aria-hidden="true"
      />
    );
  };

  return (
    <div className="card">
      { cardImage(attributes.imageUrl, attributes.imageAlt) }
      <div className="card__content">
        <h3 className="card__title">{ attributes.title }</h3>
        <div className="card__body">
          { attributes.body }
        </div>
      </div>
    </div>
  );
}

Looks pretty similar to the edit function, right? Let’s step through it.

We start off by using a destructuring assignment to pull out the attributes from the passed parameters, just like the previous edit function.

Then we have another image helper function that first detects whether there’s an image and returns null if there’s not one. Remember: we return null in JSX if we want it to render nothing. The next thing this helper does is render a slightly different <img /> tag depending on whether there’s alt text. If there isn’t, it hides the image from screen readers by adding aria-hidden="true" and setting an empty alt attribute.

Lastly, our return spits out a nice .card block with clean, BEM-driven markup that will load on the front-end of our theme.

And that is that for our save function. We’re so close to having a completed block. Just one more step to go!

Add some style

OK, we’ve got this little bit to do and we’re done. The observant amongst you may have noticed some references to className dotted about. These are referencing our editor.scss rules, so let’s add them.

Open up editor.scss, which lives in the same directory as block.js. Add the following:

@import '../common';

.gutenberg {
  .container {
    border: 1px solid $gray;
    padding: 1rem;
  }

  .button-container {
    text-align: center;
    padding: 22% 0;
    background: $off-white;
    border: 1px solid $gray;
    border-radius: 2px;
    margin: 0 0 1.2rem 0;
  }

  .heading {
    font-size: 1.5rem;
    font-weight: 600;
  }

  .image {
    height: 15.7rem;
    width: 100%;
    object-fit: cover;
  }
}

This is some loose CSS to give our block some card-like style. Notice it’s all nested within a .gutenberg class? This is to battle the specificity of some core styles. Within the editor, there is a <div class="gutenberg"> wrapped around the block area of the post editor screen, so we can make sure to only affect those elements with this nesting. You’ll probably also notice that we’re importing another Sass file, so let’s fill that one in.

Open common.scss, which lives in the src directory (the parent of the block directory we’re currently in), and add the following:

/*
 * Common SCSS can contain your common variables, helpers and mixins
 * that are shared between all of your blocks.
 */

// Colors
$gray: #cccccc;
$off-white: #f1f1f1;

Anyway, guess what? We’ve only gone and built out a custom card block!! Let’s give it a test-drive.

First, check your block is all-good. This is what the complete block.js file should look like:

const { registerBlockType, RichText, MediaUpload, PlainText } = wp.blocks;
const { Button } = wp.components;

// Import our CSS files
import './style.scss';
import './editor.scss';

registerBlockType('card-block/main', {
  title: 'Card',
  icon: 'heart',
  category: 'common',
  attributes: {
    title: {
      source: 'text',
      selector: '.card__title'
    },
    body: {
      type: 'array',
      source: 'children',
      selector: '.card__body'
    },
    imageAlt: {
      attribute: 'alt',
      selector: '.card__image'
    },
    imageUrl: {
      attribute: 'src',
      selector: '.card__image'
    }
  },
  edit({ attributes, className, setAttributes }) {

    const getImageButton = (openEvent) => {
      if(attributes.imageUrl) {
        return (
          <img
            src={ attributes.imageUrl }
            onClick={ openEvent }
            className="image"
          />
        );
      }
      else {
        return (
          <div className="button-container">
            <Button
              onClick={ openEvent }
              className="button button-large"
            >
              Pick an image
            </Button>
          </div>
        );
      }
    };

    return (
      <div className="container">
        <MediaUpload
          onSelect={ media => { setAttributes({ imageAlt: media.alt, imageUrl: media.url }); } }
          type="image"
          value={ attributes.imageID }
          render={ ({ open }) => getImageButton(open) }
        />
        <PlainText
          onChange={ content => setAttributes({ title: content }) }
          value={ attributes.title }
          placeholder="Your card title"
          className="heading"
        />
        <RichText
          onChange={ content => setAttributes({ body: content }) }
          value={ attributes.body }
          multiline="p"
          placeholder="Your card text"
          formattingControls={ ['bold', 'italic', 'underline'] }
          isSelected={ attributes.isSelected }
        />
      </div>
    );
  },

  save({ attributes }) {
    const cardImage = (src, alt) => {
      if(!src) return null;

      if(alt) {
        return (
          <img
            className="card__image"
            src={ src }
            alt={ alt }
          />
        );
      }

      // No alt set, so let's hide it from screen readers
      return (
        <img
          className="card__image"
          src={ src }
          alt=""
          aria-hidden="true"
        />
      );
    };

    return (
      <div className="card">
        { cardImage(attributes.imageUrl, attributes.imageAlt) }
        <div className="card__content">
          <h3 className="card__title">{ attributes.title }</h3>
          <div className="card__body">
            { attributes.body }
          </div>
        </div>
      </div>
    );
  }
});

If you’re happy, let’s fire up webpack. While in your current plugin directory in terminal, run this:

npx webpack --watch

This is slightly different to the previous part in the series because we’ve added the --watch argument. This basically keeps an eye on your js files and re-runs webpack if they change.

Fire up the editor!

Let’s fire up the Gutenberg editor by loading up a post in the WordPress back end. In the Gutenberg editor, click the little plus icon and look in the “blocks" tab and there it is: our awesome new card block!

Go ahead and give it a test drive and add some content in there. Feels good right?

Here’s a quick video of what you should be seeing right now, with your fancy new card block:

And with that, you’re done 🎉

Here’s a thing you might be thinking: Aren’t blocks kind of a replacement for custom fields? Can’t I now create my own content structure directly within WordPress instead of using a plugin like Advanced Custom Fields? Not quite...

Blocks vs. Custom Fields

While Gutenberg does afford us the ability to customize the structure of data entry from the user’s experience, on the back-end it’s no different than the current WYSIWYG editor. Data saved from a block is part of the post_content column in the wp_posts database table—it’s not stored separately in wp_postmeta like custom fields. This means that, at present, we cannot access the data from our card block from another post in the same way we could if we had created custom fields for title, image and content with a standard Advanced Custom Fields setup.

That said, I could see some really interesting plugins surfacing that provide a way to port data from a block to other parts of a website. With the WordPress REST API, the possibilities are just about limitless! In our screencast, Andy and I took a stab at incorporating an API request into our card block and, although things didn’t turn out exactly as planned, the tools are already in place and you can get a taste of what could be possible with Gutenberg in the future. Time will tell!

Wrapping up and next steps

What a journey we’ve been on together! Let’s list out what you’ve learned in this series:

So, where can you go from here? Now that you’ve got some solid base knowledge from this series, you could do some further learning. There are already fantastic resources for that:

Some interesting case studies:

Keep an eye on these resources to stay up to date with the project:

Experimental things happening with Gutenberg:

Once Gutenberg becomes part of WordPress core in version 5.0 (release date TBD), you could also publish a useful custom block in the WordPress plugins directory. There’s definitely room for some handy components such as the card block that you’ve just built.

We hope you’ve enjoyed this series, because we’ve certainly enjoyed making it. We really hope this helps you get into Gutenberg and that you build some cool stuff. You should totally send us links of stuff you have built too!

Article Series:
  1. Series Introduction
  2. What is Gutenberg, Anyway?
  3. A Primer with create-guten-block
  4. Modern JavaScript Syntax
  5. React 101
  6. Setting up a Custom webpack
  7. A Custom "Card" Block (This Post)

The post Learning Gutenberg: Building Our Custom Card Block appeared first on CSS-Tricks.


Ensure better defaults with InnoDB Dedicated server.

Planet MySQL - 13 hours 17 min ago

In most of our consulting projects we have seen customers with a dedicated MySQL server that is still running the default configuration, without any optimisation for the underlying hardware. Idle hardware, like idle money, gives you no returns.

If you are from a non-DBA background and have chosen InnoDB as your engine of choice, the next question will be: which major variables need to be tuned for the available hardware? Here is the answer for you.

In this post, we are going to detail the variable innodb_dedicated_server in MySQL 8.0.11. This variable answers the question above, and of course MySQL 8.0 comes with sensible default values for production use cases.

If innodb_dedicated_server is enabled in my.cnf, MySQL will tune the variables below based on the amount of memory in the server.

Overview:

innodb_buffer_pool_size

Server Memory    Buffer Pool Size
< 1 GB           128 MiB (the innodb_buffer_pool_size default)
<= 4 GB          Detected server memory * 0.5
> 4 GB           Detected server memory * 0.75

innodb_buffer_pool_size can be set to up to 80% of physical RAM offline (at startup). From MySQL 5.7 onwards, the value can also be increased online, without restarting the server.
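For example, a hedged sketch of an online resize (the target value is illustrative):

-- Resize the buffer pool online; no restart required (MySQL 5.7+)
SET GLOBAL innodb_buffer_pool_size = 3 * 1024 * 1024 * 1024;

-- Track the progress of the resize operation
SHOW STATUS LIKE 'Innodb_buffer_pool_resize_status';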

innodb_log_file_size

Server Memory    Log File Size
< 1 GB           48 MiB (the innodb_log_file_size default)
<= 4 GB          128 MiB
<= 8 GB          512 MiB
<= 16 GB         1024 MiB
> 16 GB          2048 MiB

innodb_log_file_size controls the size of the redo log, which is primarily used to recover the DB server after a crash. The recommended redo log size is one that can hold at least the last hour of transactions, for better stability and durability. Too small a redo log can slow down write performance (i.e., commit latency) while transactions wait for log space to be freed up.
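One rough, hedged way to check whether your redo log can hold an hour of writes is to sample how much redo the server generates per minute and extrapolate:

-- Sample redo bytes written, wait a minute, sample again
SELECT variable_value INTO @before FROM performance_schema.global_status
 WHERE variable_name = 'Innodb_os_log_written';
DO SLEEP(60);
SELECT variable_value INTO @after FROM performance_schema.global_status
 WHERE variable_name = 'Innodb_os_log_written';

-- Approximate redo written per hour, in MiB; the combined redo log size
-- should comfortably exceed this figure
SELECT ROUND((@after - @before) * 60 / 1024 / 1024) AS approx_redo_mib_per_hour;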

innodb_flush_method

The flush method is set to O_DIRECT_NO_FSYNC when innodb_dedicated_server is enabled. If the O_DIRECT_NO_FSYNC setting is not available on the server, the default innodb_flush_method for that platform is used.

Testing innodb_dedicated_server:

Configuration:

  • RAM – 4G
  • CPU –  2
  • DISK – 10G SSD

Variable disabled (default):

Memory:

root@localhost :(none) > \! free -m
              total        used        free      shared  buff/cache   available
Mem:           3849         630        1706           8         512        1009
Swap:             0           0           0

my.cnf values:

root@localhost :(none) > \! cat /etc/my.cnf | grep -i innodb
innodb_buffer_pool_size = 1G
innodb_log_file_size = 96M
innodb_flush_log_at_trx_commit = 2
innodb_flush_method = O_DIRECT

Global values of the variables without innodb_dedicated_server variable.

root@localhost :(none) > select @@innodb_dedicated_server `Innodb Dedicated Server`,
    concat(round((@@innodb_buffer_pool_size / 1024 / 1024 / 1024 ),2),' GB') `Innodb Buffer Pool Size`,
    concat(round((@@innodb_log_file_size / 1024 / 1024),2),' MB') `Innodb Log file size`,
    @@innodb_flush_method `Innodb Flush Method`\G
*************************** 1. row ***************************
Innodb Dedicated Server: 0
Innodb Buffer Pool Size: 1.00 GB
   Innodb Log file size: 96.00 MB
    Innodb Flush Method: O_DIRECT
1 row in set (0.00 sec)

There is no change in variables because innodb_dedicated_server is disabled and the other values are in place as per the cnf values.

Variable enabled:

my.cnf values:

root@localhost :(none) > \! cat /etc/my.cnf | grep -i innodb
innodb_dedicated_server = 1
#innodb_buffer_pool_size = 1G
#innodb_log_file_size = 96M
innodb_flush_log_at_trx_commit = 2
#innodb_flush_method = O_DIRECT

The above variables will not be affected by innodb_dedicated_server if they are explicitly set in my.cnf. So I have commented these variables out and restarted the MySQL server.

Global values of the variables with innodb_dedicated_server variable.

root@localhost :(none) > select @@innodb_dedicated_server `Innodb Dedicated Server`,
    concat(round((@@innodb_buffer_pool_size / 1024 / 1024 / 1024 ),2),' GB') `Innodb Buffer Pool Size`,
    concat(round((@@innodb_log_file_size / 1024 / 1024),2),' MB') `Innodb Log file size`,
    @@innodb_flush_method `Innodb Flush Method`\G
*************************** 1. row ***************************
Innodb Dedicated Server: 1
Innodb Buffer Pool Size: 2.00 GB
   Innodb Log file size: 128.00 MB
    Innodb Flush Method: O_DIRECT_NO_FSYNC
1 row in set (0.00 sec)

So if innodb_dedicated_server is enabled, those variables are set according to the formulas above.

We will receive a warning message in the MySQL error log if we specify one of these variables in my.cnf along with innodb_dedicated_server.

Error log:

[Warning] [MY-012358] [InnoDB] InnoDB: Option innodb_dedicated_server is ignored for innodb_buffer_pool_size because innodb_buffer_pool_size=419430400 is specified explicitly.

Key Takeaways:

  • Do not enable this variable if the server is not dedicated to MySQL.
  • Do not set the dependent variables explicitly if innodb_dedicated_server is enabled.

This variable can ensure that you are making better use of your hardware, but there is more that can be tuned based on your database usage patterns.


Webinar Tues, 5/29: MySQL, Percona XtraDB Cluster, ProxySQL, Kubernetes: How they work together

Planet MySQL - 14 hours 39 min ago

Please join Percona’s Principal Architect Alex Rubin as he presents MySQL, Percona XtraDB Cluster, ProxySQL, Kubernetes: How they work together to give you a highly available cluster database environment on Tuesday, May 29th at 7:00 AM PDT (UTC-7) / 10:00 AM EDT (UTC-4).

Register Now

 

In this webinar, Alex will discuss how to deploy a highly available MySQL database environment on Kubernetes/Openshift using Percona XtraDB Cluster (PXC) together with MySQL Proxy to implement read/write splitting.

If you have never used Kubernetes and Openshift, or never used PXC / MySQL Proxy, Alex will do a quick introduction to these technologies. There will also be a demo where Alex sets up a PXC cluster with ProxySQL in Openshift Origin and tries to break it.

By the end of this webinar you will have a better understanding of:

  • How to deploy Percona XtraDB Cluster with ProxySQL for HA solutions
  • How to leverage Kubernetes/Openshift in your environments
  • How to troubleshoot performance issues

Register for the webinar

Alexander Rubin, Principal Consultant

Alexander joined Percona in 2013. He has worked with MySQL since 2000 as a DBA and application developer. Before joining Percona, he spent over 7 years doing MySQL consulting as a principal consultant (starting with MySQL AB in 2006, then Sun Microsystems and then Oracle). He has helped many customers design large, scalable and highly available MySQL systems and optimize MySQL performance. Alexander has also helped customers design Big Data stores with Apache Hadoop and related technologies.

The post Webinar Tues, 5/29: MySQL, Percona XtraDB Cluster, ProxySQL, Kubernetes: How they work together appeared first on Percona Database Performance Blog.


Geographic Spatial Reference Systems in MySQL 8.0

MySQL Server Blog - 17 hours 55 min ago

MySQL 8.0.11 comes with a catalog of 5108 spatial reference system (SRS) definitions, and 479 of these are geographic. We usually just refer to them by SRID, but in this blog post we’ll dive into the details and try to understand the definition itself.…


