

Web Technologies

Angular 7 Features And Updates

Planet MySQL - Mon, 10/22/2018 - 02:23

Angular 7 Features And Updates is today’s leading topic. Yes, it is true: Angular v7 is here, and the wait is finally over. We should be extra enthusiastic about this one, since it’s a significant release that implements changes, new features, and improvements throughout the entire platform. Angular 7 is released together with Angular Material 7 and Angular CLI 7, and comes with improved application performance. Angular Material 7 and the CDK gain new features like Virtual Scrolling and Drag and Drop, while CLI prompts are the new feature in Angular CLI 7.

If you want to learn more about Angular, then check out this Angular 7 – The Complete Guide course.

There are lots of new features in Angular 7, and we will go through them one by one.

A new ng-compiler

The new compiler is capable of excellent 8-phase rotating ahead-of-time (AOT) compilation. Most Angular applications can expect a massive reduction (95–99%) in bundle sizes. When the actual size of the Angular bundle becomes less than what most languages would need to store the string "Angular", you know it’s significant progress.

The ngcc Angular node_module compatibility compiler – ngcc is a tool which “upgrades” node_modules compiled with the non-Ivy ngc into an Ivy-compliant format.

In other words, ngcc converts node_modules compiled with the legacy Angular compiler (ngc) into node_modules that appear to have been compiled with the TypeScript compiler transformer (ngtsc), and this change allows such “legacy” packages to be used by the Ivy rendering engine.

CLI prompts

The CLI will now prompt users when running common commands like ng new or ng add @angular/material to help you discover built-in features like routing or SCSS support. And the great news: it’s customizable! Add a schematic.json using the Schematic CLI, and you can tell the Angular CLI which prompts to execute.

 

Angular DoBootstrap

Angular 7 added a new lifecycle hook called ngDoBootstrap and an interface called DoBootstrap.

    import { ApplicationRef, DoBootstrap } from '@angular/core';

    // lifecycle hook
    class AppModule implements DoBootstrap {
      ngDoBootstrap(appRef: ApplicationRef) {
        appRef.bootstrap(AppComponent);
      }
    }

Application performance

The Angular team discovered that many developers included the reflect-metadata polyfill in production, even though it is only needed in development. To fix this, part of the update to v7 will automatically remove it from your polyfills.ts file and instead include it as a build step when building your application in JIT mode, lifting this polyfill out of production builds by default.

The Ivy rendering engine is a new backward-compatible Angular renderer mainly focused on:

  1. Speed Improvements
  2. Size Reduction
  3. Increased Flexibility

This Ivy rendering feature will reduce the code size and make compilation faster.

The Angular 7 upgrade is faster than for its previous version (less than 10 minutes for many apps, according to the official announcement). The Angular 7 framework is rapid, and the virtual scrolling CDK module detailed below makes apps run with better performance. New projects also now default to using bundle budgets, which notify you when your app is reaching size limits. By default, you get warnings when you reach 2MB and errors at 5MB. And when you need a little more space, jump into your angular.json and adjust as necessary.
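For reference, these defaults correspond roughly to the budgets section that the CLI generates in angular.json (a sketch of the generated configuration; check your own file, as the exact shape and placement under the build options may vary by CLI version):

```json
"budgets": [
  {
    "type": "initial",
    "maximumWarning": "2mb",
    "maximumError": "5mb"
  }
]
```

Raising maximumWarning or maximumError here is how you "adjust as necessary" when your app legitimately needs more space.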

The Angular Material CDK ScrollingModule

As many mobile frameworks have started to move toward dynamically loading data such as images or long lists, Angular has followed suit by adding the ScrollingModule to allow for virtual scrolling. As elements gain or lose visibility, they are virtually loaded and unloaded from the DOM, so performance appears significantly increased to the user. Next time you have a potentially large list of items for your users to scroll, stick it in a cdk-virtual-scroll-viewport component and take advantage of the performance boost!

DragDropModule

Now you can remain entirely within the Angular Material module and implement drag and drop support including reordering lists and transferring items between lists. The CDK includes automatic rendering, drag handlers, drop handlers and even the ability to move data. Don’t like the standard drag animation? No sweat. It’s Angular; it’s yours to override.

Ivy Renderer

The next generation ready-when-it’s-ready renderer…still isn’t quite ready. The Angular team won’t commit to a final timeline, but the development is still active and progressing. They note backward compatibility validation is beginning and while no official team member has commented, a few fervent followers of the commits are expecting a complete beta to launch sometime during V7’s lifespan with a possible official release alongside version 8. Follow the progress yourself on the GitHub Ivy Renderer issue under the official Angular repo. The best news? They fully expect that Ivy can be released in a minor release as long as it’s fully tested and validated. So who knows? Maybe we’ll see it in v7 after all.

Dependency Updates

The 7.0.0 release features updated dependencies on major 3rd party projects:

  1. TypeScript 3.1
  2. RxJS 6.3
  3. Node 10 — support for Node 10 added, and support for 8 continues.

That’s all for Angular 7 features and updates.

The post Angular 7 Features And Updates appeared first on AppDividend.

Categories: Web Technologies

Percona Live Europe Presents: Need for speed – Boosting Apache Cassandra’s performance using Netty

Planet MySQL - Mon, 10/22/2018 - 01:34

My talk is titled Need for speed: Boosting Apache Cassandra’s performance using Netty. Over the years that I have worked in the software industry, making code run fast has fascinated me. So, naturally, when I first started contributing to Apache Cassandra, I started looking for opportunities to improve its performance. My talk takes us through some interesting challenges within a distributed system like Apache Cassandra and various techniques to significantly improve its performance. Talking about performance is incredibly exciting because you can easily quantify and see the results. Making improvements to the database’s performance not only improves the user experience but also reflects positively on the organization’s bottom line. It also has the added benefit of pushing the boundaries of scale. Furthermore, my talk spans beyond Apache Cassandra and is generally applicable to writing performant networking applications in Java.

Who’d benefit most from the presentation?

My talk is oriented primarily towards developers and operators. Although Apache Cassandra is written in Java and we talk about Netty, there is plenty in the talk that is generic and the lessons learned could be applied towards any Distributed System. I think developers with various experience levels would benefit from the talk. However, intermediate developers would benefit the most.

What I’m most looking forward to at PLE ’18…

There are many interesting sessions at the conference. Here are some of the ones I find most interesting:

Performance Analyses Technologies for Databases

As I mentioned, I am a big performance geek, and in this talk Peter is going to cover various methods for data infrastructure performance analysis, including monitoring.

Securing Access to Facebook’s Databases

This is an interesting session from a security standpoint. Andrew is talking about securing access to MySQL. As most people know, Facebook has a huge MySQL deployment, and as security and privacy have become prime concerns, we see a lot of movement towards encryption. This talk is going to be particularly interesting because Facebook is using x509 client certs to authenticate. This is a non-trivial challenge for anybody at scale.

TLS for MySQL at large scale

This talk from Wikipedia is along similar lines to the previous one. It just goes to emphasize the importance of security in today’s climate. What’s interesting is that Wikipedia and Facebook are both talking about it! I am curious to find out what sort of privacy challenges Wikipedia is solving.

Advanced MySQL Data at Rest Encryption in Percona Server

Another security-related talk! This one’s about encryption at rest. This is interesting in and of itself, as we tend to talk a lot about security in transit and less often about security of data at rest. I hope to learn more about the cost of implementing encryption at rest and its impact on database performance, operations, and security.

Artificial Intelligence Database Performance Tuning

I think this is an exciting time for the database industry, as we’ve seen not only a large increase in data volumes but also rising user expectations around performance. So, can AI help us tune our databases? Database tuning has traditionally been the domain of an experienced DBA, but I think AI can help us deliver better performance. This talk is about using genetic algorithms to tune database performance. I am curious to find out how these algorithms are applied to tune databases.

The post Percona Live Europe Presents: Need for speed – Boosting Apache Cassandra’s performance using Netty appeared first on Percona Community Blog.

Categories: Web Technologies

MySQL Shell 8.0.13 – What’s New?

Planet MySQL - Sun, 10/21/2018 - 19:40

The MySQL Development team is proud to announce a new version of the MySQL Shell which, in addition to the usual bug fixes and enhancements to the existing components, offers new features we expect to be quite useful in your day-to-day work.…

Categories: Web Technologies

On Some Recent MySQL Optimizer Bugs

Planet MySQL - Sun, 10/21/2018 - 01:46
Yet another topic I missed in my blog post on problematic MySQL features back in July is the MySQL optimizer. Unlike with XA transactions, this was done on purpose, as the known bugs, limitations, and deficiencies of the MySQL optimizer are a topic for a series of blog posts, if not a separate blog. At the moment the list of known active bug reports in the optimizer category consists of 380(!) items (mostly "Verified"), aside from feature requests and bugs not considered "production" ones by the current "rules" of the MySQL public bugs database. I check optimizer bugs often in my posts, and I reported many of them, but I am still not ready to cover this topic entirely.

What I can do within the frame of one blog post is a quick review of some "Verified" optimizer bugs reported over the last year. I'll present them one by one in a list, with some comments (mostly related to my checks of the same test cases with MariaDB 10.3.7, which I have at hand) and, hopefully, some conclusions about the current state of the MySQL optimizer.

I'll try to shed some light on the current state of the MySQL optimizer, but it's a huge and dark area, with many details hidden... So, here is the list, starting from the most recently reported bugs:
  • Bug #92654 - "GROUP BY fails for queries when a temporary table is involved". This bug affects recent MySQL 8.0.12 and 5.7.23, but does not affect MariaDB 10.3, for example, from what I see:
    MariaDB [test]> insert into domain_tree values (1), (2), (3);
    Query OK, 3 rows affected (0.080 sec)
    Records: 3  Duplicates: 0  Warnings: 0

    MariaDB [test]> insert into host_connection_info values (1), (3);
    Query OK, 2 rows affected (0.054 sec)
    Records: 2  Duplicates: 0  Warnings: 0

    MariaDB [test]> SELECT
        ->   COUNT(1),
        ->   host_connection_status.connection_time
        -> FROM
        ->   (SELECT id
        ->    FROM domain_tree) AS hosts_with_status
        ->   LEFT OUTER JOIN
        ->   (SELECT
        ->      domain_id,
        ->      'recent' AS connection_time
        ->    FROM
        ->      host_connection_info) AS host_connection_status
        ->     ON hosts_with_status.id = host_connection_status.domain_id
        -> GROUP BY host_connection_status.connection_time;
    +----------+-----------------+
    | COUNT(1) | connection_time |
    +----------+-----------------+
    |        1 | NULL            |
    |        2 | recent          |
    +----------+-----------------+
    2 rows in set (0.003 sec)
  • Bug #92524 - "Left join with datetime join condition produces wrong results". The bug was reported by Wei Zhao, who contributed a patch. Again, MariaDB 10.3 is not affected:
    MariaDB [test]> select B.* from h1 left join g B on h1.a=B.a where B.d=str_to_date('99991231',"%Y%m%d") and h1.a=1;
    +---+---------------------+
    | a | d                   |
    +---+---------------------+
    | 1 | 9999-12-31 00:00:00 |
    +---+---------------------+
    1 row in set (0.151 sec)

    MariaDB [test]> select B.* from h1 left join g B on h1.a=B.a and B.d=str_to_date
    ('99991231',"%Y%m%d") where h1.a=1;
    +---+---------------------+
    | a | d                   |
    +---+---------------------+
    | 1 | 9999-12-31 00:00:00 |
    +---+---------------------+
    1 row in set (0.002 sec)
  • Bug #92466 - "Case function error on randomly generated values". See also the related older Bug #86624 - "Subquery's RAND() column re-evaluated at every reference". These are either regressions compared to MySQL 5.6 (and MariaDB), or an unclear and weird change in behavior that can be worked around with some tricks (suggested by Oracle developers) to force materialization of the derived table. Essentially, the result depends on the execution plan - what else could we dream about?
  • Bug #92421 - "Queries with views and operations over local variables don't use indexes". Yet another case when MySQL 5.6 worked differently. As Roy Lyseng explained in comments:
    "... this is due to a deliberate choice that was taken when rewriting derived tables and views in 5.7: When a user variable was assigned a value in a query block, merging of derived tables was disabled.
    ...
    In 8.0, you can override this with a merge hint: /*+ merge(v_test) */, but this is unfortunately not implemented in 5.7.
    "
  • Bug #92209 - "AVG(YEAR(datetime_field)) makes an error result because of overflow". All recent MySQL versions and MariaDB 10.3.7 are affected.
  • Bug #92020 - "Introduce new SQL mode rejecting queries with results depending on query plan". Great feature request by Sveta Smirnova that shows current state of optimizer development properly. We need a feature for MySQL to stop accepting queries that may return different results depending on the execution plan. So, current MySQL considers different results when different execution plans are used normal! Sveta refers to her Bug #91878 - "Wrong results with optimizer_switch='derived_merge=ON';" as an example. MariaDB 10.3 is NOT affected by that bug.
  • Bug #91418 - "derived_merge causing incorrect results with distinct subquery and uuid()". From what I see in my tests, MariaDB 10.3.7 produces wrong results with derived_merge both ON and OFF, unfortunately.
  • Bug #91139 - "use index dives less often". In MySQL 5.7+ the default value of eq_range_index_dive_limit increased from 10 to 200, and this may negatively affect performance. As Mark Callaghan noted, when only one possible index exists, the optimizer doesn't need index dives to figure out how to evaluate the query.
  • Bug #90847 - "Query returns wrong data if order by is present". This is definitely a corner case, but still. MariaDB 10.3 returns correct result in my tests.
  • Bug #90398 - "Duplicate entry for key '<group_key>' error". I can not reproduce the last public test case on MariaDB 10.3.
  • Bug #89419 - "Incorrect use of std::max". It was reported based on code analysis by Zsolt Parragi. See also Bug #90853 - "warning: taking the max of a value and unsigned zero is always equal to the other value [-Wmax-unsigned-zero]". A proper compiler detects this.
  • Bug #89410 - "delete from ...where not exists with table alias creates an ERROR 1064 (42000)". MariaDB 10.3 is also affected. Both Oracle and PostgreSQL accept this syntax, while in MySQL and MariaDB we can use a multi-table delete syntax-based workaround, as suggested by Roy Lyseng.
  • Bug #89367 - "Storing result in a variable(UDV) causes query on a view to use derived tables", was reported by Jaime Sicam. This is a kind of regression in MySQL 5.7. MariaDB 10.3 and MySQL 8.0 are not affected. Let me quote a comment by Roy Lyseng:
    "In 5.7, we added a heuristic so that queries that assign user variables are by default materialized and not merged. However, we should have let the ALGORITHM=MERGE override this decision. This is a bug."
  • Bug #89182 - "Optimizer unable to ignore index part entering less optimal query plan". A nice report from Przemyslaw Malkowski. One of many cases where the "ref" vs "range" decision seems to be wrong based on costs. It looks like the optimizer still has parts that are heuristic/rule-based and/or do not take costs into account properly.
  • Bug #89149 - "SELECT DISTINCT on multiple TEXT columns is slow". Yet another regression in MySQL 5.7+.
Those are all the optimizer bugs reported in 2018 and still "Verified" that I wanted to discuss.

From the list above I can conclude the following:
  1. There are many simple enough cases where queries produce wrong results or get non-optimal execution plans in MySQL. For many of them, MariaDB's optimizer does a better job.
  2. The behavior of the optimizer for some popular use cases changed after MySQL 5.6, so take extra care to check queries and their results after an upgrade to MySQL 5.7+.
  3. The derived_merge optimization seems to cause a lot of problems for users in MySQL 5.7 and 8.0.
  4. It seems optimizer developers care enough to comment on bugs, suggest workarounds and explain decisions made.
Categories: Web Technologies

Combining tiered and leveled compaction

Planet MySQL - Fri, 10/19/2018 - 16:34
There are simple optimization problems for LSM tuning. For example, use leveled compaction to minimize space amplification and use tiered to minimize write amplification. But there are interesting problems that are harder to solve:
  1. maximize throughput given a constraint on write and/or space amplification
  2. minimize space and/or write amplification given a constraint on read amplification
To solve the first problem, use leveled compaction if it can satisfy the write amp constraint, else use tiered compaction if it can satisfy the space amp constraint; otherwise there is no solution. The lack of a solution might mean the constraints are unreasonable, but it can also mean we need to enhance LSM implementations to support more diversity in LSM tree shapes. Even when there is a solution using leveled or tiered compaction, there are solutions that would do much better were an LSM to support more varieties of tiered+leveled and leveled-N.
When I say solved above, I leave out that there is more work to find a solution even when tiered or leveled compaction is used. For both there are decisions about the number of levels and the per-level fanout. If minimizing write amp is the goal, then that is a solved problem. But there are usually more things to consider.
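The decision procedure just described can be sketched as a tiny helper. This is a hypothetical sketch: the amplification numbers passed in are made-up examples, not measured values.

```python
def choose_compaction(leveled_write_amp, tiered_space_amp,
                      max_write_amp, max_space_amp):
    # Per the text: prefer leveled if it meets the write-amp constraint,
    # else fall back to tiered if it meets the space-amp constraint.
    if leveled_write_amp <= max_write_amp:
        return "leveled"
    if tiered_space_amp <= max_space_amp:
        return "tiered"
    return None  # no solution with plain leveled or tiered

# Hypothetical amplification figures for one workload.
print(choose_compaction(20, 2, max_write_amp=25, max_space_amp=2))  # leveled
print(choose_compaction(20, 2, max_write_amp=10, max_space_amp=2))  # tiered
print(choose_compaction(20, 3, max_write_amp=10, max_space_amp=2))  # None
```

The `None` case is exactly where more LSM tree shapes (tiered+leveled, leveled-N) would help.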
Tiered+leveled
I defined tiered+leveled and leveled-N in a previous post. They occupy the middle ground between tiered and leveled compaction with better read efficiency than tiered and better write efficiency than leveled. They are not supported today by popular LSM implementations but I think they can and should be supported. 
While we tend to explain compaction as a property of an LSM tree (all tiered or all leveled) it is really a property of a level of an LSM tree and RocksDB already supports hybrids, combinations of tiered and leveled. For tiered compaction in RocksDB all levels except the largest use tiered. The largest level is usually configured to use leveled to reduce space amp. For leveled compaction in RocksDB all levels except the smallest use leveled and the smallest (L0) uses tiered.
So tiered+leveled isn't new but I think we need more flexibility. When a string of T and L is created from the per-level compaction choices then the regex for the strings that RocksDB supports is T+L or TL+. I want to support T+L+. I don't want to support cases where leveled is used for a smaller level and tiered for a larger level. So I like TTLL but not LTTL. My reasons for not supporting LTTL are:
  1. The benefit from tiered is less write amp and is independent of the level on which it is used. The reduction in write amp is the same whether tiered is used for L1, L2 or L3.
  2. The cost from tiered is more read and space amp, and that is dependent on the level on which it is used. The cost is larger for larger levels. When space amp is 2, more space is wasted on larger levels than on smaller levels. More IO read amp is worse for larger levels because they have a lower hit rate than smaller levels and more IO will be done. More IO implies more CPU cost from decompression and the CPU overhead of performing IO.
From above the benefit from using T is the same for all levels but the cost increases for larger levels so when T and L are both used then T (tiered) should be used on the smaller levels and L (leveled) on the larger levels.
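The shape strings discussed above can be checked mechanically. Here is a small sketch using Python regexes; the patterns mirror the T+L, TL+, and T+L+ notation from the text:

```python
import re

# RocksDB today: all-tiered with a leveled largest level (T+L),
# or all-leveled with a tiered L0 (TL+).
rocksdb_shapes = re.compile(r"T+L|TL+")
# Proposed: any run of tiered levels followed by any run of leveled levels.
proposed_shapes = re.compile(r"T+L+")

assert rocksdb_shapes.fullmatch("TTTL")          # tiered with leveled max level
assert rocksdb_shapes.fullmatch("TLLL")          # leveled with tiered L0
assert not rocksdb_shapes.fullmatch("TTLL")      # not supported today
assert proposed_shapes.fullmatch("TTLL")         # supported by the proposal
assert not proposed_shapes.fullmatch("LTTL")     # leveled before tiered: rejected
```

The proposal simply relaxes the grammar from "T+L or TL+" to "T+L+", while still rejecting any L that comes before a T.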
Leveled-N
I defined leveled-N in a previous post. Since then a co-worker, Maysam Yabandeh, explained to me that a level that uses leveled-N can also be described as two levels where the smaller uses leveled and the larger uses tiered. So leveled-N might be syntactic sugar in the LSM tree configuration language.
For example with an LSM defined using the triple syntax from here as (compaction type, fanout, runs-per-level) then this is valid: (T,1,8) (T,8,2) (L,8,2) (L,8,1) and has total fanout of 512 (8 * 8 * 8). The third level (L,8,2) uses leveled-N with N=2. Assuming we allow LSM trees where T follows L then the leveled-N level can be replaced with two levels: (L,8,1) (T,1,8). Then the LSM tree is defined as (T,1,8) (T,8,2) (L,8,1) (T,1,8) (L,8,1). These LSM trees have the same total fanout and total read/write/space amp. Compaction from (L,8,1) to (T,1,8) is special. It has zero write amp because it is done by a file move rather than merging/writing data so all that must be updated is LSM metadata to record the move.
So in general I don't support T after L but I do support it in the special case. Of course we can pretend the special case doesn't exist if we use the syntactic sugar provided by leveled-N. But I appreciate that Maysam discovered this.
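The fanout arithmetic above is easy to verify. A quick sketch, using the (type, fanout, runs-per-level) triples from the text:

```python
# Each level is (compaction_type, fanout, runs_per_level).
lsm_a = [("T", 1, 8), ("T", 8, 2), ("L", 8, 2), ("L", 8, 1)]
# Rewrite the leveled-N level (L,8,2) as (L,8,1) followed by (T,1,8).
lsm_b = [("T", 1, 8), ("T", 8, 2), ("L", 8, 1), ("T", 1, 8), ("L", 8, 1)]

def total_fanout(lsm):
    # Total fanout is the product of per-level fanouts.
    product = 1
    for _, fanout, _ in lsm:
        product *= fanout
    return product

print(total_fanout(lsm_a))  # 512, i.e. 8 * 8 * 8
print(total_fanout(lsm_b))  # 512 as well: the rewrite preserves total fanout
```

Both configurations have the same total fanout, consistent with leveled-N being syntactic sugar for a leveled level followed by a tiered one.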

Categories: Web Technologies

Percona Live Europe Presents: The Latest MySQL Replication Features

Planet MySQL - Fri, 10/19/2018 - 08:06

Considering the modern world of technology, where distributed systems play a key role, replication in MySQL® is at the very heart of that change. It is very exciting to deliver this presentation and to be able to show everyone the latest and greatest features that MySQL brings in order to continue the success it has always had.

The talk is suitable for anyone interested in knowing what Oracle is doing with MySQL replication. Old acquaintances will get familiar with new features already delivered and being considered, and newcomers to the MySQL ecosystem will see how great MySQL Replication has grown to be and how it fits in their business.

What I’m most looking forward to at Percona Live Europe…

We are always eager to get feedback about the product.

Moreover, MySQL being MySQL has a very large user base and, as such, is deployed and used in many different ways. It is very appealing and useful to continuously learn how our customers and users are making the most out of the product. Especially when it comes to replication, since MySQL replication infrastructure is an enabler for advanced and complex setups, making it a powerful and indispensable tool in virtually any setup nowadays.

The post Percona Live Europe Presents: The Latest MySQL Replication Features appeared first on Percona Community Blog.

Categories: Web Technologies

Effective Monitoring of MySQL with SCUMM Dashboards Part 1

Planet MySQL - Fri, 10/19/2018 - 02:17

We added a number of new dashboards for MySQL in our latest release of ClusterControl 1.7.0, and in our previous blog we showed you How to Monitor Your ProxySQL with Prometheus and ClusterControl.

In this blog, we will look at the MySQL Overview dashboard.

So, we have enabled the Agent Based Monitoring under the Dashboard tab to start collecting metrics from the nodes. Take note that when enabling Agent Based Monitoring, you have the option to set the “Scrape Interval (seconds)” and “Data retention (days)”. Scrape Interval is where you set how aggressively Prometheus will harvest data from the target, and Data Retention is how long you want to keep the data collected by Prometheus before it’s deleted.

When enabled, you can identify which cluster has agents and which one has agentless monitoring.

Compared to the agentless approach, the granularity of your data in graphs will be higher with agents.

The MySQL Graphs

The latest version of ClusterControl 1.7.0 (which you can download for free - ClusterControl Community) has the following MySQL Dashboards for which you can gather information for your MySQL servers. These are MySQL Overview, MySQL InnoDB Metrics, MySQL Performance Schema, and MySQL Replication.

We’ll cover in details the graphs available in the MySQL Overview dashboard.

MySQL Overview Dashboard

This dashboard contains the usual important variables or information regarding the health of your MySQL node. The graphs contained on this dashboard are specific to the node selected upon viewing the dashboards as seen below:

It consists of 26 graphs, but you might not need all of them when diagnosing problems. However, these graphs provide a vital representation of the overall metrics for your MySQL servers. Let’s go over the basic ones, as these are probably the most common things a DBA will routinely look at.

The first four graphs shown above, along with MySQL’s uptime, queries per second, and buffer pool information, are the most basic pointers we might need. From the graphs displayed above, here are their representations:

  • MySQL Connections
    This is where you want to check your total client connections thus far allocated in a specific period of time.
  • MySQL Client Thread Activity
    There are times when your MySQL server can be very busy. For example, it might be expected to receive a surge in traffic at a specific time, and you want to monitor your running threads’ activity. This graph is really important to look at. Your query performance can go south if, for example, a large update causes other threads to wait to acquire a lock, which would lead to an increased number of running threads. The cache miss rate is calculated as Threads_created/Connections.
  • MySQL Questions
    These are the queries running in a specific period of time. A thread might be a transaction composed of multiple queries and this can be a good graph to look at.
  • MySQL Thread Cache
    This graph shows the thread_cache_size value, threads that are cached (threads that are reused), and threads that are created (new threads). You can check this graph for instances where, for example, you notice a high number of incoming connections and your threads created increases rapidly, which suggests your read queries need tuning. For example, if your Threads_running / thread_cache_size > 2, then increasing your thread_cache_size may give a performance boost to your server. Take note that the creation and destruction of threads are expensive. However, in recent versions of MySQL (>=5.6.8), this variable is autosized by default, so you might consider leaving it untouched.
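The thread-cache arithmetic above can be sketched as a small calculation. This is a sketch: the input values are hypothetical examples of what you might read from SHOW GLOBAL STATUS and SHOW VARIABLES, and the threshold follows the heuristic quoted in the text.

```python
def thread_cache_miss_rate(threads_created, connections):
    # Fraction of connections that could not reuse a cached thread,
    # i.e. Threads_created / Connections.
    return threads_created / connections

def should_grow_thread_cache(threads_running, thread_cache_size):
    # Heuristic from the text: if Threads_running / thread_cache_size > 2,
    # increasing thread_cache_size may give a performance boost.
    return threads_running / thread_cache_size > 2

# Hypothetical values read from SHOW GLOBAL STATUS / SHOW VARIABLES.
print(thread_cache_miss_rate(150, 10000))   # 0.015, i.e. a 1.5% miss rate
print(should_grow_thread_cache(25, 10))     # True: 25 / 10 > 2
```

On modern MySQL (>=5.6.8) the autosized default is usually fine, so treat a calculation like this as a diagnostic aid rather than a reason to tune blindly.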

The next four graphs are MySQL Temporary Objects, MySQL Select Types, MySQL Sorts, and MySQL Slow Queries. These graphs are related to each other, especially if you are diagnosing long-running queries and large queries that need optimization.

  • MySQL Temporary Objects
    This graph is a good source to rely upon if you want to monitor long-running queries that end up using disk instead of in-memory temporary tables or files. It’s a good place to start looking for periodic occurrences of queries that could add up to create disk space issues, especially during odd times.
  • MySQL Select Types
    One source of bad performance is queries that use full joins, table scans, or range selects that don’t use any indexes. This graph shows how your queries perform and which of full joins, full range joins, select ranges, and table scans has the highest trend.
  • MySQL Sorts
    This graph helps diagnose queries that use sorting, and the ones that take a long time to finish.
  • MySQL Slow Queries
    Trends of your slow queries are collected in this graph. This is very useful, especially for diagnosing how often your queries are slow. What needs to be tuned? It could be a too-small buffer pool, tables that lack indexes and force a full-table scan, logical backups running on an unexpected schedule, etc. Using our Query Monitor in ClusterControl along with this graph is beneficial, as it helps determine slow queries.

The next graphs cover network activity, table locks, and the internal memory that MySQL consumes during its activity.

  • MySQL Aborted Connections
    The number of aborted connections will render on this graph. This covers the aborted clients such as where the network was closed abruptly or where the internet connection was down or interrupted. It also records the aborted connects or attempts such as wrong passwords or bad packets upon establishing a connection from the client.
  • MySQL Table Locks
    Trends for tables that request a table lock that has been granted immediately, and for tables that request a lock that has not been acquired immediately. For example, if you have table-level locks on MyISAM tables and incoming requests on the same table, these cannot be granted immediately.
  • MySQL Network Traffic
    This graph shows the trends of the inbound and outbound network activity of the MySQL server. “Inbound” is the data received by the MySQL server, while “Outbound” is the data sent or transferred by the server. This graph is best to check if you want to monitor your network traffic, especially when diagnosing why your traffic is moderate but the outbound transferred data is very high – for example, BLOB data.
  • MySQL Network Usage Hourly
    Same as the network traffic which shows the Received and Sent data. Take note that it’s based on ‘per hour’ and labeled with ‘last day’ which will not follow the period of time you selected in the date picker.
  • MySQL Internal Memory Overview
    This graph is familiar for a seasoned MySQL DBA. Each of these legends in the bar graph are very important especially if you want to monitor your memory usage, your buffer pool usage, or your adaptive hash index size.

The following graphs show the counters a DBA can rely upon, such as statistics for selects, inserts, and updates, the number of times master status or SHOW VARIABLES has been executed, and whether you have bad queries doing table scans or not using indexes (by looking over the read_* counters), etc.


  • Top Command Counters (Hourly)
    These are the graphs you would likely check whenever you want to see statistics for your inserts, deletes, updates, and executed commands such as gathering the processlist, slave status, show status (health statistics of the MySQL server), and many more. This is a good place to check which MySQL command counters are topmost and whether some performance tuning or query optimization is needed. It might also allow you to identify which commands are running aggressively when they are not needed.
  • MySQL Handlers
    Oftentimes, a DBA will go over these handlers to check how the queries are performing in the MySQL server. Basically, this graph covers counters from the Handler API of MySQL. The most common handler counters for the storage API in MySQL are Handler_read_first, Handler_read_key, Handler_read_last, Handler_read_next, Handler_read_prev, Handler_read_rnd, and Handler_read_rnd_next. There are lots of MySQL handlers to check; you can read about them in the MySQL documentation.
  • MySQL Transaction Handlers
    If your MySQL server uses XA transactions or SAVEPOINT / ROLLBACK TO SAVEPOINT statements, then this graph is a good reference to look at. You can also use it to monitor all of your server’s internal commits. Take note that the Handler_commit counter increments even for SELECT statements, unlike insert/update/delete statements, which go to the binary log during a call to the COMMIT statement.

The next graph shows trends about process states and their hourly usage. There are lots of key points in the bar graph legend that a DBA would check: disk space issues, connection issues (and whether your connection pool is working as expected), high disk I/O, network issues, etc.

  • Process States/Top Process States Hourly
    This graph is where you can monitor the top thread states of the queries running in the processlist. It is very informative and helpful for DBA tasks, since you can examine any outstanding statuses that need resolution. For example, if the Opening tables state is very high and its minimum value is close to its maximum, it could indicate that you need to adjust table_open_cache. If such a statistic is high and you notice a slowdown of your server, it could indicate that your server is disk-bound and you might need to consider increasing your buffer pool. If you have a high number of Creating tmp table states, then you might have to check your slow log and optimize the offending queries. You can check out the complete list of MySQL thread states in the manual.

The next graphs we’ll be checking cover the query cache, the MySQL table definition cache, and how often MySQL opens system files.


  • MySQL Query Cache Memory/Activity
    These graphs are related to each other. If you have query_cache_size <> 0 and query_cache_type <> 0, then this graph can be of help. However, in newer versions of MySQL the query cache has been deprecated, as it is known to cause performance issues, so you might not need it in the future. The most recent version, MySQL 8.0, removes the query cache entirely and instead provides several other strategies for handling cached information in the memory buffers, with drastic performance improvements.
  • MySQL File Openings
    This graph shows the trend of files opened since the MySQL server’s uptime, excluding files such as sockets or pipes. It also does not include files that are opened by the storage engine, since those have their own counter, Innodb_num_open_files.
  • MySQL Open Files
    This graph is where you want to check your InnoDB files currently held open, the current MySQL open files, and your open_files_limit variable.
  • MySQL Table Open Cache Status
    If the table_open_cache is set very low, this graph will show you the tables that fail the cache (newly opened tables) or miss due to overflow. If you encounter a high number of ‘Opening tables’ statuses in your processlist, this graph will serve as your reference to determine whether you need to increase the table_open_cache variable.
  • MySQL Open Tables
    Relative to MySQL Table Open Cache Status, this graph is useful in certain occasions, such as when you want to determine whether to increase your table_open_cache, or to lower it if you notice a high increase of open tables or the Open_tables status variable. Note that table_open_cache could take a large amount of memory, so you have to set it with care, especially in production systems.
  • MySQL Table Definition Cache
    If you want to check the number of your Open_table_definitions and Opened_table_definitions status variables, then this graph is what you need. For newer versions of MySQL (>=5.6.8), you might not need to change the value of this variable and can use the default, since it has an auto-resizing feature.
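The table cache graphs above boil down to a hit/miss calculation over the Table_open_cache_hits and Table_open_cache_misses status counters (available in MySQL 5.6.6+). A minimal sketch, with invented sample values:

```python
# Sketch: table cache miss rate from the Table_open_cache_hits/misses
# status counters. Values are invented for illustration; a sustained high
# miss rate suggests raising table_open_cache.

def table_cache_miss_rate(status):
    """Fraction of table-open requests that missed the table_open_cache."""
    hits = status["Table_open_cache_hits"]
    misses = status["Table_open_cache_misses"]
    total = hits + misses
    return misses / total if total else 0.0

status = {"Table_open_cache_hits": 9500, "Table_open_cache_misses": 500}
print(f"{table_cache_miss_rate(status):.1%}")  # → 5.0%
```

A low single-digit miss rate like this one is usually acceptable; a rate climbing toward tens of percent is the numeric counterpart of the “Opening tables” symptom described above.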
Conclusion

The SCUMM addition in the latest version of ClusterControl 1.7.0 provides significant new benefits for a number of key DBA tasks. The new graphs can help easily pinpoint the cause of issues that DBAs or sysadmins would typically have to deal with and help find appropriate solutions faster.

We would love to hear your experience and thoughts on using ClusterControl 1.7.0 with SCUMM (which you can download for free - ClusterControl Community).

In part 2 of this blog, I will discuss Effective Monitoring of MySQL Replication with SCUMM Dashboards.

Tags:  MySQL monitoring dashboards clustercontrol scumm
Categories: Web Technologies

ProxySQL 1.4.11 and Updated proxysql-admin Tool Now in the Percona Repository

Planet MySQL - Thu, 10/18/2018 - 14:19

ProxySQL 1.4.11, released by ProxySQL, is now available for download in the Percona Repository along with an updated version of Percona’s proxysql-admin tool.

ProxySQL is a high-performance proxy, currently for MySQL and its forks (like Percona Server for MySQL and MariaDB). It acts as an intermediary for client requests seeking resources from the database. René Cannaò created ProxySQL for DBAs as a means of solving complex replication topology issues.

The ProxySQL 1.4.11 source and binary packages available at https://percona.com/downloads/proxysql include ProxySQL Admin – a tool, developed by Percona to configure Percona XtraDB Cluster nodes into ProxySQL. Docker images for release 1.4.11 are available as well: https://hub.docker.com/r/percona/proxysql/. You can download the original ProxySQL from https://github.com/sysown/proxysql/releases. The documentation is hosted on GitHub in the wiki format.

Improvements
  • mysql_query_rules_fast_routing is enabled in ProxySQL Cluster. For more information, see #1674 at GitHub.
  • In this release, the rpmdb checksum error is ignored when building ProxySQL in Docker.
  • By default, the permissions for proxysql.cnf are set to 600 (only the owner of the file can read it or make changes to it).
Bugs Fixed
  • Fixed the bug that could cause crashing of ProxySQL if IPv6 listening was enabled. For more information, see #1646 at GitHub.

ProxySQL is available under Open Source license GPLv3.

Categories: Web Technologies

Where you can find MySQL in October - December 2018 - part 2.

Planet MySQL - Thu, 10/18/2018 - 08:59

As a continue of the previous blog announcement posted on Oct 16, please find below a list of conferences & events where you can find MySQL team &/or MySQL Community during the period of Oct-Dec 2018:

November 2018 - cont.

  • PerconaLive, Frankfurt, Germany, November 5-7, 2018
    • We are happy to inform that MySQL is going to be Silver sponsor of PerconaLive Europe 2018! You will be able to find us on the MySQL booth in the expo hall as well as attend multiple MySQL talks given by our colleagues. The talks are scheduled as follows:
      • Tutorial on: "MySQL InnoDB Cluster in a Nutshell : The Saga Continues with 8.0" by Frédéric Descamps, the MySQL Community Manager, scheduled for Monday, @9:00am-12:00pm.
      • "MySQL 8.0 Performance: Scalability & Benchmarks" by Dimitri KRAVTCHUK, the Performance Architect, talk is scheduled for Tuesday @12:20-13:10.
      • "MySQL Group Replication: the magic explained" by Frédéric Descamps, the MySQL Community Manager, scheduled for Tuesday @14:20-15:10.
      • "Upgrading to MySQL 8.0 and a More Automated Experience" by Dmitry Lenev, the Senior Software Developer, scheduled for Tuesday @ 15:20-16:10.
      • "More SQL in MySQL 8.0" by Norvald Ryeng, the Senior Developer Manager - Optimizer Team, scheduled for Tuesday 17:25-17:50.
      • "The Latest MySQL Replication Features" by Jorge Tiago, the Senior Software Developer, scheduled for Wednesday @14:20-15:10.
      • "Developing Applications with Node.js and the MySQL Document Store" by Johannes Schlüter, the Principal Software Developer, scheduled for Wednesday @17:00-17:25.
    • Come listen to the MySQL sessions & stop by our booth to talk to our staff at PerconaLive 2018!
  • PHP.RUHR, Dortmund, Germany, November 8, 2018
    • The MySQL team is going to be part of this technical/PHP show. You can come and listen to a MySQL talk on the Mainstage: Developer track: "MySQL 8.0 - The new Developer API", given by Mario Beck, the MySQL Sales Consulting Manager for EMEA. There will also be a Q&A session about MySQL in the expo area, where you can come and ask questions.
  • Highload++, Moscow, Russia, November 8-9, 2018
    • The MySQL team is going to be part of this technical conference in Moscow. You can find us at our MySQL booth in the expo area, and you should also be able to find a MySQL talk in the schedule (acceptance not yet confirmed). We are looking forward to talking to you @ Highload++ this year!!!
  • SeaGL, Seattle, US, November 9-10, 2018
    • MySQL community team supports this GNU/Linux conference in Seattle. 
  • PHP[World], Washington DC, US, November 14-15, 2018 
    • MySQL is a Workshop sponsor this year. Unfortunately no talk this year, but we are on the waiting list for a lightning talk. Please watch the organizers' website.
  • BGOUG, Pravets, Bulgaria, November 14-15, 2018
    • As a tradition, MySQL is a conference sponsor of this Bulgarian Oracle User Group regular conference. This time with the following talk:
      • "Data Masking in MySQL Enterprise 5.7 and 8" given by Georgi Kodinov, the Senior Software Developer Manager for MySQL. 
    • Please come listen to Georgi's talk and ask him questions.
  • DOAG, Nuremberg, Germany, November 20-23, 2018
    • Same as in previous years we are going to be part of this conference organized by German Oracle User Group. You can find MySQL staff at the Oracle booth (Place 320) as well as attend several MySQL talks. You can find the talks here.
  • Feira do Conhecimento, Fortaleza, Brazil, November 21-24, 2018
    • The Science, Technology and Higher Education Secretariat (Secitece) will hold the second edition of the Knowledge Fair - Science, Technology, Innovation and Business at the Ceará Event Center (East Pavilion) and we are happy to be part of this event!! A local MySQL Sales representative will be there to answer all MySQL-related questions, and a MySQL talk on "Innovation, Business & Technology" has been approved. Please watch the organizers' website for further updates.
  • PyCon HK, Hong Kong, November 23-24, 2018
    • MySQL is a Bronze sponsor of this Python show, again this year without a booth, but with a MySQL talk on "NoSQL Development for MySQL Document Store using Python" by Ivan Ma, the MySQL Principal Sales Consultant.

December 2018

  • Tech18, UKOUG, Liverpool, UK, December 3-5, 2018
    • As a tradition MySQL will be part of this Oracle User Group Conference in the UK. You will be able to find our staff at the Oracle booth in the expo area.
  • IT.Tage 2018, Frankfurt, Germany, December 10-13, 2018
    • Our pleasure to announce that this year MySQL is again part of the IT.Tage event. This time together with Oracle Developer & Linux team. You will have an opportunity to find us all at the shared Oracle booth as well as attend several Oracle's talks. 
    • For MySQL do not miss the opportunity to listen what is new in MySQL 8 during following session:
      • "MySQL 8 - MySQL as a Document Store Database" given by Carsten Thalheimer, the MySQL Principal Sales Consultant, scheduled for December 12, 2018 @ 14:30-15:15 in the Database track. 
    • We are looking forward to talking to you there!
  • OpenSource Conference Fukuoka, Fukuoka, Japan (December 8, 2018)
    • MySQL is a Gold sponsor here. You will be able to find us at the MySQL booth in the expo area, as well as a MySQL talk, "State of Dolphin", given by Yoshiaki Yamasaki, the MySQL Senior Sales Consultant for the Asia Pacific and Japan region; general product updates will be covered during the talk.
  • OpenSource Enterprise, Tokyo, Japan, (December 14, 2018)
    • Again, MySQL is a Gold sponsor with MySQL booth & talk on the same topic: "State of Dolphin" by Yoshiaki Yamasaki. 

We will continue updating you about upcoming events & conferences where you can find MySQL team at. 


Categories: Web Technologies

Picking a Deployment Method: Staging versus INI

Planet MySQL - Thu, 10/18/2018 - 08:05

Continuent Clustering is an extraordinarily flexible tool, with options at every layer of operation.

In this blog post, we will describe and discuss the two different methods for installing, updating and upgrading Continuent Clustering software.

When first designing a deployment, the question of installation methodology is answered by inspecting the environment and reviewing the customer’s specific needs.

Staging Deployment Methodology

All for One and One for All

Staging deployments were the original method of installing Continuent Clustering, and relied upon command-line tools to configure and install all cluster nodes at once from a central location called the staging server.

This staging server (which could be one of the cluster nodes) requires SSH access to all database nodes.

The simple steps include:

  1. cd to the top-level software staging directory. The default directory for extracting new tarballs is under /opt/continuent/software
  2. extract the software tarball
  3. cd into the newly-extracted directory
  4. configure with tools/tpm configure
  5. install using the tools/tpm install command

During execution, the tools/tpm install command uses SSH to install the Continuent software on all nodes in the cluster in parallel. For example, you could install a 4-site composite cluster with 3 nodes per site across multiple global regions with a single command and all 12 nodes would be handled at once.

Future configuration updates would be handled the same way (i.e. via tools/tpm update from the extracted software directory), and normally would affect all the cluster nodes at once.

The --no-connectors option may also be used to allow for the graceful cycling of the Connector connectivity layer.

INI Deployment Methodology

The Strength of Individuality

The INI methodology was developed with two situations in mind – no SSH access, and the ability to support per-node configuration files for automation tools like Chef, Puppet and Ansible.

In this deployment model, an INI file is placed on every host associated with the cluster first (/etc/tungsten/tungsten.ini by default). This is equivalent to running the tools/tpm configure step from the staging method.

In the INI world, any tpm install or tpm update reads the configuration for that one node and executes the install or update ON THAT NODE ONLY. The presence of an INI file tells tpm to limit install and update actions to the current host.

In Continuent Clustering v4.0 and later, tpm will automatically search all INI files within the /etc/tungsten and current directories.

Once the INI file is populated on all cluster nodes, the procedure is similar to the staging method, except for the fact that it must occur on EVERY node individually.

The simple steps include:

  1. cd to the top-level software staging directory. The default directory for extracting new tarballs is under /opt/continuent/software
  2. extract the software tarball
  3. cd into the newly-extracted directory
  4. install using the tools/tpm install command

Note that the tpm configure step is gone since the configuration is now stored in the INI file.
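For illustration, a minimal /etc/tungsten/tungsten.ini might look like the fragment below. The host names and option values are assumptions for illustration, not a drop-in configuration; check the Continuent tpm reference documentation for the authoritative option list.

```ini
# /etc/tungsten/tungsten.ini -- the same file is placed on every node;
# tpm then acts only on the local host. Host names (db1..db3) and
# credentials below are hypothetical.
[defaults]
application-user=app_user
application-password=secret
replication-user=tungsten
replication-password=secret
start-and-report=true

[mycluster]
topology=clustered
master=db1
members=db1,db2,db3
connectors=db1,db2,db3
```

Because each node reads the same file, automation tools like Chef, Puppet or Ansible can simply template and distribute it, then run tpm install or tpm update locally on each host.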

Staging versus INI – Differences

What’s the difference between the two methods of deployment?

Here are some of the key differences between Staging and INI deployments:

Staging

  • there is just one staging directory needed which resides on a single host (which may be a member cluster node or a separate host)
  • configuration is performed centrally in the staging directory on a single host
  • requires passwordless SSH
  • installs and updates multiple cluster nodes
  • provides a one-command cluster upgrade path

INI

  • every node has a locally-extracted staging directory
  • configuration is performed locally on each node via the /etc/tungsten/tungsten.ini file
  • does NOT require passwordless SSH
  • installs and updates one single cluster node at a time
  • installs and updates must be performed on every node one-by-one
  • there is NO one-command cluster upgrade path
  • ideal for automated environments
  • possible to have multiple configuration files in /etc/tungsten

Click here to read more about “Comparing Staging and INI tpm Methods”.

Staging versus INI – Decision

We always recommend the INI deployment for many reasons:

– INI deployments have a very clear standard configuration location
– INI deployments are very easy to tweak on a per-host basis
– INI deployments are cloud and devops-friendly
– INI is required for certain topologies (i.e. Composite Multimaster)

Deployment Type – How Do I Know?

Determining the deployment type after install

To determine which method your current deployment uses, check the output of:

shell> tpm query deployments | grep deployment_external_configuration_type
"deployment_external_configuration_type": "ini",

If you see the value of "ini", then the INI method was used to deploy your cluster.

~or~

shell> ls -l /etc/tungsten/*
-rw-rw-r-- 1 tungsten tungsten 1347 Oct 5 16:06 /etc/tungsten/tungsten.ini

If you use the ls command and see at least one .ini file, then you are using the INI method to configure your cluster.

Staging Directory – How Do I Know?

Determining the staging directory location

To determine the staging directory location from where your cluster was installed, check the output of:

shell> tpm query staging
# Installed from tungsten@staging-host:/opt/continuent/software/tungsten-clustering-5.0.1-136

When using the INI method of configuration, the staging-host in the above example output would display the host name of the local host.

Continuent Clustering is the most flexible, performant global database layer available today – use it underlying your SaaS offering as a strong base upon which to grow your worldwide business!

For more information, please visit https://www.continuent.com/solutions

Want to learn more or run a POC? Contact us.

Categories: Web Technologies

MySQL Enterprise Monitor 8.0.3 has been released

Planet MySQL - Thu, 10/18/2018 - 06:14

The MySQL Enterprise Tools Development Team is pleased to announce the maintenance release of MySQL Enterprise Monitor 8.0.3 is now available for download on the My Oracle Support (MOS) web site. It will also be available for download via the Oracle Software Delivery Cloud in a few days.

If you are not familiar with MySQL Enterprise Monitor, it is the best-in-class tool for monitoring and management of your MySQL assets and is part of MySQL Enterprise Edition and MySQL Cluster Carrier Grade Edition subscriptions. You are invited to give it a try using our 30-day free customer trial. Go to http://www.mysql.com/trials, or contact Sales at http://www.mysql.com/about/contact.

To download MySQL Enterprise Monitor, go to My Oracle Support, choose the "Patches & Updates" tab, and then choose the "Product or Family (Advanced Search)" side tab in the "Patch Search" portlet. You will also find the binaries on the Oracle Software Delivery Cloud soon. Type "MySQL Enterprise Monitor 8.0.3" in the search box.

You can find the highlights of MySQL Enterprise Monitor 8.0 on the "What's New" section of the manual and more information on the contents of the 8.0.3 maintenance release in the change log.

Please open a bug or a ticket on My Oracle Support to report problems, request features, or give us general feedback about how this release meets your needs.

Thanks and Happy Monitoring!

Useful links

What's New in 8.0
Change log
Installation documentation
Product information
Frequently Asked Questions

Categories: Web Technologies

Advice on pt-stalk for connection saturated servers

Planet MySQL - Wed, 10/17/2018 - 16:30

Given an environment where a high volume web application is prone to opening many connections to backend resources, using a utility like pt-stalk is important.  When performance or availability affecting events like innodb lock waits or connection saturation occur, pt-stalk helps give you information you may need in troubleshooting what was happening.

The tendency may be to create multiple pt-stalks for various conditions.  This can be a poor decision when your server is dealing with both lock contention and high connections. When pt-stalk triggers, it triggers multiple simultaneous connections to MySQL to get the full processlist, lock waits, innodb transactions, slave status and other attributes.  Pt-stalk has the concept of sleeping a number of seconds after triggering, but once that time expires, the trigger may fire again, compounding the issue.  Put simply, pt-stalk can absorb the last few remaining connections on your database, particularly if you use extra_port (and run pt-stalk on the extra_port) and have a relatively low number of extra_max_connections.

Advice:  Stick to using only one of the built-in functions (like “processlist”) if triggering when your processlist is large is enough for you.  Alternatively, write your own trg_plugin() function encompassing multiple tests relevant to your environment, if you need more than one check.

Unfortunately I cannot share the one I just wrote at this time (I will need to write a more generic one to share later).  It checks processlist length, replication lag, innodb_trx wait time, and innodb_lock_waits, so that I could fold four of our more relevant checks into one pt-stalk and avoid the “connection stack-up” when MySQL was having an issue and multiple stalks were firing.
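As a rough idea of the shape such a plugin file could take, here is a hedged sketch. Everything in it is an assumption for illustration (the queries, the MYSQL_CMD wrapper, the file path); pt-stalk sources the file passed to --function and compares the number printed by trg_plugin against --threshold.

```shell
# Sketch of a custom pt-stalk plugin file, e.g. /root/stalk-checks.sh,
# used along the lines of:
#   pt-stalk --function /root/stalk-checks.sh --threshold 100 ...
# MYSQL_CMD is a hypothetical wrapper so the function can be exercised
# without a live server; in production it would be something like
# "mysql --defaults-file=/root/.my.cnf -BNe" (ideally on the extra_port).
MYSQL_CMD=${MYSQL_CMD:-"mysql -BNe"}

trg_plugin() {
    local threads lock_waits worst
    # Connection saturation: current processlist length.
    threads=$($MYSQL_CMD "SELECT COUNT(*) FROM information_schema.processlist")
    # Lock contention: current InnoDB lock waits (pre-8.0 schema assumed).
    lock_waits=$($MYSQL_CMD "SELECT COUNT(*) FROM information_schema.innodb_lock_waits")
    threads=${threads:-0}
    lock_waits=${lock_waits:-0}
    # Print the worst value so a single stalk covers both conditions.
    if [ "$threads" -gt "$lock_waits" ]; then
        worst=$threads
    else
        worst=$lock_waits
    fi
    echo "$worst"
}
```

Further checks (replication lag, innodb_trx wait time) can be folded in the same way, each contributing to the single number that pt-stalk evaluates, so only one stalk ever opens connections to the server.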

Categories: Web Technologies

Percona Live Europe Presents: MariaDB System-Versioned Tables

Planet MySQL - Wed, 10/17/2018 - 09:14

System-versioned tables, or temporal tables, are a typical feature of proprietary database management systems like DB2, Oracle and SQL Server. They also appeared at some point in PostgreSQL, but only as an extension; and also in CockroachDB, but in a somewhat limited fashion.

The MariaDB® implementation is the first appearance of temporal tables in the MySQL ecosystem, and the most complete implementation in the open source world.

My presentation will be useful for analysts, and some managers, who will definitely benefit from learning how to use temporal tables. Statistics about how data evolves over time is an important part of their job. This feature will allow them to query data as it was at a certain point in time. Or to query how data changed over a period, including rows that were added, deleted or modified.

Developers will also find this feature useful, if they deal with data versioning or auditing. Recording the evolution of data into a database is not easy – several solutions are possible, but none is perfect. Streaming data changes to some event-based technology is also complex, and sometimes it’s simply a waste of resources. System-versioned tables are a good solution for many use cases.

And of course, DBA’s. Those guys will need to know what this feature is about, suggest it when appropriate, and maintain it in production systems.

More generally, many people are interested in understanding MariaDB’s unique features, as well as its MySQL ones. Their approach allows them to choose “the right tool for the right purpose”.

What I’m looking forward to…

I am excited about the Percona Live agenda. A session that I definitely want to attend is MySQL Replication Crash Safety. I find the talks about technology limitations and flaws extremely useful and interesting. Jean-François has a long series of writings on MySQL replication and crash-safety, and I have questions for him.

I also like the evolution that PMM and its components have had over the years. I want to understand how to use them best in my new job, so I am glad to see that there will be several sessions on the topic. I plan to attend some sessions about PMM and Prometheus.

Performance Analyses Technologies for Databases makes me think of cases where I saw a technology evaluated inappropriately, and of talks I had with people who were impressed by blog posts showing benchmarks they didn't fully understand. I will definitely attend.

And finally, I plan to learn something about ClickHouse, MyRocks and TiDB.

See you there!

The post Percona Live Europe Presents: MariaDB System-Versioned Tables appeared first on Percona Community Blog.

Categories: Web Technologies

Uber’s Big Data Platform: 100+ Petabytes with Minute Latency

Planet MySQL - Wed, 10/17/2018 - 09:00

Responsible for cleaning, storing, and serving over 100 petabytes of analytical data, Uber's Hadoop platform ensures data reliability, scalability, and ease-of-use with minimal latency.

The post Uber’s Big Data Platform: 100+ Petabytes with Minute Latency appeared first on Uber Engineering Blog.

Categories: Web Technologies

Continuent Clustering 5.3.3/5.3.4 and Tungsten Replicator 5.3.3/5.3.4 Released

Planet MySQL - Tue, 10/16/2018 - 16:00

Continuent is pleased to announce that Continuent Clustering 5.3.4 and Tungsten Replicator 5.3.4 are now available!

Release 5.3.4 was released shortly after 5.3.3 due to a specific bug in our reporting tool tpm diag. All of the major changes except this one fix are in the 5.3.3 release.

Our 5.3.3/4 release fixes a number of bugs and has been released to improve stability in certain parts of the products.

Highlights common to both products:

  • Fixed an issue with LOAD DATA INFILE
  • Replicator now outputs the filename of the file when using thl to show events
  • tpm diag has been improved in the way MySQL information is extracted, and it now supports Net::SSH options

Highlights in the clustering product:

  • Tungsten Manager stability has been improved by identifying and fixing some memory leaks.
  • A number of Tungsten Connector bugs have been fixed relating to bridge mode, connectivity to hosts, and memory usage on the manager service
  • tpm has been improved to provide backup information

Highlights for the replicator product only:

  • Fixed an issue where trepctl logging did not include the right information for multiple services

Release notes:

https://docs.continuent.com/tungsten-clustering-5.3/release-notes-5-3-4.html

https://docs.continuent.com/tungsten-clustering-5.3/release-notes-5-3-3.html

https://docs.continuent.com/tungsten-replicator-5.3/release-notes-5-3-3.html

https://docs.continuent.com/tungsten-replicator-5.3/release-notes-5-3-4.html


Categories: Web Technologies

MySQL InnoDB Cluster Setup and Server Failover / Restart

Planet MySQL - Tue, 10/16/2018 - 08:51
MySQL InnoDB Cluster Setup and Configuration

Reference :
https://dev.mysql.com/doc/refman/8.0/en/mysql-innodb-cluster-production-deployment.html

Pre-requisite Assumptions:
    MySQL Server 8.0.12+ installation : /usr/local/mysql
    MySQL Shell installation : /usr/local/shell
    MySQL Router Installation : /usr/local/router
    Hostname and IP must be resolvable.   
        Make sure /etc/hosts has the correct IP and hostname entries for ALL MySQL Server machines.

The Video [ https://youtu.be/_jR_bJGTf-o ] provides the full steps showing the Demonstration of
1. Setting up of 3 x MySQL Servers on 1 VM
    a.  Configuration my1.cnf, my2.cnf and my3.cnf

    b.  MySQL  Server (mysqld) initialization
         # mysqld --defaults-file=<config file> --initialize-insecure

    c.  Start up MySQL Server
        # mysqld_safe --defaults-file=<config file> &
        
    Note : It is possible to setup MySQL Server on different machines.  Steps should be the same.
   
2. Using MySQL Shell (mysqlsh) to configure the MySQL Instance (LOCALLY)
    a. mysqlsh > dba.configureInstance( "root@localhost:<port>", {clusterAdmin:"gradmin", clusterAdminPassword:"grpass"})
   
    The configuration should be done on each of the servers (LOCALLY), because by default the MySQL installation creates 'root@localhost', which can only access the database LOCALLY.

3.  Execute the SQL "reset master; reset slave;" on all the servers to make sure ALL data is in sync. (If data was imported to all nodes, it is good to reset the state.)
    # mysql -uroot -h127.0.0.1 -P<port> -e "reset master;reset slave;"

4. Creating the Cluster
     a.  Connect to any one of the MySQL Server with "mysqlsh"
     # mysqlsh --uri gradmin:grpass@<hostname1>:<port1>
     mysqlsh> var cl = dba.createCluster('mycluster')
     b. Add the 2 more instances to the Cluster
     mysqlsh> cl.addInstance('gradmin:grpass@<hostname2>:<port2>')
     mysqlsh> cl.addInstance('gradmin:grpass@<hostname3>:<port3>')
     c. Show the status of the Cluster
      mysqlsh> cl.status()

The MySQL InnoDB Cluster should have been configured and running.

5. MySQL Router Setup and Startup
   a. Bootstrapping the configuration (Assuming /home/mysql/config/ folder exists)
       # mysqlrouter --bootstrap=gradmin:grpass@<hostname1>:<port1> --directory /home/mysql/config/mysqlrouter01 --force
    b. Start up MySQL Router
       # cd /home/mysql/config/mysqlrouter01;./start.sh
    The 'start.sh' script is created by the bootstrapping process.   Once it is started, the default port (6446) is for RW routing and port (6447) is for RO routing.

6. Testing MySQL Routing for RW
    # while true; do
    >    sleep 1
    >    mysql -ugradmin -pgrpass -h127.0.0.1 -P6446 -e "select @@hostname, @@port;"
    > done

7. Kill (or shutdown) the RW server
    # mysql -uroot -h127.0.0.1 -P<port> -e "shutdown;"      Note : root can only access locally
    Check the routing access from Step 6; the hostname/port should have switched/failed over.

Enjoy and check the Video [ https://youtu.be/_jR_bJGTf-o ] :  






Categories: Web Technologies

Where you can find MySQL in October - December 2018 - part 1.

Planet MySQL - Tue, 10/16/2018 - 06:33

We would like to announce the shows & conferences where you can find MySQL Community Team or MySQL representatives at. Please be aware that the list below could be subject of change.

October 2018

  • GITEX, Dubai, United Arab Emirates, October 14-18, 2018
    • Same as last year, Oracle & MySQL are part of the GITEX conference. There is an Oracle booth (Stand A5-01, Hall 5) as well as a MySQL one (POD2, Hall 5) where you can find us. 
  • ZendCon, Las Vegas, US, October 15-17, 2018
    • We are an Exhibitor sponsor, same as last year; however, this year we are going to have a very cool new booth design! Come check it out and talk to us there!
    • Also if you are around, do not miss the sessions by David Stokes, The MySQL Community Manager as follows: 
      • "MySQL without the SQL -- Oh my!" scheduled for today @11:30-12:30pm
      • "MySQL 8 performance tuning" scheduled for tomorrow, Oct 17 @4:00-5:00pm
    • We are looking forward to seeing and talking to you at ZendCon!
  • JCConf, Taipei, Taiwan, October 19, 2018
    • Same as last year, we are again supporting the Java Community Conference (JCConf) in Taiwan. This year you can find our staff at the Oracle/MySQL booth in the expo area and listen to the following MySQL & Oracle talks:
      • "How MySQL support NoSQL with Java applications" by Ivan Tu, the Principal Sales consultant, MySQL APAC.
      • "GraalVM (polyglot environment supports Oracle, MySQL and Java)" given by Yudi Zheng, Senior researcher at Oracle Lab organization.
    • We are looking forward to talking to you there!
  • Oracle Open World & MySQL Community Reception, San Francisco, October 22-25, 2018
    • MySQL is again part of the OOW show; please check the website for the MySQL talks in the agenda. And same as last year, the MySQL Community is organizing a MySQL Reception: an evening event on October 23rd at the Samovar Tea Lounge, 730 Howard St., San Francisco, CA, from 7-9 pm.
    • Join the MySQL team at the MySQL Reception to celebrate the dynamics and growth of the MySQL community!
  • Forum PHP, Paris, France, October 25-26, 2018
    • We are a Bronze sponsor of Forum PHP this year, with an approved talk on "MySQL 8.0: What's new". Please check the organizers' website for further details.
  • JSFoo, Bangalore, India October 26-27, 2018
    • This is the first time we are attending this JavaScript conference, and this year we are giving a talk there. The talk is scheduled for Sat, Oct 27, 2018 @13:40-14:10, on the topic of "MySQL 8 loves JavaScript". The speaker is Sanjay Manwani, Developer Evangelism at Oracle.

November 2018

  • Madison PHP, Madison, US, November 2-3, 2018
    • MySQL Community Team is a Community Sponsor of Madison PHP conference this year again.
  • MOPCON 2018, Taipei, Taiwan, November 3-4, 2018
    • This year, for the first time, we are going to have a talk at MOPCON 2018. Please do not miss the opportunity to listen to the talk given by Ivan Tu, the MySQL Principal Consultant Manager, as follows:
      • "The mobile application supported by new generation MySQL 8.0" scheduled for Nov 4, @11:05-11:45am in BigData track.
    • We are looking forward to meeting & talking to you there!

More shows to be announced soon...

Categories: Web Technologies

Cluster Performance Validation via Load Testing

Planet MySQL - Tue, 10/16/2018 - 06:20

Your database cluster contains your most business-critical data and therefore proper performance under load is critical to business health. If response time is slow, customers (and staff) get frustrated and the business suffers a slow-down.

If the database layer is unable to keep up with demand, all applications can and will suffer slow performance as a result.

To prevent this situation, use load tests to determine the throughput as objectively as possible.

In the sample load.pl script below, increase load by increasing the thread quantity.

You could also run this on a database with data in it without polluting the existing data since new test databases are created to match each node’s hostname for uniqueness.

Note: The examples in this blog post assume that a Connector is running on each database node and listening on port 3306, and that the database itself is listening on non-standard port 13306.

Install Sysbench

As the root user:

wget https://www.percona.com/redir/downloads/percona-release/redhat/latest/percona-release-0.1-4.noarch.rpm
rpm -Uvh percona-release-0.1-4.noarch.rpm
yum install sysbench

https://github.com/akopytov/sysbench#usage

Create and Prepare the Test Databases

First, prepare the per-host test databases by writing via the Connector so that the create database commands are replicated from the master to all nodes.

Repeat on all nodes as OS user tungsten:

export HOST=`/bin/hostname -s`; mysql -u app_user -P3306 -psecret -h127.0.0.1 -e "create database test_$HOST;"

sysbench --db-driver=mysql --mysql-user=app_user --mysql-password=secret --mysql-host=127.0.0.1 --mysql-port=3306 --mysql-db=test_`/bin/hostname -s` --db-ps-mode=disable --range_size=100 --table_size=10000 --tables=2 --threads=1 --events=0 --time=60 --rand-type=uniform /usr/share/sysbench/oltp_read_write.lua prepare

Create and run the load test script

touch load.pl
chmod 755 load.pl
vim load.pl
./load.pl

Below is the load.pl script:

#!/usr/bin/perl
# load.pl
use strict;

our $time = 58;
our $threads = 2;
our $name = `/bin/hostname -s`;
chomp($name);

our $cmd = <<EOT;
sysbench --db-driver=mysql --mysql-user=app_user --mysql-password=secret --mysql-host=127.0.0.1 --mysql-port=3306 --mysql-db=test_$name --db-ps-mode=disable --range_size=100 --table_size=10000 --tables=2 --threads=$threads --events=0 --time=$time --rand-type=uniform /usr/share/sysbench/oltp_read_write.lua run
EOT

print "Executing: $cmd\n";
system($cmd);
exit 0;

Here is an example run with threads set to 1 for a small test:

tungsten@db5:/home/tungsten # ./load.pl
Executing: sysbench --db-driver=mysql --mysql-user=app_user --mysql-password=secret --mysql-host=127.0.0.1 --mysql-port=3306 --mysql-db=test_db5 --db-ps-mode=disable --range_size=100 --table_size=10000 --tables=2 --threads=1 --events=0 --time=58 --rand-type=uniform /usr/share/sysbench/oltp_read_write.lua run

sysbench 1.0.15 (using bundled LuaJIT 2.1.0-beta2)

Running the test with following options:
Number of threads: 1
Initializing random number generator from current time

Initializing worker threads...

Threads started!

SQL statistics:
    queries performed:
        read:                            71204
        write:                           8220
        other:                           22296
        total:                           101720
    transactions:                        5086   (87.68 per sec.)
    queries:                             101720 (1753.69 per sec.)
    ignored errors:                      0      (0.00 per sec.)
    reconnects:                          0      (0.00 per sec.)

General statistics:
    total time:                          58.0014s
    total number of events:              5086

Latency (ms):
         min:                            9.09
         avg:                            11.40
         max:                            218.45
         95th percentile:                13.70
         sum:                            57978.28

Threads fairness:
    events (avg/stddev):           5086.0000/0.00
    execution time (avg/stddev):   57.9783/0.00
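When comparing runs, it helps to extract just the throughput figure from output like the above. A small sketch, assuming sysbench 1.0.x's `transactions: N (X per sec.)` line format; the helper name is illustrative, not part of the original post.

```shell
# Print the transactions-per-second value from sysbench output on stdin.
# sysbench 1.0.x prints a line like:
#   transactions:    5086   (87.68 per sec.)
# Field 3 is "(87.68"; strip the parenthesis and stop at the first match
# so the similar "queries:" line is not picked up.
tps_of() {
    awk '/transactions:/ { gsub(/\(/, "", $3); print $3; exit }'
}
```

Usage: `./load.pl | tps_of` prints a single number suitable for logging or graphing.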

Install into cron if desired

We use this script on our lab clusters to generate ongoing load. To do so, we simply add the 58-second sysbench load job (--time=58) to cron, to be executed once per minute:

tungsten@db3:/home/tungsten # crontab -e

* * * * * /home/tungsten/load.pl > /dev/null 2>&1

The Wrap-Up

Increase load gradually by increasing the number of threads per host. Use the script on more than one host to increase load further. Use commands like top and iostat to monitor system resource utilization.
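The gradual ramp-up can itself be scripted. A sketch, not from the original post: it reuses the same sysbench invocation as load.pl, with the command kept in an overridable variable so the ramp logic can be checked without a live cluster; the thread counts and function name are illustrative.

```shell
#!/bin/sh
# Run the sysbench OLTP job with a doubling thread count, printing a
# header before each step. RUN_CMD is overridable for dry runs.
RUN_CMD=${RUN_CMD:-"sysbench --db-driver=mysql --mysql-user=app_user --mysql-password=secret --mysql-host=127.0.0.1 --mysql-port=3306 --mysql-db=test_$(hostname -s) --db-ps-mode=disable --range_size=100 --table_size=10000 --tables=2 --events=0 --time=58 --rand-type=uniform /usr/share/sysbench/oltp_read_write.lua run"}

ramp() {
    for t in 1 2 4 8 16; do
        echo "=== $t threads ==="
        eval "$RUN_CMD --threads=$t"   # append the per-step thread count
    done
}
```

Run `ramp` on one host, then on additional hosts, watching throughput and latency at each step to find the knee of the curve.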

By testing through the Connector, you are testing multiple things at once:

  • Overall throughput via the Connector (also helps prove out the overall network bandwidth)
  • MySQL read and write capabilities on the Master in terms of throughput and system utilization
  • Replicator speed to the slaves: check the slave latency. Are they up to date? This tests the replicator, the network, and the write speed of the slave databases.
  • Behavior and performance of the application (do a switch under load: does it reconnect properly? Are there any issues or errors?)

Perform as much testing as possible BEFORE you go live and save yourself tons of headaches!

To learn about Continuent solutions in general, check out https://www.continuent.com/solutions

For more information about monitoring Continuent clusters, please visit https://docs.continuent.com/tungsten-clustering-6.0/ecosystem-nagios.html.

Continuent Clustering is the most flexible, performant global database layer available today – use it underlying your SaaS offering as a strong base upon which to grow your worldwide business!

For more information, please visit https://www.continuent.com/solutions

Want to learn more or run a POC? Contact us.

Categories: Web Technologies

[Solved] MySQL User Operation - ERROR 1396 (HY000): Operation CREATE / DROP USER failed for 'user'@'host'

Planet MySQL - Mon, 10/15/2018 - 09:38
Error Message: ERROR 1396 (HY000): Operation CREATE USER failed for 'admin'@'%'
Generic Error Message: Operation %s failed for %s
Error Scenario: Operation CREATE USER failed
Operation DROP USER failed

Reason: This error occurs when you try to perform a user operation but the user does not exist in the MySQL system. Also, in the case of DROP USER, the user's details may still be stored somewhere in the system even though you have already dropped the user from the MySQL server.
Resolution: Revoke all access granted to the user, drop the user, and run the FLUSH PRIVILEGES command to clear the caches. Then create/drop/alter the user again; it will work.
REVOKE ALL ON *.* FROM 'user'@'host';
DROP USER 'user'@'host';
FLUSH PRIVILEGES;
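The three statements can be templated for any account and piped into mysql. A sketch; the function name is illustrative and not from the original post.

```shell
#!/bin/sh
# Print the revoke/drop/flush sequence for a given user and host.
# Usage: drop_user_sql <user> [host]   (host defaults to %)
drop_user_sql() {
    user=$1
    host=${2:-%}
    printf "REVOKE ALL ON *.* FROM '%s'@'%s';\n" "$user" "$host"
    printf "DROP USER '%s'@'%s';\n" "$user" "$host"
    printf "FLUSH PRIVILEGES;\n"
}
```

For example, `drop_user_sql admin | mysql -uroot -p` would clean up the 'admin'@'%' account from the error message above.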
Grant Tables: The following tables will help you identify user-related information (as of MySQL 5.7):

mysql.user: User accounts, global privileges, and other non-privilege columns
mysql.db: Database-level privileges
mysql.tables_priv: Table-level privileges
mysql.columns_priv: Column-level privileges
mysql.procs_priv: Stored procedure and function privileges
mysql.proxies_priv: Proxy-user privileges
Related MySQL Bug Reports:
https://bugs.mysql.com/bug.php?id=28331
https://bugs.mysql.com/bug.php?id=86523
I hope this post will help you. If you have faced this error in other scenarios, or if you know another workaround or solution for it, please add it in the comments section; it will be helpful for other readers.



Categories: Web Technologies

Percona Live Europe Tutorial: Query Optimization and TLS at Large Scale

Planet MySQL - Mon, 10/15/2018 - 07:05

For Percona Live Europe this year, I had a workshop accepted on query optimization, along with a 50-minute talk covering TLS for MySQL at large scale, drawing on our experiences at the Wikimedia Foundation.

Workshop

The 3-hour workshop on Monday, titled Query Optimization with MySQL 8.0 and MariaDB 10.3: The Basics, is a beginners' tutorial, though dense in content. It's for people who are more familiar with database storage systems other than InnoDB for MySQL, MariaDB or Percona Server, or who, though already familiar with them, are suffering performance and scaling issues with their SQL queries. If you get confused by the output of basic commands like EXPLAIN and SHOW STATUS, and want to learn some SQL-level optimizations, such as creating the right indexes or altering the schema to get the most out of your database server's performance, then you will want to attend this tutorial before moving on to more advanced topics. Even veteran DBAs and developers may learn one or two new tricks, only available in the latest server versions!

Something that people may enjoy: during the tutorial, every attendee will be able to throw queries at a real-time copy of the Wikipedia database servers, or set up their own offline Wikipedia copy on their laptop. They'll get to practice by themselves what is being explained, so it will be fully hands-on. I like my sessions to be interactive, so all attendees should get ready to answer questions and think through the proposed problems by themselves!

Fifty minutes talk

My 50-minute talk TLS for MySQL at Large Scale will be a bit more advanced, although maybe more attractive to users of other database technologies. On Tuesday, I will tell the tale of the mistakes made and lessons learned while deploying encryption (TLS/SSL) for the replication, administration, and client connections of our databases. At the Wikimedia Foundation we take very seriously the privacy of our users: Wikipedia readers, project contributors, data reusers, and every member of our community. While none of our databases are publicly reachable, our aim is to encrypt every single connection between servers, even within our datacenters.

However, when people talk about security topics, most of the time they try to show off the good parts of their setup while hiding the ugly parts, or they are too theoretical to actually learn from. My focus will not be on the security principles everybody should follow, but on the purely operational problems, and the solutions we needed to deploy, as well as what we would have done differently had we known better, while deploying TLS on our 200+ MariaDB server pool.

Looking forward…

For me, as an attendee, I always look forward to the ProxySQL sessions, as it is something we are currently deploying into production. I also want to know more about the maturity and roadmap of the newest MySQL and MariaDB releases, as they keep adding interesting new features we need, as well as cluster technologies such as Galera and InnoDB Cluster. I like, too, to talk with people developing and using technologies outside of my stack; you never know when they will fill a need we have (analytics, compression, NoSQL, etc.).

But above all, the thing I enjoy the most is the networking—being able to talk with professionals that suffer the same problems that I do is something I normally cannot do, and that I enjoy doing a lot during Percona Live.

Jaime Crespo in a Percona Live T-Shirt – why not come to this year’s event and start YOUR collection.

The post Percona Live Europe Tutorial: Query Optimization and TLS at Large Scale appeared first on Percona Community Blog.

Categories: Web Technologies
