
Micro frontends in practice

Echo JS - Wed, 08/01/2018 - 16:25
Categories: Web Technologies

A Nice Introduction to MySQL Window Functions III

Planet MySQL - Wed, 08/01/2018 - 15:48
Windowing functions can get quite complex very quickly when you start taking advantage of the frame clause, and ranges and rows can get confusing. So, for review, let's look at the specification:

Window_spec:
   [window name] [partition clause] [order clause] [frame clause]

That looks simple. And then come terms like UNBOUNDED PRECEDING that could put a knot in your gut. The manual is not exactly written to help novices in this area get up to speed. But don't panic. If you work through the examples that follow (and please work through the preceding parts of this series before trying these examples), you will have a better appreciation of what is going on with window functions.

The Frame Clause
The frame clause is optional in a window function. A frame is a subset of the current partition, and the frame clause defines that subset. Frames are determined with respect to the current row, which allows grouping of data within a partition depending on the current row's position. If the frame is defined as all rows from the start of the partition to the current row, you can compute running totals for each row. Or the frame can be defined as extending a desired number of rows on either side of the current row, which lets you compute rolling averages.
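To make the frame idea concrete, here is a small illustrative sketch in Python (not MySQL) of the two frames just described, using the same ten values as the examples in this series:

```python
# Emulating window frames in plain Python (illustration only, not MySQL).
# A frame is a slice of the partition relative to the current row.
values = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

# Running total: frame is "from the start of the partition to the current row".
running_totals = [sum(values[:i + 1]) for i in range(len(values))]

# Rolling average: frame is "1 preceding to 1 following" around the current row,
# clipped at the partition edges.
rolling_avgs = [
    sum(values[max(0, i - 1):i + 2]) / len(values[max(0, i - 1):i + 2])
    for i in range(len(values))
]

print(running_totals)  # [1, 3, 6, 10, 15, 21, 28, 36, 45, 55]
print(rolling_avgs[0], rolling_avgs[5])  # 1.5 6.0
```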

The first example in this series had the window defined as W OVER() which does not have a frame clause and computes over the entire column.

mysql> select x, 
              sum(x) over() from x;
+------+---------------+
| x    | sum(x) over() |
+------+---------------+
|    1 |            55 |
|    2 |            55 |
|    3 |            55 |
|    4 |            55 |
|    5 |            55 |
|    6 |            55 |
|    7 |            55 |
|    8 |            55 |
|    9 |            55 |
|   10 |            55 |
+------+---------------+
10 rows in set (0.00 sec)

Unbounded

There is usually more than one way to do things. OVER() can also be written as WINDOW w AS (ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) or WINDOW w AS (RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING).

Uh, wha, huh?

Besides the confusion of windows/frames and range/rows, the wording gets tricky. UNBOUNDED should be taken to mean 'everything', so UNBOUNDED PRECEDING means everything before the current row and UNBOUNDED FOLLOWING means everything after it.



mysql> SELECT x, 
             sum(x) over() as 'over', 
             sum(x) OVER a as 'row',
             sum(x) OVER b AS 'range' 
             FROM x 
             window 
               a AS (rows between unbounded preceding and unbounded following),
               b AS (range between unbounded preceding and unbounded following);
+------+------+------+-------+
| x    | over | row  | range |
+------+------+------+-------+
|    1 |   55 |   55 |    55 |
|    2 |   55 |   55 |    55 |
|    3 |   55 |   55 |    55 |
|    4 |   55 |   55 |    55 |
|    5 |   55 |   55 |    55 |
|    6 |   55 |   55 |    55 |
|    7 |   55 |   55 |    55 |
|    8 |   55 |   55 |    55 |
|    9 |   55 |   55 |    55 |
|   10 |   55 |   55 |    55 |
+------+------+------+-------+
10 rows in set (0.01 sec)

So in the above example the windows are framed so that all the rows are used for the summation of the column, whether ROWS or RANGE is specified.

ROW vs RANGE

Right now you are probably wondering what the difference is between RANGE and ROWS. Let's modify the last example slightly, using BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING as the core of the frame specification.

mysql> SELECT x, 
         sum(x) over() as 'over', 
         sum(x) OVER a as 'row', 
         sum(x) OVER b AS 'range' 
         FROM x 
  window a AS (rows between current row and unbounded following), 
         b AS (range between current row and unbounded following);
+------+------+------+-------+
| x    | over | row  | range |
+------+------+------+-------+
|    1 |   55 |   55 |    55 |
|    2 |   55 |   54 |    55 |
|    3 |   55 |   52 |    55 |
|    4 |   55 |   49 |    55 |
|    5 |   55 |   45 |    55 |
|    6 |   55 |   40 |    55 |
|    7 |   55 |   34 |    55 |
|    8 |   55 |   27 |    55 |
|    9 |   55 |   19 |    55 |
|   10 |   55 |   10 |    55 |
+------+------+------+-------+
10 rows in set (0.00 sec)

The row column now counts down while the range column is unchanged. What happened? With ROWS, the frame is defined by beginning and ending row positions. The first row sums all ten values in the column, but the second row sums the values from 2 through 10, the third row sums 3 through 10, and so on.

For RANGE, the frame is defined by rows falling within a value range of an ORDER BY column. But where is the ORDER BY?? Well, in this case it is implied to be the entire partition, so every row's frame takes all the column values.
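The same ROWS vs RANGE behavior can be modeled outside the database. This is an illustrative Python sketch (not MySQL); the values list mirrors the ten-row table used above:

```python
# Sketch of the ROWS vs RANGE distinction, for illustration only.
values = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

# ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING:
# the frame is defined by row *positions*, so it shrinks as we move down.
rows_sums = [sum(values[i:]) for i in range(len(values))]

# RANGE BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING with no explicit ORDER BY:
# every row is a peer of every other, so the frame is the whole partition.
range_sums = [sum(values) for _ in values]

print(rows_sums)   # [55, 54, 52, 49, 45, 40, 34, 27, 19, 10]
print(range_sums)  # [55, 55, 55, 55, 55, 55, 55, 55, 55, 55]
```

The rows_sums output matches the 'row' column of the query result above, and range_sums matches the 'range' column.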

Range gets a little more differentiated when you use it a little more creatively.

mysql> SELECT x,
             sum(x) over() as 'over', 
             sum(x) OVER a as 'row', 
             sum(x) OVER b AS 'range' 
             FROM x 
             window 
               a AS (rows between current row and 1 following), 
               b AS (order by x 
                     range between current row and 1 following);
+------+------+------+-------+
| x    | over | row  | range |
+------+------+------+-------+
|    1 |   55 |    3 |     3 |
|    2 |   55 |    5 |     5 |
|    3 |   55 |    7 |     7 |
|    4 |   55 |    9 |     9 |
|    5 |   55 |   11 |    11 |
|    6 |   55 |   13 |    13 |
|    7 |   55 |   15 |    15 |
|    8 |   55 |   17 |    17 |
|    9 |   55 |   19 |    19 |
|   10 |   55 |   10 |    10 |
+------+------+------+-------+
10 rows in set (0.06 sec)

The above will sum the current row and the next row. With ROWS, the frame becomes rows between current row and 1 following. For RANGE, this time the order has to be specified, and the frame becomes order by x range between current row and 1 following.
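Both frames can be sketched in Python for illustration. Note that the two columns agree in this example only because the x values are consecutive integers, so "the next row" and "values within x + 1" happen to pick the same rows:

```python
values = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

# ROWS BETWEEN CURRENT ROW AND 1 FOLLOWING: current row plus the next row,
# clipped at the end of the partition.
row_sums = [sum(values[i:i + 2]) for i in range(len(values))]
print(row_sums)  # [3, 5, 7, 9, 11, 13, 15, 17, 19, 10]

# ORDER BY x RANGE BETWEEN CURRENT ROW AND 1 FOLLOWING: rows whose x value
# lies in [current_x, current_x + 1].
range_sums = [sum(v for v in values if x <= v <= x + 1) for x in values]
print(range_sums)  # identical here, because x increases by exactly 1 per row
```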

It is possible to specify bigger ranges.  

mysql> SELECT x, 
          sum(x) over() as 'over', 
           sum(x) OVER a as 'row', 
           sum(x) OVER b AS 'range' 
          FROM x 
      window a AS (rows between 1 preceding and 3 following), 
        b AS (order by x range between 1 preceding and 3 following);
+------+------+------+-------+
| x    | over | row  | range |
+------+------+------+-------+
|    1 |   55 |   10 |    10 |
|    2 |   55 |   15 |    15 |
|    3 |   55 |   20 |    20 |
|    4 |   55 |   25 |    25 |
|    5 |   55 |   30 |    30 |
|    6 |   55 |   35 |    35 |
|    7 |   55 |   40 |    40 |
|    8 |   55 |   34 |    34 |
|    9 |   55 |   27 |    27 |
|   10 |   55 |   19 |    19 |
+------+------+------+-------+
10 rows in set (0.00 sec)


Hopefully this takes a little bit of the mystery out of windowing functions for novices.

Next time I hope to look into some time series data and windowing functions.

Percona Monitoring and Management 1.13.0 Is Now Available

Planet MySQL - Wed, 08/01/2018 - 11:20

PMM (Percona Monitoring and Management) is a free and open-source platform for managing and monitoring MySQL and MongoDB performance. You can run PMM in your own environment for maximum security and reliability. It provides thorough time-based analysis for MySQL and MongoDB servers to ensure that your data works as efficiently as possible.

The most significant feature in this release is Prometheus 2, however we also packed a lot of visual changes into release 1.13:

  • Prometheus 2 – Consumes less resources, and Dashboards load faster!
  • New Dashboard: Network Overview – New dashboard for all things IPv4!
  • New Dashboard: NUMA Overview – New Dashboard! Understand memory allocation across DIMMs
  • Snapshots and Updates Improvements – Clearer instructions for snapshot sharing, add ability to disable update reporting
  • System Overview Dashboard improvements – See high level summary, plus drill in on CPU, Memory, Disk, and Network
  • Improved SingleStat for percentages – Trend line now reflects percentage value

We addressed 13 new features and improvements, and fixed 13 bugs.

Prometheus 2

The long awaited Prometheus 2 release is here!  By upgrading to PMM release 1.13, Percona’s internal testing has shown you will achieve a 3x-10x reduction in CPU usage, which translates into PMM Server being able to handle more instances than you could in 1.12.  You won’t see any gaps in graphs since internally PMM Server will run two instances of Prometheus and leverage remote_read in order to provide consistent graphs!

Our Engineering teams have worked very hard to make this upgrade as transparent as possible – hats off to them for their efforts!!

Lastly on Prometheus 2, we also included a new set of graphs to the Prometheus Dashboard to help you better understand when your PMM Server may run out of space. We hope you find this useful!

Network Overview Dashboard

We’re introducing a new dashboard that focuses on all things Networking – we placed a Last Hour panel highlighting high-level network metrics, and then drill into Network Traffic + Details, then focus on TCP, UDP, and ICMP behavior.

Snapshots and Updates Improvements

Of most interest to current Percona Customers, we’ve clarified the instructions on how to take a snapshot of a Dashboard in order to highlight that you are securely sharing with Percona. We’ve also configured the sharing timeout to 30 seconds (up from 4 seconds) so that we more reliably share useful data to Percona Support Engineers, as shorter timeout led to incomplete graphs being shared.

Packed into this feature is also a change to how we report installed version, latest version, and what’s new information:

Lastly, we modified the behavior of the docker environment option DISABLE_UPDATES to remove the Update button.  As a reminder, you can choose to disable update reporting for environments where you want tighter control over (i.e. lock down) who can initiate an update by launching the PMM docker container along with the environment variable as follows:

docker run ... -e DISABLE_UPDATES=TRUE

System Overview Dashboard Improvements

We’ve updated our System Overview Dashboard to focus on the four criteria of CPU, Memory, Disk, and Network, while also presenting a single panel row of high level information (uptime, count of CPUs, load average, etc)

The last feature we’re introducing in 1.13 is a fix to SingleStat panels, where the percentage value is reflected in the level of the trend line in the background. For example, if you have stat panels at 20% and 86%, the line in the background should fill the respective amount of each box.

New Features & Improvements
  • PMM-2225 – Add new Dashboard: Network Overview
  • PMM-2485 – Improve Singlestat for percentage values to accurately display trend line
  • PMM-2550 – Update to Prometheus 2
  • PMM-1667 – New Dashboard: NUMA Overview
  • PMM-1930 – Reduce Durability for MySQL
  • PMM-2291 – Add Prometheus Disk Space Utilization Information
  • PMM-2444 – Increase space for legends
  • PMM-2594 – Upgrade to Percona Toolkit 3.0.10
  • PMM-2610 – Configure Snapshot Timeout Default Higher and Update Instructions
  • PMM-2637 – Check for Updates and Disable Updates Improvements
  • PMM-2652 – Fix “Unexpected error” on Home dashboard after upgrade
  • PMM-2661 – Data resolution on Dashboards became 15sec min instead of 1sec
  • PMM-2663 – System Overview Dashboard Improvements
Bug Fixes
  • PMM-1977 – after upgrade pmm-client (1.6.1-1) can’t start mysql:metrics – can’t find .my.cnf
  • PMM-2379 – Invert colours for Memory Available graph
  • PMM-2413 – Charts on MySQL InnoDB metrics are not fully displayed
  • PMM-2427 – Information loss in CPU Graph with Grafana 5 upgrade
  • PMM-2476 – AWS PMM is broken on C5/M5 instances
  • PMM-2576 – Error in logs for MySQL 8 instance on CentOS
  • PMM-2612 – Wrong information in PMM Scrapes Task
  • PMM-2639 – mysql:metrics does not work on Ubuntu 18.04
  • PMM-2643 – Socket detection and MySQL 8
  • PMM-2698 – Misleading Graphs for Rare Events
  • PMM-2701 – MySQL 8 – Innodb Checkpoint Age
  • PMM-2722 – Memory auto-configuration for Prometheus evaluates to minimum of 128MB in entrypoint.sh
How to get PMM Server

PMM is available for installation using three methods:

The post Percona Monitoring and Management 1.13.0 Is Now Available appeared first on Percona Database Performance Blog.


What’s new in Angular: Version 6.1 ships

InfoWorld JavaScript - Wed, 08/01/2018 - 11:10

Version 6.1 of Angular, Google’s popular JavaScript framework for building mobile and desktop applications, is here, with the first production release available.


Lock elision, pthreads and MySQL

Planet MySQL - Wed, 08/01/2018 - 10:26
Yesterday I learned that lock elision is supported in recent versions of glibc for pthread mutex and rw-lock. I am curious if anyone has results for MySQL with it. My memory is that InnoDB can suffer from contention on a rw-lock, but that is a custom rw-lock not the one included with glibc. But code above the storage engine uses mutex and maybe rw-lock from glibc.

A rw-lock where reads dominate can suffer from contention because it has at least twice the memory writes per lock/unlock pair compared to a mutex. So when the lock hold time is short a mutex wins even when exclusive access isn't required. This can often be seen in PMP output where there are convoys and the worst-case is when a thread gets stuck trying to get the internal latch during unlock, but the InnoDB custom rw-lock might not have that problem. Lock elision for the rw-lock might be a big deal in this case.

RocksDB might also benefit from this change.

One of the challenges with glibc pthreads is documentation. I previously wrote about the difficulty of finding documentation for PTHREAD_MUTEX_ADAPTIVE_NP. The problem continues. There isn't much about pthreads in a recent version of the glibc manual. From Google searches I wasn't able to find recent docs elsewhere, except for man pages. But man pages don't document PTHREAD_MUTEX_ADAPTIVE_NP.  With lock elision we get new options -- PTHREAD_MUTEX_ELISION_NP and PTHREAD_MUTEX_NO_ELISION_MP. Google searches will take you to bits of source code and email list discussions. I hope this can be improved. Given the lack of docs you might need to read the source. I hope that the community (web-scale companies) can sponsor a tech writer to provide the missing docs.

There has been drama because the introduction of this feature failed when it encountered buggy microcode on certain CPUs. Then there was more drama when it broke buggy software that worked despite the bugs, until lock elision made the bugs serious. Google searches find many of the stories.

One of my favorite perks at work is getting answers from experts. In this case the expert is Nathan Bronson (thank you). A summary of the glibc 2.23 implementation per the expert is:
  • NPTL lock elision is performed using TSX's RTM (Restricted Transactional Memory) instructions XBEGIN, XEND, and XABORT, rather than TSX's HLE (Hardware Lock Elision) instructions XACQUIRE and XRELEASE
  • On x86, elision support is always present when detected by HAS_CPU_FEATURE(RTM)
  • pthread_rwlock_t always attempts elision if the hardware has it (both for .._rdlock and .._wrlock)
  • pthread_rwlock_t uses an adaptive strategy for falling back to the non-TSX implementation. If the lock is held in non-TSX mode, there is a transaction conflict, or the transaction exceeds TSX's (undocumented) capacity, then the current lock acquisition and the 3 following use the non-TSX code path. This means that once a lock falls off the elision path it needs several uncontended acquisitions before a transaction will be attempted again. This seems quite conservative
  • pthread_rwlock_rdlock -> pthread_rwlock_unlock with a successful transaction is about twice as fast as the non-TSX implementation under no contention, and massively better under contention
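The adaptive fallback described above can be modeled with a simple per-lock counter. This is a hypothetical Python sketch of that behavior, not glibc code:

```python
class ElidedLock:
    """Hypothetical model of the adaptive elision fallback described above."""
    SKIP_AFTER_FAILURE = 3  # acquisitions to skip elision after an abort

    def __init__(self):
        self.skip_elision = 0  # non-TSX acquisitions left before retrying elision

    def acquire(self, txn_would_commit):
        """txn_would_commit models whether a hardware transaction would
        commit (no conflict, no capacity overflow). Returns the path taken."""
        if self.skip_elision > 0:
            # Still backing off: take the normal (non-TSX) path.
            self.skip_elision -= 1
            return "fallback"
        if txn_would_commit:
            return "elided"
        # Transaction aborted: this acquisition and the 3 following
        # use the non-transactional code path.
        self.skip_elision = self.SKIP_AFTER_FAILURE
        return "fallback"

lock = ElidedLock()
results = [lock.acquire(False)] + [lock.acquire(True) for _ in range(4)]
print(results)  # ['fallback', 'fallback', 'fallback', 'fallback', 'elided']
```

One abort forces the aborted acquisition plus the next three onto the fallback path; only then is a transaction attempted again, which is why the strategy looks conservative.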

Mastering Continuent Clustering Series: Converting a standalone cluster to a Composite Primary/DR topology using INI configuration

Planet MySQL - Wed, 08/01/2018 - 08:30

In this blog post, we demonstrate how to convert a single standalone cluster into a Composite Primary/DR topology running in two data centers.

Our example starting cluster has 5 nodes (1 master and 4 slaves) and uses service name alpha. Our target cluster will have 6 nodes (3 per cluster) in 2 member clusters alpha_east and alpha_west in composite service alpha.

This means that we will reuse the existing service name alpha as the name of the new composite service, and create two new service names, one for each cluster (alpha_east and alpha_west).

Below is an INI file extract example for our starting standalone cluster with 5 nodes:

[defaults]
...

[alpha]
connectors=db1,db2,db3,db4,db5
master=db1
members=db1,db2,db3,db4,db5
topology=clustered

To convert the above configuration to a Composite Primary/DR:

  1. First you must stop all services on all existing nodes: shell> stopall
  2. Update tungsten.ini on all nodes.
    Create the two new services and put the correct information into all three stanzas.
    For example, below is an INI file extract example for our target composite cluster with 6 nodes: [defaults] start-and-report=false start=false ... [alpha_east] connectors=db1,db2,db3 master=db1 members=db1,db2,db3 topology=clustered [alpha_west] connectors=db4,db6 master=db4 members=db4,db5,db6 topology=clustered relay-source=alpha_east [alpha] composite-datasources=alpha_east,alpha_west
  3. Invoke the conversion using the tpm command from the software extraction directory:

    shell> tpm query staging
    shell> cd {software_staging_dir_from_tpm_query}
    shell> ./tools/tpm update --replace-release
    shell> rm /opt/cont/tung/cluster-home/conf/cluster/*/datasource/*
  4. Finally, start all services on all existing nodes. shell> startall
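As an aside, the target stanzas from step 2 can be generated in memory for scripted rollouts. This is an illustrative Python sketch using the hostnames and service names from the example (not a Continuent tool):

```python
import configparser
import io

# Illustrative only: build the composite-cluster INI stanzas from step 2.
ini = configparser.ConfigParser()
ini["alpha_east"] = {
    "connectors": "db1,db2,db3",
    "master": "db1",
    "members": "db1,db2,db3",
    "topology": "clustered",
}
ini["alpha_west"] = {
    "connectors": "db4,db5,db6",
    "master": "db4",
    "members": "db4,db5,db6",
    "topology": "clustered",
    "relay-source": "alpha_east",  # west replicates from east
}
ini["alpha"] = {"composite-datasources": "alpha_east,alpha_west"}

buf = io.StringIO()
ini.write(buf)
print(buf.getvalue())
```

Note that configparser writes options as "key = value"; Tungsten accepts both that and the "key=value" form shown above.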

In future articles, we will continue to cover more advanced subjects of interest!

Questions? Contact Continuent


Saving With MyRocks in The Cloud

Planet MySQL - Wed, 08/01/2018 - 08:13

The main focus of a previous blog post was the performance of MyRocks when using fast SSD devices. However, I figured that MyRocks would be beneficial for use in cloud workloads, where storage is either slow or expensive.

In that earlier post, we demonstrated the benefits of MyRocks, especially for heavy IO workloads. Meanwhile, Mark wrote in his blog that the CPU overhead in MyRocks might be significant for CPU-bound workloads, but this should not be the issue for IO-bound workloads.

In the cloud the cost of resources is a major consideration. Let’s review the annual cost for the processing and storage resources.

Resource                  Cost/year ($)   IO cost ($/year)   Total ($/year)
c5.9xlarge                7881                               7881
1TB io1 5000 IOPS         1500            3900               5400
1TB io1 10000 IOPS        1500            7800               9300
1TB io1 15000 IOPS        1500            11700              13200
1TB io1 20000 IOPS        1500            15600              17100
1TB io1 30000 IOPS        1500            23400              24900
3.4TB GP2 (10000 IOPS)    4800                               4800
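For reference, the io1 line items above can be reproduced from per-unit prices. This sketch assumes approximate 2018 AWS io1 rates ($0.125 per GB-month for capacity, $0.065 per provisioned IOPS-month); treat those figures as assumptions:

```python
# Deriving the io1 rows of the cost table. Unit prices are assumptions
# (approximate 2018 AWS io1 rates).
GB_MONTH = 0.125     # $ per GB-month of provisioned capacity
IOPS_MONTH = 0.065   # $ per provisioned IOPS-month

def io1_annual_cost(size_gb, iops):
    storage = size_gb * GB_MONTH * 12   # capacity cost per year
    io = iops * IOPS_MONTH * 12         # provisioned-IOPS cost per year
    return storage, io, storage + io

for iops in (5000, 10000, 15000, 20000, 30000):
    storage, io, total = io1_annual_cost(1000, iops)
    print(f"1TB io1 {iops} IOPS: ${storage:.0f} + ${io:.0f} = ${total:.0f}")
```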

 

The scenario

The server version is Percona Server 5.7.22

For instances, I used c5.9xlarge instances. The reason for c5 was that it provides high performance Nitro virtualization: Brendan Gregg describes this in his blog post. The rationale for 9xlarge instances was to be able to utilize io1 volumes with a 30000 IOPS throughput – smaller instances will cap io1 throughput at a lower level.

I also used huge gp2 volumes: 3400GB, as this volume provides guaranteed 10000 IOPS even if we do not use io1 volumes. This is a cheaper alternative to io1 volumes to achieve 10000 IOPS.

For the workload I used sysbench-tpcc 5000W (50 tables * 100W), which for InnoDB gave about 471GB in storage used space.

For the cache I used 27GB and 54GB buffer sizes, so the workload is IO-heavy.

I wanted to compare how InnoDB and RocksDB performed under this scenario.

If you are curious I prepared my terraform+ansible deployment files here: https://github.com/vadimtk/terraform-ansible-percona

Before jumping to the results, I should note that for MyRocks I used LZ4 compression for all levels, which gives a final data size of 91GB. That is about five times smaller than InnoDB. This alone provides operational benefits: for example, copying the InnoDB files (471GB) from a backup volume takes longer than an hour, while for MyRocks it is about five times faster.

The benchmark results

So let’s review the results.

Or presenting average throughput in a tabular form:

cachesize   IOPS       engine    avg TPS
27          5000       innodb     132.66
27          5000       rocksdb    481.03
27          10000      innodb     285.93
27          10000      rocksdb   1224.14
27          10000gp2   innodb     227.19
27          10000gp2   rocksdb   1268.89
27          15000      innodb     436.04
27          15000      rocksdb   1839.66
27          20000      innodb     584.69
27          20000      rocksdb   2336.94
27          30000      innodb     753.86
27          30000      rocksdb   2508.97
54          5000       innodb     197.51
54          5000       rocksdb    667.63
54          10000      innodb     433.99
54          10000      rocksdb   1600.01
54          10000gp2   innodb     326.12
54          10000gp2   rocksdb   1559.98
54          15000      innodb     661.34
54          15000      rocksdb   2176.83
54          20000      innodb     888.74
54          20000      rocksdb   2506.22
54          30000      innodb    1097.31
54          30000      rocksdb   2690.91

 

We can see that MyRocks outperformed InnoDB in every single combination, but it is also important to note the following:

MyRocks on io1 5000 IOPS showed the performance that InnoDB showed in io1 15000 IOPS.

That means that InnoDB requires three times the storage throughput. If we look at the storage cost, that corresponds to storage that is three times more expensive. Given that MyRocks also requires less storage capacity, it is possible to save even more.

On the most economical storage (3400GB gp2, which will provide 10000 IOPS) MyRocks showed 4.7 times better throughput.

For the 30000 IOPS storage, MyRocks was still better by 2.45 times.
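These speedup figures can be cross-checked against the averages in the table. A quick sketch; the quoted gp2 and 30000 IOPS ratios line up with the 54GB-cache rows (the 27GB rows come out even higher for gp2):

```python
# Cross-checking the quoted MyRocks/InnoDB speedups against the table.
avg_tps = {  # (cache GB, storage): (InnoDB avg TPS, MyRocks avg TPS)
    (54, "10000gp2"): (326.12, 1559.98),
    (27, "10000gp2"): (227.19, 1268.89),
    (54, "30000"): (1097.31, 2690.91),
    (27, "30000"): (753.86, 2508.97),
}
for (cache, storage), (innodb, rocksdb) in avg_tps.items():
    print(f"{cache}GB cache, {storage}: MyRocks/InnoDB = {rocksdb / innodb:.2f}x")
```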

However it is worth noting that MyRocks showed a greater variance in throughput during the runs. Let’s review the charts with 1 sec resolution for GP2 and io1 30000 IOPS storage:

Such variance might be problematic for workloads that require stable throughput and where periodical slowdowns are unacceptable.

Conclusion

MyRocks is suitable and beneficial not only for fast SSD, but also for cloud deployments. By requiring less IOPS, MyRocks can provide better performance and save on the storage costs.

However, before evaluating MyRocks, make sure that your workload is IO-bound, i.e. the working set is much bigger than available memory. For CPU-intensive workloads (where the working set fits into memory), MyRocks will be less beneficial or even perform worse than InnoDB (as described in the blog post A Look at MyRocks Performance).

 

 

 

The post Saving With MyRocks in The Cloud appeared first on Percona Database Performance Blog.


On Switching Code Editors

CSS-Tricks - Wed, 08/01/2018 - 07:05

I'm sure a lot of you are like me and have switched code editors a number of times. I think my first major editor was Coda. Then I moved to TextMate when I started working primarily on local. Then Sublime Text. And, most recently, VS Code. I bet your journey was different. I know lots of folks that quite love Atom, Brackets, WebStorm, and even BBedit. You do you!

For me, that's four changes in a dozen years, or a change every three years. Moving isn't something I do quickly. Here's a collection of thoughts around the idea of changing editors.

When moving, I have to take time to make sure it works pretty much like the old one.

Otherwise, I'll just end up disliking it to the point that I switch back a day or two later. It's happened to me every time I switch. I have little false-starts after a switch where I go back to the old editor because something bugged me too much or it affected my productivity and I gave up. (Now that I know I do this, I don't let a single false-start make me feel like the editor I'm trying is never a possibility.)

My latest switch was from Sublime Text to VS Code. I'd become very used to the key bindings (e.g. CMD+Shift+D to duplicate a line) in Sublime Text, so thankfully VS Code has that covered.

I was amazed to find even my VIM friends happy and comfortable in VS Code. (Fun fact: we have key bindings choices in CodePen, too.)

Nothing can be too obnoxious.

In one of my first attempts at switching, I found the UI in VS Code to be too cluttered and the find-in-project feature to be a little slow and awkward. Those things bugged me to the point that they caused false-starts and I went back to Sublime Text.

On this last switch attempt (my 3rd or 4th maybe?) I finally have a theme I quite like (customized a smidge), found some settings to clean up the UI (I removed the whitespace indicators which were overwhelming to me, and overrode that intense blue footer with something more chill).

In working with find-in-project a bit more, I've grown to get used to it. I finally might even like it more than Sublime, as the sidebar approach is more consistent than opening a new tab of results. I find the jump-to-line feature works more consistently, and search feels more like the first-class citizen it should be.

Another factor would be Emmet. I'm quite sure that I'd be too annoyed writing HTML and CSS in an editor without Emmet, and I'd just give up and use something else that had it. Emmet isn't even an extension in VS Code, it's built in.

I'm cool with making small changes after a successful switch.

Once I've actually done it, and made the switch to full-time use, then I can make some changes. Maybe I'll learn some new key commands. Maybe I'll add an extension that adds functionality I've never had before. Maybe the editor affects some workflow thing in a way I'm now willing to try.

The new editor better have some killer feature that incentivizes me to switch.

If it's exactly the same, why bother?

The new editor needs to be faster (or feel just as fast). Or should look better. Or it should have some awesome package that is only available on it. Ideally all of that.

In this recent switch for me, it was GitLens.

How cool is that? It ought to have a plugin architecture.

Meaning that anyone can write code to extend the editor. I'm fairly certain that having a plugin architecture (plus a healthy amount of community momentum) is what is key to any editor's success. Sublime's package manager, and the subsequent built-in packages feature of VS Code, seem crucial. Not only for functionality but even just for the look of the editor. I wouldn't use an editor with a look I hate, but I'd be tempted to if the functionality was awesome. That's a non-issue with a plugin-based editor. Open source seems smart as well.

Careful for those GOTCHAs.

One of those was spell-checking for me. In Sublime Text, it was an option under the View menu. I had it checked all the time, and it spell-checked all the time.

This is not a thing in VS Code. That was dangerous after switching because, who knows, I may have been committing typos all over the place. Fortunately, this extension seems to be doing the trick. Thank jeepers for extensions!

Your thoughts!

I thought it might be interesting to ask what y'all think about when switching code editors. There were lots of responses. I picked out as many as I could here and focused on one thing that you mentioned.

The post On Switching Code Editors appeared first on CSS-Tricks.


MariaDB AX Distributed Tarball Installation

Planet MySQL - Wed, 08/01/2018 - 05:13
Faisal - Wed, 08/01/2018 - 08:13

This guide is meant to help set up a MariaDB AX cluster using the tarball (tar.gz) binary image instead of RPM files, with a non-root account, on CentOS 7 machines. We still need to install some dependencies using root and the yum repository manager.

By the end of this, we will have a cluster of 2 User Module (UM) and 3 Performance Module (PM) nodes running on local storage.

Summary

Download the latest tarball binaries for MariaDB AX.

The following is the summary of tasks to be performed:

  • VMs OS prerequisites setup
    • We will be using CentOS 7, it should be identical for Red Hat Enterprise Linux as well.
  • Create the MariaDB ColumnStore owner account and group mcsadm and set its password.
  • Set up the /etc/hosts file on all the nodes with IP - hostname mappings for easier access.
  • Download the ColumnStore tarball and extract it under /home/mcsadm.
  • Generate an ssh key on PM1 using the mcsadm user and copy the public key to all the nodes.
  • Generate an ssh key on UM1 and copy the public key to UM1 and UM2.
    • This is used for UM1 to UM2 data replication.
  • Generate an ssh key on UM2 and copy the public key to UM2 and UM1.
    • This is used for UM2 to UM1 data replication.
  • Server preparation
    • Using the root user:
      • Install the ColumnStore dependencies on all nodes using yum
      • Set up umask on all nodes
      • Set up sudo access on all nodes for the mcsadm user
      • Set up ulimit on all nodes for the mcsadm user
  • Test the setup with the ClusterTest tool using the mcsadm user
Preparing the VM OS

There are a few important things that are required before we start the installations.

Note: the following steps must be performed and validated on all the VMs

Disable SELinux

For this we will edit the SELinux configuration in the file /etc/selinux/config. Make sure to set SELINUX=disabled; the file should look like this:

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted

After saving and exiting, we will need to reboot the VM for the change to take permanent effect. To check that SELinux has actually been disabled, use either of the two commands (sestatus/getenforce):

[root@localhost ~] sestatus
SELinux status: disabled
[root@localhost ~] getenforce
Disabled

Disable firewalld

Firewalld is a standard service that is disabled using the systemctl command on RHEL 7 / CentOS 7. Disable it on all the nodes and check its status using systemctl status firewalld:

[root@localhost ~] systemctl stop firewalld
[root@localhost ~] systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@localhost ~] systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)

Change localedef

Execute the following on all the nodes to ensure a proper character set across the cluster.

[root@localhost ~] localedef -i en_US -f UTF-8 en_US.UTF-8

Performance optimization considerations

Following are some network-related optimizations that we can apply; please also consult your network/OS administrators for other optimizations that might be beneficial.

GbE NIC settings:

Modify /etc/rc.d/rc.local to include the following:

/sbin/ifconfig eth0 txqueuelen 10000

Modify /etc/sysctl.conf to include the following:

# increase TCP max buffer size
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
# increase Linux autotuning TCP buffer limits
# min, default, and max number of bytes to use
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
# don't cache ssthresh from previous connection
net.ipv4.tcp_no_metrics_save = 1
# recommended to increase this for 1000 BT or higher
net.core.netdev_max_backlog = 2500
# for 10 GigE, use this
net.core.netdev_max_backlog = 30000

NOTE: Make sure there is only 1 setting of net.core.netdev_max_backlog in the /etc/sysctl.conf file.

Cache memory settings

To optimize Linux caching of directories and inodes, vm.vfs_cache_pressure can be set to a value lower than 100 so the kernel attempts to retain inode and directory caches. This helps improve read performance; a value of 10 is suggested. The following commands must all be run as the root user or with sudo.

To check the current value:

cat /proc/sys/vm/vfs_cache_pressure

Add the following to /etc/sysctl.conf to make the cache changes permanent.

vm.vfs_cache_pressure = 10

Create mcsadm account

Now we are ready to create the MariaDB ColumnStore owner account mcsadm which will be used for the installation of the tarball.

The mcsadm user and group must be created on all the nodes. sudo privilege for the mcsadm user is also mandatory; this sudo requirement is being removed in future releases of ColumnStore.

From here on, all steps are performed as the mcsadm user (with the help of sudo) unless specified otherwise.

Remember to set the mcsadm user's password, as it will be required later for the key exchange.

[root@localhost ~] groupadd mcsadm
[root@localhost ~] useradd -g mcsadm mcsadm
[root@localhost ~] passwd mcsadm
Changing password for user mcsadm.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.

Setup /etc/hosts file

Add the following to the file /etc/hosts on all the nodes. This will ensure all nodes are accessible by their respective hostnames.
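Once the entries below are in place, each name should resolve on every node. A small sketch of such a check (the cluster host names are this guide's; localhost is used in the demonstration call because it resolves anywhere):

```shell
# Report whether each given host name resolves.
check_hosts() {
  for h in "$@"; do
    if getent hosts "$h" > /dev/null 2>&1; then
      echo "$h: ok"
    else
      echo "$h: NOT RESOLVABLE"
    fi
  done
}
# On the cluster you would run: check_hosts um1 um2 pm1 pm2 pm3
check_hosts localhost
```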

192.168.56.104 UM1
192.168.56.105 UM2
192.168.56.106 PM1
192.168.56.107 PM2
192.168.56.108 PM3

Download

In this case, we are going to download the tar file directly from the server, but feel free to download it externally and transfer to the server using your favorite secure file transfer tools.

[mcsadm@pm1 ~]$ wget https://downloads.mariadb.com/MariaDB/mariadb-columnstore/latest/centos/x86_64/7/mariadb-columnstore-1.1.5-1-centos7.x86_64.bin.tar.gz
--2018-06-30 14:02:50-- https://downloads.mariadb.com/MariaDB/mariadb-columnstore/latest/centos/x86_64/7/mariadb-columnstore-1.1.5-1-centos7.x86_64.bin.tar.gz
Resolving downloads.mariadb.com (downloads.mariadb.com)... 51.255.94.155, 2001:41d0:1004:249b::
Connecting to downloads.mariadb.com (downloads.mariadb.com)|51.255.94.155|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 489245964 (467M) [application/octet-stream]
Saving to: ‘mariadb-columnstore-1.1.5-1-centos7.x86_64.bin.tar.gz’
100%[================================================================>] 489,245,964 5.54MB/s in 2m 16s
2018-06-30 14:05:13 (3.44 MB/s) - ‘mariadb-columnstore-1.1.5-1-centos7.x86_64.bin.tar.gz’ saved [489245964/489245964]
[mcsadm@pm1 ~]$ ls -rlt
total 477780
-rw-rw-r-- 1 mcsadm mcsadm 489245964 Jun 15 16:45 mariadb-columnstore-1.1.5-1-centos7.x86_64.bin.tar.gz
[mcsadm@pm1 downloads]$ tar -zxf mariadb-columnstore-1.1.5-1-centos7.x86_64.bin.tar.gz
[mcsadm@pm1 ~]$ ls -rlt
total 0
drwxr-xr-x 3 mcsadm mcsadm        25 Jun 11 19:29 mariadb
-rw-rw-r-- 1 mcsadm mcsadm 489245964 Jun 15 16:45 mariadb-columnstore-1.1.5-1-centos7.x86_64.bin.tar.gz
[mcsadm@pm1 ~]$ pwd
/home/mcsadm

Generating SSH Keys

Once the tarball is downloaded and extracted under the /home/mcsadm folder, generate a key on the PM1 node using ssh-keygen and then copy it to all the nodes using ssh-copy-id -i:
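The transcript below runs ssh-copy-id once per node; the same thing can be scripted. A dry-run sketch (node names assumed from this setup; drop the leading echo to actually distribute the key, entering mcsadm's password for each node):

```shell
# Dry run: print the key-copy command for each node.
# Drop the leading echo to run the copies for real.
nodes="pm1 pm2 pm3 um1 um2"
for node in $nodes; do
  echo ssh-copy-id -i ~/.ssh/id_rsa.pub "$node"
done
```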

[mcsadm@pm1 ~]$ ssh-keygen Generating public/private rsa key pair. Enter file in which to save the key (/home/mcsadm/.ssh/id_rsa): Enter passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved in /home/mcsadm/.ssh/id_rsa. Your public key has been saved in /home/mcsadm/.ssh/id_rsa.pub. The key fingerprint is: SHA256:IcAWtetlQw8vHS3QwAgS4vphvY22HdCJaCHYqRHa1ab mcsadm@um1 The key's randomart image is: +---[RSA 2048]----+ |. . o..-.+ .=. | |o= + .B.+ +o *. | |= B .+.-. .+B.o | | * X o o E.+o- | | * + o S . | | -+ o == . E | | E o o--B | | - + . | | o . | +----[SHA256]-----+ [mcsadm@pm1 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub pm1 /bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/mcsadm/.ssh/id_rsa.pub" The authenticity of host 'pm1 (192.168.56.106)' can't be established. ECDSA key fingerprint is SHA256:Jle/edRpKz9ysV8xp1K9TlIGvbg8Sb1p+GbDob3Id0g. ECDSA key fingerprint is MD5:a1:ce:9d:58:80:c6:ed:5a:95:7b:33:82:68:cb:0f:40. Are you sure you want to continue connecting (yes/no)? yes /bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed /bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys mcsadm@pm1's password: ****** Number of key(s) added: 1 Now try logging into the machine, with: "ssh 'pm1'" and check to make sure that only the key(s) you wanted were added. [mcsadm@pm1 ~]$ ssh pm1 Last login: Sat Jun 30 13:37:50 2018 [mcsadm@pm1 ~]$ exit logout Connection to pm1 closed. [mcsadm@pm1 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub pm2 /bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/mcsadm/.ssh/id_rsa.pub" The authenticity of host 'pm2 (192.168.56.107)' can't be established. ECDSA key fingerprint is SHA256:Jle/edRpKz9ysV8xp1K9TlIGvbg8Sb1p+GbDob3Id0g. ECDSA key fingerprint is MD5:a1:ce:9d:58:80:c6:ed:5a:95:7b:33:82:68:cb:0f:40. Are you sure you want to continue connecting (yes/no)? 
yes /bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed /bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys mcsadm@pm2's password: ****** Number of key(s) added: 1 Now try logging into the machine, with: "ssh 'pm2'" and check to make sure that only the key(s) you wanted were added. [mcsadm@pm1 ~]$ ssh pm2 [mcsadm@pm2 ~]$ exit logout Connection to pm2 closed. [mcsadm@pm1 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub pm3 /bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/mcsadm/.ssh/id_rsa.pub" The authenticity of host 'pm3 (192.168.56.108)' can't be established. ECDSA key fingerprint is SHA256:Jle/edRpKz9ysV8xp1K9TlIGvbg8Sb1p+GbDob3Id0g. ECDSA key fingerprint is MD5:a1:ce:9d:58:80:c6:ed:5a:95:7b:33:82:68:cb:0f:40. Are you sure you want to continue connecting (yes/no)? yes /bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed /bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys mcsadm@pm3's password: ****** Number of key(s) added: 1 Now try logging into the machine, with: "ssh 'pm3'" and check to make sure that only the key(s) you wanted were added. [mcsadm@pm1 ~]$ ssh pm3 [mcsadm@pm3 ~]$ exit logout Connection to pm3 closed. [mcsadm@pm1 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub um1 /bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/mcsadm/.ssh/id_rsa.pub" The authenticity of host 'um1 (192.168.56.104)' can't be established. ECDSA key fingerprint is SHA256:Jle/edRpKz9ysV8xp1K9TlIGvbg8Sb1p+GbDob3Id0g. ECDSA key fingerprint is MD5:a1:ce:9d:58:80:c6:ed:5a:95:7b:33:82:68:cb:0f:40. Are you sure you want to continue connecting (yes/no)? 
yes
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
mcsadm@um1's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'um1'"
and check to make sure that only the key(s) you wanted were added.
[mcsadm@pm1 ~]$ ssh um1
[mcsadm@um1 ~]$ exit
logout
Connection to um1 closed.
[mcsadm@pm1 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub um2
/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/mcsadm/.ssh/id_rsa.pub"
The authenticity of host 'um2 (192.168.56.105)' can't be established.
ECDSA key fingerprint is SHA256:Jle/edRpKz9ysV8xp1K9TlIGvbg8Sb1p+GbDob3Id0g.
ECDSA key fingerprint is MD5:a1:ce:9d:58:80:c6:ed:5a:95:7b:33:82:68:cb:0f:40.
Are you sure you want to continue connecting (yes/no)? yes
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
mcsadm@um2's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'um2'"
and check to make sure that only the key(s) you wanted were added.
[mcsadm@pm1 ~]$ ssh um2
[mcsadm@um2 ~]$ exit
logout
Connection to um2 closed.
[mcsadm@pm1 ~]$

Key Exchange between UM1 and UM2

Generate an SSH key on UM1 and copy it to both UM1 and UM2; similarly, generate another SSH key on UM2 and copy it to both UM2 and UM1.

UM1:

[mcsadm@um1 ~]$ ssh-keygen Generating public/private rsa key pair. Enter file in which to save the key (/home/mcsadm/.ssh/id_rsa): Enter passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved in /home/mcsadm/.ssh/id_rsa. Your public key has been saved in /home/mcsadm/.ssh/id_rsa.pub. The key fingerprint is: SHA256:IcXWteJlQw8vHK3YwAgS4vphvY2bHdCJaCHYqRHa1ro mcsadm@um1 The key's randomart image is: +---[RSA 2048]----+ |. . o..o.+ .=. | |o= + . .+ +o *. | |= B . ... .+B.o | | * = o o o.+oo | |o * + o S . | | + o = | | E o o | | + . | | o . | +----[SHA256]-----+ [mcsadm@um1 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub um1 /bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/mcsadm/.ssh/id_rsa.pub" The authenticity of host 'um1 (192.168.56.104)' can't be established. ECDSA key fingerprint is SHA256:Jle/edRpKz9ysV8xp1K9TlIGvbg8Sb1p+GbDob3Id0g. ECDSA key fingerprint is MD5:a1:ce:9d:58:80:c6:ed:5a:95:7b:33:82:68:cb:0f:40. Are you sure you want to continue connecting (yes/no)? yes /bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed /bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys mcsadm@um1's password: Permission denied, please try again. mcsadm@um1's password: Number of key(s) added: 1 Now try logging into the machine, with: "ssh 'um1'" and check to make sure that only the key(s) you wanted were added. [mcsadm@um1 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub um2 /bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/mcsadm/.ssh/id_rsa.pub" The authenticity of host 'um2 (192.168.56.105)' can't be established. ECDSA key fingerprint is SHA256:Jle/edRpKz9ysV8xp1K9TlIGvbg8Sb1p+GbDob3Id0g. ECDSA key fingerprint is MD5:a1:ce:9d:58:80:c6:ed:5a:95:7b:33:82:68:cb:0f:40. Are you sure you want to continue connecting (yes/no)? 
yes /bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed /bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys mcsadm@um2's password: Number of key(s) added: 1 Now try logging into the machine, with: "ssh 'um2'" and check to make sure that only the key(s) you wanted were added. [mcsadm@um1 ~]$ ssh um2 [mcsadm@um2 ~]$ exit logout Connection to um2 closed. [mcsadm@um1 ~]$

UM2:

[mcsadm@um2 ~]$ ssh-keygen Generating public/private rsa key pair. Enter file in which to save the key (/home/mcsadm/.ssh/id_rsa): Enter passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved in /home/mcsadm/.ssh/id_rsa. Your public key has been saved in /home/mcsadm/.ssh/id_rsa.pub. The key fingerprint is: SHA256:VSiUVyGkQQYcQE/WxeNP6wiWIWtNOfpSqzdEG/uImA8 mcsadm@um2 The key's randomart image is: +---[RSA 2048]----+ | .oo=*==+o+. | | +..+o=o | | . .=.. | | . B.. . | | BSB o . | | + X o | | E+ * = o | | o.o * o . | | .o+ . | +----[SHA256]-----+ [mcsadm@um2 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub um2 /bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/mcsadm/.ssh/id_rsa.pub" The authenticity of host 'um2 (192.168.56.105)' can't be established. ECDSA key fingerprint is SHA256:Jle/edRpKz9ysV8xp1K9TlIGvbg8Sb1p+GbDob3Id0g. ECDSA key fingerprint is MD5:a1:ce:9d:58:80:c6:ed:5a:95:7b:33:82:68:cb:0f:40. Are you sure you want to continue connecting (yes/no)? yes /bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed /bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys mcsadm@um2's password: Permission denied, please try again. mcsadm@um2's password: Number of key(s) added: 1 Now try logging into the machine, with: "ssh 'um2'" and check to make sure that only the key(s) you wanted were added. [mcsadm@um2 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub um1 /bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/mcsadm/.ssh/id_rsa.pub" The authenticity of host 'um1 (192.168.56.104)' can't be established. ECDSA key fingerprint is SHA256:Jle/edRpKz9ysV8xp1K9TlIGvbg8Sb1p+GbDob3Id0g. ECDSA key fingerprint is MD5:a1:ce:9d:58:80:c6:ed:5a:95:7b:33:82:68:cb:0f:40. Are you sure you want to continue connecting (yes/no)? 
yes
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
mcsadm@um1's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'um1'"
and check to make sure that only the key(s) you wanted were added.

Install Dependencies and Configurations

As the root user, install all the required packages and set the umask, sudo privileges, and ulimit for the mcsadm user; also disable requiretty ("tty") for sudoers. All of these are required by ColumnStore on all the nodes.

[root@pm1 local] yum -y install boost expect perl perl-DBI openssl zlib file sudo libaio rsync snappy net-tools perl-DBD-MySQL
[root@pm1 local] echo "umask 022" >> /etc/profile
[root@pm1 local] echo "mcsadm ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
[root@pm1 local] echo "Defaults:mcsadm !requiretty" >> /etc/sudoers
[root@pm1 local] echo "@mcsadm hard nofile 65536" >> /etc/security/limits.conf
[root@pm1 local] echo "@mcsadm soft nofile 65536" >> /etc/security/limits.conf

Cluster test

Once the dependencies and SSH key exchange are in place, execute the ColumnStore cluster tester tool from the PM1 node as the mcsadm user.

[mcsadm@pm1 ~]$ ./mariadb/columnstore/bin/columnstoreClusterTester.sh
*** This is the MariaDB Columnstore Cluster System Test Tool ***
** Validate local OS is supported
Local Node OS System Name : CentOS Linux 7 (Core)
** Run Non-root User directory permissions check on Local Node
Local Node permission test on /tmp : Passed
Local Node permission test on /dev/shm : Passed
** Run MariaDB Console Password check
Passed, no problems detected with a MariaDB password being set without an associated /root/.my.cnf
** Run MariaDB ColumnStore Dependent Package Check
Local Node - Passed, all dependency packages are installed
Failed, Local Node package mariadb-libs is installed, please un-install
Failure occurred, do you want to continue? (y,n) > y
*** Finished Validation of the Cluster, Failures occurred. Check for Error/Failed test results ***
[mcsadm@pm1 ~]$ rpm -qa | grep mariadb
mariadb-libs-5.5.56-2.el7.x86_64
[mcsadm@pm1 ~]$

The above shows a failed cluster test because the default MariaDB libraries provided with the Linux distribution are already installed; we need to uninstall them for a clean ColumnStore installation.

Remove the MariaDB 5.5 libraries and execute the cluster test one more time.

[mcsadm@pm1 ~]$ sudo rpm -e --nodeps mariadb-libs-5.5.56-2.el7.x86_64
[mcsadm@pm1 ~]$ ./mariadb/columnstore/bin/columnstoreClusterTester.sh
*** This is the MariaDB Columnstore Cluster System Test Tool ***
** Validate local OS is supported
Local Node OS System Name : CentOS Linux 7 (Core)
** Run Non-root User directory permissions check on Local Node
Local Node permission test on /tmp : Passed
Local Node permission test on /dev/shm : Passed
** Run MariaDB Console Password check
Passed, no problems detected with a MariaDB password being set without an associated /root/.my.cnf
** Run MariaDB ColumnStore Dependent Package Check
Local Node - Passed, all dependency packages are installed
Local Node - Passed, all packages that should not be installed aren't installed
*** Finished Validation of the Cluster, all Tests Passed ***
[mcsadm@pm1 ~]$

All the tests pass; now it's time to configure ColumnStore for 2 UM and 3 PM nodes.

Post Install

Once the cluster test tool succeeds, execute post-install and follow the instructions for a distributed install. The user inputs to postConfigure are as follows:

  • Select the type of Data Storage [1=internal, 2=external] () > 1
    • Internal means local disk
  • Enter number of User Modules [1,1024] () > 2

    • Since we are doing a 2 UM node setup

  • Enter Nic Interface #1 Host Name () > um1

    • Enter "um1" here based on the /etc/hosts file setup

  • Enter Nic Interface #1 IP Address of um1 (192.168.56.104) >

    • Just press enter to accept the IP already identified by the setup

  • Enter Nic Interface #2 Host Name (unassigned) >

    • Press Enter without any value here as we are only configuring 1 network interface per node

  • The above three prompts will repeat depending on the number of UM nodes specified

  • Enter number of Performance Modules [1,1024] () > 3

    • Since we are doing a 3 PM node setup

  • Enter Nic Interface #1 Host Name () > pm1

    • Enter "pm1" here based on the /etc/hosts setup

  • Enter Nic Interface #1 IP Address of pm1 (192.168.56.106) >

    • Just press enter to accept the IP already identified by the setup

  • Enter Nic Interface #2 Host Name (unassigned) >

    • Press Enter without any value here as we are only configuring 1 network interface per node

  • The above three will be repeated depending on the number of PM nodes specified

[mcsadm@pm1 ~]$ ./mariadb/columnstore/bin/post-install --installdir=$HOME/mariadb/columnstore
The next steps are:

If installing on a pm1 node:

export COLUMNSTORE_INSTALL_DIR=/home/mcsadm/mariadb/columnstore
export LD_LIBRARY_PATH=/home/mcsadm/mariadb/columnstore/lib:/home/mcsadm/mariadb/columnstore/mysql/lib/mysql:/home/mcsadm/mariadb/columnstore/lib:/home/mcsadm/mariadb/columnstore/mysql/lib:/home/mcsadm/mariadb/columnstore/lib:/home/mcsadm/mariadb/columnstore/mysql/lib:/home/mcsadm/mariadb/columnstore/lib:/home/mcsadm/mariadb/columnstore/mysql/lib
/home/mcsadm/mariadb/columnstore/bin/postConfigure -i /home/mcsadm/mariadb/columnstore

If installing on a non-pm1 using the non-distributed option:

export COLUMNSTORE_INSTALL_DIR=/home/mcsadm/mariadb/columnstore
export LD_LIBRARY_PATH=/home/mcsadm/mariadb/columnstore/lib:/home/mcsadm/mariadb/columnstore/mysql/lib/mysql:/home/mcsadm/mariadb/columnstore/lib:/home/mcsadm/mariadb/columnstore/mysql/lib:/home/mcsadm/mariadb/columnstore/lib:/home/mcsadm/mariadb/columnstore/mysql/lib:/home/mcsadm/mariadb/columnstore/lib:/home/mcsadm/mariadb/columnstore/mysql/lib
/home/mcsadm/mariadb/columnstore/bin/columnstore start

Copy the above export lines into ~/.bashrc on the PM1 node. Make sure the tar.gz file is under the /home/mcsadm home folder before executing postConfigure.
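For reference, the lines to append to /home/mcsadm/.bashrc on PM1 can look like this (paths as emitted by post-install above, with the repeated LD_LIBRARY_PATH components collapsed for readability):

```shell
# Environment for the mcsadm ColumnStore install on PM1.
export COLUMNSTORE_INSTALL_DIR=/home/mcsadm/mariadb/columnstore
export LD_LIBRARY_PATH=$COLUMNSTORE_INSTALL_DIR/lib:$COLUMNSTORE_INSTALL_DIR/mysql/lib/mysql:$COLUMNSTORE_INSTALL_DIR/mysql/lib
```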

[mcsadm@pm1 ~]$ /home/mcsadm/mariadb/bin/postConfigure -i /home/mcsadm/mariadb/columnstore This is the MariaDB ColumnStore System Configuration and Installation tool. It will Configure the MariaDB ColumnStore System and will perform a Package Installation of all of the Servers within the System that is being configured. IMPORTANT: This tool should only be run on the Parent OAM Module            which is a Performance Module, preferred Module #1 Prompting instructions:         Press 'enter' to accept a value in (), if available or         Enter one of the options within [], if available, or         Enter a new value ===== Setup System Server Type Configuration ===== There are 2 options when configuring the System Server Type: single and multi   'single'  - Single-Server install is used when there will only be 1 server configured               on the system. It can also be used for production systems, if the plan is               to stay single-server.   'multi'   - Multi-Server install is used when you want to configure multiple servers now or               in the future. With Multi-Server install, you can still configure just 1 server               now and add on addition servers/modules in the future. Select the type of System Server install [1=single, 2=multi] (2) > 2 ===== Setup System Module Type Configuration ===== There are 2 options when configuring the System Module Type: separate and combined   'separate' - User and Performance functionality on separate servers.   'combined' - User and Performance functionality on the same server Select the type of System Module Install [1=separate, 2=combined] (1) > 1 Seperate Server Installation will be performed. NOTE: Local Query Feature allows the ability to query data from a single Performance       Module. Check MariaDB ColumnStore Admin Guide for additional information. Enable Local Query feature? 
[y,n] (n) > n NOTE: The MariaDB ColumnStore Schema Sync feature will replicate all of the       schemas and InnoDB tables across the User Module nodes. This feature can be enabled       or disabled, for example, if you wish to configure your own replication post installation. MariaDB ColumnStore Schema Sync feature is Enabled, do you want to leave enabled? [y,n] (y) > y NOTE: MariaDB ColumnStore Replication Feature is enabled Enter System Name (columnstore-1) > ===== Setup Storage Configuration ===== ----- Setup Performance Module DBRoot Data Storage Mount Configuration ----- There are 2 options when configuring the storage: internal or external   'internal' -    This is specified when a local disk is used for the DBRoot storage.                   High Availability Server Failover is not Supported in this mode   'external' -    This is specified when the DBRoot directories are mounted.                   High Availability Server Failover is Supported in this mode. Select the type of Data Storage [1=internal, 2=external] (1) > 1 ===== Setup Memory Configuration ===== NOTE: Setting 'NumBlocksPct' to 70%       Setting 'TotalUmMemory' to 50% ===== Setup the Module Configuration ===== ----- User Module Configuration ----- Enter number of User Modules [1,1024] (2) > 2 *** User Module #1 Configuration *** Enter Nic Interface #1 Host Name (um1) > um1 Enter Nic Interface #1 IP Address of um1 (192.168.56.104) > Enter Nic Interface #2 Host Name (unassigned) > *** User Module #2 Configuration *** Enter Nic Interface #1 Host Name (um2) > um2 Enter Nic Interface #1 IP Address of um2 (192.168.56.105) > Enter Nic Interface #2 Host Name (unassigned) > ----- Performance Module Configuration ----- Enter number of Performance Modules [1,1024] (3) > 3 *** Parent OAM Module Performance Module #1 Configuration *** Enter Nic Interface #1 Host Name (pm1) > pm1 Enter Nic Interface #1 IP Address of pm1 (192.168.56.106) > Enter Nic Interface #2 Host Name (unassigned) > Enter the list 
(Nx,Ny,Nz) or range (Nx-Nz) of DBRoot IDs assigned to module 'pm1' (1) > 1 *** Performance Module #2 Configuration *** Enter Nic Interface #1 Host Name (pm2) > pm2 Enter Nic Interface #1 IP Address of pm2 (192.168.56.107) > Enter Nic Interface #2 Host Name (unassigned) > Enter the list (Nx,Ny,Nz) or range (Nx-Nz) of DBRoot IDs assigned to module 'pm2' (2) > 2 *** Performance Module #3 Configuration *** Enter Nic Interface #1 Host Name (pm3) > pm3 Enter Nic Interface #1 IP Address of pm3 (192.168.56.108) > Enter Nic Interface #2 Host Name (unassigned) > Enter the list (Nx,Ny,Nz) or range (Nx-Nz) of DBRoot IDs assigned to module 'pm3' (3) > 3 ===== System Installation ===== System Configuration is complete. Performing System Installation. Performing a MariaDB ColumnStore System install using a Binary package located in the /home/mcsadm directory. Next step is to enter the password to access the other Servers. This is either your password or you can default to using a ssh key If using a password, the password needs to be the same on all Servers. Enter password, hit 'enter' to default to using a ssh key, or 'exit' > ----- Performing Install on 'um1 / um1' ----- Install log file is located here: /tmp/um1_binary_install.log ----- Performing Install on 'um2 / um2' ----- Install log file is located here: /tmp/um2_binary_install.log ----- Performing Install on 'pm2 / pm2' ----- Install log file is located here: /tmp/pm2_binary_install.log ----- Performing Install on 'pm3 / pm3' ----- Install log file is located here: /tmp/pm3_binary_install.log MariaDB ColumnStore Package being installed, please wait ...  DONE ===== Checking MariaDB ColumnStore System Logging Functionality ===== The MariaDB ColumnStore system logging is setup and working on local server ===== MariaDB ColumnStore System Startup ===== System Configuration is complete. Performing System Installation. 
----- Starting MariaDB ColumnStore on local server ----- MariaDB ColumnStore successfully started MariaDB ColumnStore Database Platform Starting, please wait ...................... DONE Run MariaDB ColumnStore Replication Setup..  DONE MariaDB ColumnStore Install Successfully Completed, System is Active Enter the following command to define MariaDB ColumnStore Alias Commands . /home/mcsadm/mariadb/columnstore/bin/columnstoreAlias Enter 'mcsmysql' to access the MariaDB ColumnStore SQL console Enter 'mcsadmin' to access the MariaDB ColumnStore Admin console NOTE: The MariaDB ColumnStore Alias Commands are in /etc/profile.d/columnstoreAlias.sh [mcsadm@pm1 ~]$

The above indicates a successful installation of MariaDB AX and ColumnStore.

In case of any errors, the log files for all the nodes are located under PM1's /tmp folder with node-specific file names.

Test ColumnStore using mcsmysql from the UM1 node:

[mcsadm@um1 ~]$ mcsmysql Welcome to the MariaDB monitor. Commands end with ; or \g. Your MariaDB connection id is 40 Server version: 10.2.15-MariaDB-log Columnstore 1.1.5-1 Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others. Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. MariaDB [(none)]> show databases; +---------------------+ | Database | +---------------------+ | calpontsys | | columnstore_info | | infinidb_querystats | | infinidb_vtable | | information_schema | | mysql | | performance_schema | | test | +---------------------+ 8 rows in set (0.00 sec) MariaDB [(none)]> create database testdb; Query OK, 1 row affected (0.00 sec) MariaDB [(none)]> use testdb; Database changed MariaDB [testdb]> show tables; Empty set (0.00 sec) MariaDB [testdb]> create table tab(id int, name varchar(100)); Query OK, 0 rows affected (0.01 sec) MariaDB [testdb]> show create table tab\G *************************** 1. row *************************** Table: tab Create Table: CREATE TABLE `tab` ( `id` int(11) DEFAULT NULL, `name` varchar(100) DEFAULT NULL ) ENGINE=InnoDB DEFAULT CHARSET=latin1 1 row in set (0.00 sec) MariaDB [testdb]> create table tab_cs(id int, name varchar(100)) engine=columnstore; Query OK, 0 rows affected (0.39 sec) MariaDB [testdb]> show create table tab_cs\G *************************** 1. row *************************** Table: tab_cs Create Table: CREATE TABLE `tab_cs` ( `id` int(11) DEFAULT NULL, `name` varchar(100) DEFAULT NULL ) ENGINE=Columnstore DEFAULT CHARSET=latin1 1 row in set (0.00 sec) MariaDB [testdb]> insert into tab_cs select rand()*10000, column_name from information_schema.columns; Query OK, 1921 rows affected (1.70 sec) Records: 1921 Duplicates: 0 Warnings: 0 MariaDB [testdb]> select count(*) from tab_cs; +----------+ | count(*) | +----------+ | 1921 | +----------+ 1 row in set (0.08 sec) MariaDB [testdb]>

This concludes a successful MariaDB AX distributed tarball install.

Conclusion

MariaDB ColumnStore is a powerful distributed analytical storage engine available as part of MariaDB AX along with MariaDB MaxScale. Installation using a tarball is quite an easy task.

Summary:
  • Create a new user/group mcsadm
  • Download and untar the tar.gz tarball under /home/mcsadm
  • Perform SSH Key Exchange between PM1 and all nodes
  • Perform SSH Key Exchange between UM1 and UM2 and vice versa
  • Perform a Cluster Test to ensure no problems
  • Execute Post Install
  • Execute Post Configure

This blog is about setting up a MariaDB AX Cluster with 2 UM and 3 PM nodes on a CentOS 7 Cluster. We will be using a tarball binary image non-root installation.


Categories: Web Technologies

What is WebAssembly? The next-generation web platform explained

InfoWorld JavaScript - Wed, 08/01/2018 - 03:00

For two decades now, we’ve had only one programming language available to use natively in a web browser: JavaScript. The slow death of third-party binary plug-ins has ruled out other languages, such as Java and Flash’s ActionScript, as first-class citizens for web development. Other web languages, like CoffeeScript, are merely compiled to JavaScript.

But now we have a new possibility: WebAssembly, or WASM for short. WebAssembly is a small, fast binary format that promises near-native performance for web applications. Plus, WebAssembly is designed to be a compilation target for any language, JavaScript being just one of them. With every major browser now supporting WebAssembly, it’s time to start thinking seriously about writing client-side apps for the web that can be compiled as WebAssembly.



Getting to Know a Legacy Codebase

CSS-Tricks - Tue, 07/31/2018 - 13:44

Harry Roberts talks about some methods for getting comfy with a new ("specifically CSS") code base. Harry's done this a lot as someone who parachutes into new code bases regularly as a consultant. But I think this is also quite interesting for people starting a new job. So much web development work is working on existing sites, not green fielding new ones.


The post Getting to Know a Legacy Codebase appeared first on CSS-Tricks.


Framer X

CSS-Tricks - Tue, 07/31/2018 - 13:35

Framer X is a brand new app that’s about to be released and this quick demo reel takes us on a tour through some of the changes to the previous app—it all looks super exciting.

As a designer, I’m most interested in the prototyping tools and being able to quickly explore complex scene transitions between one state and another. But as a developer, I’m interested in how it all ties into React. The website describes it like so:

Use actual React in your projects to create interactive components from scratch. Want more control? Create custom UI in the properties panel for your components.

I can imagine a wonderful near-future where it’s possible to tie Framer X into a design system so that folks on a team can use all the real life React components without having to worry if they’re up-to-date or not.


The post Framer X appeared first on CSS-Tricks.


A Kind Introduction to MySQL Windowing Functions II

Planet MySQL - Tue, 07/31/2018 - 12:32
Before I take up from the last blog, I need to introduce RANGE and ROWS. Windows over data can be framed, and this is where things can get wild and woolly. Table x has a column named x (me being overly creative again) that has the values one through 10. If we sum the values of x we can get different values depending on how the frame is constructed.

If the frame for the window is defined as the range between 'unbounded preceding and current row', the value for the sum of x will be the sum of the entire column of data. However, if the frame is defined as the rows between 'unbounded preceding and current row', it will sum the value of the current row and the values of the rows that came before it: 1, 1+2, 1+2+3, etc.


mysql> SELECT x, sum(x) over w as 'rows',
    -> sum(x) over y as 'range'
    -> from x
    -> window w as
    ->   (rows between unbounded preceding and current row),
    -> y as
    ->   (range between unbounded preceding and current row);
+------+------+-------+
| x    | rows | range |
+------+------+-------+
|    1 |    1 |    55 |
|    2 |    3 |    55 |
|    3 |    6 |    55 |
|    4 |   10 |    55 |
|    5 |   15 |    55 |
|    6 |   21 |    55 |
|    7 |   28 |    55 |
|    8 |   36 |    55 |
|    9 |   45 |    55 |
|   10 |   55 |    55 |
+------+------+-------+
10 rows in set (0.00 sec)

The terminology of frames and windows gets a little confusing and sounds like a home remodeling project.  For now, consider 'unbounded' to be everything in the column, so unbounded preceding takes in everything in the column before this row.
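For intuition, the two frames in the query above can be reproduced in plain Python (an illustrative sketch, not MySQL itself):

```python
# Plain-Python sketch of the two frames over x = 1..10.
x = list(range(1, 11))

# ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW:
# the current row plus every row before it -- a running total.
rows_frame = [sum(x[: i + 1]) for i in range(len(x))]

# The window named y has no ORDER BY, so every row is a peer of every
# other row; RANGE ... CURRENT ROW therefore covers the whole partition
# and every row sees the grand total.
range_frame = [sum(x)] * len(x)

print(rows_frame)   # [1, 3, 6, 10, 15, 21, 28, 36, 45, 55]
print(range_frame)  # [55, 55, 55, 55, 55, 55, 55, 55, 55, 55]
```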


mysql> SELECT x, 
       sum(x) over w as 'rows', 
       sum(x) over y as 'range' 
       from x 
       window w as 
         (rows between current row and unbounded following), 
         y as (range between unbounded preceding and current row);
+------+------+-------+
| x    | rows | range |
+------+------+-------+
|    1 |   55 |    55 |
|    2 |   54 |    55 |
|    3 |   52 |    55 |
|    4 |   49 |    55 |
|    5 |   45 |    55 |
|    6 |   40 |    55 |
|    7 |   34 |    55 |
|    8 |   27 |    55 |
|    9 |   19 |    55 |
|   10 |   10 |    55 |
+------+------+-------+
10 rows in set (0.00 sec)

So unbounded following takes all the values in the column after the current row.  That is why the rows value where x equals 2 is 54: the grand total of 55 less the value of the first row (and where x equals 3 it is 55 less 1 and 2, or 52).
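The same reverse running total, sketched in plain Python for intuition:

```python
# ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING:
# sum of the current row and every row after it.
x = list(range(1, 11))
rows_frame = [sum(x[i:]) for i in range(len(x))]
print(rows_frame)  # [55, 54, 52, 49, 45, 40, 34, 27, 19, 10]
```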

More frames and windows next time!


More State Stuff
Last time was the start of a series on windowing functions that were introduced with MySQL 8. This entry will delve a little bit deeper into this realm.  The data used for these examples starts off with the World database.

Population By State
Previously the data showed a total population for each state but listed an entry for each city in that state, er, district.  Adding a GROUP BY to the query collapses the output to one row per state.  Be aware, though, that the window is evaluated after grouping, so it only sees a single row per district; for a true per-state total, a plain aggregate (SELECT District, SUM(Population) ... GROUP BY District) is the simpler choice.


mysql> SELECT district, Sum(Population) OVER w 
FROM city 
WHERE CountryCode = 'USA' 
GROUP BY District  
WINDOW w AS (PARTITION BY District) 
LIMIT 10;
+----------------------+------------------------+
| district             | Sum(Population) OVER w |
+----------------------+------------------------+
| Alabama              |                 242820 |
| Alaska               |                 260283 |
| Arizona              |                1321045 |
| Arkansas             |                 183133 |
| California           |                3694820 |
| Colorado             |                 554636 |
| Connecticut          |                 139529 |
| District of Columbia |                 572059 |
| Florida              |                 735167 |
| Georgia              |                 416474 |
+----------------------+------------------------+
10 rows in set (0.00 sec)


Going Back to Ohio
But what if we wanted a little more statistical information on a state's population?  Well, there are several functions just waiting for you.

mysql> select name, Population as 'Pop', 
       SUM(Population) OVER w AS 'sum', 
       RANK() over w as 'rank', 
       percent_rank() OVER w as '%rank' 
       from city where District='Ohio' 
       WINDOW w AS (ORDER BY Population);
+------------+--------+---------+------+-------+
| name       | Pop    | sum     | rank | %rank |
+------------+--------+---------+------+-------+
| Dayton     | 166179 |  166179 |    1 |     0 |
| Akron      | 217074 |  383253 |    2 |   0.2 |
| Toledo     | 313619 |  696872 |    3 |   0.4 |
| Cincinnati | 331285 | 1028157 |    4 |   0.6 |
| Cleveland  | 478403 | 1506560 |    5 |   0.8 |
| Columbus   | 711470 | 2218030 |    6 |     1 |
+------------+--------+---------+------+-------+
6 rows in set (0.01 sec)


Ohio is used here as it has just enough data to illustrate some concepts.  The window is ordered by Population, smallest first, so Dayton, with the smallest population, has rank '1' and Columbus, the largest, ranks last at '6'.

The PERCENT_RANK is the relative rank of the current row, computed as (rank - 1) / (rows - 1), so it runs from zero for the first row to one for the last.  For a fun exercise, rerun the above query with desc added to the window definition (ORDER BY Population desc).
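The arithmetic behind those two columns can be sketched in plain Python:

```python
# RANK() and PERCENT_RANK() for the Ohio populations, ascending.
pops = [166179, 217074, 313619, 331285, 478403, 711470]

# RANK(): 1 + number of rows ordered strictly before this one
# (ties would share a rank, though this data set has none).
ranks = [1 + sum(v < w for v in pops) for w in pops]

# PERCENT_RANK(): (rank - 1) / (rows - 1), on a scale of 0 to 1.
pct_rank = [(r - 1) / (len(pops) - 1) for r in ranks]

print(ranks)     # [1, 2, 3, 4, 5, 6]
print(pct_rank)  # [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]
```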

Differences

The functions FIRST_VALUE and LAST_VALUE provide a way to see how much smaller each city's population is than Columbus's.  With the window ordered by Population desc, the default frame ends at the current row, so LAST_VALUE is the current row's value; subtracting it from FIRST_VALUE, Columbus's 711,470, provides the answer.

 mysql> select name, 
         Population as 'Pop', 
         SUM(Population) OVER w AS 'sum', 
        (first_value(Population) over w - last_value(Population) over w) as 'diff'  
        from city 
        where District='Ohio' 
        WINDOW w AS (ORDER BY Population desc) ;
+------------+--------+---------+--------+
| name       | Pop    | sum     | diff   |
+------------+--------+---------+--------+
| Columbus   | 711470 |  711470 |      0 |
| Cleveland  | 478403 | 1189873 | 233067 |
| Cincinnati | 331285 | 1521158 | 380185 |
| Toledo     | 313619 | 1834777 | 397851 |
| Akron      | 217074 | 2051851 | 494396 |
| Dayton     | 166179 | 2218030 | 545291 |
+------------+--------+---------+--------+
6 rows in set (0.00 sec)

But how much smaller are cities three through six than the second city, Cleveland?  The NTH_VALUE(expression, n) function lets us subtract the smaller values from the second entry.
mysql> select name, Population as 'Pop',
        SUM(Population) OVER w AS 'sum',
        (first_value(Population) over w - last_value(Population) over w)
            as 'diff Columbus',
        (nth_value(Population,2) over w - last_value(Population) over w)
            as 'diff Cleveland'
        from city where District='Ohio'
        WINDOW w AS (ORDER BY Population desc);
+------------+--------+---------+---------------+----------------+
| name       | Pop    | sum     | diff Columbus | diff Cleveland |
+------------+--------+---------+---------------+----------------+
| Columbus   | 711470 |  711470 |             0 |           NULL |
| Cleveland  | 478403 | 1189873 |        233067 |              0 |
| Cincinnati | 331285 | 1521158 |        380185 |         147118 |
| Toledo     | 313619 | 1834777 |        397851 |         164784 |
| Akron      | 217074 | 2051851 |        494396 |         261329 |
| Dayton     | 166179 | 2218030 |        545291 |         312224 |
+------------+--------+---------+---------------+----------------+
6 rows in set (0.01 sec)
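A plain-Python sketch of why the second column starts with NULL: with ORDER BY Population desc and the default frame, each row's window ends at the current row, so the second value (Cleveland) only exists once at least two rows are in the frame.

```python
pops = [711470, 478403, 331285, 313619, 217074, 166179]  # desc

# NTH_VALUE(Population, 2): NULL (None) until the frame holds 2+ rows.
second = [pops[1] if i >= 1 else None for i in range(len(pops))]

diff_cleveland = [s - p if s is not None else None
                  for s, p in zip(second, pops)]
print(diff_cleveland)  # [None, 0, 147118, 164784, 261329, 312224]
```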
But how much bigger is each city than the next biggest?  For that we need LAG().  LAG() takes the value of a column from the row before (so Cleveland's lag value is Columbus's population); the default offset is 1.  The corresponding LEAD() takes the value from the row after.  Now it is easy to see that Toledo is only 17,666 behind Cincinnati.
mysql> select name,
        Population as 'Pop',
        lag(Population,1) over w as 'lag',
        lead(Population,1) over w as 'lead',
        Population - lag(Population) over w as 'diff'
        from city
        WHERE District = 'Ohio'
        window w as (ORDER BY Population desc);
+------------+--------+--------+--------+---------+
| name       | Pop    | lag    | lead   | diff    |
+------------+--------+--------+--------+---------+
| Columbus   | 711470 |   NULL | 478403 |    NULL |
| Cleveland  | 478403 | 711470 | 331285 | -233067 |
| Cincinnati | 331285 | 478403 | 313619 | -147118 |
| Toledo     | 313619 | 331285 | 217074 |  -17666 |
| Akron      | 217074 | 313619 | 166179 |  -96545 |
| Dayton     | 166179 | 217074 |   NULL |  -50895 |
+------------+--------+--------+--------+---------+
6 rows in set (0.00 sec)
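LAG() and LEAD() are just the column shifted by one row, which is easy to see in plain Python (None stands in for SQL's NULL):

```python
pops = [711470, 478403, 331285, 313619, 217074, 166179]  # desc

lag = [None] + pops[:-1]   # value from the previous row
lead = pops[1:] + [None]   # value from the next row
diff = [p - l if l is not None else None for p, l in zip(pops, lag)]

print(diff)  # [None, -233067, -147118, -17666, -96545, -50895]
```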


Dividing Up Into Buckets
Another useful function is NTILE, which divvies the data up into a specified number of groups, or buckets.  Be sure to use ORDER BY so the rows are assigned to buckets in the right order.  Below, the cities are split into four buckets and, alongside, into three buckets.
mysql> SELECT name,
               Population as 'Pop',
               NTILE(4) over w as 'ntile4',
               NTILE(3) over w as 'ntile3'
               from city
               where District = 'Ohio'
               Window w as (ORDER BY Population desc);
+------------+--------+--------+--------+
| name       | Pop    | ntile4 | ntile3 |
+------------+--------+--------+--------+
| Columbus   | 711470 |      1 |      1 |
| Cleveland  | 478403 |      1 |      1 |
| Cincinnati | 331285 |      2 |      2 |
| Toledo     | 313619 |      2 |      2 |
| Akron      | 217074 |      3 |      3 |
| Dayton     | 166179 |      4 |      3 |
+------------+--------+--------+--------+
6 rows in set (0.00 sec)
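The bucket assignment can be sketched in plain Python; when the rows don't divide evenly, the earlier buckets take one extra row each, which is why NTILE(4) over six rows yields bucket sizes of 2, 2, 1, 1:

```python
def ntile(n_rows, n_buckets):
    # NTILE(n): split the ordered rows into n buckets; the first
    # (n_rows % n_buckets) buckets each get one extra row.
    base, extra = divmod(n_rows, n_buckets)
    out = []
    for b in range(1, n_buckets + 1):
        out.extend([b] * (base + (1 if b <= extra else 0)))
    return out

print(ntile(6, 4))  # [1, 1, 2, 2, 3, 4]
print(ntile(6, 3))  # [1, 1, 2, 2, 3, 3]
```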

Categories: Web Technologies

MariaDB 5.5.61, MariaDB Connector/Node.js 0.7.0 and MariaDB Connector/J 2.2.6 now available

Planet MySQL - Tue, 07/31/2018 - 09:55

The MariaDB Foundation is pleased to announce the availability of MariaDB 5.5.61, the latest stable release in the MariaDB 5.5 series, as well as MariaDB Connector/Node.js 0.7.0, the first alpha release of the new 100% JavaScript non-blocking MariaDB client for Node.js, compatible with Node.js 6+, and MariaDB Connector/J 2.2.6, the latest stable MariaDB Connector/J release. […]

The post MariaDB 5.5.61, MariaDB Connector/Node.js 0.7.0 and MariaDB Connector/J 2.2.6 now available appeared first on MariaDB.org.

Categories: Web Technologies

204 No Content - Evert Pot

Planet PHP - Tue, 07/31/2018 - 08:00

204 No Content should be returned by a server when a request was successful, but there was no response body.

For most APIs, the following two responses are the same:

HTTP/1.1 200 OK
Content-Length: 0

HTTP/1.1 204 No Content

Most API designers slightly prefer the second format. The reason there is a separate status code is that a 204 No Content response can signal a hypermedia client not to change the document and to just stay on the current ‘view’. For example, a client might ‘save’ a document temporarily and not refresh to a new page.

Personally I have not really seen this behavior, but I can absolutely see this as a real possibility for a true HATEOAS client.

However, most of the time, 204 is simply used as a stand-in for 200 without a response body.

204 No Content is sometimes misunderstood by implementors of HTTP APIs to indicate that the target resource ‘no longer has any contents’ after a DELETE request. However, other successful requests that don’t have a response body should also have the 204 No Content status, including responses to PUT and POST.
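As a sketch of that rule, a hypothetical helper (not part of any real framework) might pick the status purely on whether a response body exists:

```python
from http import HTTPStatus

def success_status(body):
    # Hypothetical helper: a successful request with no response body
    # can answer 204 No Content; otherwise plain 200 OK is fine.
    # The request method (GET, PUT, POST, DELETE) does not matter.
    return HTTPStatus.NO_CONTENT if not body else HTTPStatus.OK

print(int(success_status(b"")))       # 204
print(int(success_status(b"hello")))  # 200
```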

References
Categories: Web Technologies

Pages