


MariaDB Galera Cluster 5.5.61, MariaDB Connector/C 3.0.6 and MariaDB Connector/ODBC 3.0.6 now available

Planet MySQL - Fri, 08/03/2018 - 09:33

The MariaDB Foundation is pleased to announce the availability of MariaDB Galera Cluster 5.5.61 as well as MariaDB Connector/C 3.0.6 and MariaDB Connector/ODBC 3.0.6, all stable releases. See the release notes and changelogs for details. Download MariaDB Galera Cluster 5.5.61 Release Notes Changelog What is MariaDB Galera Cluster? Download MariaDB Connector/C 3.0.6 Release Notes Changelog […]

The post MariaDB Galera Cluster 5.5.61, MariaDB Connector/C 3.0.6 and MariaDB Connector/ODBC 3.0.6 now available appeared first on MariaDB.org.

Categories: Web Technologies

MariaDB Cluster 5.5.61 and updated connectors now available

Planet MySQL - Fri, 08/03/2018 - 09:00
MariaDB Cluster 5.5.61 and updated connectors now available dbart Fri, 08/03/2018 - 12:00

The MariaDB project is pleased to announce the immediate availability of MariaDB Cluster 5.5.61 and updated MariaDB C/C++ and ODBC connectors. See the release notes and changelogs for details and visit mariadb.com/downloads to download.

Download MariaDB Cluster 5.5.61

Release Notes Changelog What is MariaDB Cluster?

Download MariaDB C/C++ and ODBC Connectors

MariaDB Connector/C 3.0.6 Release Notes MariaDB Connector/ODBC 3.0.6 Release Notes



Categories: Web Technologies

Databook: Turning Big Data into Knowledge with Metadata at Uber

Planet MySQL - Fri, 08/03/2018 - 08:30

Databook, Uber's in-house platform for surfacing and exploring contextual metadata, makes dataset discovery and exploration easier for teams across the company.

The post Databook: Turning Big Data into Knowledge with Metadata at Uber appeared first on Uber Engineering Blog.

Categories: Web Technologies

Using data in React with the Fetch API and axios

CSS-Tricks - Fri, 08/03/2018 - 07:15

If you are new to React, and perhaps have only played with building to-do and counter apps, you may not yet have run across a need to pull in data for your app. There will likely come a time when you’ll need to do this, as React apps are most well suited for situations where you’re handling both data and state.

The first set of data you may need to handle might be hard-coded into your React application, like we did for this demo from our Error Boundary tutorial:

See the Pen error boundary 0 by Kingsley Silas Chijioke (@kinsomicrote) on CodePen.

What if you want to handle data from an API? That's the purpose of this tutorial. Specifically, we'll make use of the Fetch API and axios as examples for how to request and use data.

The Fetch API

The Fetch API provides an interface for fetching resources. We'll use it to fetch data from a third-party API and see how to use it when fetching data from an API built in-house.

Using Fetch with a third-party API

See the Pen React Fetch API Pen 1 by Kingsley Silas Chijioke (@kinsomicrote) on CodePen.

We will be fetching random users from JSONPlaceholder, a fake online REST API for testing. Let's start by creating our component and declaring some default state.

class App extends React.Component {
  state = {
    isLoading: true,
    users: [],
    error: null
  }

  render() {
    return (
      <React.Fragment>
      </React.Fragment>
    );
  }
}

There is bound to be a delay when data is being requested by the network. It could be a few seconds or maybe a few milliseconds. Either way, during this delay, it’s good practice to let users know that something is happening while the request is processing.

To do that we'll make use of isLoading to either display the loading message or the requested data. The data will be displayed when isLoading is false, else a loading message will be shown on the screen. So the render() method will look like this:

render() {
  const { isLoading, users, error } = this.state;
  return (
    <React.Fragment>
      <h1>Random User</h1>
      {/* Display a message if we encounter an error */}
      {error ? <p>{error.message}</p> : null}
      {/* Here's our data check */}
      {!isLoading ? (
        users.map(user => {
          const { username, name, email } = user;
          return (
            <div key={username}>
              <p>Name: {name}</p>
              <p>Email Address: {email}</p>
              <hr />
            </div>
          );
        })
      // If there is a delay in data, let's let the user know it's loading
      ) : (
        <h3>Loading...</h3>
      )}
    </React.Fragment>
  );
}

The code is basically doing this:

  1. De-structures isLoading, users and error from the application state so we don't have to keep typing this.state.
  2. Prints a message if the application encounters an error establishing a connection
  3. Checks to see if data is loading
  4. If loading is not happening, then we must have the data, so we display it
  5. If loading is happening, then we must still be working on it and display "Loading..." while the app is working

For Steps 3-5 to work, we need to make the request to fetch data from an API. This is where the JSONplaceholder API will come in handy for our example.

fetchUsers() {
  // Where we're fetching data from
  fetch(`https://jsonplaceholder.typicode.com/users`)
    // We get the API response and receive data in JSON format...
    .then(response => response.json())
    // ...then we update the users state
    .then(data =>
      this.setState({
        users: data,
        isLoading: false,
      })
    )
    // Catch any errors we hit and update the app
    .catch(error => this.setState({ error, isLoading: false }));
}

We create a method called fetchUsers() and use it to do exactly what you might think: request user data from the API endpoint and fetch it for our app. Fetch is a promise-based API which returns a response object. So, we make use of the json() method to parse the response body; the parsed result is stored in data and used to update the users state in our application. We also need to change the state of isLoading to false so that our application knows that loading has completed and all is clear to render the data.

The fact that Fetch is promise-based means we can also catch errors using the .catch() method. Any error encountered is used as the value to update our error state. Handy!

The first time the application renders, the data won't have been received — it can take seconds. We want to trigger the method to fetch the users when the application state can be accessed for an update and the application re-rendered. React's componentDidMount() is the best place for this, so we'll place the fetchUsers() method in it.

componentDidMount() {
  this.fetchUsers();
}

Using Fetch With Self-Owned API

So far, we’ve looked at how to put someone else’s data to use in an application. But what if we’re working with our own data in our own API? That’s what we’re going to cover right now.

See the Pen React Fetch API Pen 2 by Kingsley Silas Chijioke (@kinsomicrote) on CodePen.

I built an API which is available on GitHub. The JSON response you get has been placed on AWS — that’s what we will use for this tutorial.

As we did before, let's create our component and set up some default state.

class App extends React.Component {
  state = {
    isLoading: true,
    posts: [],
    error: null
  }

  render() {
    return (
      <React.Fragment>
      </React.Fragment>
    );
  }
}

Our method for looping through the data will be different from the one we used before but only because of the data’s structure, which is going to be different. You can see the difference between our data structure here and the one we obtained from JSONPlaceholder.

Here is what the render() method will look like for our API:

render() {
  const { isLoading, posts, error } = this.state;
  return (
    <React.Fragment>
      <h1>React Fetch - Blog</h1>
      <hr />
      {!isLoading ? Object.keys(posts).map(key => <Post key={key} body={posts[key]} />) : <h3>Loading...</h3>}
    </React.Fragment>
  );
}

Let's break down the logic:

{
  !isLoading
    ? Object.keys(posts).map(key => <Post key={key} body={posts[key]} />)
    : <h3>Loading...</h3>
}

When isLoading is not true, we map through the keys of the posts object and pass each post to the Post component as props. Otherwise, we display a "Loading..." message while the application is at work. Very similar to before.

The method to fetch posts will look like the one used in the first part.

fetchPosts() {
  // The API where we're fetching data from
  fetch(`https://s3-us-west-2.amazonaws.com/s.cdpn.io/3/posts.json`)
    // We get a response and receive the data in JSON format...
    .then(response => response.json())
    // ...then we update the state of our application
    .then(data =>
      this.setState({
        posts: data,
        isLoading: false,
      })
    )
    // If we catch errors instead of a response, let's update the app
    .catch(error => this.setState({ error, isLoading: false }));
}

Now we can call the fetchPosts() method inside the componentDidMount() method:

componentDidMount() {
  this.fetchPosts();
}

In the Post component, we map through the props we received and render the title and content for each post:

const Post = ({ body }) => {
  return (
    <div>
      {body.map(post => {
        const { _id, title, content } = post;
        return (
          <div key={_id}>
            <h2>{title}</h2>
            <p>{content}</p>
            <hr />
          </div>
        );
      })}
    </div>
  );
};

There we have it! Now we know how to use the Fetch API to request data from different sources and put it to use in an application. High fives. ✋

axios

OK, so we’ve spent a good amount of time looking at the Fetch API and now we’re going to turn our attention to axios.

Like the Fetch API, axios is a way we can make a request for data to use in our application. Where axios shines is how it allows you to send an asynchronous request to REST endpoints. This comes in handy when working with the REST API in a React project, say a headless WordPress CMS.

There’s ongoing debate about whether Fetch is better than axios and vice versa. We’re not going to dive into that here because, well, you can pick the right tool for the right job. If you’re curious about the points from each side, you can read here and here.

Using axios with a third-party API

See the Pen React Axios 1 Pen by Kingsley Silas Chijioke (@kinsomicrote) on CodePen.

Like we did with the Fetch API, let's start by requesting data from an API. For this one, we’ll fetch random users from the Random User API.

First, we create the App component like we’ve done it each time before:

class App extends React.Component {
  state = {
    users: [],
    isLoading: true,
    errors: null
  };

  render() {
    return (
      <React.Fragment>
      </React.Fragment>
    );
  }
}

The idea is still the same: check to see if loading is in process and either render the data we get back or let the user know things are still loading.

To make the request to the API, we'll need to create a function. We'll call the function getUsers(). Inside it, we'll make the request to the API using axios. Let's see how that looks before explaining further.

getUsers() {
  // We're using axios instead of Fetch
  axios
    // The API we're requesting data from
    .get("https://randomuser.me/api/?results=5")
    // Once we get a response, we'll map the API endpoints to our props
    .then(response =>
      response.data.results.map(user => ({
        name: `${user.name.first} ${user.name.last}`,
        username: `${user.login.username}`,
        email: `${user.email}`,
        image: `${user.picture.thumbnail}`
      }))
    )
    // Let's make sure to change the loading state to display the data
    .then(users => {
      this.setState({
        users,
        isLoading: false
      });
    })
    // We can still use the `.catch()` method since axios is promise-based
    .catch(error => this.setState({ error, isLoading: false }));
}

Quite different from the Fetch examples, right? The basic structure is actually pretty similar, but now we’re in the business of mapping data between endpoints.

The API URL is passed as a parameter to the GET request. The response we get from the API contains an object called data and that contains other objects. The information we want is available in data.results, which is an array of objects containing the data of individual users.

Here we go again with calling our method inside of the componentDidMount() method:

componentDidMount() {
  this.getUsers();
}

Alternatively, you can do this instead and basically combine these first two steps:

componentDidMount() {
  axios
    .get("https://randomuser.me/api/?results=5")
    .then(response =>
      response.data.results.map(user => ({
        name: `${user.name.first} ${user.name.last}`,
        username: `${user.login.username}`,
        email: `${user.email}`,
        image: `${user.picture.thumbnail}`
      }))
    )
    .then(users => {
      this.setState({
        users,
        isLoading: false
      });
    })
    .catch(error => this.setState({ error, isLoading: false }));
}

If you are coding locally from your machine, you can temporarily edit the getUsers() function to look like this:

getUsers() {
  axios
    .get("https://randomuser.me/api/?results=5")
    .then(response => console.log(response))
    .catch(error => this.setState({ error, isLoading: false }));
}

Your console should get something similar to this:

We map through the results array to obtain the information we need for each user. The array of users is then used to set a new value for our users state. With that done, we can then change the value of isLoading.

By default, isLoading is set to true. When the state of users is updated, we want to change the value of isLoading to false since this is the cue our app is looking for to make the switch from "Loading..." to rendered data.

render() {
  const { isLoading, users } = this.state;
  return (
    <React.Fragment>
      <h2>Random User</h2>
      <div>
        {!isLoading ? (
          users.map(user => {
            const { username, name, email, image } = user;
            return (
              <div key={username}>
                <p>{name}</p>
                <div>
                  <img src={image} alt={name} />
                </div>
                <p>{email}</p>
                <hr />
              </div>
            );
          })
        ) : (
          <p>Loading...</p>
        )}
      </div>
    </React.Fragment>
  );
}

If you log the users state to the console, you will see that it is an array of objects:

The empty array shows the value before the data was obtained. The returned data contains only the name, username, email address and image of individual users because those are the fields we mapped out. There is a lot more data available from the API, of course, but we'd have to add those fields to our getUsers() method.

Using axios with your own API

See the Pen React Axios 2 Pen by Kingsley Silas Chijioke (@kinsomicrote) on CodePen.

You have seen how to use axios with a third-party API, but we can look at what it's like to request data from our own API, just like we did with the Fetch API. In fact, let's use the same JSON file we used for Fetch so we can see the difference between the two approaches.

Here is everything put together:

class App extends React.Component {
  // State will apply to the posts object which is set to loading by default
  state = {
    posts: [],
    isLoading: true,
    errors: null
  };

  // Now we're going to make a request for data using axios
  getPosts() {
    axios
      // This is where the data is hosted
      .get("https://s3-us-west-2.amazonaws.com/s.cdpn.io/3/posts.json")
      // Once we get a response and store data, let's change the loading state
      .then(response => {
        this.setState({
          posts: response.data.posts,
          isLoading: false
        });
      })
      // If we catch any errors connecting, let's update accordingly
      .catch(error => this.setState({ error, isLoading: false }));
  }

  // Lets our app know we're ready to render the data
  componentDidMount() {
    this.getPosts();
  }

  // Putting that data to use
  render() {
    const { isLoading, posts } = this.state;
    return (
      <React.Fragment>
        <h2>Random Post</h2>
        <div>
          {!isLoading ? (
            posts.map(post => {
              const { _id, title, content } = post;
              return (
                <div key={_id}>
                  <h2>{title}</h2>
                  <p>{content}</p>
                  <hr />
                </div>
              );
            })
          ) : (
            <p>Loading...</p>
          )}
        </div>
      </React.Fragment>
    );
  }
}

The main difference between this method and using axios to fetch from a third-party is how the data is formatted. We’re getting straight-up JSON this way rather than mapping endpoints.

The posts data we get from the API is used to update the value of the component's posts state. With this, we can map through the array of posts in render(). We then obtain the id, title and content of each post using ES6 de-structuring, which is then rendered to the user.

Like we did before, what is displayed depends on the value of isLoading. When we set a new state for posts using the data obtained from the API, we had to set a new state for isLoading, too. Then we can finally let the user know data is loading or render the data we’ve received.

async and await

Another thing the promise-based nature of axios allows us to take advantage of is async and await. Using this, the getPosts() function will look like this:

async getPosts() {
  try {
    const response = await axios.get("https://s3-us-west-2.amazonaws.com/s.cdpn.io/3/posts.json");
    this.setState({
      posts: response.data.posts,
      isLoading: false
    });
  } catch (error) {
    this.setState({ error, isLoading: false });
  }
}

Base instance

With axios, it’s possible to create a base instance where we drop in the URL for our API like so:

const api = axios.create({ baseURL: "https://s3-us-west-2.amazonaws.com/s.cdpn.io/3/posts.json" });

...then make use of it like this:

async getPosts() {
  try {
    const response = await api.get();
    this.setState({
      posts: response.data.posts,
      isLoading: false
    });
  } catch (error) {
    this.setState({ error, isLoading: false });
  }
}

Simply a nice way of abstracting the API URL.

Now, data all the things!

As you build React applications, you will run into lots of scenarios where you want to handle data from an API. Hopefully you now feel armed and ready to roll with data from a variety of sources, with options for how to request it.

Want to play with more data? Sarah recently wrote up the steps for creating your own serverless API from a list of public APIs.

The post Using data in React with the Fetch API and axios appeared first on CSS-Tricks.

Categories: Web Technologies

This Week in Data with Colin Charles 47: MySQL 8.0.12 and It’s Time To Submit!

Planet MySQL - Fri, 08/03/2018 - 05:10

Join Percona Chief Evangelist Colin Charles as he covers happenings, gives pointers and provides musings on the open source database community.

Don't wait: submit a talk for Percona Live Europe 2018, to be held in Frankfurt, 5-7 November 2018. The call for proposals is ending soon, a committee is being created, and it is a great conference to speak at, with a new city to boot!

Releases
  • A big release, MySQL 8.0.12, with INSTANT ADD COLUMN support, BLOB optimisations, changes around replication, the query rewrite plugin and lots more. Naturally this also means the connectors get bumped up to 8.0.12, including a nice new MySQL Shell.
  • A maintenance release, with security fixes, MySQL 5.5.61 as well as MariaDB 5.5.61.
  • repmgr v4.1 helps monitor PostgreSQL replication, and can handle switchovers and failovers.
Link List
  • Saving With MyRocks in The Cloud – a great MyRocks use case, as in the cloud, resources are major considerations and you can save on I/O with MyRocks. As long as your workload is I/O bound, you’re bound to benefit.
  • Hasura GraphQL Engine allows you to get an instant GraphQL API on any PostgreSQL based application. This is in addition to Graphile. For MySQL users, there is Prisma.
Industry Updates
  • Jeremy Cole (Linkedin) ended his sabbatical to start work at Shopify. He was previously hacking on MySQL and MariaDB Server at Google, and had stints at Twitter, Yahoo!, his co-owned firm Proven Scaling, as well as MySQL AB.
  • Dremio raises $30 million from the likes of Cisco and more for their Series B. They are a “data-as-a-service” company, having raised a total of $45m in two rounds (Crunchbase).
Upcoming Appearances

Feedback

I look forward to feedback/tips via e-mail at colin.charles@percona.com or on Twitter @bytebot.

 

The post This Week in Data with Colin Charles 47: MySQL 8.0.12 and It’s Time To Submit! appeared first on Percona Database Performance Blog.

Categories: Web Technologies

Global Read-Scaling using Continuent Clustering

Planet MySQL - Fri, 08/03/2018 - 05:00

Did you know that Continuent Clustering supports having clusters at multiple sites world-wide with either active-active or active-passive replication meshing them together?

Not only that, but we support a flexible hybrid model that allows for a blended architecture using any combination of node types. So mix-and-match your highly available database layer on bare metal, Amazon Web Services (AWS), Azure, Google Cloud, VMware, etc.

In this article we will discuss using the Active/Passive model to scale reads worldwide.

The model is simple: select one site as the Primary where all writes will happen. The rest of the sites will pull events as quickly as possible over the WAN and make the data available to all local clients. This means your application gets the best of both worlds:

  • Simple deployment with no application changes needed. All writes are sent to the master node in the Primary site cluster. Multi-master topologies can be more difficult to deploy due to conflicting writes.
  • Application clients are able to read data locally, so response time is much better
  • Ideal for Read-heavy/Write-light applications

The possibilities are endless, as is the business value. This distributed topology allows you to have all the benefits of high availability with centralized writes and local reads for all regions. Latency is limited only by the WAN link and the speed of the target node.

This aligns perfectly with the distributed Software-as-a-Service (SaaS) model where customers and data span the globe. Applications have access to ALL the data in ALL regions while having the ability to scale reads across all available slave nodes, giving you the confidence that operations will continue in the face of disruption.

Continuent Clustering incorporates the asynchronous Tungsten Replicator to distribute events from the write master to all read slaves. The loosely-coupled nature of this method allows for resilience in the face of uncertain global network communications and speeds. The Replicator intelligently picks up where it left off in the event of a network outage. Not only that, performance is enhanced by the asynchronous nature of the replication because the master does not need to wait for any slave to acknowledge the write.

Overall, Continuent Clustering is the most flexible, performant global database layer available today – use it underlying your SaaS offering as a strong base upon which to grow your worldwide business!

Click here for more online information on Continuent Clustering solutions

Want to learn more or run a POC? Contact us.

Categories: Web Technologies

Fusion.js JavaScript framework is geared to lightweight apps

InfoWorld JavaScript - Fri, 08/03/2018 - 03:00

Uber has introduced an open source web framework called Fusion.js that is anchored by a plugin architecture.

Intended for development of high-performing, lightweight apps, the JavaScript framework offers code reuse on both the server and browser and works with libraries such as React and Redux.


Fusion.js offers a command-line interface, a webpack/babel transpilation pipeline, and a Koa server. You use its plugin-based architecture to build single-page applications and applications that depend on service layers to meet requirements such as observability, testing, and internationalization. There are plugins for data fetching and styling.

To read this article in full, please click here

Categories: Web Technologies

Query Macroeconomics

Planet MySQL - Thu, 08/02/2018 - 18:57

I studied some macroeconomics in school. I'm still interested in it 20 years later. I was recently in a discussion about query optimization and how to prioritize what to fix first. My pen and paper started graphing things, and here we are with an abstract thought. Bear with me. This is for entertainment purposes, mostly, but may actually have a small amount of value in your thought processes around optimizing queries. This is a riff on various supply and demand graphs from macroeconomics.

In the graph below:

  • Axes:
    • Vertical: number of distinct problem queries
    • Horizontal: Database “query load capacity” gains (from optimization)
  • Lines:
    • LIRQ (long and/or infrequently run queries)
    • SFRQ (short, frequently run queries)
    • AC: Absolute capacity (the point at which you're going as fast as the I/O platform you run on will let you, and your query capacity bottlenecks have less to do with queries and more to do with not enough IOPS).
  • Point:
    • E (subscript) O: Equilibrium of optimization

On LIRQ: Simply put, on a typical OLTP workload, you may have several long and infrequently running queries on the database that are “problems” for overall system performance.  If you optimize those queries, your performance gain in load capacity is sometimes fairly small.

On SFRQ: Conversely, optimizing short-running but very frequently run "problem queries" can sometimes create very large gains in query load capacity. Example: a covering index that takes a query that's run many thousands of times a minute from 10 milliseconds down to < 1 millisecond by ensuring the data is in the buffer pool can give you some serious horsepower back.
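
As a hedged illustration of the SFRQ case (the table and column names here are hypothetical, not taken from any system discussed above), such a covering index might look like this:

-- Query run thousands of times per minute:
--   SELECT status, total FROM orders WHERE customer_id = ?;
-- A covering index lets the query be satisfied entirely from the index:
ALTER TABLE orders ADD INDEX idx_orders_covering (customer_id, status, total);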

On AC: Working on optimizing often-run queries that are not creating an I/O logjam does not return any benefit. You can only go as fast as your platform will let you, so if you are getting close to the point where your database is so well optimized that you really can't read or write to disk any faster, then you have hit the wall and you will produce little ROI with your optimization efforts unless you make the platform faster (which moves the red line to the right).

On EO: Often-run, long (or somewhat long) queries are the low-hanging fruit. They should stand out naturally and be a real "apparent pain" in the processlist or in application response times without even bothering with pt-query-digest.

Speaking of pt-query-digest: digests of the slow query log (when long_query_time is set to 0) are a good way to figure out what types of queries are taking up the lion's share of your database load. You will be able to tell via the ranking, total time and percentiles shown in the digest which queries are taking up your database's valuable time. I wish for you that you have SFRQ, so that your optimization effort may produce high rewards in capacity gain.
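
As a rough sketch of that workflow (the slow log path is an assumption; adjust it to your configuration, and remember to restore long_query_time afterwards):

# Log every query for a representative window, then digest the result
mysql -e "SET GLOBAL slow_query_log = 1; SET GLOBAL long_query_time = 0;"
pt-query-digest /var/log/mysql/mysql-slow.log > slow-digest.txt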

Thanks for bearing with me on my database capacity economics.

 

Categories: Web Technologies

Database Objects migration to RDS/ Aurora (AWS)

Planet MySQL - Thu, 08/02/2018 - 13:33

The world of applications and related services is migrating more and more towards the cloud because of availability, elasticity, manageability, etc. While moving the entire stack, we need to be very cautious when migrating the database part.

Migration of DB servers is not a simple lift-and-shift operation. Rather, it requires proper planning and extra caution to maintain data consistency between the existing DB server and the cloud server, by means of native replication or third-party tools.

The best way to migrate an existing MySQL database to RDS, in my opinion, is by using a "logical backup". Some of the logical backup tools are listed below:

  • mysqldump — single-threaded (widely used)
  • mysqlpump — multi-threaded
  • mydumper — multi-threaded

In this blog, we will look at a simple workaround and best practices to migrate DB objects such as procedures, triggers, etc. from an existing on-premises database server to Amazon RDS (MySQL), which is a fully managed relational database service provided by AWS.

In order to provide managed services, RDS restricts certain privileges at the user level. Below is the list of restricted privileges in RDS.

  • SUPER – Enable use of other administrative operations such as CHANGE MASTER TO, KILL (any connection), PURGE BINARY LOGS, SET GLOBAL, and mysqladmin debug command. Level: Global.
  • SHUTDOWN – Enable use of mysqladmin shutdown. Level: Global.
  • FILE – Enable the user to cause the server to read or write files. Level: Global.
  • CREATE TABLESPACE – Enable tablespaces and log file groups to be created, altered, or dropped. Level: Global.

All stored programs (procedures, functions, triggers, and events) and views can have a DEFINER attribute that names a MySQL account. As shown below.

DELIMITER ;;
CREATE DEFINER=`xxxxx`@`localhost` PROCEDURE `prc_hcsct_try`(IN `contactId` INT, IN `section` VARCHAR(255))
BEGIN
  IF NOT EXISTS (SELECT 1 FROM contacts_details WHERE contact_id = contactId) THEN
    INSERT INTO contacts_details (contact_id, last_touch_source, last_touch_time)
    VALUES (contactId, section, NOW());
  ELSE
    UPDATE contacts_details
    SET last_touch_source = section, last_touch_time = NOW()
    WHERE contact_id = contactId;
  END IF;
END ;;
DELIMITER ;

While restoring the same onto the RDS server, since RDS doesn't grant the SUPER privilege to its users, the restore fails with the below error:

ERROR 1227 (42000) at line 15316: Access denied; you need (at least one of) the SUPER privilege(s) for this operation

This is very annoying, since the restore fails right at the end.

To overcome this, below is a simple one-liner piped with the mysqldump command, which strips out "DEFINER=`xxxxx`@`localhost`". When you restore the dump file, the definer then defaults to the user performing the restore.

mysqldump -u user -p -h 'testdb.xcvadshkgfd..us-east-1.rds.amazonaws.com' --single-transaction --quick --triggers --routines --no-data --events testdb | perl -pe 's/\sDEFINER=`[^`]+`@`[^`]+`//' > test_dump.sql

Below is the content of the dump file after removing the default "DEFINER"; the same can also be done via awk or sed commands.

DELIMITER ;;
CREATE PROCEDURE `prc_contact_touch`(IN `contactId` INT, IN `section` VARCHAR(255))
BEGIN
  IF NOT EXISTS (SELECT 1 FROM contacts_details WHERE contact_id = contactId) THEN
    INSERT INTO contacts_details (contact_id, last_touch_source, last_touch_time)
    VALUES (contactId, section, NOW());
  ELSE
    UPDATE contacts_details
    SET last_touch_source = section, last_touch_time = NOW()
    WHERE contact_id = contactId;
  END IF;
END ;;
DELIMITER ;

As you can see from the above the DEFINER section is completely removed.
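
For reference, an equivalent (untested sketch) using sed instead of perl, with the same mysqldump options, would be:

mysqldump -u user -p -h 'testdb.xcvadshkgfd..us-east-1.rds.amazonaws.com' --single-transaction --quick --triggers --routines --no-data --events testdb | sed -e 's/DEFINER=`[^`]*`@`[^`]*`//g' > test_dump.sql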

Best practices for RDS migration:

  1. Restore dump files from an EC2 instance within the same VPC as the RDS instance to keep network latency minimal.
  2. Increase max_allowed_packet to 1G (the maximum) to accommodate bigger packets.
  3. Dump data in parallel, based on the instance capacity.
  4. Bigger redo log files can enhance write performance.
  5. Set innodb_flush_log_at_trx_commit=2 for faster writes, with a small compromise to durability (see the sketch after this list).
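
On RDS, server settings such as max_allowed_packet and innodb_flush_log_at_trx_commit are applied through a DB parameter group rather than my.cnf. A hedged sketch with the AWS CLI (the parameter group name is a placeholder) could look like:

aws rds modify-db-parameter-group \
  --db-parameter-group-name my-migration-params \
  --parameters "ParameterName=max_allowed_packet,ParameterValue=1073741824,ApplyMethod=immediate" \
               "ParameterName=innodb_flush_log_at_trx_commit,ParameterValue=2,ApplyMethod=immediate"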

 

Categories: Web Technologies

Easy and Effective Way of Building External Dictionaries for ClickHouse with Pentaho Data Integration Tool

Planet MySQL - Thu, 08/02/2018 - 09:09

In this post, I provide an illustration of how to use the Pentaho Data Integration (PDI) tool to set up external dictionaries in MySQL to support ClickHouse. Although I use MySQL in this example, you can use any PDI-supported source.

ClickHouse

ClickHouse is an open-source column-oriented DBMS (columnar database management system) for online analytical processing. Source: wiki.

Pentaho Data Integration

Information from the Pentaho wiki: Pentaho Data Integration (PDI, also called Kettle) is the component of Pentaho responsible for the Extract, Transform and Load (ETL) processes. Though ETL tools are most frequently used in data warehouses environments, PDI can also be used for other purposes:

  • Migrating data between applications or databases
  • Exporting data from databases to flat files
  • Loading data massively into databases
  • Data cleansing
  • Integrating applications

PDI is easy to use. Every process is created with a graphical tool where you specify what to do without writing code to indicate how to do it; because of this, you could say that PDI is metadata oriented.

External dictionaries

You can add your own dictionaries from various data sources. The data source for a dictionary can be a local text or executable file, an HTTP(s) resource, or another DBMS. For more information, see “Sources for external dictionaries“.

ClickHouse:

  • Fully or partially stores dictionaries in RAM.
  • Periodically updates dictionaries and dynamically loads missing values. In other words, dictionaries can be loaded dynamically.

The configuration of external dictionaries is located in one or more files. The path to the configuration is specified in the dictionaries_config parameter.

Dictionaries can be loaded at server startup or at first use, depending on the dictionaries_lazy_load setting.

Source: dictionaries.

Example of external dictionary

In short, a dictionary is a key(s)-value(s) mapping that can be used to store value(s) that will be retrieved using a key. It is a way to build a "star" schema, where dictionaries are dimensions:

Using dictionaries, you can look up data by key (customer_id in this example). Why not use tables and a simple JOIN? Here is what the documentation says:

If you need a JOIN for joining with dimension tables (these are relatively small tables that contain dimension properties, such as names for advertising campaigns), a JOIN might not be very convenient due to the bulky syntax and the fact that the right table is re-accessed for every query. For such cases, there is an “external dictionaries” feature that you should use instead of JOIN. For more information, see the section “External dictionaries”.
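
As a brief, hedged illustration of the lookup syntax (the dictionary and attribute names match the config shown later in this post; the key value is arbitrary):

SELECT dictGetString('customers', 'name', toUInt64(42)) AS customer_name;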

Main point of this blog post:

Demonstrating how to fill a MySQL table using the PDI tool and connect this table to ClickHouse as an external dictionary. You can create a scheduled job for loading or updating this table.

Filling dictionaries during the ETL process is a challenge. Of course you can write a script (or scripts) that will do all of this, but I’ve found a better way. Benefits:

  • Self-documented: you see exactly what the PDI job does;
  • Easy to modify (see example below)
  • Built-in logging
  • Very flexible
  • If you use the Community Edition you will not pay anything.
Pentaho Data Integration part

You need a UI for running/developing ETL, but it's not necessary to use the UI for running a transformation or job. Here's an example of running it from a Linux shell (read PDI's docs about jobs/transformations):

${PDI_FOLDER}/kitchen.sh -file=${PATH_TO_PDI_JOB_FILE}.kjb [-param:SOMEPARAM=SOMEVALUE]
${PDI_FOLDER}/pan.sh -file=${PATH_TO_PDI_TRANSFORMATION_FILE}.ktr [-param:SOMEPARAM=SOMEVALUE]

Here is a PDI transformation. In this example I use three tables as a source of information, but you can create very complex logic:

“Datasource1” definition example

Dimension lookup/update is a step that updates the MySQL table (in this example; it could be any database supported by a PDI output step). It will be the source for ClickHouse's external dictionary:

Fields definition:

Once you have done this, you hit the "SQL" button and it will generate the DDL code for the D_CUSTOMER table. You can manage how data is stored in the step above: update, or insert a new record (with time_start/time_end fields). Also, if you use PDI for ETL, you can generate a "technical key" for your dimension and store this key in ClickHouse, but that is a different story… For this example, I will use "id" as the key in the ClickHouse dictionary.
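
The generated DDL itself is not reproduced here; a minimal sketch of what such a table might look like, assuming only the business fields referenced by the dictionary config below (the real Dimension lookup/update step also generates its own technical key and versioning columns), is:

CREATE TABLE D_CUSTOMER (
  id      INT UNSIGNED NOT NULL,
  name    VARCHAR(255),
  address VARCHAR(255),
  phone   VARCHAR(64),
  PRIMARY KEY (id)
);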

The last step is setting up the external dictionary in ClickHouse's server config.

The ClickHouse part

External dictionary config, in this example you’ll see that I use MySQL:

<dictionaries>
  <dictionary>
    <name>customers</name>
    <source>
      <!-- Source configuration -->
      <mysql>
        <port>3306</port>
        <user>MySQL_User</user>
        <password>MySQL_Pass</password>
        <replica>
          <host>MySQL_host</host>
          <priority>1</priority>
        </replica>
        <db>DB_NAME</db>
        <table>D_CUSTOMER</table>
      </mysql>
    </source>
    <layout>
      <!-- Memory layout configuration -->
      <flat/>
    </layout>
    <structure>
      <id>
        <name>id</name>
      </id>
      <attribute>
        <name>name</name>
        <type>String</type>
        <null_value></null_value>
      </attribute>
      <attribute>
        <name>address</name>
        <type>String</type>
        <null_value></null_value>
      </attribute>
      <!-- Will be uncommented later
      <attribute>
        <name>phone</name>
        <type>String</type>
        <null_value></null_value>
      </attribute>
      -->
    </structure>
    <lifetime>
      <min>3600</min>
      <max>86400</max>
    </lifetime>
  </dictionary>
</dictionaries>

Creating the fact table in ClickHouse:

Some sample data:

Now we can fetch data aggregated against the customer name:
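
The exact DDL and queries are shown as screenshots in the original post; a minimal hedged sketch of what they might look like, assuming a simple fact_table keyed by customer_id, is:

CREATE TABLE fact_table
(
    event_date  Date,
    customer_id UInt64,
    amount      Float64
)
ENGINE = MergeTree()
ORDER BY (event_date, customer_id);

INSERT INTO fact_table VALUES ('2018-08-01', 1, 100.0), ('2018-08-01', 2, 250.0), ('2018-08-02', 1, 75.5);

SELECT
    dictGetString('customers', 'name', customer_id) AS customer_name,
    sum(amount) AS total_amount
FROM fact_table
GROUP BY customer_name
ORDER BY total_amount DESC;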

Dictionary modification

Sometimes it happens that you need to modify your dimensions. In my example, I am going to add a phone number to the "customers" dictionary. Not a problem at all. You update your data source in the PDI job:

Open the “Dimension lookup/update” step and add the phone field:

And hit the SQL button.

Also add the “phone” field in ClickHouse’s dictionary config:

<attribute>
  <name>phone</name>
  <type>String</type>
  <null_value></null_value>
</attribute>

ClickHouse will update the dictionary on the fly and we are ready to go; if not, please check the logs. Now you can run the query without any modification of fact_table:

Also, note that a PDI job is an XML file that can be put under version control, so it is easy to track changes or roll back if needed.

Please do not hesitate to ask if you have questions!

The post Easy and Effective Way of Building External Dictionaries for ClickHouse with Pentaho Data Integration Tool appeared first on Percona Community Blog.

Categories: Web Technologies

VS Code extensions for the discerning developer palate

CSS-Tricks - Thu, 08/02/2018 - 07:10

I am a VS Code extension snob. I like to hunt down the most obscure extensions for VS Code — the ones that nobody knows about — and impress people at parties with my knowledge of finely aged and little-known VS Code capabilities… then watch as they look around desperately for someone else to talk to. It’s like the “Sideways” of VS Code.

In my endless pursuit of the perfect VS Code setup, I reached out to my colleagues here on the Azure team and asked them to share their favorite extension in their own words. So clear your palate and breathe in the aromatic flavors of productivity; I am your VS Code Extension Sommelier.

Christina Warren - Settings Sync

I cannot live without this extension. If you use multiple machines (especially on multiple platforms, where a sym-linked Dropbox folder won’t really work), this extension is for you. It syncs your extensions, settings file, keybinding file, launch file, snippets folder, extension settings, and workspaces folder. This means that when you login to a new machine, you can quickly get back to work with your own settings and workflow tools in just a few minutes.

&#x1f449; Get Settings Sync Extension

Shayne Boyer - Paste JSON as Code

Consuming an endpoint that produces JSON is like breathing, but no one wants to choke on the hand cranking of an object by looking back and forth between JSON and the target language. This is a long loved feature in Visual Studio for .NET developers, but now you too can copy the JSON and paste that class into the editor as your target language and save a ton of time. Currently supports C#, Go, C++, Java, TypeScript, Swift, Elm, and JSON Schema.

&#x1f449; Get Paste JSON as Code Extension

Jeremy Likness - Spell Right

I find myself authoring blog posts, articles, and documentation almost every day. After embracing the power of Markdown (it is, after all, what is used to drive our own https://docs.com), I began writing my content in Visual Studio Code. It has a built-in preview window so I can edit the Markdown source and see the rendered result side-by-side. As much as I’ve written over the years, mastering the fine art of spelling still eludes me. Maybe it’s because I’m lazy, and this extension doesn’t help at all. With Spell Right, I get to reunite with my same favorite red squiggly lines that I first met in Word. It does a great job of catching spelling mistakes in real time, then illuminates my mistakes with a handy light bulb with alternative suggestions that give me single-click corrections. It enables me to be highly productive and look like I know what I’m doing. I recommend this for anyone who uses Code to write.

&#x1f449; Get Spell Right Extension

Aaron Wislang - Go

I live in VS Code and use it for everything from code and content to its integrated terminal. This extension enables first-class support for IntelliSense, testing, refactoring and more, making Code the best place for me to write Go. And it turns out I'm not the only one who thinks so; it helped to make Code the most popular editor amongst Gophers, just ahead of vim-go, as of the Go 2017 Survey!

&#x1f449; Get Go Extension

Cecil Phillip - C# Extensions

This extension was created by one of our community members, and it’s a great companion to the official C# extension from Microsoft. The “New Class|Interface” actions make it easy to add new types, and takes some of the hassle out of fixing up the namespaces. It also comes with a few interesting refactorings like "Initialize fields from constructors,” which I use pretty often. Whenever I’m teaching a C# course, I always have my students that are using Visual Studio Code install this extension.

&#x1f449; Get C# Extension

Brian Clark - VS Live Share

Pair programming just got way better. Gone are the days where I need to set up screen sharing to review code with coworkers. Instead, I fire up a Live Share session, invite the other party and we can all view and edit code directly from our editors. I've used it in situations where I review someone else's C# code on my machine while it runs on THEIR machine! I didn't have anything installed on my Mac for C# and yet I could debug their code!

&#x1f449; Get VS Live Share Extension

David Smith - Rewrap

I write a lot of text, and sometimes I just want (or need) to write in a plain-text environment. Easy reflowing of text is essential. (Surprised this isn’t built in, in fact.)

&#x1f449; Get Rewrap Extension

Anthony Chu - Git Lens

At a glance, GitLens shows me contextual information from Git about the line of code and the file I'm working in. It adds some useful commands to view history and diffs, search commits, and browse local and remote branches... all without leaving VS Code.

&#x1f449; Get Git Lens Extension

Asim Hussain - AsciiDoc

I used to write with Markdown; we all make mistakes. The solution to my Markdown mistake is AsciiDoc, especially if you write a lot of code snippets as I do. Out of the box it lets you add line numbers, annotate and highlight lines, and provides an incredible amount of customization. Plus, as a bonus, it can also convert your blog posts into PDFs, ePubs and Mobis, which is perfect for ebooks.

Once you start with AsciiDoc it’s hard to go back to Markdown and this plugin lets you preview your AsciiDoc right inside the editor.

&#x1f449; Get AsciiDoctor Extension

Seth Juarez - VS Code Tools For AI

With Visual Studio Code Tools for AI, I can finally use machines I need but might never have access to in order to build the next Skynet — all within the comfort of my favorite lightweight editor. We live in amazing times, friends...

&#x1f449; Get VS Code Tools For AI Extension

Alena Hall - Ionide

Ionide is an awesome Visual Studio Code extension for cross-platform F# development. It’s open-source and it was created by the F# Community. I use it every day on multiple machines I have. It runs perfectly on both my Mac and Linux machines. Ionide conveniently integrates with Paket, Project Scaffold, and you can experiment away as much as you want in F# Interactive!

&#x1f449; Get Ionide Extension

Matt Soucoup - VSCodeVim

There’s an old joke that goes: “How do you know if a developer uses vim? They’ll tell you.” Well, I use vim! But… I want more. I want to tell everybody I use vim and I want to use all the great features and extensions that VS Code offers. (I mean, look at the list here!) So that’s where VSCodeVim saves the day for me. It puts a full-featured vim emulator into my VS Code editor, letting me edit files super fast by typing esoteric commands like h, 10 k, i, and u (lots and lots of u) and I still get to use all the awesome features of VS Code.

&#x1f449; Get VSCodeVim Extension

John Papa - Docker

If you like it put a container on it. Yeah, containers are the latest craze, but in a constantly evolving containerization world, it’s nice to have great tooling make it easy to use containers. Enter the Docker extension for VS Code. It handles the complete container development and deployment lifecycle! Start by generating docker files to your project, create an image, run it, and even push it to a container registry. If you’re like me, you like to make sure you still have complete control over your code and your app, even when they are inside of containers. Accessing the files, showing logs, and debugging the running container are all essential tools for development. This extension puts all of this within your reach without having to learn the docker command line!

&#x1f449; Get Docker Extension

Suz Hinton - Arduino

My favorite extension for VS Code is Arduino. I'm pretty sure anyone who knows me wouldn't be surprised about that. Traditionally, developing programs for Arduino-compatible micro-controller boards has been done in the Arduino IDE. It's a powerful program which smooths over the compilation and uploading experiences for dozens of boards. It is, however, not a full code IDE. It's missing some of the features you love, such as autocomplete, a file tree, and fine-grained tuning of the editor itself.

The good news is that the Arduino extension allows you to finally develop freely for all of your favorite micro-controller boards without leaving VS Code!

Here are some of my favorite things about the extension:

  1. It's open source! So reporting bugs and contributing improvements is a straightforward experience.
  2. The Command Palette integration is so handy. Compile and upload your code to an Arduino with one simple shortcut.
  3. Access all the great tools from the Arduino IDE right in VS Code. Yes, that even means board / library management and the serial monitor!
  4. Scaffolding brand new Arduino projects is a command away. No more copy + pasting older project directories to get set up.

&#x1f449; Get Arduino Extension

Burke Holland - Azure Functions

Serverless is like Hansel — so hot right now. But Serverless shouldn’t be a black box. The Azure Functions extensions for VS Code puts Serverless right inside of the editor. I love it because it lets me create new Serverless projects, new functions for all of the available trigger types (http, timer, blob storage, etc.), and most importantly, I can run them locally and debug them. Not that I would ever need to debug. My code is always perfect.

&#x1f449; Get Azure Functions Extension

The post VS Code extensions for the discerning developer palate appeared first on CSS-Tricks.

Categories: Web Technologies

​Experience a Simpler Cloud Computing Platform with DigitalOcean

CSS-Tricks - Thu, 08/02/2018 - 07:07

(This is a sponsored post.)

From deploying static sites and blogging platforms to managing multiple client websites, DigitalOcean provides a flexible platform for developers and their teams to deliver an unparalleled end-user experience with a lightning-fast network, pre-configured applications, and a 99.99% uptime SLA. Simply let us know your needs and our solutions engineers will provide the best options available.

Direct Link to ArticlePermalink

The post ​Experience a Simpler Cloud Computing Platform with DigitalOcean appeared first on CSS-Tricks.

Categories: Web Technologies

Amazon RDS Multi-AZ Deployments and Read Replicas

Planet MySQL - Thu, 08/02/2018 - 05:37

Amazon RDS is a managed relational database service that makes it easier to set up, operate, and scale a relational database in the cloud. One of the common questions that we get is "What is Multi-AZ and how is it different from a Read Replica? Do I need both?" I have tried to answer this question in this blog post, and the answer depends on your application needs. Are you looking for High Availability (HA), read scalability … or both?

Before we go to into detail, let me explain two common terms used with Amazon AWS.

Region – an AWS region is a separate geographical area like US East (N. Virginia), Asia Pacific (Mumbai), EU (London) etc. Each AWS Region has multiple, isolated locations known as Availability Zones.

Availability Zone (AZ) – AZ is simply one or more data centers, each with redundant power, networking and connectivity, housed in separate facilities. Data centers are geographically isolated within the same region.

What is Multi-AZ?

Amazon RDS provides high availability and failover support for DB instances using Multi-AZ deployments.

In a Multi-AZ deployment, Amazon RDS automatically provisions and maintains a synchronous standby replica of the master DB in a different Availability Zone. The primary DB instance is synchronously replicated across Availability Zones to the standby replica to provide data redundancy, failover support and to minimize latency during system backups. In the event of planned database maintenance, DB instance failure, or an AZ failure of your primary DB instance, Amazon RDS automatically performs a failover to the standby so that database operations can resume quickly without administrative intervention.

You can check in the AWS management console if a database instance is configured as Multi-AZ. Select the RDS service, click on the DB instance and review the details section.

This screenshot from the AWS management console (above) shows that the database is hosted as a Multi-AZ deployment and the standby replica is deployed in the us-east-1a AZ.
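
The same information can also be checked from the AWS CLI (the instance identifier here is a placeholder):

aws rds describe-db-instances \
  --db-instance-identifier mydbinstance \
  --query 'DBInstances[0].[MultiAZ,SecondaryAvailabilityZone]'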

Benefits of Multi-AZ deployment:
  • Replication to a standby replica is synchronous which is highly durable.
  • When a problem is detected on the primary instance, it will automatically failover to the standby in the following conditions:
    • The primary DB instance fails
    • An Availability Zone outage
    • The DB instance server type is changed
    • The operating system of the DB instance is undergoing software patching.
    • A manual failover of the DB instance was initiated using Reboot with failover.
  • The endpoint of the DB instance remains the same after a failover, so the application can resume database operations without manual intervention.
  • If a failure occurs, your availability impact is limited to the time that the automatic failover takes to complete. This helps to achieve increased availability.
  • It reduces the impact of maintenance. RDS performs maintenance on the standby first, promotes the standby to primary master, and then performs maintenance on the old master which is now a standby replica.
  • To prevent any negative impact of the backup process on performance, Amazon RDS creates a backup from the standby replica.

Amazon RDS does not fail over automatically in response to database operations such as long-running queries, deadlocks or database corruption errors. Also, Multi-AZ deployments are limited to a single region; cross-region Multi-AZ is not currently supported.

Can I use an RDS standby replica for read scaling?

The Multi-AZ deployments are not a read scaling solution, you cannot use a standby replica to serve read traffic. Multi-AZ maintains a standby replica for HA/failover. It is available for use only when RDS promotes the standby instance as the primary. To service read-only traffic, you should use a Read Replica instead.

What is Read Replica?

Read replicas allow you to have a read-only copy of your database.

When you create a Read Replica, you first specify an existing DB instance as the source. Then Amazon RDS takes a snapshot of the source instance and creates a read-only instance from the snapshot. You can use MySQL native asynchronous replication to keep the Read Replica up to date with the changes. The source DB must have automatic backups enabled in order to set up a read replica.
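
As a hedged example, a Read Replica can also be created from the AWS CLI (the instance identifiers and Availability Zone are placeholders):

aws rds create-db-instance-read-replica \
  --db-instance-identifier mydb-replica-1 \
  --source-db-instance-identifier mydb \
  --availability-zone us-east-1b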

Benefits of Read Replica
  • Read Replica helps in decreasing load on the primary DB by serving read-only traffic.
  • A Read Replica can be manually promoted as a standalone database instance.
  • You can create Read Replicas within AZ, Cross-AZ or Cross-Region.
  • You can have up to five Read Replicas per master, each with own DNS endpoint. Unlike a Multi-AZ standby replica, you can connect to each Read Replica and use them for read scaling.
  • You can have Read Replicas of Read Replicas.
  • Read Replicas can be Multi-AZ enabled.
  • You can use Read Replicas to take logical backups (mysqldump/mydumper) if you want to store the backups externally to RDS.
  • Read Replica helps to maintain a copy of databases in a different region for disaster recovery.

At AWS re:Invent 2017, AWS announced the preview for Amazon Aurora Multi-Master, this will allow users to create multiple Aurora writer nodes and helps in scaling reads/writes across multiple AZs. You can sign up for preview here.

Conclusion

While both (Multi-AZ and Read replica) maintain a copy of database but they are different in nature. Use Multi-AZ deployments for High Availability and Read Replica for read scalability. You can further set up a cross-region read replica for disaster recovery.

The post Amazon RDS Multi-AZ Deployments and Read Replicas appeared first on Percona Database Performance Blog.

Categories: Web Technologies

Configuring the MySQL Shell Prompt

Planet MySQL - Thu, 08/02/2018 - 04:34

With the introduction of MySQL Shell 8.0, the second major version of the new command-line tool for MySQL, a new, feature-rich prompt was introduced. Unlike the prompt of the traditional mysql command-line client, it does not just say mysql> by default. Instead, it comes as a colour-coded spectacle.

The default prompt is great, but for one reason or another you may want to change it. Before getting to that, let's take a look at the default prompt, so the starting point is clear.

The Default Prompt

An example of the default prompt can be seen in the screen shot below. As you can see, there are several parts to the prompt, each carrying its own information.

MySQL Shell with the default font.

There are six parts. From left to right, they are:

  • Status: Whether it is a production system or whether the connection is lost. This part is not included in the above screen shot.
  • MySQL: Just a reminder that you are working with a MySQL database.
  • Connection: Which host you are connected to (localhost), which port (33060 – the X Protocol port), and that SSL is being used.
  • Schema: The current default schema.
  • Mode: Whether you are using JavaScript (JS), Python (Py), or SQL (SQL) to enter commands.
  • End: As per tradition, the prompt ends with a >.

Depending on your current status, one or more of the parts may be missing. For example, the connection information will only be present when you have an active connection to a MySQL Server instance.

The prompt works well on a black background with brightly coloured text, as in the screen shot, but for some other background and text colour combinations it does not – or you may simply want different colours to signify whether you are connected to a development or production system. You may also find the prompt too verbose if you are recording a video or writing training material. So, let's move on and find out how the prompt is configured.

The Prompt Configuration

Since the prompt is not just a simple string, configuring it is somewhat more complex than just setting an option. The configuration is done as a JSON object stored in a file named prompt.json (by default – you can change this – more about that later).

The location of prompt.json depends on your operating system:

  • Linux and macOS: ~/.mysqlsh/prompt.json – that is in the .mysqlsh directory in the user’s home directory.
  • Microsoft Windows: %AppData%\MySQL\mysqlsh\prompt.json – that is in AppData\Roaming\MySQL\mysqlsh directory from the user’s home directory.

If the file does not exist, MySQL Shell falls back on a system default. For example, on an Oracle Linux 7 installation, the file /usr/share/mysqlsh/prompt/prompt_256.json is used. This is also the template that is copied to %AppData%\MySQL\mysqlsh\prompt.json on a Microsoft Windows 10 installation.

The MySQL Shell installation includes several templates that you can choose from. These are:

  • prompt_16.json: A coloured prompt limited to the 16/8 ANSI colours and attributes.
  • prompt_256.json: The prompt uses 256 indexed colours. This is the one that is used by default both on Oracle Linux 7 and Microsoft Windows 10.
  • prompt_256inv.json: Similar to prompt_256.json, but with an “invisible” background colour (it just uses the same as for the terminal) and with different foreground colours.
  • prompt_256pl.json: Same as prompt_256.json but with extra symbols. This requires a Powerline patched font such as the one that is installed with the Powerline project. It adds a padlock to the prompt when you use SSL to connect to MySQL and uses “arrow” separators.
  • prompt_256pl+aw.json: Same as prompt_256pl.json but with “awesome symbols”. This additionally requires the awesome symbols to be included in the Powerline font.
  • prompt_classic.json: This is a very basic prompt that just shows mysql-js>, mysql-py>, or mysql-sql> based on the mode in use.
  • prompt_nocolor.json: Gives the full prompt information, but completely without colours. An example of a prompt is: MySQL [localhost+ ssl/world] JS>

These are templates that you can use as is or modify to suit your needs and preferences. One way to pick a theme is to copy the template file into the location of your user’s prompt definition (an example follows the list below). The templates can be found in the prompt directory of the installation, for example:

  • Oracle Linux 7 RPM: /usr/share/mysqlsh/prompt/
  • Microsoft Windows: C:\Program Files\MySQL\MySQL Shell 8.0\share\mysqlsh\prompt
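For example, on Linux you could copy one of the templates over your user configuration like this (the chosen template is just an illustration – pick whichever one suits you):

cp /usr/share/mysqlsh/prompt/prompt_256inv.json ~/.mysqlsh/prompt.json

On Microsoft Windows, the equivalent is to copy the chosen file from the directory above to %AppData%\MySQL\mysqlsh\prompt.json.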

Another option is to define the MYSQLSH_PROMPT_THEME environment variable to point to the file you want to use. The value should be the full path to the file. This is particularly useful if you want to try the different templates to see what works best for you. For example, to use the prompt_256inv.json template from the command prompt on Microsoft Windows:

C:\>set MYSQLSH_PROMPT_THEME=C:\Program Files\MySQL\MySQL Shell 8.0\share\mysqlsh\prompt\prompt_256inv.json

Which gives the prompt:

The prompt when using the prompt_256inv.json template.
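On Linux or macOS, the equivalent is to export the variable in the shell before starting MySQL Shell, for example (using the Oracle Linux 7 path mentioned earlier):

export MYSQLSH_PROMPT_THEME=/usr/share/mysqlsh/prompt/prompt_256inv.json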

If none of the templates work for you, you can also dive in at the deep end of the pool and create your own configuration.

Creating Your Own Configuration

It is not completely trivial to create your own configuration, but if you use the template that is closest to the configuration you want as a starting point, it is not difficult either.

A good source of help to create the perfect prompt is also the README.prompt file that is located in the same directory as the template files. The README.prompt file contains the specification for the configuration.

Instead of going through the specification in detail, let’s take a look at the prompt_256.json template and discuss some parts of it. Let’s start at the end of the file:

"segments": [ { "classes": ["disconnected%host%", "%is_production%"] }, { "text": " My", "bg": 254, "fg": 23 }, { "separator": "", "text": "SQL ", "bg": 254, "fg": 166 }, { "classes": ["disconnected%host%", "%ssl%host%session%"], "shrink": "truncate_on_dot", "bg": 237, "fg": 15, "weight": 10, "padding" : 1 }, { "classes": ["noschema%schema%", "schema"], "bg": 242, "fg": 15, "shrink": "ellipsize", "weight": -1, "padding" : 1 }, { "classes": ["%Mode%"], "text": "%Mode%", "padding" : 1 } ] }

This is where the elements of the prompt are defined. There are a few things that are interesting to note here.

First, notice that there is an object with the classes disconnected%host% and %is_production%. The names inside the % signs are variables that are defined in the same file or that come from MySQL Shell itself (it has variables such as the host and port). For example, is_production is defined as:

"variables" : { "is_production": { "match" : { "pattern": "*;%host%;*", "value": ";%env:PRODUCTION_SERVERS%;" }, "if_true" : "production", "if_false" : "" },

So, a host is considered to be a production instance if it is included in the environment variable PRODUCTION_SERVERS. When there is a match, an additional element is inserted at the beginning of the prompt to make it clear that you are working with a production system:

Connected to a production system.
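For example, assuming your production hosts are prod-db1 and prod-db2 (hypothetical names) and a semicolon-separated list as the pattern above implies, you could set the variable before starting MySQL Shell. On Linux and macOS:

export PRODUCTION_SERVERS='prod-db1;prod-db2'

And on Microsoft Windows:

set PRODUCTION_SERVERS=prod-db1;prod-db2

With that in place, connecting to either host will make the production marker appear in the prompt.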

The second thing to note about the list of elements is that there are some special functions such as shrink which can be used to define how the text is kept relatively short. For example, the host uses truncate_on_dot, so only the part before the first dot in the hostname is displayed if the full hostname is too long. Alternatively ellipsize can be used to add … after the truncated value.

Third, the background and foreground colours are defined using the bg and fg elements respectively. This allows you to completely customize the prompt to your liking with respect to colours. The colour can be specified in one of the following ways:

  • By Name: There are a few colours that are known by name: black, red, green, yellow, blue, magenta, cyan, white.
  • By Index: A value between 0 and 255 (both inclusive) where 0 is black, 63 light blue, 127 magenta, 193 yellow, and 255 is white.
  • By RGB: Use a value in the #rrggbb format. This requires that the terminal supports TrueColor colours.
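As a minimal illustration (the colour values here are arbitrary, and README.prompt has the authoritative list of accepted forms), the same element could specify its colours in any of the three ways:

{ "text": " My", "bg": "black", "fg": "green" }
{ "text": " My", "bg": 254, "fg": 23 }
{ "text": " My", "bg": "#2e3440", "fg": "#a3be8c" }

The first uses named colours, the second the indexed colours from the template above, and the third TrueColor RGB values.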
Tip: If you want to do more than make a few tweaks to an existing template, read the README.prompt file to see the full specification including a list of supported attributes and built-in variables. These may change in the future as more features are added.

One group of built-in variables that deserves an example is the group that in some way depends on the environment or the MySQL instance you are connected to. These are:

  • %env:varname%: This uses an environment variable. The way it is determined whether you are connected to a production server is an example of how an environment variable can be used.
  • %sysvar:varname%: This uses the value of a global system variable from MySQL. That is, the value returned by SELECT @@global.varname.
  • %sessvar:varname%: Similar to the previous but using a session system variable.
  • %status:varname%: This uses the value of a global status variable from MySQL. That is, the value returned by SELECT VARIABLE_VALUE FROM performance_schema.global_status WHERE VARIABLE_NAME = 'varname'.
  • %sessstatus:varname%: Similar to the previous, but using a session status variable.

If, for example, you want to include the MySQL version (of the instance you are connected to) in the prompt, you can add an element like:

{ "separator": "", "text": "%sysvar:version%", "bg": 250, "fg": 166 },

The resulting prompt is:

Including the MySQL Server version in the prompt.

What next? Now it is your turn to play with MySQL Shell. Enjoy.

Categories: Web Technologies

PHP 7.3.0.beta1 Released - PHP: Hypertext Preprocessor

Planet PHP - Wed, 08/01/2018 - 17:00
The PHP team is glad to announce the release of the fifth PHP 7.3.0 version, PHP 7.3.0beta1. The rough outline of the PHP 7.3 release cycle is specified in the PHP Wiki.

For source downloads of PHP 7.3.0beta1 please visit the download page. Windows sources and binaries can be found on windows.php.net/qa/.

Please carefully test this version and report any issues found in the bug reporting system. THIS IS A DEVELOPMENT PREVIEW - DO NOT USE IT IN PRODUCTION!

For more information on the new features and other changes, you can read the NEWS file, or the UPGRADING file for a complete list of upgrading notes. These files can also be found in the release archive.

The next release would be Beta 2, planned for August 16th. The signatures for the release can be found in the manifest or on the QA site. Thank you for helping us make PHP better.
Categories: Web Technologies
