emGee Software Solutions Custom Database Applications


Feed aggregator

Appnovation Technologies: Simple Website Approach Using a Headless CMS: Part 1

Drupal.org aggregator - Wed, 02/06/2019 - 00:00
Simple Website Approach Using a Headless CMS: Part 1 I strongly believe that the path for innovation requires a mix of experimentation, sweat, and failure. Without experimenting with new solutions, new technologies, new tools, we are limiting our ability to improve, arresting our potential to be better, to be faster, and sadly ensuring that we stay rooted in systems, processes and...
Categories: Drupal CMS

Importing Data from MongoDB to MySQL using Python

Planet MySQL - 2 hours 13 min ago

MySQL Shell 8.0.13 (GA) introduced a new feature to allow you to easily import JSON documents to MySQL. The basics of this new feature were described in a previous blog post. In this blog we will provide more details about this feature, focusing on a practical use case of interest to many: How to import JSON data from MongoDB to MySQL.…
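
The post goes on to use MySQL Shell's new import utility; as a rough illustration of the same idea in plain Python (placeholder connection settings, a hypothetical test.restaurants collection on both sides, and assuming the documents are plain JSON-compatible), documents can also be copied with pymongo and the Connector/Python X DevAPI:

# A rough sketch, not the MySQL Shell import utility described in the post:
# copy documents from a MongoDB collection into a MySQL document store
# collection using pymongo and MySQL Connector/Python's X DevAPI.
from pymongo import MongoClient
import mysqlx

mongo = MongoClient("mongodb://localhost:27017")   # placeholder URI
session = mysqlx.get_session(
    {"host": "127.0.0.1", "port": 33060, "user": "pyuser", "password": "secret"}
)
schema = session.get_schema("test")
collection = schema.create_collection("restaurants")   # assumes it does not exist yet

for doc in mongo["test"]["restaurants"].find():
    doc.pop("_id", None)           # drop MongoDB's ObjectId; MySQL assigns its own _id
    collection.add(doc).execute()  # one add() per document keeps the sketch simple

session.close()
mongo.close()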

Categories: Web Technologies

What is the MEAN stack? JavaScript web applications

InfoWorld JavaScript - 4 hours 29 min ago

Most anyone who has developed web applications knows the acronym LAMP, which is used to describe web stacks made with Linux, Apache (web server), MySQL (database server), and PHP, Perl, or Python (programming language).

Another web-stack acronym has come to prominence in the last few years: MEAN—signifying a stack that uses MongoDB (database server), Express (server-side JavaScript framework), Angular (client-side JavaScript framework), and Node.js (JavaScript runtime).


MEAN is one manifestation of the rise of JavaScript as a “full-stack development” language. Node.js provides a JavaScript runtime on the server; Angular and Express are JavaScript frameworks used to build web clients and Node.js applications, respectively; and MongoDB’s data structures are stored in a binary JSON (JavaScript Object Notation) format, while its queries are expressed in JSON.


Categories: Web Technologies

Issue 365

TheWeeklyDrop - 5 hours 42 min ago
Issue 365 - November 15th, 2018
Categories: Drupal CMS

MySQL X DevAPI Connection Pool with Connector/Python

Planet MySQL - 7 hours 25 min ago

If you have an application that needs to use multiple connections to the MySQL database for short periods of time, it can be a good idea to use a connection pool to avoid creating a new connection and going through the whole authentication process every time a connection is needed. For the Python Database API (PEP249), MySQL Connector/Python has had support for connection pools for a long time. With the release of MySQL 8.0.13, the new X DevAPI also has support for connection pools.

This blog will first cover the background of the X DevAPI connection pool feature in MySQL Connector/Python and then provide an example.

Background

You create a connection pool using the mysqlx.get_client() function. You may wonder why you are creating a client and not a pool. As will be shown later, there is a little more to this feature than just a connection pool, so it makes sense to use a more generic term.

The get_client() function takes two arguments: the connection options and the client options. The connection options are the usual arguments defining which MySQL instance to connect to, authentication-related options, how to connect, etc. The client options are the interesting ones in the discussion of a connection pool.

The client options are given as a dictionary or as a JSON document written as a string. Currently, the only supported client options are the ones defining the connection pool. These are specified under the pooling field (an example will be provided shortly). This leaves room to expand get_client() later with other features than a connection pool.
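
For illustration, a minimal sketch (placeholder connection settings) where the client options are passed as a JSON string rather than a dictionary:

# A minimal sketch: client options given as a JSON document written as a string
# (connection settings are placeholders).
import mysqlx

client = mysqlx.get_client(
    {"host": "127.0.0.1", "port": 33060, "user": "pyuser", "password": "secret"},
    '{"pooling": {"max_size": 10, "queue_timeout": 2000}}',
)

session = client.get_session()   # borrow a connection from the pool
session.close()                  # return it to the pool rather than disconnecting
client.close()                   # close the pool and all of its connections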

There are currently four connection pool options:

  • enabled: Whether the connection pool is enabled. The default is True.
  • max_size: The maximum number of connections that can be in the pool. The default is 25.
  • max_idle_time: How long, in milliseconds, a connection can be idle before it is closed. The default is 0, which means “infinite” (in practice 2147483000 milliseconds).
  • queue_timeout: The maximum amount of time in milliseconds that an attempt to get a connection from the pool will block. If no connections have become available before the timeout, a mysqlx.errors.PoolError exception is raised. The default is 0 which means “infinite” (in practice 2147483000 milliseconds).

What happens if you disable the connection pool? In that case, the client that is returned simply works as a template for connections, and you can keep creating connections until MySQL Server runs out of connections. The session you end up with is then a regular connection, so when you close it, it disconnects from MySQL.

Back to the case where the connection pool is enabled. Once you have the client object, you can start using the pool. You retrieve a connection from the pool with the get_session() method, which takes no arguments. After this you can use the session just as a regular standalone connection. The only difference is that when you close the session, it is returned to the pool rather than disconnected.

Enough background. Let’s see an example.

Example

The following example creates a connection pool with at most two connections. Then two sessions are fetched from the pool and their connection IDs are printed. A third session will be requested before one of the original sessions is returned to the pool. Finally, a session is reused and its connection ID is printed.

import mysqlx
from datetime import datetime

cnxid_sql = "SELECT CONNECTION_ID() AS ConnectionID"
fmt_id = "Connection {0} ID ..........................: {1}"

connect_args = {
    "host": "127.0.0.1",
    "port": 33060,
    "user": "pyuser",
    "password": "Py@pp4Demo",
}

client_options = {
    "pooling": {
        "enabled": True,
        "max_size": 2,
        "max_idle_time": 60000,
        "queue_timeout": 3000,
    }
}

# Create the connection pool
pool = mysqlx.get_client(connect_args, client_options)

# Fetch two connections (exhausting the pool)
# and get the connection ID for each
connection1 = pool.get_session()
id1_row = connection1.sql(cnxid_sql).execute().fetch_one()
print(fmt_id.format(1, id1_row["ConnectionID"]))

connection2 = pool.get_session()
id2_row = connection2.sql(cnxid_sql).execute().fetch_one()
print(fmt_id.format(2, id2_row["ConnectionID"]))

# Attempt to get a third connection
time = datetime.now().strftime('%H:%M:%S')
print("Starting to request connection 3 .........: {0}".format(time))
try:
    connection3 = pool.get_session()
except mysqlx.errors.PoolError as err:
    print("Unable to fetch connection 3 .............: {0}".format(err))
time = datetime.now().strftime('%H:%M:%S')
print("Request for connection 3 completed .......: {0}".format(time))

# Return connection 1 to the pool
connection1.close()

# Try to get connection 3 again
connection3 = pool.get_session()
id3_row = connection3.sql(cnxid_sql).execute().fetch_one()
print(fmt_id.format(3, id3_row["ConnectionID"]))

# Close all connections
pool.close()

The first thing to notice is the client options defined in the client_options dictionary. In this case all four options are set, but you only need to set those where you do not want the default value. The settings allow for at most two connections in the pool, a request for a session may block for at most 3 seconds, and idle sessions are disconnected after 60 seconds.

The connection pool (client) is then created with mysqlx.get_client(), and subsequently two sessions are fetched from the pool. When a third session is requested, it triggers a PoolError exception as the pool is exhausted. The try/except block around the third get_session() call shows how to handle the exception.

Finally, the first connection is returned to the pool, which allows the third request to complete.

An example of the output is (the connection IDs and timestamps will differ from execution to execution):

Connection 1 ID ..........................: 239
Connection 2 ID ..........................: 240
Starting to request connection 3 .........: 18:23:14
Unable to fetch connection 3 .............: pool max size has been reached
Request for connection 3 completed .......: 18:23:44
Connection 3 ID ..........................: 241

From the output you can see that the first attempt to fetch connection 3 takes three seconds before it times out and raises the exception – just as specified by the queue_timeout setting.

What may surprise you (at least if you have studied Chapter 5 from MySQL Connector/Python Revealed) from this output is that once connection 1 has been returned to the pool and connection 3 fetches the session again, it has a new connection ID. Does that mean the pool is not working? No, the pool is working alright. However, the X Plugin (the plugin in MySQL Server handling connections using the X Protocol) works differently than the connection handling for the traditional MySQL protocol.

The X Plugin distinguishes between the connection to the application and the thread inside MySQL. So, when the session is returned to the pool and reset (setting the session variables back to their defaults and removing user variables), the thread inside MySQL is removed. As MySQL uses threads, it is cheap to create a new one when it is needed, so this is not a performance problem. However, the connection to the application is maintained. This means you save the expensive steps of creating the connection and authenticating, while the thread only exists inside MySQL while the connection is out of the pool.
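
One way to convince yourself of this is to compare the server's Mysqlx_connections_accepted status counter before and after recycling a session. The sketch below assumes a pool configured as in the example above, with placeholder credentials:

# A sketch: CONNECTION_ID() changes when a pooled session is reused, but the
# server should not accept a new X Protocol connection for it.
import mysqlx

pool = mysqlx.get_client(
    {"host": "127.0.0.1", "port": 33060, "user": "pyuser", "password": "secret"},
    {"pooling": {"max_size": 2}},
)

def accepted(session):
    # Total number of X Protocol connections the server has accepted so far.
    row = session.sql(
        "SHOW GLOBAL STATUS LIKE 'Mysqlx_connections_accepted'"
    ).execute().fetch_one()
    return int(row["Value"])

session = pool.get_session()
before = accepted(session)
session.close()                # back to the pool; the TCP connection stays open

session = pool.get_session()   # reused connection, but a new CONNECTION_ID()
after = accepted(session)
print("New X Protocol connections accepted:", after - before)
# Expected to stay at 0 if the pooled connection was reused.

session.close()
pool.close()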

If you are interested in learning more about MySQL Connector/Python 8 including how to use the X DevAPI, then I am the author of MySQL Connector/Python Revealed (Apress). It is available from Apress, Amazon, and other book stores.

Categories: Web Technologies

MySQL 8.0.13: Change Current Password Policy

Planet MySQL - Wed, 11/14/2018 - 23:30

We have introduced a new policy for you to enforce on your non-privileged users. It requires their current password at the time they set a new password. It is optional and off by default. You can control it globally (for all non-privileged users) or on a per-user basis.…
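
For illustration, a minimal sketch of what enabling the policy can look like (placeholder account names and passwords, executed from Python over the X DevAPI; requires MySQL 8.0.13 or later):

# A minimal sketch of the "require current password" policy, driven from
# Python by executing plain SQL (accounts and passwords are placeholders).
import mysqlx

session = mysqlx.get_session(
    {"host": "127.0.0.1", "port": 33060, "user": "root", "password": "secret"}
)

# Enforce the policy globally for all non-privileged users ...
session.sql("SET PERSIST password_require_current = ON").execute()

# ... or for a single account only.
session.sql("ALTER USER 'appuser'@'%' PASSWORD REQUIRE CURRENT").execute()

# That account must now supply its current password when setting a new one:
# ALTER USER user() IDENTIFIED BY 'new_password' REPLACE 'current_password';

session.close()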

Categories: Web Technologies


MySQL NDB Cluster row level locks and write scalability

Planet MySQL - Wed, 11/14/2018 - 23:08
MySQL NDB Cluster uses row level locks instead of a single shared commit lock in order to prevent inconsistency in simultaneous distributed transactions. This gives NDB a great advantage over all other MySQL clustering solutions and is one reason behind cluster’s unmatched ability to scale both reads and writes. 
NDB is a transactional data store. The lowest and only isolation level available in NDB is Read Committed. There are no dirty reads in NDB and only committed rows can be read by other transactions. 
All write transactions in NDB will result in exclusive row locks of all individual rows changed during the transaction. Any other transaction is allowed to read any committed row independent of their lock status. Reads are lock-free reads.
The great advantage is that committed reads in NDB never block during writes to the same data and always the latest committed changes are read. A select doesn't block concurrent writes and vice versa. 
This is extremely beneficial for write scalability. No shared global commit synchronization step is needed to ensure transaction consistency across distributed data store instances. Each instance instead handles its own row locks - usually only locking a few out of many rows. Due to NDB’s highly parallel and asynchronous design many rows can be committed in parallel within a distributed instance and across multiple instances. 
As a side effect interleaved reading transactions can read committed rows of write transactions before all rows of that writing transaction are committed. The set of rows returned may represent a partially committed transaction and not a snapshot of a single point in time. Pending transactions never change the state of the data before they are committed. All rows of committed transactions are atomically guaranteed to be network durable and consistent in all distributed instances of the data.
If more consistent reads are needed, the read locks taken by SELECT ... LOCK IN SHARE MODE / SELECT ... FOR UPDATE can be used to get a serialized view of a set of rows.
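
For illustration, a minimal sketch of such a locking read (placeholder connection settings and a hypothetical test.accounts NDB table):

# A minimal sketch: taking explicit row locks when a consistent view of
# several rows is needed on top of NDB's READ COMMITTED isolation level.
import mysql.connector

cnx = mysql.connector.connect(host="127.0.0.1", user="appuser", password="secret")
cur = cnx.cursor()
cnx.start_transaction()
cur.execute("SELECT id, balance FROM test.accounts WHERE id IN (1, 2) FOR UPDATE")
rows = cur.fetchall()   # both rows are now locked against concurrent writers
# ... work with the consistent set of rows ...
cnx.commit()            # commit releases the row locks
cur.close()
cnx.close()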

Categories: Web Technologies

MySQL Master Replication Crash Safety Part #2: lagging slaves

Planet MySQL - Wed, 11/14/2018 - 21:59
This is Part #2 of the MySQL Master Replication Crash Safety series.  In the previous post, we explored the consequence of reducing durability on masters with slaves using legacy file+position replication.  The consequences are data inconsistencies with a clear warning sign: the slaves stop replicating and report an error.  In this post, we extend our understanding of the impact of running a
Categories: Web Technologies

Drupal blog: Thirteen recommendations for how to evolve Drupal's governance

Drupal.org aggregator - Wed, 11/14/2018 - 15:31

This blog has been re-posted and edited with permission from Dries Buytaert's blog. Please leave your comments on the original post.

After months of hard work, the Drupal Governance Task Force made thirteen recommendations for how to evolve Drupal's governance.

Drupal exists because of its community. What started from humble beginnings has grown into one of the largest Open Source communities in the world. This is due to the collective effort of thousands of community members.

What distinguishes Drupal from other open source projects is both the size and diversity of our community, and the many ways in which thousands of contributors and organizations give back. It's a community I'm very proud to be a part of.

Without the Drupal community, the Drupal project wouldn't be where it is today and perhaps would even cease to exist. That is why we are always investing in our community and why we constantly evolve how we work with one another.

The last time we made significant changes to Drupal's governance was over five years ago when we launched a variety of working groups. Five years is a long time. The time had come to take a step back and to look at Drupal's governance with fresh eyes.

Throughout 2017, we did a lot of listening. We organized both in-person and virtual roundtables to gather feedback on how we can improve our community governance. This led me to invest a lot of time and effort in documenting Drupal's Values and Principles.

In 2018, we transitioned from listening to planning. Earlier this year, I chartered the Drupal Governance Task Force. The goal of the task force was to draft a set of recommendations for how to evolve and strengthen Drupal's governance based on all of the feedback we received. Last week, after months of work and community collaboration, the task force shared thirteen recommendations (PDF).

Me reviewing the Drupal Governance proposal on a recent trip.

Before any of us jump to action, the Drupal Governance Task Force recommended a thirty-day, open commentary period to give community members time to read the proposal and to provide more feedback. After the thirty-day commentary period, I will work with the community, various stakeholders, and the Drupal Association to see how we can move these recommendations forward. During the thirty-day open commentary period, you can get involved by collaborating on and responding to each of the individual recommendations.

I'm impressed by the thought and care that went into writing the recommendations, and I'm excited to help move them forward.

Some of the recommendations are not new; they are ideas that the Drupal Association, I, or others have been working on, but that none of us have been able to move forward without a significant amount of funding or collaboration.

I hope that 2019 will be a year of organizing and finding resources that allow us to take action and implement a number of the recommendations. I'm convinced we can make valuable progress.

I want to thank everyone who has participated in this process. This includes community members who shared information and insight, facilitated conversations around governance, were interviewed by the task force, and supported the task force's efforts. Special thanks to all the members of the task force who worked on this with great care and determination for six straight months: Adam Bergstein, Lyndsey Jackson, Ela Meier, Stella Power, Rachel Lawson, David Hernandez and Hussain Abbas.

Categories: Drupal CMS

National Apprenticeship Week: The Time to Rethink Apprenticeships is Now

Department of Education - Wed, 11/14/2018 - 15:28
In June of 2017, President Donald Trump signed an Executive Order titled, “Expanding Apprenticeships in America.” This order called for the creation of a special Task Force to identify strategies and proposals to promote apprenticeships in the United States. To meet this challenge, Department of Labor Secretary Alex Acosta brought together representatives from companies, labor unions, trade associations, educational institutions and public agencies.

Migrating to Amazon Aurora: Reduce the Unknowns

Planet MySQL - Wed, 11/14/2018 - 13:28


In this Checklist for Success series, we will discuss reducing unknowns when hosting in the cloud and migrating to Amazon Aurora. These tips might also apply to other database as a service (DBaaS) offerings.

While DBaaS encapsulates a lot of the moving pieces, it also means relying on this approach for your long-term stability. This encapsulation is a two-edged sword that takes away your visibility into performance outside of the service layer.

Shine a Light on Bad Queries

Bad queries are one of the top causes of downtime, and Aurora doesn't protect you against them. Performing a query review as part of a routine health check of your workload helps ensure that you do not miss looming issues. It also helps you predict the workload at specific times and events. For example, if you already know your top three queries tend to increase exponentially and are read bound, you can easily decide to increase the number of read replicas on your cluster.

Having historical query performance data helps make this task easier and less stressful. While historical data allows you to look backward, it's also very valuable to have a tool that lets you look at active incident scenarios in progress. Knowing which queries are running when you are suffering from performance issues reduces guesswork and helps solve problems faster.

Pick Your Tool(s)

There are a number of ways you can achieve query performance excellence. Performance Insights is a built-in offering from AWS that is tightly integrated with RDS. It has a seven-day free retention period, with an extra cost beyond that. It is available for each instance in a cluster. Performance Insights takes most of its metrics from the Performance Schema. It includes specific metrics from the operating system that may not be available from regular CloudWatch metrics.

Query Analytics from Percona Monitoring and Management (PMM) also uses the same source as Performance Insights: the Performance Schema. Unlike Performance Insights though, PMM is deployed separately from the cluster. This means you can keep your metrics even if you keep recycling your cluster instances. With PMM, you can also consolidate your query reviews from a single location, and you can monitor your cluster instances from the same location – including an extensive list of performance metrics.

You can enable Performance Insights with the default seven-day retention period, and then combine it with PMM for a longer retention period across all your cluster instances. Note though that PMM may incur a cost for the additional API calls used to retrieve Performance Insights metrics.
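
For a sense of what a lightweight query review boils down to, here is a minimal sketch (placeholder endpoint and credentials) that pulls the three most expensive statement digests straight from the Performance Schema, the same source these tools read from:

# A minimal sketch: a quick "top queries" review based on the Performance
# Schema statement digests (endpoint and credentials are placeholders).
import mysql.connector

cnx = mysql.connector.connect(
    host="mycluster.cluster-example.us-east-1.rds.amazonaws.com",  # placeholder
    user="admin",
    password="secret",
)
cur = cnx.cursor()
cur.execute(
    "SELECT schema_name, LEFT(digest_text, 60) AS query, count_star, "
    "       ROUND(sum_timer_wait / 1e12, 2) AS total_latency_s "
    "FROM performance_schema.events_statements_summary_by_digest "
    "ORDER BY sum_timer_wait DESC LIMIT 3"
)
for schema_name, query, executions, total_latency_s in cur:
    print(f"{schema_name or '*'}: {executions} executions, "
          f"{total_latency_s}s total -- {query}")
cur.close()
cnx.close()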

Outside of the built-in and open source alternatives, VividCortex, NewRelic and Datadog are excellent tools that do everything we discussed above and more. NewRelic, for example, allows you to take a good view of the database, application, and external request timings. This, in my opinion, is so very valuable.

Bad queries are not the only potential unknowns. Deleted rows, dropped tables, crippling schema changes, and even AZ/Region failures are realities in the cloud. We will discuss them next! Stay “tuned” for part two.

Meanwhile, we’d like to hear your success stories in Amazon Aurora in the comments below!

Categories: Web Technologies

Drupixels: Start, stop or restart Apache Web Server from terminal on Mac OS

Drupal.org aggregator - Wed, 11/14/2018 - 10:00
Start, stop or restart the Apache web server from the terminal on Mac OS to make your life easier.
Categories: Drupal CMS
