emGee Software Solutions Custom Database Applications


Web Technologies

Deep dive into JS asynchronicity

Echo JS - Wed, 03/14/2018 - 17:32
Categories: Web Technologies

Short note on what CSS display properties do to table semantics

CSS-Tricks - Wed, 03/14/2018 - 13:14

We've blogged about responsive tables a number of times over the years. There's a variety of techniques, and which you choose should be based on the data in the table and the UX you're going for. But many of them rely upon resetting a table element's natural display value to something else, for example display: block. Steve Faulkner warns us:

When CSS display: block or display: grid or display: flex is set on the table element, bad things happen. The table is no longer represented as a table in the accessibility tree, row elements/semantics are no longer represented in any form.

He argues that the browser is making a mistake here by altering those semantics, but since they do, it's good to know it's fixable with (a slew of) ARIA roles.
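For reference, restoring those semantics looks roughly like this (a sketch; the class name is ours, and the CSS that overrides display is assumed to live elsewhere): each structural element gets its implicit role back explicitly.

```html
<!-- Explicit ARIA roles re-assert the table semantics that are lost
     when CSS resets the display values of these elements. -->
<table role="table" class="responsive-table">
  <thead role="rowgroup">
    <tr role="row">
      <th role="columnheader">Name</th>
      <th role="columnheader">Amount</th>
    </tr>
  </thead>
  <tbody role="rowgroup">
    <tr role="row">
      <td role="cell">Widget</td>
      <td role="cell">42</td>
    </tr>
  </tbody>
</table>
```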


The post Short note on what CSS display properties do to table semantics appeared first on CSS-Tricks.


A Browser-Based, Open Source Tool for Alternative Communication

CSS-Tricks - Wed, 03/14/2018 - 08:15

Shay Cojocaru contributed to this post.

Have you ever lost your voice? How did you handle that? Perhaps you carried a notebook and pen to scribble notes. Or jotted quick texts on your phone.

Have you ever traveled somewhere that you didn't speak or understand the language everyone around you was speaking? How did you order food, or buy a train ticket? Perhaps you used a translation phrasebook, or Google translate. Perhaps you relied mostly on physical gestures.

All of these solutions are examples of communication methods — tools and strategies — that you may have used before to solve everyday communicative challenges. The preceding examples are temporary solutions to temporary challenges. Your laryngitis cleared up. You returned home, where accomplishing daily tasks in your native tongue is almost effortless. Now imagine that these situational obstacles were somehow permanent.

I grew up knowing the challenges and creativity needed for effective communication when verbal speech is impeded. My younger sister speaks one word: “mama.” When we were little, I remember our mom laying a white sheet over a chair to take pictures of everyday items — an apple, a fork, a toothbrush. She painstakingly printed, cut out, laminated, and organized these images for my sister to use to point at what she wanted to say. We carried her words in plastic baggies.

As we both grew up, and technology evolved, her communication options expanded exponentially. From laminated paper, to a proprietary touchscreen device with text-to-speech functionality, to a communication app on the iTouch, and later the iPad.

Different people experience difficulty verbalizing speech for a multitude of reasons. As in my sister’s case, sometimes it’s genetic. Sometimes it’s situational. The reasons may be temporary, chronic, or permanent. Augmentative and alternative communication (AAC) is an umbrella term encompassing various communication methods used to supplement or replace speech. The United States Society for Augmentative and Alternative Communication (USSAAC) defines AAC devices as including “all forms of communication (other than oral speech) that are used to express thoughts, needs, wants, and ideas.” The fact that you’re reading these words is an example of AAC — writing is a mechanism that augments my verbal communication.

The range of communication solutions people employ is as varied as the reasons they are needed. Examples range from printed picture grids, to text-to-speech apps, to switches that enable typing in Morse code, to software that tracks eye movement or detects facial movements. (The software behind Stephen Hawking’s AAC is open source!)

The Convention on the Rights of Persons with Disabilities (CRPD), an international human rights treaty intended to protect the rights and dignity of people with disabilities, includes accessibility to communication and information. Huge challenges exist in making this access universal. Current solutions can be prohibitively expensive: according to the World Health Organization, in many low-income and middle-income countries, only 5-15% of the people who need assistive devices and technologies are able to obtain them. Additionally, many apps come in a limited number of languages, and many require a specific app store or proprietary device. These factors render commercial AAC solutions largely inaccessible to many people in low-income countries.

Enter Cboard, an open source project (recently supported by the UNICEF Innovation Fund!) powered by people dedicated to the idea of providing a solution that works for everyone, everywhere: a free, web-based communication board that leverages the thriving open source ecosystem and the built-in functionality of modern browsers.

It’s a complex problem, but, by taking advantage of available open source software and key ways in which the web has evolved over the last couple of years (in terms of modern browser API development and web standards), we can build a free, multilingual, open source, web-based alternative. Let’s talk about a few of those pieces — the Web Speech API, React, the Internationalization API, and the “progressive web app” concept.

Web Speech API

The challenge of artificially producing human speech is not new. Speech recognition and synthesis tools have been available for quite some time already — from voice dictation software to accessibility tools like screen readers. But the availability of a browser-based API makes it possible to build web services that offer speech synthesis with a low barrier to entry, and that provide a consistent experience of that synthesis.

The Web Speech API provides an interface for speech recognition (speech-to-text) and speech synthesis (text-to-speech) in the browser. With Cboard, we are primarily concerned with the SpeechSynthesis interface, which is used to produce text-to-speech (TTS) output. Using the API, we can retrieve information about the synthesis voices available on the device (which varies by browser and operating system), start and pause speech, etc. Browsers tend to use the speech services available by default on the device’s operating system — the API exposes methods to interact with these services. We’ve done our own mapping of some of the voice and language offerings by digesting data returned from the SpeechSynthesis interface on different devices running different operating systems, using different browsers:
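As a small sketch (the function names here are ours, not Cboard's), this is roughly how a page might pick a synthesis voice for a target language and speak a phrase. pickVoice is plain logic; the speechSynthesis calls only run in a browser.

```javascript
// Prefer an exact BCP 47 match ("es-ES"), fall back to the bare
// language ("es"), and return null if nothing matches.
function pickVoice(voices, lang) {
  return (
    voices.find(v => v.lang === lang) ||
    voices.find(v => v.lang.split('-')[0] === lang.split('-')[0]) ||
    null
  );
}

function speak(text, lang) {
  const utterance = new SpeechSynthesisUtterance(text);
  const voice = pickVoice(speechSynthesis.getVoices(), lang);
  if (voice) utterance.voice = voice;
  utterance.lang = lang;
  speechSynthesis.speak(utterance);
}

// Guarded so the snippet is harmless outside a browser.
if (typeof speechSynthesis !== 'undefined') {
  speak('Hello!', 'en-US');
}
```

Note that getVoices() may return an empty list until the browser's voiceschanged event has fired, which is one of the per-platform quirks mentioned above.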

You can see, for example, that Chrome on MacOS shows 66 voices — that's because it uses the native MacOS voices, as well as 19 additional voices provided by the browser. (Interested to see what voices and languages are available to you? Check out browserspeechsupport.me.)

Comprehensive support for the Web Speech API is still getting there, but most major modern browsers support it. (The Speech Synthesis API is available to 78.81% of users globally at time of writing). The draft specification was introduced in 2012, and is not yet a standard.

This browser support data is from Caniuse, which has more detail. A number indicates that browser supports the feature at that version and up.

Desktop
Chrome: 33 | Opera: 27 | Firefox: 49 | IE: No | Edge: 14 | Safari: 7

Mobile / Tablet
iOS Safari: 7.0-7.1 | Opera Mobile: No | Opera Mini: No | Android: No | Android Chrome: 64 | Android Firefox: No

React

React is a JavaScript library for building user interfaces. One of the most unambiguous insights from the 2017 “State of JavaScript” — a survey of over 23,000 developers — was that React is currently the “dominant front-end library” in terms of sheer numbers, and with high marks for usage level and developer satisfaction.

That doesn’t mean it’s the best for every situation, and it doesn’t mean it will be dominant in the long-term. But its features, and the relative ubiquity of adoption (at this point, at least), make it a great option for our project, because there is a lower barrier to entry for people to begin contributing — there is a strong community for learning and troubleshooting.

React makes use of the concept of the “virtual” DOM, where a virtual representation of UI is kept in memory. Any changes to the state of your application are compared against the state of the “real” DOM, using a “diffing” algorithm. This allows us to make efficient changes to the view layer of an application, and represent the state of our application in a predictable way, without requiring manual DOM manipulation (for the most part). React also emphasizes the use of component-based architecture.
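As a toy illustration of the diffing idea (this is NOT React's actual reconciliation algorithm, just a sketch of the concept): compare two virtual trees and emit a minimal list of patches, so only the changed parts of the view need touching.

```javascript
// Nodes are plain objects: { type, props, children }.
function diff(oldNode, newNode, path = 'root') {
  if (oldNode === undefined) return [{ op: 'create', path }];
  if (newNode === undefined) return [{ op: 'remove', path }];
  if (oldNode.type !== newNode.type) return [{ op: 'replace', path }];

  const patches = [];
  if (JSON.stringify(oldNode.props) !== JSON.stringify(newNode.props)) {
    patches.push({ op: 'update-props', path });
  }
  const length = Math.max(oldNode.children.length, newNode.children.length);
  for (let i = 0; i < length; i++) {
    patches.push(...diff(oldNode.children[i], newNode.children[i], `${path}.${i}`));
  }
  return patches;
}

// Changing only one prop on one child yields a single targeted patch:
const before = { type: 'div', props: {}, children: [
  { type: 'span', props: { id: 'x' }, children: [] },
] };
const after = { type: 'div', props: {}, children: [
  { type: 'span', props: { id: 'y' }, children: [] },
] };
const patches = diff(before, after);
// -> [{ op: 'update-props', path: 'root.0' }]
```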

React is supported by all popular browsers. (For some older browsers like IE 9 / IE 10, polyfills are required).

ECMAScript Internationalization API

As noted earlier, one area in which current AAC offerings fall short is broad multi-language support. The combination of the Web Speech API, the Internationalization API (and the open source offerings around it), and React, allow us to support up to 33 languages. (For reasons described earlier, this support varies between operating systems).

Internationalization is the process of designing and developing an application and its content “in a way that ensures it will work well for, or can be easily adapted for, users from any culture, region, or language.” The Internationalization API provides functionality for three key areas: string comparison, number formatting, and date and time formatting. The API is exposed on the global Intl object.
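All three areas are available directly on the global Intl object, in any modern browser or Node.js. For example (output shown for the en-US locale):

```javascript
// Number formatting
const price = new Intl.NumberFormat('en-US').format(1234567.89);
// -> "1,234,567.89"

// Date and time formatting
const when = new Intl.DateTimeFormat('en-US', {
  timeZone: 'UTC', year: 'numeric', month: 'long', day: 'numeric',
}).format(new Date(Date.UTC(2018, 2, 14)));
// -> "March 14, 2018"

// Locale-aware string comparison
const sorted = ['ä', 'z', 'a'].sort(new Intl.Collator('en').compare);
// -> ["a", "ä", "z"]
```

Libraries like react-intl build on exactly these primitives rather than shipping their own locale data for formatting.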

FormatJS, a collection of libraries that build on the Intl standard, has a suite of integrations with common component libraries (like React!) that ship with the FormatJS core libraries built in. We use the React integration, react-intl, which provides bindings to internationalize React apps.

Most browsers in the world support the ES Intl API (84.16% of users globally at time of writing).

This browser support data is from Caniuse, which has more detail. A number indicates that browser supports the feature at that version and up.

Desktop
Chrome: 24 | Opera: 15 | Firefox: 29 | IE: 11 | Edge: 12 | Safari: 10

Mobile / Tablet
iOS Safari: 10.0-10.2 | Opera Mobile: 37 | Opera Mini: No | Android: 4.4 | Android Chrome: 64 | Android Firefox: 57

Progressive Web Apps

Progressive Web Apps (PWAs) are regular websites that take advantage of modern browser features to deliver a web experience with the same benefits (or even better ones) as native mobile apps. Any website is technically a PWA if it fulfills three requirements: it runs under HTTPS, has a Web App Manifest, and has a service worker. A service worker essentially acts as a proxy, sitting between web applications, the browser, and the network. It runs in the background, deciding to serve network or cached content based on connectivity, allowing for management of an offline experience.
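As an illustration, a minimal Web App Manifest might look like this (all of the values here are hypothetical, not Cboard's actual manifest):

```json
{
  "name": "Cboard",
  "short_name": "Cboard",
  "start_url": "/",
  "display": "standalone",
  "background_color": "#ffffff",
  "theme_color": "#2196f3",
  "icons": [
    { "src": "/icons/icon-192.png", "sizes": "192x192", "type": "image/png" }
  ]
}
```

The manifest is what lets browsers offer "add to home screen" and launch the site as a standalone app; the service worker handles the offline side.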

Beyond those three base requirements, things get a bit more murky. When Alex Russell and Frances Berriman introduced and named “progressive web app” they enumerated ten attributes that characterize a PWA — responsive, connectivity independent, app-like, fresh, safe, discoverable, re-engageable, installable, and linkable.

This concept ends up as the key puzzle piece in our attempt to build something that solves the problems described earlier — that most existing AAC solutions can be prohibitively expensive, offer limited languages, or remain stuck in an app store or proprietary device. Using the PWA approach we can tie together the features modern browsers have to offer — the Web Speech API, Internationalization API, etc — coupled with an app-like experience regardless of operating systems, un-beholden to centralized app distribution methods, and with support for seamlessly continued offline use.

This browser support data is from Caniuse, which has more detail. A number indicates that browser supports the feature at that version and up.

Desktop
Chrome: 45 | Opera: 32 | Firefox: 44 | IE: No | Edge: 17 | Safari: 11.1

Mobile / Tablet
iOS Safari: 11.3 | Opera Mobile: 37 | Opera Mini: No | Android: 62 | Android Chrome: 64 | Android Firefox: 57

The current state of the web provides all the foundational ingredients we need to build a more inclusive, more broadly accessible AAC solution for people around the world. In the spirit of the open web, and with a great nod to the work that has been done to codify web standards, we know that a free and open communication solution is in sight.

Sound interesting to you? We invite you to come take a look and perhaps even dig in as a contributor!


The post A Browser-Based, Open Source Tool for Alternative Communication appeared first on CSS-Tricks.


Chrome DevTools “Local Overrides”

CSS-Tricks - Tue, 03/13/2018 - 10:38

There have been two really interesting videos released recently that use the "Local Overrides" feature of Chrome DevTools to play with web performance without even touching the original source code.

The big idea is that you can literally edit CSS and reload the page and your changes stick, meaning you can use the other performance testing tools inside DevTools to see if your changes had the effect you wanted them to have. It's great for showing a client a change without them having to set up a whole dev environment for you.


The post Chrome DevTools “Local Overrides” appeared first on CSS-Tricks.


Notched Boxes

CSS-Tricks - Tue, 03/13/2018 - 10:30

Say you're trying to pull off a design effect where the corners of an element are cut off. Maybe you're a Battlestar Galactica fan? Or maybe you just like the unusual effect of it, since it avoids looking like a typical rectangle.

I suspect there are many ways to do it. Certainly, you could use multiple backgrounds to place images in the corners. You could just as well use a flexible SVG shape placed in the background. I bet there is also an exotic way to use gradients to pull it off.

But, I like the idea of simply taking some scissors and clipping off the dang corners. We essentially can do just that thanks to clip-path. We can use the polygon() function, provide it a list of X and Y coordinates and clip away what is outside of them.

Check out what happens if we list three points: middle top, bottom right, bottom left.

.module {
  clip-path: polygon(
    50% 0,
    100% 100%,
    0 100%
  );
}

Instead of just three points, let's list all eight points needed for our notched corners. We could use pixels, but that would be dangerous. We probably don't really know the pixel width or height of the element. Even if we did, it could change. So, here it is using percentages:

.module {
  clip-path: polygon(
    0% 5%,    /* top left */
    5% 0%,    /* top left */
    95% 0%,   /* top right */
    100% 5%,  /* top right */
    100% 95%, /* bottom right */
    95% 100%, /* bottom right */
    5% 100%,  /* bottom left */
    0 95%     /* bottom left */
  );
}

That's OK, but notice how the notches aren't at perfect 45 degree angles. That's because the element itself isn't a square. That gets worse the less square the element is.

We can use the calc() function to fix that. We'll use percentages when we have to, but just subtract from a percentage to get the position and angle we need.

.module {
  clip-path: polygon(
    0% 20px,                /* top left */
    20px 0%,                /* top left */
    calc(100% - 20px) 0%,   /* top right */
    100% 20px,              /* top right */
    100% calc(100% - 20px), /* bottom right */
    calc(100% - 20px) 100%, /* bottom right */
    20px 100%,              /* bottom left */
    0 calc(100% - 20px)     /* bottom left */
  );
}

And you know what? That number is repeated so many times that we may as well make it a variable. If we ever need to update the number later, then all it takes is changing it once instead of all those individual times.

.module {
  --notchSize: 20px;
  clip-path: polygon(
    0% var(--notchSize),
    var(--notchSize) 0%,
    calc(100% - var(--notchSize)) 0%,
    100% var(--notchSize),
    100% calc(100% - var(--notchSize)),
    calc(100% - var(--notchSize)) 100%,
    var(--notchSize) 100%,
    0% calc(100% - var(--notchSize))
  );
}

Ship it.

See the Pen Notched Boxes by Chris Coyier (@chriscoyier) on CodePen.

This may go without saying, but make sure you have enough padding to handle the clipping. If you wanna get really fancy, you might use CSS variables in your padding value as well, so the more you notch, the more padding there is.
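A sketch of that idea (the extra 0.5em on top of the notch size is an arbitrary choice):

```css
.module {
  --notchSize: 20px;
  /* Padding grows along with the notch, so content clears the clipped corners */
  padding: calc(var(--notchSize) + 0.5em);
}
```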

The post Notched Boxes appeared first on CSS-Tricks.


Defining multiple similar services with Docker Compose - Matthias Noback

Planet PHP - Tue, 03/13/2018 - 01:09

For my new workshop - "Building Autonomous Services" - I needed to define several Docker containers/services with more or less the same setup:

  1. A PHP-FPM process for running the service's PHP code.
  2. An Nginx process for serving static and dynamic requests (using the PHP-FPM process as backend).

To route requests properly, every Nginx service would have its own host name. I didn't want to do complicated things with ports though - the Nginx services should all listen on port 80. However, on the host machine, only one service can listen on port 80. This is where the reverse HTTP proxy Traefik does a good job: it is the only service listening on port 80 on the host, and it forwards requests to the right service based on the host name of the request.
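The article doesn't show the Traefik service itself; as a rough sketch (the service name and image tag are our assumptions, using Traefik 1.x syntax and a pre-created external "traefik" network), it might look like this in the same docker-compose.yml:

```yaml
reverse_proxy:
  image: traefik:1.5
  command: --docker --docker.exposedbydefault=false
  ports:
    - "80:80"
  volumes:
    # Traefik watches the Docker socket to discover labelled containers
    - /var/run/docker.sock:/var/run/docker.sock
  networks:
    - traefik
```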

This is the configuration I came up with, but this is only for the "purchase" service. Eventually I'd need this configuration about 4 times.

services:
  purchase_web:
    image: matthiasnoback/building_autonomous_services_purchase_web
    restart: on-failure
    networks:
      - traefik
      - default
    volumes:
      - ./:/opt:cached
    depends_on:
      - purchase_php
    labels:
      - "traefik.enable=true"
      - "traefik.docker.network=traefik"
      - "traefik.port=80"
      - "traefik.backend=purchase_web"
      - "traefik.frontend.rule=Host:purchase.localhost"
  purchase_php_fpm:
    image: matthiasnoback/building_autonomous_services_php_fpm
    restart: on-failure
    env_file: .env
    user: ${HOST_UID}:${HOST_GID}
    networks:
      - traefik
      - default
    environment:
      XDEBUG_CONFIG: "remote_host=${DOCKER_HOST_NAME_OR_IP}"
    volumes:
      - ./:/opt:cached

Using Docker Compose's extend functionality

Even though I usually favor composition over inheritance, also for configuration, in this case I thought I'd be better off inheriting some configuration instead of copying it. These services don't just accidentally share some settings; in the context of this workshop, they are meant to be more or less identical, except for a few variables, like the host name.

So I decided to define a "template" for each service in docker/templates.yml:

version: '2'
services:
  web:
    restart: on-failure
    networks:
      - traefik
      - default
    labels:
      - "traefik.enable=true"
      - "traefik.docker.network=traefik"
      - "traefik.port=80"
    volumes:
      - ${PWD}:/opt:cached
  php-fpm:
    image: matthiasnoback/building_autonomous_services_php_fpm
    restart: on-failure
    env_file: .env
    user: ${HOST_UID}:${HOST_GID}
    networks:
      - traefik
      - default
    environment:
      XDEBUG_CONFIG: "remote_host=${DOCKER_HOST_NAME_OR_IP}"
    volumes:
      - ${PWD}:/opt:cached

Then in docker-compose.yml you can fill in the details of these templates by using the extends key (please note that you'd have to use "version 2" for that):

services:
  purchase_web:
    image: matthiasnoback/building_autonomous_services_purchase_web
    extends:
      file: docker/templates.yml
      service: web
    depends_on:
      - purchase_php
    labels:
      - "traefik.backend=purchase_web"
      - "traefik.frontend.rule=Host:purchase.localhost"
  purchase_php_fpm:
    extends:
      file: docker/templates.yml
      service: php-fpm

We only define the things that can't be inherited (like depends_on), or that are specific to the actual service (host name).

Dynamically generate Nginx configuration

Finally, I was looking for a way to get rid of specific Nginx images for every one of those "web" services. I started with a Dockerfile for every one of them, and a specific Nginx configuration file for each:

server {
    listen 80 default_server;
    index index.php;
    server_name purchase.localhost;
    root /opt/src/Purchase/public;

    location / {
        # try to serve file directly, fallback to index.php
        try_files $uri /index.php$is_args$args;
    }

    location ~ ^/index\.php(/|$) {
        fastcgi_pass purchase_php_fpm:9000;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        include fastcgi_params;

Truncated by Planet PHP, read more at the original (another 3311 bytes)


Password reuse policy in MySQL 8.0

Planet MySQL - Tue, 03/13/2018 - 00:39

MySQL has various kinds of password policy enforcement tools: a password can expire (even automatically), can be forced to be of a certain length, contain amounts of various types of characters and be checked against a dictionary of common passwords or the user account name itself.…



MySQL Connector/Java 5.1.46 GA has been released

Planet MySQL - Mon, 03/12/2018 - 16:36

Dear MySQL Users,

MySQL Connector/J 5.1.46, a maintenance release of the production 5.1
branch has been released. Connector/J is the Type-IV pure-Java JDBC
driver for MySQL.

MySQL Connector Java is available in source and binary form from the
Connector/J download pages at
and mirror sites as well as Maven-2 repositories.

MySQL Connector Java (Commercial) is already available for download on the
My Oracle Support (MOS) website. This release will shortly be available on
eDelivery (OSDC).

As always, we recommend that you check the “CHANGES” file in the
download archive to be aware of changes in behavior that might affect
your application.

MySQL Connector/J 5.1.46 includes the following general bug fixes and
improvements, also available in more detail on

Changes in MySQL Connector/J 5.1.46 (2018-03-12)

Version 5.1.46 is a maintenance release of the production 5.1
branch. It is suitable for use with MySQL Server versions
5.5, 5.6, 5.7, and 8.0. It supports the Java Database
Connectivity (JDBC) 4.2 API.

Functionality Added or Changed

* Because Connector/J restricted TLS versions to v1.1 and
below by default when connecting to MySQL Community
Server 8.0 (which used to be compiled with yaSSL by
default and thus supporting only TLS v1.1 and below), it
failed to connect to a MySQL 8.0.4 Community Server
(which has been compiled with OpenSSL by default and thus
supports TLS v1.2) that was configured to only allow TLS
v1.2 connections. TLS v1.2 is now enabled for connections
with MySQL Community Server 8.0.4 and later. (Bug

* The bundle for Connector/J 5.1 delivered by Oracle now
contains an additional jar package with the name
(mysql-connector-java-commercial-5.1.ver.jar for
commercial bundles). It is identical with the other jar
package with the original package named
(mysql-connector-java-commercial-5.1.ver-bin.jar for
commercial bundles), except for its more Maven-friendly
file name. (Bug #27231383)

* The lower bound for the connection property
packetDebugBufferSize has been changed to 1, to avoid the
connection errors that occur when the value is set to 0.
(Bug #26819691)

* For multi-host connections, when a MySQL Server was
configured with autocommit=0, Connection.getAutoCommit()
did not return the correct value. This was because
useLocalSessionState=true was assumed for multi-host
connections, which might not be the case, resulting thus
in inconsistent session states.
With this fix, by default, Connector/J executes some
extra queries in the connection synchronization process
to guarantee consistent session states between the client
and the server at any connection switch. This would mean,
however, that when none of the hosts are available during
an attempted server switch, an exception for closed
connection will be thrown immediately while, in earlier
Connector/J versions, there would be a connection error
thrown first before a closed connection error. Error
handling in some applications might need to be adjusted.
Applications can skip the new session state
synchronization mechanism by having
useLocalSessionState=true. (Bug #26314325, Bug #86741)

* Connector/J now supports the new caching_sha2_password
authentication plugin for MySQL 8.0, which is the default
authentication plugin for MySQL 8.0.4 and later (see
Caching SHA-2 Pluggable Authentication for details).
To authenticate accounts with the caching_sha2_password
plugin, either a secure connection to the server using SSL or an unencrypted connection
that supports password exchange using an RSA key pair
(enabled by setting one or both of the connecting
properties allowPublicKeyRetrieval and
serverRSAPublicKeyFile) must be used.
Because earlier versions of Connector/J 5.1 do not
support the caching_sha2_password authentication plugin
and therefore will not be able to connect to accounts
that authenticate with the new plugin (which might
include the root account created by default during a new
installation of a MySQL 8.0 Server), it is highly
recommended that you upgrade now to Connector/J 5.1.46,
to help ensure that your applications continue to work
smoothly with the latest MySQL 8.0 Server.

Bugs Fixed

* When Connector/J 5.1.44 or earlier connected to MySQL
5.7.20 or later, warnings were issued because Connector/J
used the deprecated system variables tx_isolation and
tx_read_only. These SQL-level warnings, returned from a
SHOW WARNINGS statement, might cause some applications to
throw errors and stop working. With this fix, the
deprecated variables are no longer used for MySQL 5.7.20
and later; also, to avoid similar issues, a SHOW WARNINGS
statement is no longer issued for the use of deprecated
variables. (Bug #27029657, Bug #88227)

* When the default database was not specified for a
connection, the connection attributes did not get stored
in the session_connect_attrs table in the Performance
Schema of the MySQL Server. (Bug #22362474, Bug #79612)

On Behalf of Oracle/MySQL Release Engineering Team
Hery Ramilison


MySQL 8.0 : meta-data added to Performance_Schema’s Instruments

Planet MySQL - Mon, 03/12/2018 - 13:56

In MySQL 8.0, the engineers have added useful meta-data to the performance_schema.setup_instruments table. This table lists the classes of instrumented objects for which events can be collected.


Let’s have a quick look at these new columns:

PROPERTIES can have the following values:

  • global_statistics: only global summaries are available for this instrument. Example: memory/performance_schema/metadata_locks, which returns the memory used for the table performance_schema.metadata_locks.
  • mutable: only applicable to statement instruments, as they can “mutate” into a more specific one. Example: statement/abstract/relay_log, which returns the new event just read from the relay log.
  • progress: applies only to stage instruments; it reports progress data. Example: stage/sql/copy to tmp table.
  • singleton: instruments having a single instance. Example: wait/synch/mutex/sql/LOCK_error_log; like most global mutex locks, this lock on the error log is a singleton.
  • user: instruments related to user workload. Example: the instrument on idle.


VOLATILITY defines the lifetime or creation occurrence of the instrument. The possible values, from low to high, are:

  • 0 : unknown
  • 1 : permanent
  • 2 : provisioning
  • 3 : ddl
  • 4 : cache
  • 5 : session
  • 6 : transaction
  • 7 : query
  • 8 : intra_query

For example, wait/synch/mutex/sql/THD::LOCK_thd_query has a volatility of 5, which means this mutex is created each time a session connects and destroyed when the session disconnects.

There is no point, then, in enabling an instrument for an object that has already been created.


Finally, there is now a DOCUMENTATION column describing the purpose of the instrument. Currently, 80 instruments are documented with the help of that column.

This is an example:

NAME: memory/performance_schema/prepared_statements_instances
ENABLED: YES
TIMED: NULL
PROPERTIES: global_statistics
VOLATILITY: 1
DOCUMENTATION: Memory used for table performance_schema.prepared_statements_instances
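As a sketch (assuming a MySQL 8.0 server; the exact rows returned will vary by version), the new columns can be queried directly:

```sql
SELECT NAME, PROPERTIES, VOLATILITY, DOCUMENTATION
  FROM performance_schema.setup_instruments
 WHERE DOCUMENTATION IS NOT NULL
 ORDER BY NAME
 LIMIT 5;
```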

All this is explained in detail in the MySQL manual.

Enjoy MySQL 8.0, and I wish you a pleasant discovery of all the new features!


dbdeployer release candidate

Planet MySQL - Mon, 03/12/2018 - 13:22

The latest release of dbdeployer is possibly the last one with a leading 0. If no serious bugs are found in the next two weeks, the next release will bear a glorious 1.0.

Latest news

The decision to get out of the stream of pre-releases that were published until now comes because I have implemented all the features that I wanted to add: mainly, the ones that I wished to add to MySQL-Sandbox but that would have been too hard to retrofit there.

The latest addition is the ability to run multi-source topologies. Now we can run four topologies:

  • master-slave is the default topology. It will install one master and two slaves. More slaves can be added with the option --nodes.
  • group will deploy three peer nodes in group replication. If you want to use a single primary deployment, add the option --single-primary. Available for MySQL 5.7 and later.
  • fan-in is the opposite of master-slave. Here we have one slave and several masters. This topology requires MySQL 5.7 or higher.
  • all-masters is a special case of fan-in, where all nodes are masters and are also slaves of all nodes.

It is possible to tune the flow of data in multi-source topologies. The default for fan-in is three nodes, where nodes 1 and 2 are masters and node 3 is a slave. You can change the predefined settings by providing the list of components:

$ dbdeployer deploy replication \
--topology=fan-in \
--nodes=5 \
--master-list="1 2 3" \
--slave-list="4 5" \
8.0.4

In the above example, we get 5 nodes instead of 3. The first three are masters (--master-list="1 2 3") and the last two are slaves (--slave-list="4 5"), which will receive data from all the masters. There is a test automatically generated to check the replication flow. In our case it shows the following:

$ ~/sandboxes/fan_in_msb_8_0_4/test_replication
# master 1
# master 2
# master 3
# slave 4
ok - '3' == '3' - Slaves received tables from all masters
# slave 5
ok - '3' == '3' - Slaves received tables from all masters
# pass: 2
# fail: 0

The first three lines show that each master has done something. In our case, each master has created a different table. The slaves in nodes 4 and 5 then count how many tables they found, and if they got the tables from all masters, the test succeeds.
Note that for the all-masters topology there is no need to specify master-list or slave-list. In fact, those lists will be auto-generated, and they will both include all deployed nodes.

What now?

Once I make sure that the current features are reasonably safe (I will only write more tests for the next 10~15 days) I will publish the first (non-pre) release of dbdeployer. From that moment, I'd like to follow the recommendations of the Semantic Versioning:

  • The initial version will be 1.0.0 (major, minor, revision);
  • The specs for 1.0 will be the API that needs to be maintained;
  • Bug fixes will increment the revision counter;
  • New features that don't break compatibility with the API will increment the minor counter;
  • New features or changes that break compatibility will trigger a major counter increment.

Using this method will give users a better idea of what to expect. If we get a revision number increase, it is only bug fixes. An increase in the minor counter means that there are new features, but all previous features work as before. An increase in the major counter means that something will break, either because of changed interface or because of changed behavior.
In practice, the tests released with 1.0.0 should run with any 1.x subsequent version. When those tests need changes to run correctly, we will need to bump up the major version.
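As a rough illustration of those rules, here is a hypothetical helper (not part of dbdeployer) that decides whether a client written against one version can expect to work with another release, following Semantic Versioning:

```python
# Hypothetical Semantic Versioning compatibility check (illustration only).

def parse(version):
    """Split "major.minor.revision" into a tuple of ints."""
    major, minor, revision = (int(p) for p in version.split("."))
    return major, minor, revision

def compatible(built_against, running):
    """A client built against `built_against` should work with `running`
    when the major counter matches and the minor/revision counters of the
    running version are not older than the ones it was built against."""
    b, r = parse(built_against), parse(running)
    return b[0] == r[0] and r[1:] >= b[1:]

print(compatible("1.0.0", "1.2.3"))  # new features, API intact -> True
print(compatible("1.2.0", "2.0.0"))  # major bump, something breaks -> False
```

In other words, the tests released with 1.0.0 should keep passing on any 1.x, and a failure of that expectation is exactly what forces a major version bump.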

Let's see if this method is sustainable. So far, I haven't needed to make behavioral changes, which are usually provoked by new versions of MySQL that introduce incompatible behavior (MySQL definitely does not follow Semantic Versioning principles). When the next version becomes available, we will see whether this RC of dbdeployer can stand its ground.

Categories: Web Technologies

A Better Sketch File, a Better Designer, a Better You

CSS-Tricks - Mon, 03/12/2018 - 12:27

I’ve been thinking about this post by Isabel Lee for the last couple of weeks — it’s all about how we should be more considerate when making designs in Sketch. They argue that we’re more likely to see real efficiency and organizational improvements in our work if we name our layers, artboards, and pages properly. Isabel writes:

Keeping a super organized Sketch file has helped me smooth out my design process and saved me time when I was trying to find a specific component or understand an archived design. For instance, I was looking for an icon that I used six months ago and it was (relatively) easy to find because all my artboards and layers were well-named and grouped reverse-chronologically. I was also able to cross-reference it with my meeting notes from around that time. If I hadn’t done any of that work (thanks Past Isabel!), I probably would’ve had to dig through all my old designs and look at each layer. Or worse — I would’ve had to recreate that icon.

Since I read this I’ve been doing the same thing and effectively making “daily commits” with the naming of my pages and it’s been genuinely helpful when looking back through work that I’ve forgotten about. But what I really like about this tidy-up process is how Isabel describes the way in which they could easily look back on their work, identify weaknesses in their design process, and how to become a better designer:

Aside from making it easier to find things, it’s also helped me cultivate good documentation habits when I want to analyze my old work and see where I could’ve made improvements. I revisited one of my old Sketch files and realized that I didn’t do enough research before diving into a million iterations for an initial idea I had.

Direct Link to ArticlePermalink

The post A Better Sketch File, a Better Designer, a Better You appeared first on CSS-Tricks.

Categories: Web Technologies

Consistent Design Systems in Sketch With Atomic Design and the Auto-Layout Plugin

CSS-Tricks - Mon, 03/12/2018 - 06:15

Do you design digital products (or websites) and hand design files off to developers for implementation? If you answered yes, settle in! While the should-designers-code debate rages on, we're going to look at how adding a methodology to your design workflow can make you faster, more consistent, and loved by all developers... even if you don't code.

Let's dig in!

Why a methodology?

In the development world, it seems like at least half of your career is about staying up to date with new tech and leveling up your skills. While the pace may not be quite as frantic in the design landscape as it is in development, there definitely has been a huge shift in tools over the past three years or so.

Tools like Sketch have made a lot of the old pain of working in design files a thing of the past. Smart combinations of text styles, object styles, symbols, and libraries now mean sweeping changes are just one click away. No more picking through 40 Photoshop layers to make a single color change.

Yet, sweeping color changes in a marketing landing page is no longer the biggest design challenge. Design and development teams are now expected to deliver complex interfaces loaded with interaction and conditional states... for every device available now and the next day. Working as both a designer and developer, I have seen the workflow friction from both sides.

Beyond tools, designers need an approach.

Thinking in terms of "components"

If you work in the tech space in any capacity, you have likely heard of development frameworks such as React, Angular, or Vue.

Um yeah, I'm a designer so that doesn't really concern me, bye.

Kinda. But if you're hoping to do design work for modern digital products, there is a pretty big chance that said products will be built using one of these frameworks. No one expects an architect to build the structures themselves, but they had better have a high-level understanding of what the tools are and how they will be used.

So here's what you need to know about modern front-end frameworks:

They have brought on a paradigm shift for developers in which products are built by combining a series of smaller components into complex systems which can adapt to different contexts and devices. This makes the code easier to maintain, and the entire system more flexible.

For a variety of legitimate reasons, many designers have not caught on to this paradigm shift as quickly as developers. We are missing a mental model for creating the pieces that make up these interfaces independently from their environment/pages/screens/viewports etc.

One such approach is Atomic Design.

What is Atomic Design?

First coined by Brad Frost, Atomic Design is a methodology which tries to answer a simple question: if hundreds of device sizes mean we can no longer effectively design "pages," then how do we even design?

The answer lies in breaking down everything that could make up a "page" or "view" into smaller and smaller pieces, creating a "system" of blocks which can then be recombined into an infinite number of variations for our project.

You can think of it like the ingredients in a recipe. Sure, you could make muffins, but you could just as easily make a cake with the same list of ingredients.

Brad decided to use the chemistry analogy, and so he proposes that our systems are made up of:

  • Atoms
  • Molecules
  • Organisms

For the sake of simplicity, let's take a look at how we might apply this to a website.


Atoms

Atoms represent the smallest building blocks which make up our user interfaces. Imagine a form label, a form input, and a button. Each one of those represents an atom:

A header, text block, and link each serve as atoms.

Molecules

Molecules are simply groups of atoms combined to create something new. For our purposes, we can think of molecules as groups of disjointed UI pieces which are combined to create a meaningful component.

The atoms come together to form a "card" component.

Organisms

Organisms are made up of groups of molecules (or atoms or other organisms). These are the pieces of an interface which we might think of as a "section." In an invoicing application, an organism could be a dashboard combining a "new invoice" button (atom), a search form (molecule), a "total open" card (molecule), and a table listing overdue invoices. You get the idea.

Let's look at what a "featured block" organism might look like in our simple website:

A header (atom), three cards (molecules), an image (atom), and a teaser (molecule) are combined to form one featured block organism.

Using stacks for consistency

So, now that we have a mental model for the "stuff," how do we go about creating these building blocks? Sketch is great out of the box, but the plugins available for it provide huge productivity gains… just like extensions do in a developer's text editor. We will be using Sketch's built-in symbols tools, as well as the amazing Stacks feature from Anima App's Auto-Layout plugin.

Using these tools will bring some priceless benefits which I will point out as we go, but at the very least you can count on:

  • better design consistency and faster iteration
  • a sanity check from using consistent spacing multipliers
  • faster reordering of content
  • help identifying design problems quickly and early on
What exactly are stacks?

If you've ever heard developers excitedly talk about flexbox for building layouts in CSS, then you can think of stacks as the Sketch equivalent. Stacks (like flexbox) allow you to group a series of layers together and then define their spacing and alignment on a vertical or horizontal axis.

Here we group three items, align them through their center, and set 48px vertical space between each one:

A simple stacked folder aligning and distributing three items.

The layers will automatically be grouped into a blue folder with an icon of vertical or horizontal dots to show what kind of stack you have.

Look at that! You just learned flexbox without touching a line of code. 😂

Nested stacks

The real power of stacks comes from nesting stacks inside other stacks:

Stacks can be nested inside of each other to create complex spacing systems.

Here, we can see a card component made up of multiple stacks:

  • The card__cta link from the previous example.
  • A card__copy stack which handles the alignment and spacing for the header and text.
  • A card__content stack which controls the spacing and alignment between the card__cta and card__copy stacks.
A quick note about layer naming

I almost always use the BEM naming convention for my components. Developers appreciate the logic when I have to hand off design files because it often aligns with the naming conventions they are using in code. In cases where I'm developing the project myself, it gives me a bit of a head start as I've started thinking about the markup at the design stage.

If that's super confusing, don't worry about it. Please just make your colleagues' job a little easier by organizing your layers and giving them descriptive names.
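As a quick illustration, the BEM pattern (block__element--modifier) is regular enough to check mechanically. Here is a small, hypothetical validator sketch in Python, just to make the pattern concrete; Sketch itself knows nothing about it:

```python
import re

# Hypothetical validator for BEM-style layer names (block__element--modifier).
# The pattern below is one common BEM flavor: lowercase words joined by
# hyphens, "__" before an element, "--" before a modifier.
BEM_PATTERN = re.compile(
    r"^[a-z][a-z0-9]*(?:-[a-z0-9]+)*"      # block, e.g. "card"
    r"(?:__[a-z0-9]+(?:-[a-z0-9]+)*)?"     # optional element, e.g. "__cta"
    r"(?:--[a-z0-9]+(?:-[a-z0-9]+)*)?$"    # optional modifier, e.g. "--featured"
)

def is_bem(layer_name):
    return bool(BEM_PATTERN.match(layer_name))

for name in ["card__cta", "card__copy", "card--featured", "Rectangle 12"]:
    print(name, is_bem(name))  # the default Sketch name fails the check
```

Default layer names like "Rectangle 12" fail the check, which is exactly the sort of name you don't want to hand to a developer.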

Stacks shmacks, I have great attention to detail and can do all this manually!

You're absolutely right! But what about when you have carefully laid out 10 items, all of varying sizes, and now they need extra space between them? Or, you need to add a line of text to one of those items? Or, you need to split content into three columns rather than four?

That never happens, right? 😱

One of two things usually happens at this stage:

  1. You start manually reorganizing all the things and someone's paying for wasted time (whether it’s you or the client).
  2. You kinda fudge it… after all, the developer surely knows what your original intentions were before you left every margin off by a couple pixels in your layout. ¯\_(ツ)_/¯

Here's what stacks get you:

  • You can change the alignment or spacing options as much as you like with a simple value adjustment and things will just magically update.
  • You can resize elements without having to pick at your artboard to rejig all the things.
  • You can reorder, add, or remove items from the stack folder and watch the items redistribute themselves according to your settings—just like code. 🎉

Notice how fast it is to edit content and experiment with different layouts all while maintaining consistency:

Stacks and symbols make experimentation cheap and consistent.

OK, so now we know why stacks are amazing, how do we actually use them?

Creating stacks

Creating a stack is a matter of selecting two (or more) layers and hitting the stacks folder icon in the inspector. From there, you can decide if you are stacking your layers horizontally or vertically, and set the distance between the items.

Here’s an example of how we’d create an atom component using stacks:

Creating a horizontal stack with 20px spacing between the text and icon.

And, now let’s apply the stacks concept to a more complex molecule component:

Creating a card molecule using nested stacks.

Creating symbols from stacks

We’ve talked about the many benefits of stacks, but we can make them even more efficient by applying Sketch’s symbol tool to them. The result we get is a stack that can be managed from one source instance and reused anywhere.

Creating an atom symbol

We can grab that call-to-action stack we just created and make it a symbol. Then, we can use overrides for the text and know that stacks will honor the spacing:

Creating a symbol from a stack is great for ensuring space consistency with overrides.

If I decide later that I want to change the space by a few pixels, I can just tweak the stack spacing on the symbol and have it update on every instance 🎉

Creating a molecule symbol

Similarly, I can group multiple stacked atoms into a component and make that into a symbol:

Creating a card symbol from our stacks and call-to-action symbol.

Symbols + stacks = 💪

Wouldn't it be nice if we could maintain the spacial requirements we set regardless of the tweaks we might bring to them down the line? Yes!

Replacing one item with another inside a component

Let's assume our card component now requires a button rather than a link as a call-to-action. As long as we place that button in the correct stack folder, all the pixel-nudging happens automagically:

Because our symbol uses stacks, the distance between the copy and the call-to-action will automatically be respected.

Editing molecules and organisms on the fly 🔥

You might be thinking that this isn't a huge deal and that adjusting the tiny spacial issue from the previous example would have taken just a moment without stacks. And you wouldn't be wrong. But let's refer back to our notions about atomic design for a moment, and ask what happens when we have far more complex "organisms" (groups of atoms and molecules) to deal with?

Similar to our mobile example from the beginning, this is where the built-in power of stacks really shines:

Stacks and symbols make experimentation cheap and consistent.

With a library of basic symbols (atoms) at our fingertips, and powerful spacing/alignment tools, we can experiment and create endless variations of components. The key is that we can do it quickly and without sacrificing design consistency.

Complex layouts and mega stacks

Keeping with the elements we have already designed, let's see what a layer stack might look like for a simple marketing page:

An example of an expanded layer stack.

Don't let the initial impression of complexity of those stacks scare you. A non-stacked version of this artboard would not look so different aside from the color of the folder icons.

However it's those very folders that give us all the power:

Layout experimentation can be fast and cheap!

We may not need to code, but we have a responsibility to master our tools for efficiency and figure out new workflows with our developer colleagues.

This means moving away from thinking of design in the context of pages, and creating collections of components… modules… ingredients… legos… figure out a metaphor that works for you and then make sure the whole team shares the vocabulary.

Once we do this, issues around workflow and collaboration melt away:

Speed and Flexibility

Carefully building components with symbols and using automated and consistent spacing/alignment with stacks does require a bit of time investment upfront. However, the return of easy experimentation and the ability to change course quickly and at low cost is well worth it.

Consistency and UX

Having to think about how elements work as combinations and in different contexts will catch UX-smells early and expose issues around consistency before you’re 13 screens in. Changing direction by adjusting a few variables/components/spacing units beats nudging elements around an artboard all day.

Responsibility and Governance

A single 1440px page view of the thing you are building simply does not provide a developer with enough context for multiple screens and interactions. At the same time, crafting multiple high fidelity comps one tiny element at a time is a budget buster (and this is particularly true of app designs). So, what tends to happen on small teams? The developer gets the one gorgeous 1440px view… aaaaand all the cognitive overhead of filling in the gaps for everything else.

Providing the details is our job.

"Atomic design gave us speed, creative freedom, and flexibility. It changed everything."

—From the foreword of Atomic Design

If we work with developers on digital products, we should be excited about learning how the sausage is made and adapt our approach to design accordingly. Our tools may not evolve quite as quickly as JavaScript frameworks, but if you haven't taken a peek under the hood of some of these apps in the last couple of years, this is a great time to dig in!

The post Consistent Design Systems in Sketch With Atomic Design and the Auto-Layout Plugin appeared first on CSS-Tricks.

Categories: Web Technologies

Online Schema Change for Tables with Triggers.

Planet MySQL - Sun, 03/11/2018 - 22:53

In this post, We will learn how to handle online schema change if the table has triggers.

In PXC, an alter can be made directly (TOI) on tables smaller than 1GB (by default), but on a 20GB or 200GB table we need some downtime to do it (RSU).

Pt-osc is a good choice for Percona Cluster/Galera. By default, Percona Toolkit's pt-online-schema-change will create AFTER "insert / update / delete" triggers to maintain the sync between the shadow table and the original table.

pt-online-schema-change process flow:

Check out the complete slides for effective MySQL administration here
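In outline, pt-osc creates a shadow copy of the table, installs triggers on the original so that concurrent writes are mirrored, copies the existing rows in chunks, and finally swaps the tables. A toy, in-memory simulation of that flow (purely illustrative Python; the real tool does all of this with SQL triggers and chunked INSERT … SELECT statements):

```python
# Toy in-memory simulation of the pt-online-schema-change flow:
# shadow table + sync triggers + chunked copy + atomic swap.
# Purely illustrative; the real tool works through SQL triggers.

original = {1: "alice", 2: "bob", 3: "carol"}   # pk -> row data
shadow = {}                                     # the "_new" table

# Step 2: "triggers" mirror live writes into the shadow table.
def trigger_write(pk, row):
    original[pk] = row
    shadow[pk] = row          # AFTER INSERT/UPDATE -> REPLACE INTO shadow

# Step 3: copy existing rows in chunks (INSERT IGNORE-style, so a row
# already written by a trigger is not clobbered by the bulk copy).
for chunk in [list(original)[i:i + 2] for i in range(0, len(original), 2)]:
    for pk in chunk:
        shadow.setdefault(pk, original[pk])
    trigger_write(4, "dave")  # a concurrent write arriving mid-copy

# Step 4: swap tables; the shadow becomes the live table.
original = shadow
print(sorted(original.items()))
# -> [(1, 'alice'), (2, 'bob'), (3, 'carol'), (4, 'dave')]
```

The point of the simulation: the concurrent write made during the copy survives the swap, because the trigger mirrored it into the shadow table.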

If the table already has triggers, pt-osc won't work in those cases. This was a limitation of online schema changes.

Up to MySQL 5.6, we cannot create multiple triggers for the same event and action time.

From Documentation:

There cannot be multiple triggers for a given table that have the same trigger event and action time. For example, you cannot have two BEFORE UPDATE triggers for a table. But you can have a BEFORE UPDATE and a BEFORE INSERT trigger, or a BEFORE UPDATE and an AFTER UPDATE trigger.

In this case, we will have to drop the triggers before starting the online schema change, and re-create them after the online schema change completes. In a production environment, it's a complex operation to perform and requires downtime.

On MySQL 5.6.32:

[root@mysql-5.6.32 ~]# pt-online-schema-change --version
pt-online-schema-change 3.0.6
[root@mysql-5.6.32 ~]# pt-online-schema-change D=mydbops,t=employees,h=localhost \
--user=root --alter "drop column test,add column test text" \
--no-version-check --execute
The table `mydbops`.`employees` has triggers. This tool needs to create its own triggers, so the table cannot already have triggers.

From MySQL 5.7.2, a table can hold multiple triggers.

From Documentation:

It is possible to define multiple triggers for a given table that have the same trigger event and action time. For example, you can have two BEFORE UPDATE triggers for a table.

The complete list of new features in MySQL 5.7 is here. This reduced the complexity of implementing pt-osc support for tables with triggers.

pt-online-schema-change v3.0.4, released on 2017-08-02, came with a new option, --preserve-triggers, which added a 5.7-only feature allowing pt-osc to handle the OSC operation even when the table has triggers.

We can find interesting discussions and implementation complexities in the following ticket https://jira.percona.com/browse/PT-91

Even Gh-ost won’t work for PXC without locking the table in MySQL 5.7. Issues

On MySQL 5.7.19:

[root@mysql-5.7.19 ~]# pt-online-schema-change --version
pt-online-schema-change 3.0.6
[root@mysql-5.7.19 ~]# pt-online-schema-change D=mydbops,t=employees,h=localhost \
--user=root --alter "drop column test,add column test text" \
--no-version-check --preserve-triggers --execute
Operation, tries, wait:
  analyze_table, 10, 1
  copy_rows, 10, 0.25
  create_triggers, 10, 1
  drop_triggers, 10, 1
  swap_tables, 10, 1
  update_foreign_keys, 10, 1
Altering `mydbops`.`employees`...
Creating new table...
Created new table mydbops._employees_new OK.
Altering new table...
Altered `mydbops`.`_employees_new` OK.
2018-03-02T07:27:35 Creating triggers...
2018-03-02T07:27:35 Created triggers OK.
2018-03-02T07:27:35 Copying approximately 10777 rows...
2018-03-02T07:27:35 Copied rows OK.
2018-03-02T07:27:35 Adding original triggers to new table.
2018-03-02T07:27:35 Analyzing new table...
2018-03-02T07:27:35 Swapping tables...
2018-03-02T07:27:36 Swapped original and new tables OK.
2018-03-02T07:27:36 Dropping old table...
2018-03-02T07:27:36 Dropped old table `mydbops`.`_employees_old` OK.
2018-03-02T07:27:36 Dropping triggers...
2018-03-02T07:27:36 Dropped triggers OK.
Successfully altered `mydbops`.`employees`.

--preserve-triggers: if this option is enabled, pt-online-schema-change will re-create all the existing triggers on the new table (mydbops._employees_new) after copying rows from the original table (mydbops.employees).

The output below is explained with PTDEBUG=1 enabled for better understanding.

1. pt-osc created a similar table and applied the modifications to it.

# Cxn:3953 2845 DBI::db=HASH(0x2db1260) Connected dbh to mysql-5.7.19 h=localhost
# TableParser:3265 2845 SHOW CREATE TABLE `mydbops`.`employees`
Creating new table...
# pt_online_schema_change:10392 2845 CREATE TABLE `mydbops`.`_employees_new` (
#   `employeeNumber` int(11) NOT NULL,
#   `lastName` varchar(50) DEFAULT NULL,
#   `firstName` varchar(50) DEFAULT NULL,
#   `extension` varchar(10) DEFAULT NULL,
#   `email` varchar(100) DEFAULT NULL,
#   `officeCode` varchar(10) DEFAULT NULL,
#   `reportsTo` int(11) DEFAULT NULL,
#   `jobTitle` varchar(50) DEFAULT NULL,
#   `test` text,
#   PRIMARY KEY (`employeeNumber`),
#   KEY `reportsTo` (`reportsTo`),
#   KEY `officeCode` (`officeCode`)
# ) ENGINE=InnoDB DEFAULT CHARSET=latin1
Created new table mydbops._employees_new OK.
Altering new table...
# pt_online_schema_change:9192 2845 ALTER TABLE `mydbops`.`_employees_new` drop column test,add column test text
Altered `mydbops`.`_employees_new` OK.

2. pt-osc created After [insert / update / delete] triggers to sync the upcoming data between the source table and new table.

# pt_online_schema_change:11058 2845 CREATE TRIGGER `pt_osc_mydbops_employees_del` AFTER DELETE ON `mydbops`.`employees` FOR EACH ROW DELETE IGNORE FROM `mydbops`.`_employees_new` WHERE `mydbops`.`_employees_new`.`employeenumber` <=> OLD.`employeenumber`
# pt_online_schema_change:11058 2845 CREATE TRIGGER `pt_osc_mydbops_employees_upd` AFTER UPDATE ON `mydbops`.`employees` FOR EACH ROW BEGIN DELETE IGNORE FROM `mydbops`.`_employees_new` WHERE !(OLD.`employeenumber` <=> NEW.`employeenumber`) AND `mydbops`.`_employees_new`.`employeenumber` <=> OLD.`employeenumber`;REPLACE INTO `mydbops`.`_employees_new` (`employeenumber`, `lastname`, `firstname`, `extension`, `email`, `officecode`, `reportsto`, `jobtitle`, `test`) VALUES (NEW.`employeenumber`, NEW.`lastname`, NEW.`firstname`, NEW.`extension`, NEW.`email`, NEW.`officecode`, NEW.`reportsto`, NEW.`jobtitle`, NEW.`test`);END
# pt_online_schema_change:11058 2845 CREATE TRIGGER `pt_osc_mydbops_employees_ins` AFTER INSERT ON `mydbops`.`employees` FOR EACH ROW REPLACE INTO `mydbops`.`_employees_new` (`employeenumber`, `lastname`, `firstname`, `extension`, `email`, `officecode`, `reportsto`, `jobtitle`, `test`) VALUES (NEW.`employeenumber`, NEW.`lastname`, NEW.`firstname`, NEW.`extension`, NEW.`email`, NEW.`officecode`, NEW.`reportsto`, NEW.`jobtitle`, NEW.`test`)
2018-03-02T05:56:46 Created triggers OK.

3. pt-osc copied the existing records.

# pt_online_schema_change:11332 2845 INSERT LOW_PRIORITY IGNORE INTO `mydbops`.`_employees_new` (`employeenumber`, `lastname`, `firstname`, `extension`, `email`, `officecode`, `reportsto`, `jobtitle`, `test`) SELECT `employeenumber`, `lastname`, `firstname`, `extension`, `email`, `officecode`, `reportsto`, `jobtitle`, `test` FROM `mydbops`.`employees` FORCE INDEX(`PRIMARY`) WHERE ((`employeenumber` >= ?)) AND ((`employeenumber` <= ?)) LOCK IN SHARE MODE /*pt-online-schema-change 2845 copy nibble*/
lower boundary: 0
upper boundary: 2335

4. After the existing rows were copied, pt-osc created the triggers present on the source table on the newly created table (mydbops._employees_new).

# pt_online_schema_change:9795 2845 CREATE DEFINER=`root`@`%` TRIGGER `mydbops`.`mydbops_employee_update` BEFORE UPDATE ON _employees_new
# FOR EACH ROW
# BEGIN
#   INSERT INTO employees_audit
#   SET action = 'update',
#       employeeNumber = OLD.employeeNumber,
#       lastname = OLD.lastname,
#       changedat = NOW();
# END

5. Swapping the tables and dropping the triggers.

2018-03-02T05:56:47 Analyzing new table...
# pt_online_schema_change:10465 2836 ANALYZE TABLE `mydbops`.`_employees_new` /* pt-online-schema-change */
2018-03-02T05:56:47 Swapping tables...
# pt_online_schema_change:10503 2836 RENAME TABLE `mydbops`.`employees` TO `mydbops`.`_employees_old`, `mydbops`.`_employees_new` TO `mydbops`.`employees`
2018-03-02T05:56:47 Swapped original and new tables OK.
2018-03-02T05:56:47 Dropping old table...
# pt_online_schema_change:9937 2845 DROP TABLE IF EXISTS `mydbops`.`_employees_old`
2018-03-02T05:56:47 Dropped old table `mydbops`.`_employees_old` OK.
2018-03-02T05:56:47 Dropping triggers...
# pt_online_schema_change:11182 2845 DROP TRIGGER IF EXISTS `mydbops`.`pt_osc_mydbops_employees_del`
# pt_online_schema_change:11182 2845 DROP TRIGGER IF EXISTS `mydbops`.`pt_osc_mydbops_employees_upd`
# pt_online_schema_change:11182 2845 DROP TRIGGER IF EXISTS `mydbops`.`pt_osc_mydbops_employees_ins`
2018-03-02T05:56:47 Dropped triggers OK.
Successfully altered `mydbops`.`employees`.

I hope this gives you a better idea about --preserve-triggers.

Key Takeaways:

  • Up to MySQL 5.6, the only way to alter a table with pt-osc is to drop the existing triggers and re-create them after the alter is done.
  • From MySQL 5.7, we can use the --preserve-triggers option of pt-osc for seamless schema changes even though triggers are present on our table.

It gives us one more reason to recommend a MySQL 5.7 upgrade. I also feel pt-osc could be given support for tables with BEFORE triggers, at least for MySQL versions up to 5.6.


Categories: Web Technologies

External Tables + Merge

Planet MySQL - Sun, 03/11/2018 - 20:16

This is an example of how you would upload data from a flat file, or Comma Separated Value (CSV) file. It's important to note that the uploaded file carries no surrogate key values; those are resolved by leveraging joins inside a MERGE statement.

Step #1 : Create a virtual directory

You can create a virtual directory without a physical directory but it won’t work when you try to access it. Therefore, you should create the physical directory first. Assuming you’ve created a /u01/app/oracle/upload file directory on the Windows platform, you can then create a virtual directory and grant permissions to the student user as the SYS privileged user.

The syntax for these steps is:

CREATE DIRECTORY upload AS '/u01/app/oracle/upload';
GRANT READ, WRITE ON DIRECTORY upload TO student;

Step #2 : Position your CSV file in the physical directory

After creating the virtual directory, copy the following contents into a file named kingdom_import.csv in the /u01/app/oracle/upload directory or folder. If you attempt to do this in Windows, you need to disable Windows UAC before performing this step.

Place the following in the kingdom_import.csv file. The trailing commas aren't too meaningful in Oracle but they're very helpful if you use the file in MySQL. A key element in creating this file is avoiding trailing line returns at the bottom of the file because they're inserted as null values. There should be no lines after the last row of data.

'Narnia',77600,'Peter the Magnificent','20-MAR-1272','19-JUN-1292',
'Narnia',77600,'Edmund the Just','20-MAR-1272','19-JUN-1292',
'Narnia',77600,'Susan the Gentle','20-MAR-1272','19-JUN-1292',
'Narnia',77600,'Lucy the Valiant','20-MAR-1272','19-JUN-1292',
'Narnia',42100,'Peter the Magnificent','12-APR-1531','31-MAY-1531',
'Narnia',42100,'Edmund the Just','12-APR-1531','31-MAY-1531',
'Narnia',42100,'Susan the Gentle','12-APR-1531','31-MAY-1531',
'Narnia',42100,'Lucy the Valiant','12-APR-1531','31-MAY-1531',
'Camelot',15200,'King Arthur','10-MAR-0631','12-DEC-0686',
'Camelot',15200,'Sir Lionel','10-MAR-0631','12-DEC-0686',
'Camelot',15200,'Sir Bors','10-MAR-0631','12-DEC-0635',
'Camelot',15200,'Sir Bors','10-MAR-0640','12-DEC-0686',
'Camelot',15200,'Sir Galahad','10-MAR-0631','12-DEC-0686',
'Camelot',15200,'Sir Gawain','10-MAR-0631','12-DEC-0686',
'Camelot',15200,'Sir Tristram','10-MAR-0631','12-DEC-0686',
'Camelot',15200,'Sir Percival','10-MAR-0631','12-DEC-0686',
'Camelot',15200,'Sir Lancelot','30-SEP-0670','12-DEC-0682',
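Before handing the file to Oracle, you can sanity-check the format with a few lines of Python (a hypothetical pre-flight check, not part of the original workflow); note how the single-quote quotechar and the trailing comma behave:

```python
import csv
import io

# Hypothetical pre-flight check for kingdom_import.csv: single-quoted
# fields, comma-delimited, with a trailing comma producing an empty field.
sample = "'Narnia',77600,'Peter the Magnificent','20-MAR-1272','19-JUN-1292',\n"

rows = list(csv.reader(io.StringIO(sample), quotechar="'"))
for row in rows:
    # Drop the empty trailing field created by the trailing comma,
    # then verify each record carries exactly five data columns.
    fields = row[:-1] if row and row[-1] == "" else row
    assert len(fields) == 5, f"bad column count: {fields}"
print(rows[0])
```

This mirrors what Oracle's access parameters (FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY "'") will do at load time.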

Step #3 : Reconnect as the student user

Disconnect and connect as the student user, or reconnect as the student user. The reconnect syntax that protects your password is:

CONNECT student@xe

Step #4 : Run the script that creates tables and sequences

Copy the following into a create_kingdom_upload.sql file within a directory of your choice. Then, run it as the student account.

-- Conditionally drop tables and sequences.
BEGIN
  FOR i IN (SELECT table_name
            FROM   user_tables
            WHERE  table_name IN ('KINGDOM','KNIGHT','KINGDOM_KNIGHT_IMPORT')) LOOP
    EXECUTE IMMEDIATE 'DROP TABLE '||i.table_name||' CASCADE CONSTRAINTS';
  END LOOP;
  FOR i IN (SELECT sequence_name
            FROM   user_sequences
            WHERE  sequence_name IN ('KINGDOM_S1','KNIGHT_S1')) LOOP
    EXECUTE IMMEDIATE 'DROP SEQUENCE '||i.sequence_name;
  END LOOP;
END;
/

-- Create normalized kingdom table.
CREATE TABLE kingdom
( kingdom_id    NUMBER
, kingdom_name  VARCHAR2(20)
, population    NUMBER);

-- Create a sequence for the kingdom table.
CREATE SEQUENCE kingdom_s1;

-- Create normalized knight table.
CREATE TABLE knight
( knight_id              NUMBER
, knight_name            VARCHAR2(24)
, kingdom_allegiance_id  NUMBER
, allegiance_start_date  DATE
, allegiance_end_date    DATE);

-- Create a sequence for the knight table.
CREATE SEQUENCE knight_s1;

-- Create external import table.
CREATE TABLE kingdom_knight_import
( kingdom_name           VARCHAR2(20)
, population             NUMBER
, knight_name            VARCHAR2(24)
, allegiance_start_date  DATE
, allegiance_end_date    DATE)
  ORGANIZATION EXTERNAL
  ( TYPE oracle_loader
    DEFAULT DIRECTORY upload
    ACCESS PARAMETERS
    ( RECORDS DELIMITED BY NEWLINE CHARACTERSET US7ASCII
      BADFILE     'UPLOAD':'kingdom_import.bad'
      DISCARDFILE 'UPLOAD':'kingdom_import.dis'
      LOGFILE     'UPLOAD':'kingdom_import.log'
      FIELDS TERMINATED BY ','
      OPTIONALLY ENCLOSED BY "'"
      MISSING FIELD VALUES ARE NULL )
    LOCATION ('kingdom_import.csv'))
REJECT LIMIT UNLIMITED;

Step #5 : Test your access to the external table

There are a number of things that could go wrong when setting up an external table, such as file permissions. Before moving on to the balance of the steps, you should test what you've done. Run the following query from the student account to check whether or not you can access the kingdom_import.csv file.

COL kingdom_name FORMAT A8 HEADING "Kingdom|Name"
COL population   FORMAT 99999999 HEADING "Population"
COL knight_name  FORMAT A30 HEADING "Knight Name"
SELECT kingdom_name
,      population
,      knight_name
,      TO_CHAR(allegiance_start_date,'DD-MON-YYYY') AS allegiance_start_date
,      TO_CHAR(allegiance_end_date,'DD-MON-YYYY') AS allegiance_end_date
FROM   kingdom_knight_import;

Step #6 : Create the upload procedure

Copy the following into a create_upload_procedure.sql file within a directory of your choice. Then, run it as the student account.

-- Create a procedure to wrap the transaction.
CREATE OR REPLACE PROCEDURE upload_kingdom IS
BEGIN
  -- Set save point for an all or nothing transaction.
  SAVEPOINT starting_point;

  -- Insert or update the table, which makes this rerunnable when the file hasn't been updated.
  MERGE INTO kingdom target
  USING (SELECT DISTINCT
                k.kingdom_id
         ,      kki.kingdom_name
         ,      kki.population
         FROM   kingdom_knight_import kki LEFT JOIN kingdom k
         ON     kki.kingdom_name = k.kingdom_name
         AND    kki.population = k.population) source
  ON (target.kingdom_id = source.kingdom_id)
  WHEN MATCHED THEN
    UPDATE SET kingdom_name = source.kingdom_name
  WHEN NOT MATCHED THEN
    INSERT VALUES
    ( kingdom_s1.nextval
    , source.kingdom_name
    , source.population);

  -- Insert or update the table, which makes this rerunnable when the file hasn't been updated.
  MERGE INTO knight target
  USING (SELECT kn.knight_id
         ,      kki.knight_name
         ,      k.kingdom_id
         ,      kki.allegiance_start_date AS start_date
         ,      kki.allegiance_end_date AS end_date
         FROM   kingdom_knight_import kki INNER JOIN kingdom k
         ON     kki.kingdom_name = k.kingdom_name
         AND    kki.population = k.population LEFT JOIN knight kn
         ON     k.kingdom_id = kn.kingdom_allegiance_id
         AND    kki.knight_name = kn.knight_name
         AND    kki.allegiance_start_date = kn.allegiance_start_date
         AND    kki.allegiance_end_date = kn.allegiance_end_date) source
  ON (target.kingdom_allegiance_id = source.kingdom_id)
  WHEN MATCHED THEN
    UPDATE SET allegiance_start_date = source.start_date
    ,          allegiance_end_date = source.end_date
  WHEN NOT MATCHED THEN
    INSERT VALUES
    ( knight_s1.nextval
    , source.knight_name
    , source.kingdom_id
    , source.start_date
    , source.end_date);

  -- Save the changes.
  COMMIT;

EXCEPTION
  WHEN OTHERS THEN
    ROLLBACK TO starting_point;
    RETURN;
END;
/
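The surrogate-key resolution inside the first MERGE (match on the natural key, insert with the next sequence value when unmatched) can be modeled in a few lines of Python to make the idea explicit. This is a toy model under my own naming, not a replacement for the procedure:

```python
import itertools

# Toy model of the MERGE in upload_kingdom: resolve natural keys
# (kingdom_name, population) to surrogate kingdom_id values, inserting
# new rows with the next sequence value. Illustrative only.

kingdom_s1 = itertools.count(1)          # stands in for the Oracle sequence
kingdom = {}                             # (name, population) -> kingdom_id

def merge_kingdom(import_rows):
    for name, population in import_rows:
        key = (name, population)
        if key not in kingdom:           # WHEN NOT MATCHED THEN INSERT
            kingdom[key] = next(kingdom_s1)
        # WHEN MATCHED: nothing to change in this toy model

imports = [("Narnia", 77600), ("Narnia", 42100), ("Camelot", 15200),
           ("Narnia", 77600)]            # duplicates resolve to one id
merge_kingdom(imports)
print(kingdom)
```

Re-running merge_kingdom with the same input changes nothing, which mirrors why the procedure is safely rerunnable when the file hasn't been updated.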

Step #7 : Run the upload procedure

You can perform the upload by calling the stored procedure created by the script. The procedure ensures that records are inserted or updated in their respective tables.

EXECUTE upload_kingdom;
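Note that the procedure's WHEN OTHERS handler rolls back and returns silently, so a failed run produces no error message at the SQL*Plus prompt. A quick sanity check is to count the rows afterward; the expected counts below are an assumption based on the sample output shown in Step #8 (3 kingdoms and 17 knights):

```sql
-- A nonzero count in both tables indicates the upload ran.
SELECT COUNT(*) AS kingdom_rows FROM kingdom;
SELECT COUNT(*) AS knight_rows  FROM knight;
```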

Step #8 : Test the results of the upload procedure

You can verify that the upload worked by running the following queries.

-- Check the kingdom table.
SELECT * FROM kingdom;

-- Format Oracle output.
COLUMN knight_id             FORMAT 999 HEADING "Knight|ID #"
COLUMN knight_name           FORMAT A23 HEADING "Knight Name"
COLUMN kingdom_allegiance_id FORMAT 999 HEADING "Kingdom|Allegiance|ID #"
COLUMN allegiance_start_date FORMAT A11 HEADING "Allegiance|Start Date"
COLUMN allegiance_end_date   FORMAT A11 HEADING "Allegiance|End Date"
SET PAGESIZE 999

-- Check the knight table.
SELECT knight_id
,      knight_name
,      kingdom_allegiance_id
,      TO_CHAR(allegiance_start_date,'DD-MON-YYYY') AS allegiance_start_date
,      TO_CHAR(allegiance_end_date,'DD-MON-YYYY') AS allegiance_end_date
FROM   knight;

It should display the following information:

KINGDOM_ID KINGDOM_NAME         POPULATION
---------- -------------------- ----------
         1 Narnia                    42100
         2 Narnia                    77600
         3 Camelot                   15200

                                   Kingdom
Knight                          Allegiance Allegiance  Allegiance
  ID # Knight Name                    ID # Start Date  End Date
------ ----------------------- ---------- ----------- -----------
     1 Peter the Magnificent            2 20-MAR-1272 19-JUN-1292
     2 Edmund the Just                  2 20-MAR-1272 19-JUN-1292
     3 Susan the Gentle                 2 20-MAR-1272 19-JUN-1292
     4 Lucy the Valiant                 2 20-MAR-1272 19-JUN-1292
     5 Peter the Magnificent            1 12-APR-1531 31-MAY-1531
     6 Edmund the Just                  1 12-APR-1531 31-MAY-1531
     7 Susan the Gentle                 1 12-APR-1531 31-MAY-1531
     8 Lucy the Valiant                 1 12-APR-1531 31-MAY-1531
     9 King Arthur                      3 10-MAR-0631 12-DEC-0686
    10 Sir Lionel                       3 10-MAR-0631 12-DEC-0686
    11 Sir Bors                         3 10-MAR-0631 12-DEC-0635
    12 Sir Bors                         3 10-MAR-0640 12-DEC-0686
    13 Sir Galahad                      3 10-MAR-0631 12-DEC-0686
    14 Sir Gawain                       3 10-MAR-0631 12-DEC-0686
    15 Sir Tristram                     3 10-MAR-0631 12-DEC-0686
    16 Sir Percival                     3 10-MAR-0631 12-DEC-0686
    17 Sir Lancelot                     3 30-SEP-0670 12-DEC-0682

You can rerun the procedure to verify that it doesn't alter any existing rows, and then add a new knight to the import file to test the insertion logic.
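A minimal rerun test, assuming the import file is unchanged, is to capture the row counts, call the procedure again, and confirm the counts are identical:

```sql
-- Counts before the rerun.
SELECT COUNT(*) AS kingdom_rows FROM kingdom;
SELECT COUNT(*) AS knight_rows  FROM knight;

-- Rerun the upload; a second pass over the same file should only update, never insert.
EXECUTE upload_kingdom;

-- Counts after the rerun should match the counts before it.
SELECT COUNT(*) AS kingdom_rows FROM kingdom;
SELECT COUNT(*) AS knight_rows  FROM knight;
</imports>
```

Because the MERGE statements match existing rows on their keys, a rerun takes the WHEN MATCHED branch and leaves the row counts unchanged.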
