emGee Software Solutions Custom Database Applications



PHP 7.1.21 Released - PHP: Hypertext Preprocessor

Planet PHP - Thu, 08/16/2018 - 17:00
The PHP development team announces the immediate availability of PHP 7.1.21. This is a bugfix release. All PHP 7.1 users are encouraged to upgrade to this version. For source downloads of PHP 7.1.21, please visit our downloads page; Windows source and binaries can be found on windows.php.net/download/. The list of changes is recorded in the ChangeLog.
Categories: Web Technologies

Working with refs in React

CSS-Tricks - Thu, 08/16/2018 - 14:09

Refs make it possible to access DOM nodes directly within React. This comes in handy in situations where, just as one example, you want to change the child of a component. Let’s say you want to change the value of an <input> element, but without using props or re-rendering the whole component.

That’s the sort of thing refs are good for and what we’ll be digging into in this post.

How to create a ref

createRef() is a new API that shipped with React 16.3. You can create a ref by calling React.createRef() and attaching a React element to it using the ref attribute on the element.

class Example extends React.Component {
  constructor(props) {
    super(props)
    // Create the ref
    this.exampleRef = React.createRef()
  }

  render() {
    return (
      <div>
        {/* Attach the ref with the `ref` attribute */}
        <input type="text" ref={this.exampleRef} />
      </div>
    )
  }
}

We can "refer" to the DOM node of the ref created in the render method through the ref's current property. From the example above, that would be this.exampleRef.current.
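Under the hood, the object React.createRef() returns is just a plain container with a single mutable current property, which starts out as null until React attaches the node. A sketch of that shape in plain JavaScript (this createRef is an illustrative stand-in, not React's source):

```javascript
// Minimal sketch of the shape React.createRef() returns:
// a plain object whose `current` property React fills in on mount.
function createRef() {
  return { current: null };
}

const exampleRef = createRef();
console.log(exampleRef.current); // null until React attaches the DOM node

// On mount, React effectively does something like this:
const fakeInputNode = { value: "hello" }; // stand-in for a real <input> node
exampleRef.current = fakeInputNode;

console.log(exampleRef.current.value); // "hello"
```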

Here’s an example:

See the Pen React Ref - createRef by Kingsley Silas Chijioke (@kinsomicrote) on CodePen.

class App extends React.Component {
  constructor(props) {
    super(props)
    // Create the ref
    this.textInput = React.createRef();
    this.state = {
      value: ''
    }
  }

  // Set the state to the ref's value
  handleSubmit = e => {
    e.preventDefault();
    this.setState({ value: this.textInput.current.value})
  };

  render() {
    return (
      <div>
        <h1>React Ref - createRef</h1>
        {/* This is what will update */}
        <h3>Value: {this.state.value}</h3>
        <form onSubmit={this.handleSubmit}>
          {/* Attach the ref to <input> so we can use it to update the <h3> value */}
          <input type="text" ref={this.textInput} />
          <button>Submit</button>
        </form>
      </div>
    );
  }
}

How a conversation between a child component and an element containing the ref might go down.

This is a component that renders some text, an input field and a button. The ref is created in the constructor and then attached to the input element when it renders. When the button is clicked, the value submitted from the input element (which has the ref attached) is used to update the state of the text (contained in an H3 tag). We make use of this.textInput.current.value to access the value and the new state is then rendered to the screen.

Passing a callback function to ref

React allows you to create a ref by passing a callback function to the ref attribute of a component. Here is how it looks:

<input type="text" ref={element => this.textInput = element} />

The callback is used to store a reference to the DOM node in an instance property. When we want to make use of this reference, we access it with this.textInput.
Let's see how that looks in the same example we used before.

See the Pen React Ref - Callback Ref by Kingsley Silas Chijioke (@kinsomicrote) on CodePen.

class App extends React.Component {
  state = {
    value: ''
  }

  handleSubmit = e => {
    e.preventDefault();
    this.setState({ value: this.textInput.value})
  };

  render() {
    return (
      <div>
        <h1>React Ref - Callback Ref</h1>
        <h3>Value: {this.state.value}</h3>
        <form onSubmit={this.handleSubmit}>
          <input type="text" ref={element => this.textInput = element} />
          <button>Submit</button>
        </form>
      </div>
    );
  }
}

When you use a callback like we did above, React calls the ref callback with the DOM node when the component mounts; when the component unmounts, it calls it with null.
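That contract can be sketched outside of React. The mount and unmount helpers below are illustrative stand-ins for what React does internally, not real React APIs:

```javascript
// Sketch of the callback-ref contract: React invokes the callback with the
// DOM node on mount and with null on unmount. `mount`/`unmount` model that.
function mount(refCallback, node) {
  refCallback(node); // on mount: called with the DOM node
}

function unmount(refCallback) {
  refCallback(null); // on unmount: called with null
}

const instance = {};
const fakeNode = { tagName: "INPUT" }; // stand-in for a real DOM node

mount(el => (instance.textInput = el), fakeNode);
console.log(instance.textInput === fakeNode); // true

unmount(el => (instance.textInput = el));
console.log(instance.textInput); // null
```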

It is also possible to pass a ref from a parent component to a child component using callbacks.

See the Pen React Ref - Callback Ref 2 by Kingsley Silas Chijioke (@kinsomicrote) on CodePen.

Let’s create our "dumb" component that will render a simple input:

const Input = props => {
  return (
    <div>
      <input type="text" ref={props.inputRef} />
    </div>
  );
};

This component expects an inputRef prop from its parent component, which it then uses to create a ref to its DOM node.

Here’s the parent component:

class App extends React.Component {
  state = {
    value: ''
  };

  handleSubmit = event => {
    this.setState({ value: this.inputElement.value });
  };

  render() {
    return (
      <div>
        <h1>React Ref - Callback Ref</h1>
        <h3>Value: {this.state.value}</h3>
        <Input inputRef={el => (this.inputElement = el)} />
        <button onClick={this.handleSubmit}>Submit</button>
      </div>
    );
  }
}

In the App component, we want to obtain the text that is entered in the input field (which is in the child component) so we can render it. The ref is created using a callback like we did in the first example of this section. The key lies in how we access the DOM of the input element in the Input component from the App component. If you look closely, we access it using this.inputElement. So, when updating the state of value in the App component, we get the text that was entered in the input field using this.inputElement.value.

The ref attribute as a string

This is the old way of creating a ref and it will likely be removed in a future release because of some issues associated with it. The React team advises against using it, going so far as to label it as "legacy" in the documentation. We’re including it here anyway because there’s a chance you could come across it in a codebase.

See the Pen React Ref - String Ref by Kingsley Silas Chijioke (@kinsomicrote) on CodePen.

Going back to our example of an input whose value is used to update text value on submit:

class App extends React.Component {
  state = {
    value: ''
  }

  handleSubmit = e => {
    e.preventDefault();
    this.setState({ value: this.refs.textInput.value})
  };

  render() {
    return (
      <div>
        <h1>React Ref - String Ref</h1>
        <h3>Value: {this.state.value}</h3>
        <form onSubmit={this.handleSubmit}>
          <input type="text" ref="textInput" />
          <button>Submit</button>
        </form>
      </div>
    );
  }
}

The component is initialized and we start with a default state value set to an empty string (value: ''). The component renders the text and form as usual and, like before, the H3 text updates its state when the form is submitted with the contents entered in the input field.

We created a ref by setting the ref prop of the input field to textInput. That gives us access to the value of the input in the handleSubmit() method using this.refs.textInput.value.

Forwarding a ref from one component to another

Ref forwarding is the technique of passing a ref from a component to a child component by making use of the React.forwardRef() method.

See the Pen React Ref - forward Ref by Kingsley Silas Chijioke (@kinsomicrote) on CodePen.

Back to our running example of an input field that updates the value of text when submitted:

class App extends React.Component {
  constructor(props) {
    super(props)
    this.inputRef = React.createRef();
    this.state = {
      value: ''
    }
  }

  handleSubmit = e => {
    e.preventDefault();
    this.setState({ value: this.inputRef.current.value})
  };

  render() {
    return (
      <div>
        <h1>React Ref - createRef</h1>
        <h3>Value: {this.state.value}</h3>
        <form onSubmit={this.handleSubmit}>
          <Input ref={this.inputRef} />
          <button>Submit</button>
        </form>
      </div>
    );
  }
}

We’ve created the ref in this example with inputRef, which we want to pass to the child component as a ref attribute that we can use to update the state of our text.

const Input = React.forwardRef((props, ref) => (
  <input type="text" ref={ref} />
));

Here is an alternative way to do it by defining the ref outside of the App component:

const Input = React.forwardRef((props, ref) => (
  <input type="text" ref={ref} />
));

const inputRef = React.createRef();

class App extends React.Component {
  constructor(props) {
    super(props)
    this.state = {
      value: ''
    }
  }

  handleSubmit = e => {
    e.preventDefault();
    this.setState({ value: inputRef.current.value})
  };

  render() {
    return (
      <div>
        <h1>React Ref - createRef</h1>
        <h3>Value: {this.state.value}</h3>
        <form onSubmit={this.handleSubmit}>
          <Input ref={inputRef} />
          <button>Submit</button>
        </form>
      </div>
    );
  }
}

Using ref for form validation

We all know that form validation is super difficult, but it's something React is well-suited for. You know, things like making sure a form cannot be submitted with an empty input value. Or requiring a password with at least six characters. Refs can come in handy for these types of situations.

See the Pen React ref Pen - validation by Kingsley Silas Chijioke (@kinsomicrote) on CodePen.

class App extends React.Component {
  constructor(props) {
    super(props);
    this.username = React.createRef();
    this.password = React.createRef();
    this.state = {
      errors: []
    };
  }

  handleSubmit = (event) => {
    event.preventDefault();
    const username = this.username.current.value;
    const password = this.password.current.value;
    const errors = this.handleValidation(username, password);
    if (errors.length > 0) {
      this.setState({ errors });
      return;
    }
    // Submit data
  };

  handleValidation = (username, password) => {
    const errors = [];
    // Require username to have a value on submit
    if (username.length === 0) {
      errors.push("Username cannot be empty");
    }
    // Require at least six characters for the password
    if (password.length < 6) {
      errors.push("Password should be at least 6 characters long");
    }
    // Return any error messages
    return errors;
  };

  render() {
    const { errors } = this.state;
    return (
      <div>
        <h1>React Ref Example</h1>
        <form onSubmit={this.handleSubmit}>
          {/* If requirements are not met, display the errors */}
          {errors.map(error => <p key={error}>{error}</p>)}
          <div>
            <label>Username:</label>
            {/* Input for username containing the ref */}
            <input type="text" ref={this.username} />
          </div>
          <div>
            <label>Password:</label>
            {/* Input for password containing the ref */}
            <input type="text" ref={this.password} />
          </div>
          <div>
            <button>Submit</button>
          </div>
        </form>
      </div>
    );
  }
}

We used createRef() to create refs for the inputs, which we then passed as parameters to the validation method. We populate the errors array when either of the inputs has an error, and then display the errors to the user.
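Because the validation rules themselves are plain JavaScript, they can be pulled out of the component and exercised on their own. A sketch (validate is a hypothetical standalone name for the handleValidation logic above):

```javascript
// The same rules as handleValidation above, extracted from the component
// so they can run (and be tested) without React.
function validate(username, password) {
  const errors = [];
  // Require username to have a value on submit
  if (username.length === 0) {
    errors.push("Username cannot be empty");
  }
  // Require at least six characters for the password
  if (password.length < 6) {
    errors.push("Password should be at least 6 characters long");
  }
  return errors;
}

console.log(validate("", "123"));
// ["Username cannot be empty", "Password should be at least 6 characters long"]
console.log(validate("kingsley", "secret-pass")); // []
```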

That’s a ref... er, a wrap!

Hopefully this walkthrough gives you a good understanding of how powerful refs can be. They’re an excellent way to update part of a component without the need to re-render the entire thing. That’s convenient for writing leaner code and getting better performance.

At the same time, it’s worth heeding the advice of the React docs themselves and avoid using ref too much:

Your first inclination may be to use refs to "make things happen" in your app. If this is the case, take a moment and think more critically about where state should be owned in the component hierarchy. Often, it becomes clear that the proper place to "own" that state is at a higher level in the hierarchy.

Get it? Got it? Good.

The post Working with refs in React appeared first on CSS-Tricks.

Categories: Web Technologies

Building Battleship in CSS

CSS-Tricks - Thu, 08/16/2018 - 06:55

This is an experiment to see how far into an interactive experience I can get using only CSS. What better project to attempt than a game? Battleship seemed like a good challenge and a step up from the CSS games I’ve seen so far because it has the complexity of multiple areas that have to interact with two players.

Wanna see the complete game?

View Repo View Demo

Oh, you wanna learn how it works? Let’s dig in.

I could tell right away there was going to be a lot of repetitive HTML and very long CSS selectors coming, so I set up Pug to compile HTML and Less to compile CSS. This is what all the code from here on is going to be written in.

Interactive elements in CSS

In order to get the game mechanics working, we need some interactive elements. We’re going to walk through each one.

HTML checkboxes and :checked

Battleship involves a lot of checking to see whether a field contains a ship or not, so we’re going to use a boatload of checkboxes.

[type*='checkbox'] {
  // inactive style

  &:checked {
    // active style
  }
}

To style checkboxes, we would first need to reset them using appearance: none; which is only poorly supported right now and needs browser prefixes. The better solution here is to add helper elements. <input> tags can’t have children, including pseudo elements (even though Chrome will render them anyway), so we need to work around that using the adjacent sibling selector.

[type*='checkbox'] {
  position: relative;
  opacity: 0;

  + .check-helper {
    position: absolute;
    top: 0;
    left: 0;
    pointer-events: none;
    // further inactive styles
  }

  &:checked {
    + .check-helper {
      // active styles
    }
  }
}

If you use a <label> for the helper element, you will also extend the click area of the checkbox onto the helper, allowing you to position it more freely. Also, you can use multiple labels for the same checkbox. Multiple checkboxes for the same label are not supported, however, since you would have to assign the same ID for each checkbox.


IDs and :target

We’re making a local multiplayer game, so we need to hide one player’s battlefield from the other, and we need a pause mode allowing a player to switch without glancing at the other player's ships. A start screen explaining the rules would also be nice.

HTML already gives us the option to link to a given ID in the document. Using :target we can select the element that we just jumped to. That allows us to create a Single Page Application-like behavior in a completely static document (and without even breaking the back button).
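To make the selector logic concrete before the markup: a hedged plain-JavaScript model of which screen :target would reveal for a given URL hash (visibleScreens is an illustrative name, not a real API):

```javascript
// Model of the :target mechanics: every .screen is hidden by default,
// and only the one whose id matches the URL hash is revealed.
const screens = ["screen1", "screen2", "screen3"];

function visibleScreens(hash) {
  // `.screen { display: none; }` hides everything;
  // `.screen:target { display: block; }` reveals the match.
  return screens.filter(id => `#${id}` === hash);
}

console.log(visibleScreens("#screen2")); // ["screen2"]
console.log(visibleScreens("#nope")); // []
```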

- var screens = ['screen1', 'screen2', 'screen3'];

body
  nav
    each screen in screens
      a(href='#' + screen)
  each screen in screens
    .screen(id=screen)
      p #{screen}

.screen {
  display: none;

  &:target {
    display: block;
  }
}

Visibility and pointer events

Rendering elements inactive is usually done with pointer-events: none;. The cool thing about pointer-events is that you can reverse it on child elements: only the selected child stays clickable while the parent remains click-through. This will come in handy later in combination with checkbox helper elements.

The same goes for visibility: hidden;. While display: none; and opacity: 0; make the element and all of its children disappear, visibility can be reversed.

Note that a hidden visibility also disables any pointer events, unlike opacity: 0;, but stays in the document flow, unlike display: none;.

.foo {
  display: none; // invisible and unclickable

  .bar {
    display: block; // invisible and unclickable
  }
}

.foo {
  visibility: hidden; // invisible and unclickable

  .bar {
    visibility: visible; // visible and clickable
  }
}

.foo {
  opacity: 0;
  pointer-events: none; // invisible and unclickable

  .bar {
    opacity: 1;
    pointer-events: all; // still invisible, but clickable
  }
}

CSS Rule                              Reversible opacity   Reversible pointer events
display: none;                        ❌                   ❌
visibility: hidden;                   ✅                   ✅
opacity: 0; pointer-events: none;     ❌                   ✅

OK, now that we’ve established the strategy for our interactive elements, let’s turn to the setup of the game itself.

Setting up

We have some global static variables and the size of our battlefields to define before we actually start:

@gridSize: 12;

@zSea: 1;
@zShips: 1000;
@zAbove: 2000;

@seaColor: #123;
@enemyColor: #f0a;
@playerColor: #0c8;
@hitColor: #f27;

body {
  --grid-measurements: 70vw;

  @media (min-aspect-ratio: 1/2) {
    --grid-measurements: 35vh;
  }
}

The grid size is the size of the battlefield: 12x12 fields in this case. Next, we define some z-indexes and colors.

Here’s the Pug skeleton:

doctype html
head
  title Ships!
  link(rel="stylesheet", href="style.css")
  meta(charset="UTF-8")
  meta(name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no")
  meta(name="theme-color" content="#000000")
body

Everything HTML from this point on will be in the body.

Implementing the states

We need to build the states for Player 1, Player 2, pause, and a start screen. We’ll do this like it was explained above with target selectors. Here’s a little sketch of what we’re trying to achieve:

We have a few modes, each in its own container with an ID. Only one mode is to be displayed in the viewport—the others are hidden via display: none;, except for player modes. If one player is active, the other needs to be outside of the viewport, but still have pointer events so the players can interact with each other.

.mode#pause
each party in ['p1', 'p2']
  .mode(id=party)
.mode#start
.status
  each party in ['p1', 'p2']
    a.player-link.switch(href='#' + party)
  a.status-link.playpause(href='#pause') End Turn
h1 Ships!

The .status div contains the main navigation. Its entries will change depending on the active mode, so in order to select it properly, we’ll need to put it after our .mode elements. The same goes for the <h1>, so it ends up at the end of the document (don’t tell the SEO people).

.mode {
  opacity: 0;
  pointer-events: none;

  &:target,
  &#start {
    opacity: 1;
    pointer-events: all;
    z-index: 1;
  }

  &#p1,
  &#p2 {
    position: absolute;
    transform: translateX(0);
    opacity: 1;
    z-index: 2;
  }

  &#p1:target {
    transform: translateX(50vw);

    + #p2 {
      transform: translateX(50vw);
      z-index: 2;
    }
  }

  &#p2 {
    transform: translateX(50vw);
    z-index: 1;
  }

  &#pause:target {
    ~ #p1,
    ~ #p2 {
      opacity: 0;
    }
  }
}

#start {
  .mode:target ~ & {
    display: none;
  }
}

The .mode div never has pointer events and always is fully transparent (read: inactive), except for the start screen, which is enabled by default and the currently targeted screen. I don’t simply set it to display: none; because I still need it to be in the document flow. Hiding the visibility won’t work because I need to activate pointer events individually later on, when hitting enemy ships.

I need #p1 and #p2 to be next to each other because that’s what enables the interaction between one player’s hits and the other player’s ships.

Implementing the battlefields

We need two sets of two battlefields for a total of four battlefields. Each set contains one battlefield for the current player and another for the opposite player. One set is going to be in #p1 and the other one in #p2. Only one of the players will be in the viewport, but both retain their pointer events and their flow in the document. Here’s a little sketch:

Now we need lots of HTML. Each player needs two battlefields, which need to have 12x12 fields. That’s 576 fields in total, so we’re going to loop around a bit.
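As a quick sanity check on that arithmetic (a throwaway sketch, not part of the game code):

```javascript
// Field-count arithmetic for the loops that follow: 12 × 12 fields per
// battlefield, 2 battlefields per player, 2 players.
const gridSize = 12;
const fieldsPerBattlefield = gridSize * gridSize; // 144
const totalFields = fieldsPerBattlefield * 2 * 2; // 576

console.log(fieldsPerBattlefield); // 144
console.log(totalFields); // 576
```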

The fields are going to have their own class declaring their position in the grid. Also, fields in the first row or line get a position indicator, so you get to say something cool like "Fire at C6."

each party in ['p1', 'p2']
  .mode(id=party)
    each faction in ['enemy', 'player']
      .battlefield(class=faction, class=party)
        each line in [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
          each col, colI in ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L']
            div(class='field-x' + (colI+1) + '-y' + line)
              if (col === 'A')
                .indicator-col #{line}
              if (line === 1)
                .indicator-line #{col}

The battlefield itself is going to be set in a CSS grid, with its template and measurements coming from the variables we set before. We’ll position them absolutely within our .mode divs and switch the enemy position with the player. In the actual board game, you have your own ships on the bottom as well. Note that we need to escape the calc on the top value, or Less will try to calculate it for you and fail.

.battlefield {
  position: absolute;
  display: grid;
  grid-template-columns: repeat(@gridSize, 1fr);
  width: var(--grid-measurements);
  height: var(--grid-measurements);
  margin: 0 auto 5vw;
  border: 2px solid;
  transform: translate(-50%, 0);
  z-index: @zSea;

  &.player {
    top: calc(var(--grid-measurements) ~"+" 150px);
    border-color: transparent;

    :target & {
      border-color: @playerColor;
    }
  }

  &.enemy {
    top: 100px;
    border-color: transparent;

    :target & {
      border-color: @enemyColor;
    }
  }
}

We want the tiles of the battlefield to be a nice checkerboard pattern. I wrote a mixin to calculate the colors, and since I like my mixins separated from the rest, this is going into a components.less file.

.checkerboard(@counter) when (@counter > 0) {
  .checkerboard(@counter - 2);

  &[class^='field-'][class$='-y@{counter}'] {
    &:nth-of-type(odd) {
      background-color: transparent;

      :target & {
        background-color: darken(@seaColor, 3%);
      }
    }

    &:nth-of-type(even) {
      background-color: transparent;

      :target & {
        background-color: darken(@seaColor, 4%);
      }
    }
  }
}

When we call it with .checkerboard(@gridSize);, it will iterate through every second line of the grid and set background colors for odd and even instances of the current element. We can color the remaining fields with an ordinary :odd and :even.
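To make the recursion easier to follow, here is a plain-JavaScript sketch of which rows the mixin touches when called with .checkerboard(12) (checkerboardRows is an illustrative helper, not part of the game code):

```javascript
// Sketch of the Less mixin's recursion: .checkerboard(12) calls itself with
// counter - 2 before emitting its rule, so it produces rules for rows
// y2, y4, ... y12 — every second row — and the rest fall through to the
// plain :odd/:even rules.
function checkerboardRows(counter, rows = []) {
  if (counter <= 0) return rows;
  checkerboardRows(counter - 2, rows); // recurse first, like the mixin
  rows.push(`-y${counter}`);           // then emit the rule for this row
  return rows;
}

console.log(checkerboardRows(12));
// ["-y2", "-y4", "-y6", "-y8", "-y10", "-y12"]
```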

Next, we place the indicators outside of the battlefields.

[class^='field-'] {
  position: relative;
  display: flex;
  justify-content: center;
  align-items: center;
  width: 100%;
  height: 100%;
  background-color: transparent;
  .checkerboard(@gridSize);

  :target &:nth-of-type(even) {
    background-color: darken(@seaColor, 2%);
  }

  :target &:nth-of-type(odd) {
    background-color: darken(@seaColor, 1%);
  }

  [class^='indicator-'] {
    display: none;

    :target & {
      position: absolute;
      display: flex;
      justify-content: center;
      width: calc(var(--grid-measurements)~"/"@gridSize);
      height: calc(var(--grid-measurements)~"/"@gridSize);
      color: lighten(@seaColor, 10%);
      pointer-events: none;
    }

    &.indicator-line {
      top: -1.5em;
      align-items: flex-start;
    }

    &.indicator-col {
      left: -2.3em;
      align-items: center;
    }
  }
}

Implementing the ships

Let’s get to the tricky part and place some ships. Those need to be clickable and interactive, so they’re going to be checkboxes. Actually, we need two checkboxes for one ship: miss and hit.

  • Miss is the bottom one. If nothing else is on that field, your shot hits the water and triggers a miss animation. The exception is when a player clicks on their own battlefield; in that case, the ship animation plays.
  • When one of your own ships spawns, it activates a new checkbox called hit. It’s placed at the exact same coordinates as its corresponding ship, but in the other player’s attack field and above the checkbox helper for the miss. If a hit is activated, it displays a hit animation on the current player’s attack field as well as on the opponent’s own ship.

This is why we need to position our battlefields absolutely next to each other. We need them aligned at all times in order to let them interact with each other.

First, we’re going to set some styles that apply to both checkboxes. We still need the pointer events, but want to visually hide the checkbox and work with helper elements instead.

.check {
  position: absolute;
  top: 0;
  left: 0;
  width: 100%;
  height: 100%;
  margin: 0;
  opacity: 0;

  + .check-helper {
    position: absolute;
    top: 0;
    left: 0;
    width: 100%;
    height: 100%;
    pointer-events: none;
  }
}

We’ll also write some classes for our events for later use right now. This will also go into components.less:

.hit-obj {
  position: absolute;
  visibility: visible;
  left: 0;
  top: 0;
  width: 100%;
  height: 100%;
  border-radius: 50%;
  animation: hit 1s forwards;
}

.ship-obj {
  position: absolute;
  left: 0;
  top: 0;
  width: 90%;
  height: 90%;
  border-radius: 15%;
  animation: setShip 0.5s forwards;
}

.miss-obj {
  position: absolute;
  left: 0;
  top: 0;
  width: 100%;
  height: 100%;
  border-radius: 50%;
  animation: miss 1s forwards;
}

Spawning and missing ships

Those two events are basically the same. If you hit the sea in your own battlefield, you create a ship. If you hit the sea in the enemy battlefield, you trigger a miss. That happens by calling the respective class from our components.less file within a pseudo element of the helper class. We use pseudo elements here because we need to place two objects in one helper later on.

If you spawn a ship, you shouldn’t be able to un-spawn it, so we make it lose its pointer events after being checked. However, the next hit checkbox gains its pointer events, enabling the enemy to hit spawned ships.
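That checked-state handoff can be modeled in a few lines of plain JavaScript. spawnShip and the plain-object checkboxes below are illustrative stand-ins for the CSS behavior, not real game code:

```javascript
// Model of the pointer-events chain: once a ship checkbox is checked it
// stops receiving clicks, and the sibling hit checkbox becomes clickable.
function spawnShip(field) {
  field.ship.checked = true;
  field.ship.clickable = false; // &:checked { pointer-events: none; }
  field.hit.clickable = true;   // &:checked ~ .hit { pointer-events: all; }
}

const field = {
  ship: { checked: false, clickable: true },
  hit: { checked: false, clickable: false },
};

spawnShip(field);
console.log(field.ship.clickable); // false — can't un-spawn it
console.log(field.hit.clickable); // true — the enemy can now hit it
```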

.check {
  &.ship {
    &:checked {
      pointer-events: none;
    }

    &:checked + .check-helper {
      :target .player & {
        &::after {
          content: "";
          .ship-obj; // set own ship
        }
      }

      :target .enemy & {
        &::after {
          content: "";
          .miss-obj; // miss enemy ship
        }
      }
    }

    &:checked ~ .hit {
      pointer-events: all;
    }
  }
}

Hitting ships

That new hit checkbox is positioned absolutely on top of the other player’s attack field. For Player 1, that means 50vw to the right and the grid height plus 50px of margin to the top. It has no pointer events by default; they are overwritten by those set in .ship:checked ~ .hit, so only ships that are actually set can be hit.

To display a hit event, we need two pseudo elements: one that confirms the hit on the attack field; and one that shows the victim where they have been hit. :checked + .check-helper::after calls a .hit-obj from components.less onto the attacker’s field and the corresponding ::before pseudo element gets translated back to the victim’s own battlefield.

Since the display of hit events isn’t scoped to the active player, we need to remove all unnecessary instances manually using display: none;.

.check {
  &.hit {
    position: absolute;
    top: ~"calc(-1 * (var(--grid-measurements) + 50px))";
    left: 50vw;
    width: 100%;
    height: 100%;
    pointer-events: none;

    #p2 &,
    #p1:target & {
      left: 0;
    }

    #p1:not(:target) & + .check-helper::before {
      left: 50vw;
    }

    &:checked {
      opacity: 1;
      visibility: hidden;
      pointer-events: none;

      + .check-helper {
        &::before {
          content: "";
          .hit-obj; // hit enemy ships
          top: ~"calc(-1 * (var(--grid-measurements) + 50px))";
        }

        &::after {
          content: "";
          .hit-obj; // hit own ships
          top: -2px;
          left: -2px;
        }

        #p1:target &::before,
        #p1:target ~ #p2 &::after,
        #p1:not(:target) &::after,
        #p2:target &::before {
          display: none;
        }
      }
    }

    #p1:target .battlefield.p1 &,
    #p2:target .battlefield.p2 & {
      display: none;
    }
  }
}

Animating the events

While we did style our miss, ship and hit objects, there’s nothing to be seen yet. That’s because we are still missing the animations making those objects visible. Those are simple keyframe animations that I put into a new Less file called animations.less.

@keyframes setShip {
  0% {
    transform: scale(0, 0);
    background-color: transparent;
  }
  100% {
    transform: scale(1, 1);
    background-color: @playerColor;
  }
}

@keyframes hit {
  0% {
    transform: scale(0, 0);
    opacity: 0;
    background-color: transparent;
  }
  10% {
    transform: scale(1.2, 1.2);
    opacity: 1;
    background-color: spin(@hitColor, 40);
    box-shadow: 0 0 0 0.5em var(--shadowColor);
  }
  100% {
    transform: scale(0.7, 0.7);
    opacity: 0.7;
    background-color: @hitColor;
    box-shadow: 0 0 0 0.5em var(--shadowColor);
  }
}

@keyframes miss {
  0% {
    transform: scale(0, 0);
    opacity: 1;
    background-color: lighten(@seaColor, 50);
  }
  100% {
    transform: scale(1, 1);
    opacity: 0.8;
    background-color: lighten(@seaColor, 10);
  }
}

Add customizable player names

This isn’t really necessary for functionality, but it’s a nice little extra. Instead of being called "Player 1" and "Player 2," you can enter your own name. We do this by adding two <input type="text"> to .status, one for each player. They have placeholders in case the players don’t want to enter their names and want to skip to the game right away.

.status
  input(type="text" placeholder="1st Player").player-name#name1
  input(type="text" placeholder="2nd Player").player-name#name2
  each party in ['p1', 'p2']
    a.player-link.switch(href='#' + party)
  a.status-link.playpause(href='#pause') End Turn

Because we put them into .status, we can display them on every screen. On the start screen, we leave them as normal input fields, for the players to enter their names. We style their placeholders to look like the actual text input, so it doesn’t really matter if players enter their names or not.

.status {
  .player-name {
    position: relative;
    padding: 3px;
    border: 1px solid @enemyColor;
    background: transparent;
    color: @playerColor;

    &::placeholder {
      color: @playerColor;
      opacity: 1; // Reset Firefox user agent styles
    }
  }
}

On the other screens, we remove their typical input field styles as well as their pointer events, making them appear as normal, non-changeable text. .status also contains empty links to select players. We style those links to have actual measurements and display the name inputs without pointer events above them. Clicking a name now triggers the link, targeting the corresponding mode.

.status {
  .mode#pause:target ~ & {
    top: 40vh;
    width: calc(100% ~"-" 40px);
    padding: 0 20px;
    text-align: center;
    z-index: @zAbove;

    .player-name,
    .player-link {
      position: absolute;
      display: block;
      width: 80%;
      max-width: 500px;
      height: 40px;
      margin: 0;
      padding: 0;

      &:nth-of-type(even) {
        top: 60px;
      }
    }

    .player-name {
      border: 0;
      text-align: center;
      pointer-events: none;
    }
  }
}

The player screens only need to display the active player, so we remove the other one.

.status {
  .mode#p1:target ~ & #name2 {
    display: none;
  }

  .mode#p2:target ~ & #name1 {
    display: none;
  }
}

Some notes on Internet Explorer and Edge: Microsoft browsers haven’t implemented the ::placeholder pseudo element. While they do support :-ms-input-placeholder in IE and ::-ms-input-placeholder, as well as the webkit prefix, in Edge, those prefixes only work if ::placeholder is not set. As far as I’ve played around with placeholders, I’ve only managed to style them properly in either the Microsoft browsers or all the other ones. If someone has a workaround, please share it!

Putting it all together

What we have so far is a functional, but not very handsome game. I use the start screen to clarify some basic rules. Since we don’t have a hard-coded win condition and nothing to prevent players from placing their ships wildly all over the place, I created a "Play fair" note that encourages the good ol' honor system.

.mode#start
  .battlefield.enemy
    ol
      li
        span You are this color.
      li
        span Your enemy is
        span this
        span color
      li
        span You may place your ships as follows:
        ul
          li 1 x 5 blocks
          li 2 x 4 blocks
          li 3 x 3 blocks
          li 4 x 2 blocks

I’m not going into the detail of how I went about getting things exactly to my liking since most of that is very basic CSS. You can go through the end result to pick them out.

When we finally connect all the pieces, we get this:

See the Pen CSS Game: Battleships by Daniel Schulz (@iamschulz) on CodePen.

Wrapping up

Let’s look back at what we’ve accomplished.

HTML and CSS may not be programming languages, but they are mighty tools in their own domain. We can manage states with pseudo classes and manipulate the DOM with pseudo elements.

While most of us use :hover and :focus all the time, :checked goes largely unnoticed, used at best for styling actual checkboxes and radio buttons. Checkboxes are handy little tools that can help us get rid of unnecessary JavaScript in our simpler front-end features. I wouldn’t hesitate to build dropdown or off-canvas menus in pure CSS in real-life projects, as long as the requirements don’t get too complicated.

I’d be a bit more cautious when using the :target selector. Since it uses the URL hash value, it’s only usable for a global value. I think I’d use it for, say, highlighting the current paragraph on a content page, but not for reusable elements like a slider or an accordion menu. It can also quickly get messy on larger projects, especially when other parts of it start controlling the hash value.

Building the game was a learning experience for me, dealing with pseudo selectors interacting with each other and playing around with lots of pointer events. If I had to build it again, I’d surely choose another path, which is a good outcome for me. I definitely don’t see it as a production-ready or even clean solution, and those super specific selectors are a nightmare to maintain, but it has some good parts in it that I can transition to real life projects.

Most importantly though, it was a fun thing to do.

The post Building Battleship in CSS appeared first on CSS-Tricks.

Categories: Web Technologies

​Task management has never been easier

CSS-Tricks - Thu, 08/16/2018 - 06:53

(This is a sponsored post.)

monday.com is a team management tool that is exceptionally suitable for any industry sector and for teams of any size. It will perfectly serve a team of two or a team of hundreds spread around the globe, and it can manage multiple projects at once.

monday.com promotes effortless collaboration and transparency, it's "cheetah fast," it displays status in as many as 20 different colors, and its status board can be customized to fit your needs and your workflow.

It serves as an excellent alternative to having to force fit your work to meet the demands of your project management tools or systems.

Direct Link to ArticlePermalink

The post ​Task management has never been easier appeared first on CSS-Tricks.

Categories: Web Technologies

What is special with MySQL Cluster

Planet MySQL - Thu, 08/16/2018 - 04:40
The first chapter from the book "MySQL Cluster 7.5 inside and out".
This chapter presents a number of key features that make NDB
Categories: Web Technologies

MySQL on Docker: How to Monitor MySQL Containers with Prometheus - Part 1 - Deployment on Standalone and Swarm

Planet MySQL - Thu, 08/16/2018 - 03:18

Monitoring is a concern for containers, as the infrastructure is dynamic. Containers can be routinely created and destroyed, and are ephemeral. So how do you keep track of your MySQL instances running on Docker?

As with any software component, there are many options out there that can be used. We’ll look at Prometheus, a solution built for distributed infrastructure that works very well with Docker.

This is a two-part blog. In this part 1 blog, we are going to cover the deployment aspect of our MySQL containers with Prometheus and its components, running as standalone Docker containers and Docker Swarm services. In part 2, we will look at the important metrics to monitor from our MySQL containers, as well as integration with the paging and notification systems.

Introduction to Prometheus

Prometheus is a full monitoring and trending system that includes built-in and active scraping, storing, querying, graphing, and alerting based on time-series data. Prometheus collects metrics through a pull mechanism from configured targets at given intervals, evaluates rule expressions, displays the results, and can trigger alerts if some condition is observed to be true. It supports all the target metrics we want to measure when running MySQL as Docker containers: physical host metrics, Docker container metrics, and MySQL server metrics.
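The evaluate-and-alert part of that cycle can be illustrated with a toy Python sketch. This is not Prometheus code — the metric name and the 0.9 threshold are invented for the example:

```python
# Toy illustration of Prometheus' evaluation loop: scraped samples are
# checked against a rule expression, and an alert fires for every target
# where the condition holds. Metric name and threshold are made up.

def evaluate_rule(samples, metric, threshold):
    """Return the targets whose latest sample for `metric` exceeds threshold."""
    firing = []
    for target, metrics in samples.items():
        value = metrics.get(metric)
        if value is not None and value > threshold:
            firing.append(target)
    return sorted(firing)

# Two scraped targets with a hypothetical connection-usage metric (0..1).
scraped = {
    "mysql80-exporter:9104": {"mysql_connection_usage": 0.95},
    "mysql57-exporter:9104": {"mysql_connection_usage": 0.40},
}

print(evaluate_rule(scraped, "mysql_connection_usage", 0.9))
```

In the real system this evaluation happens inside the Prometheus server against the "alert.rules" file, and firing alerts are handed off to Alertmanager.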

Take a look at the following diagram which illustrates Prometheus architecture (taken from Prometheus official documentation):

We are going to deploy some MySQL containers (standalone and Docker Swarm) complete with a Prometheus server, a MySQL exporter (i.e., a Prometheus agent that exposes MySQL metrics, which can then be scraped by the Prometheus server) and also Alertmanager to handle alerts based on the collected metrics.

For more details check out the Prometheus documentation. In this example, we are going to use the official Docker images provided by the Prometheus team.

Standalone Docker

Deploying MySQL Containers

Let's run two standalone MySQL servers on Docker to simplify our deployment walkthrough. One container will be using the latest MySQL 8.0 and the other one is MySQL 5.7. Both containers are in the same Docker network called "db_network":

$ docker network create db_network
$ docker run -d \
    --name mysql80 \
    --publish 3306 \
    --network db_network \
    --restart unless-stopped \
    --env MYSQL_ROOT_PASSWORD=mypassword \
    --volume mysql80-datadir:/var/lib/mysql \
    mysql:8 \
    --default-authentication-plugin=mysql_native_password

MySQL 8 defaults to a new authentication plugin called caching_sha2_password. For compatibility with the Prometheus MySQL exporter container, let's use the widely-used mysql_native_password plugin whenever we create a new MySQL user on this server.

For the second MySQL container running 5.7, we execute the following:

$ docker run -d \
    --name mysql57 \
    --publish 3306 \
    --network db_network \
    --restart unless-stopped \
    --env MYSQL_ROOT_PASSWORD=mypassword \
    --volume mysql57-datadir:/var/lib/mysql \
    mysql:5.7

Verify if our MySQL servers are running OK:

[root@docker1 mysql]# docker ps | grep mysql
cc3cd3c4022a mysql:5.7 "docker-entrypoint.s…" 12 minutes ago Up 12 minutes>3306/tcp mysql57
9b7857c5b6a1 mysql:8   "docker-entrypoint.s…" 14 minutes ago Up 14 minutes>3306/tcp mysql80

At this point, our architecture is looking something like this:

Let's get started to monitor them.

Exposing Docker Metrics to Prometheus

Docker has built-in support as a Prometheus target, which we can use to monitor the Docker engine statistics. We can enable it by creating a text file called "daemon.json" on the Docker host:

$ vim /etc/docker/daemon.json

And add the following lines:

{ "metrics-addr" : "", "experimental" : true }

The "metrics-addr" value is the Docker host's primary IP address. Then, restart the Docker daemon to load the change:

$ systemctl restart docker

Since we have defined --restart=unless-stopped in our MySQL containers' run command, the containers will be automatically started after Docker is running.
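If you manage several hosts, the daemon.json fragment can also be generated programmatically. A minimal sketch — the 192.0.2.10 address is a documentation placeholder (TEST-NET), not a real host IP, and 9323 is the port Docker's documentation conventionally uses for the metrics endpoint:

```python
import json

# Build the Docker daemon configuration that exposes engine metrics to
# Prometheus. Substitute your Docker host's primary IP for the placeholder.
HOST_IP = "192.0.2.10"  # hypothetical placeholder address

daemon_config = {
    "metrics-addr": f"{HOST_IP}:9323",  # conventional Docker metrics port
    "experimental": True,               # metrics endpoint requires experimental mode
}

print(json.dumps(daemon_config, indent=2))
```

The printed JSON can be written to /etc/docker/daemon.json as-is.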

Deploying MySQL Exporter

Before we move further, the mysqld exporter requires a MySQL user to be used for monitoring purposes. On our MySQL containers, create the monitoring user:

$ docker exec -it mysql80 mysql -uroot -p
Enter password:
mysql> CREATE USER 'exporter'@'%' IDENTIFIED BY 'exporterpassword' WITH MAX_USER_CONNECTIONS 3;
mysql> GRANT PROCESS, REPLICATION CLIENT, SELECT ON *.* TO 'exporter'@'%';

Take note that it is recommended to set a max connection limit for the user to avoid overloading the server with monitoring scrapes under heavy load. Repeat the above statements onto the second container, mysql57:

$ docker exec -it mysql57 mysql -uroot -p
Enter password:
mysql> CREATE USER 'exporter'@'%' IDENTIFIED BY 'exporterpassword' WITH MAX_USER_CONNECTIONS 3;
mysql> GRANT PROCESS, REPLICATION CLIENT, SELECT ON *.* TO 'exporter'@'%';

Let's run the mysqld exporter container called "mysql8-exporter" to expose the metrics for our MySQL 8.0 instance as below:

$ docker run -d \
    --name mysql80-exporter \
    --publish 9104 \
    --network db_network \
    --restart always \
    --env DATA_SOURCE_NAME="exporter:exporterpassword@(mysql80:3306)/" \
    prom/mysqld-exporter:latest \
    --collect.info_schema.processlist \
    --collect.info_schema.innodb_metrics \
    --collect.info_schema.tablestats \
    --collect.info_schema.tables \
    --collect.info_schema.userstats \
    --collect.engine_innodb_status

And also another exporter container for our MySQL 5.7 instance:

$ docker run -d \
    --name mysql57-exporter \
    --publish 9104 \
    --network db_network \
    --restart always \
    -e DATA_SOURCE_NAME="exporter:exporterpassword@(mysql57:3306)/" \
    prom/mysqld-exporter:latest \
    --collect.info_schema.processlist \
    --collect.info_schema.innodb_metrics \
    --collect.info_schema.tablestats \
    --collect.info_schema.tables \
    --collect.info_schema.userstats \
    --collect.engine_innodb_status

We enabled a bunch of collector flags for the container to expose the MySQL metrics. You can also enable --collect.slave_status, --collect.slave_hosts if you have a MySQL replication running on containers.

We should be able to retrieve the MySQL metrics via curl from the Docker host directly (port 32771 is the published port assigned automatically by Docker for container mysql80-exporter):

$ curl ...
...
mysql_info_schema_threads_seconds{state="waiting for lock"} 0
mysql_info_schema_threads_seconds{state="waiting for table flush"} 0
mysql_info_schema_threads_seconds{state="waiting for tables"} 0
mysql_info_schema_threads_seconds{state="waiting on cond"} 0
mysql_info_schema_threads_seconds{state="writing to net"} 0
...
process_virtual_memory_bytes 1.9390464e+07
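Each line returned above follows Prometheus' text exposition format: a metric name, an optional set of {label="value"} pairs, and a sample value. A simplified parser for such lines — an illustration only, not the official client library; it ignores comments, timestamps and label-value escaping:

```python
import re

# Simplified parser for Prometheus text-exposition sample lines, e.g.
#   mysql_info_schema_threads_seconds{state="writing to net"} 0
# Handles only the common case: no escaped quotes, comments or timestamps.
SAMPLE_RE = re.compile(r'^(?P<name>[a-zA-Z_:][a-zA-Z0-9_:]*)'
                       r'(?:\{(?P<labels>[^}]*)\})?\s+(?P<value>\S+)$')

def parse_sample(line):
    """Return (metric_name, labels_dict, float_value), or None if unparseable."""
    match = SAMPLE_RE.match(line.strip())
    if not match:
        return None
    labels = {}
    if match.group("labels"):
        for key, value in re.findall(r'(\w+)="([^"]*)"', match.group("labels")):
            labels[key] = value
    return match.group("name"), labels, float(match.group("value"))

name, labels, value = parse_sample(
    'mysql_info_schema_threads_seconds{state="writing to net"} 0')
print(name, labels, value)
```

This is handy for quick sanity checks on an exporter's output before wiring it into the Prometheus server.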

At this point, our architecture is looking something like this:

We are now good to setup the Prometheus server.

Deploying Prometheus Server

Firstly, create Prometheus configuration file at ~/prometheus.yml and add the following lines:

$ vim ~/prometheus.yml
global:
  scrape_interval: 5s
  scrape_timeout: 3s
  evaluation_interval: 5s

# Our alerting rule files
rule_files:
  - "alert.rules"

# Scrape endpoints
scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']
  - job_name: 'mysql'
    static_configs:
      - targets: ['mysql57-exporter:9104','mysql80-exporter:9104']
  - job_name: 'docker'
    static_configs:
      - targets: ['']

From the Prometheus configuration file, we have defined three jobs - "prometheus", "mysql" and "docker". The first one is the job to monitor the Prometheus server itself. The next one is the job to monitor our MySQL containers, named "mysql". We define the endpoints of our MySQL exporters on port 9104, which expose the Prometheus-compatible metrics from the MySQL 8.0 and 5.7 instances respectively. The "alert.rules" is the rule file that we will include later in the next blog post for alerting purposes.
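Since every JSON document is also valid YAML, a configuration like the one above can be assembled as plain data and dumped for inspection. A hedged sketch — the docker job's target address is a placeholder assumption:

```python
import json

# Assemble the three scrape jobs from the walkthrough as plain data.
# Prometheus reads YAML, and JSON is valid YAML, so this output could be
# dropped into prometheus.yml directly if desired.
def make_job(name, targets):
    return {"job_name": name, "static_configs": [{"targets": targets}]}

config = {
    "global": {"scrape_interval": "5s", "scrape_timeout": "3s",
               "evaluation_interval": "5s"},
    "rule_files": ["alert.rules"],
    "scrape_configs": [
        make_job("prometheus", ["localhost:9090"]),
        make_job("mysql", ["mysql57-exporter:9104", "mysql80-exporter:9104"]),
        make_job("docker", ["192.0.2.10:9323"]),  # placeholder host:port
    ],
}

print(json.dumps(config, indent=2))
```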

We can then map the configuration with the Prometheus container. We also need to create a Docker volume for Prometheus data for persistency and also expose port 9090 publicly:

$ docker run -d \
    --name prometheus-server \
    --publish 9090:9090 \
    --network db_network \
    --restart unless-stopped \
    --mount type=volume,src=prometheus-data,target=/prometheus \
    --mount type=bind,src="$(pwd)"/prometheus.yml,target=/etc/prometheus/prometheus.yml \
    --mount type=bind,src="$(pwd) \
    prom/prometheus

Now our Prometheus server is already running and can be accessed directly on port 9090 of the Docker host. Open a web browser and navigate to port 9090 on the Docker host to access the Prometheus web UI. Verify the target status under Status -> Targets and make sure they are all green:

At this point, our container architecture is looking something like this:

Our Prometheus monitoring system for our standalone MySQL containers is now deployed.

Docker Swarm

Deploying a 3-node Galera Cluster

Suppose we want to deploy a three-node Galera Cluster in Docker Swarm; we would have to create 3 different services, each service representing one Galera node. Using this approach, we can keep a static resolvable hostname for our Galera container, together with the MySQL exporter containers that will accompany each of them. We will be using the MariaDB 10.2 image maintained by the Docker team to run our Galera cluster.

Firstly, create a MySQL configuration file to be used by our Swarm service:

$ vim ~/my.cnf
[mysqld]
default_storage_engine = InnoDB
binlog_format = ROW
innodb_flush_log_at_trx_commit = 0
innodb_flush_method = O_DIRECT
innodb_file_per_table = 1
innodb_autoinc_lock_mode = 2
innodb_lock_schedule_algorithm = FCFS # MariaDB >10.1.19 and >10.2.3 only
wsrep_on = ON
wsrep_provider = /usr/lib/galera/libgalera_smm.so
wsrep_sst_method = mariabackup

Create a dedicated database network in our Swarm called "db_swarm":

$ docker network create --driver overlay db_swarm

Import our MySQL configuration file into Docker config so we can load it into our Swarm service when we create it later:

$ cat ~/my.cnf | docker config create my-cnf -

Create the first Galera bootstrap service, with "gcomm://" as the cluster address, called "galera0". This is a transient service for the bootstrapping process only. We will delete this service once we have gotten the 3 other Galera services running:

$ docker service create \
    --name galera0 \
    --replicas 1 \
    --hostname galera0 \
    --network db_swarm \
    --publish 3306 \
    --publish 4444 \
    --publish 4567 \
    --publish 4568 \
    --config src=my-cnf,target=/etc/mysql/mariadb.conf.d/my.cnf \
    --env MYSQL_ROOT_PASSWORD=mypassword \
    --mount type=volume,src=galera0-datadir,dst=/var/lib/mysql \
    mariadb:10.2 \
    --wsrep_cluster_address=gcomm:// \
    --wsrep_sst_auth="root:mypassword" \
    --wsrep_node_address=galera0

At this point, our database architecture can be illustrated as below:

Then, repeat the following command for 3 times to create 3 different Galera services. Replace {name} with galera1, galera2 and galera3 respectively:

$ docker service create \
    --name {name} \
    --replicas 1 \
    --hostname {name} \
    --network db_swarm \
    --publish 3306 \
    --publish 4444 \
    --publish 4567 \
    --publish 4568 \
    --config src=my-cnf,target=/etc/mysql/mariadb.conf.d/my.cnf \
    --env MYSQL_ROOT_PASSWORD=mypassword \
    --mount type=volume,src={name}-datadir,dst=/var/lib/mysql \
    mariadb:10.2 \
    --wsrep_cluster_address=gcomm://galera0,galera1,galera2,galera3 \
    --wsrep_sst_auth="root:mypassword" \
    --wsrep_node_address={name}

Verify our current Docker services:

$ docker service ls
ID           NAME    MODE       REPLICAS IMAGE        PORTS
wpcxye3c4e9d galera0 replicated 1/1      mariadb:10.2 *:30022->3306/tcp, *:30023->4444/tcp, *:30024-30025->4567-4568/tcp
jsamvxw9tqpw galera1 replicated 1/1      mariadb:10.2 *:30026->3306/tcp, *:30027->4444/tcp, *:30028-30029->4567-4568/tcp
otbwnb3ridg0 galera2 replicated 1/1      mariadb:10.2 *:30030->3306/tcp, *:30031->4444/tcp, *:30032-30033->4567-4568/tcp
5jp9dpv5twy3 galera3 replicated 1/1      mariadb:10.2 *:30034->3306/tcp, *:30035->4444/tcp, *:30036-30037->4567-4568/tcp
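The PORTS column uses Docker's published-port notation, including ranges like *:30028-30029->4567-4568/tcp. A small sketch for decoding those mapping strings when you need to know which node port reaches a given container port:

```python
import re

# Decode Docker's published-port notation, e.g. "*:30026->3306/tcp" or the
# range form "*:30028-30029->4567-4568/tcp", into (published, target) pairs
# plus the protocol.
def parse_port_mapping(mapping):
    m = re.match(r'\*:(\d+)(?:-(\d+))?->(\d+)(?:-(\d+))?/(\w+)', mapping)
    if not m:
        raise ValueError("unrecognized mapping: %s" % mapping)
    pub_lo, pub_hi, tgt_lo, tgt_hi, proto = m.groups()
    published = range(int(pub_lo), int(pub_hi or pub_lo) + 1)
    target = range(int(tgt_lo), int(tgt_hi or tgt_lo) + 1)
    return list(zip(published, target)), proto

pairs, proto = parse_port_mapping("*:30028-30029->4567-4568/tcp")
print(pairs, proto)
```

For example, the mapping for galera1 above tells you that node port 30026 reaches MySQL (3306) inside the service.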

Our architecture is now looking something like this:

We need to remove the Galera bootstrap Swarm service, galera0, to stop it from running because if the container is being rescheduled by Docker Swarm, a new replica will be started with a fresh new volume. We run the risk of data loss because the --wsrep_cluster_address contains "galera0" in the other Galera nodes (or Swarm services). So, let's remove it:

$ docker service rm galera0

At this point, we have our three-node Galera Cluster:

We are now ready to deploy our MySQL exporter and Prometheus Server.

MySQL Exporter Swarm Service

Login to one of the Galera nodes and create the exporter user with proper privileges:

$ docker exec -it {galera1} mysql -uroot -p
Enter password:
mysql> CREATE USER 'exporter'@'%' IDENTIFIED BY 'exporterpassword' WITH MAX_USER_CONNECTIONS 3;
mysql> GRANT PROCESS, REPLICATION CLIENT, SELECT ON *.* TO 'exporter'@'%';

Then, create the exporter service for each of the Galera services (replace {name} with galera1, galera2 and galera3 respectively):

$ docker service create \
    --name {name}-exporter \
    --network db_swarm \
    --replicas 1 \
    -p 9104 \
    -e DATA_SOURCE_NAME="exporter:exporterpassword@({name}:3306)/" \
    prom/mysqld-exporter:latest \
    --collect.info_schema.processlist \
    --collect.info_schema.innodb_metrics \
    --collect.info_schema.tablestats \
    --collect.info_schema.tables \
    --collect.info_schema.userstats \
    --collect.engine_innodb_status

At this point, our architecture is looking something like this with exporter services in the picture:

Prometheus Server Swarm Service

Finally, let's deploy our Prometheus server. Similar to the Galera deployment, we have to prepare the Prometheus configuration file first before importing it into Swarm using Docker config command:

$ vim ~/prometheus.yml
global:
  scrape_interval: 5s
  scrape_timeout: 3s
  evaluation_interval: 5s

# Our alerting rule files
rule_files:
  - "alert.rules"

# Scrape endpoints
scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']
  - job_name: 'galera'
    static_configs:
      - targets: ['galera1-exporter:9104','galera2-exporter:9104','galera3-exporter:9104']

From the Prometheus configuration file, we have defined two jobs - "prometheus" and "galera". The first one is the job to monitor the Prometheus server itself. The next one is the job to monitor our MySQL containers, named "galera". We define the endpoints of our MySQL exporters on port 9104, which expose the Prometheus-compatible metrics from the three Galera nodes respectively. The "alert.rules" is the rule file that we will include later in the next blog post for alerting purposes.

Import the configuration file into Docker config to be used with Prometheus container later:

$ cat ~/prometheus.yml | docker config create prometheus-yml -

Let's run the Prometheus server container, and publish port 9090 of all Docker hosts for the Prometheus web UI service:

$ docker service create \
    --name prometheus-server \
    --publish 9090:9090 \
    --network db_swarm \
    --replicas 1 \
    --config src=prometheus-yml,target=/etc/prometheus/prometheus.yml \
    --mount type=volume,src=prometheus-data,dst=/prometheus \
    prom/prometheus

Verify with the Docker service command that we have 3 Galera services, 3 exporter services and 1 Prometheus service:

$ docker service ls
ID           NAME              MODE       REPLICAS IMAGE                       PORTS
jsamvxw9tqpw galera1           replicated 1/1      mariadb:10.2                *:30026->3306/tcp, *:30027->4444/tcp, *:30028-30029->4567-4568/tcp
hbh1dtljn535 galera1-exporter  replicated 1/1      prom/mysqld-exporter:latest *:30038->9104/tcp
otbwnb3ridg0 galera2           replicated 1/1      mariadb:10.2                *:30030->3306/tcp, *:30031->4444/tcp, *:30032-30033->4567-4568/tcp
jq8i77ch5oi3 galera2-exporter  replicated 1/1      prom/mysqld-exporter:latest *:30039->9104/tcp
5jp9dpv5twy3 galera3           replicated 1/1      mariadb:10.2                *:30034->3306/tcp, *:30035->4444/tcp, *:30036-30037->4567-4568/tcp
10gdkm1ypkav galera3-exporter  replicated 1/1      prom/mysqld-exporter:latest *:30040->9104/tcp
gv9llxrig30e prometheus-server replicated 1/1      prom/prometheus:latest      *:9090->9090/tcp

Now our Prometheus server is already running and can be accessed directly on port 9090 from any Docker node. Open a web browser and navigate to port 9090 on any Docker node to access the Prometheus web UI. Verify the target status under Status -> Targets and make sure they are all green:

At this point, our Swarm architecture is looking something like this:

To be continued..

We now have our database and monitoring stack deployed on Docker. In part 2 of the blog, we will look into the different MySQL metrics to keep an eye on. We’ll also see how to configure alerting with Prometheus.

Categories: Web Technologies

MySQL Shell: Using External Python Modules

Planet MySQL - Thu, 08/16/2018 - 02:32

MySQL Shell is a great tool for working with MySQL. One of the features that make it stand out compared to the traditional mysql command-line client is the support for JavaScript and Python in addition to SQL statements. This allows you to write code you otherwise would have had to write outside the client. I showed a simple example of this in my post about the instant ALTER TABLE feature in MySQL 8.0.12, where a Python loop was used to populate a table with 1 million rows. This blog will look further into the use of Python and, more specifically, external modules.

Using a custom table_tools module in MySQL Shell.

Using Standard Modules

The aforementioned loop that was used to populate a test table also showed another feature of MySQL Shell: you can use the standard Python modules just as you would do in any other Python script. For example, if you need to create UUIDs you can use the uuid module:

mysql-py> import uuid
mysql-py> print(uuid.uuid1().hex)
9e8ef45ea12911e8a8a6b0359feab2bb

This on its own is great, but what about your own modules? Sure, that is supported as well. Before showing how you can access your own modules, let’s create a simple module to use as an example.

Example Module

For the purpose of this blog, the following code should be saved in the file table_tools.py. You can save it in whatever directory you keep your Python libraries. The code is:

def describe(table):
    fmt = "{0:<11} {1:<8} {2:<4} {3:<3} {4:<9} {5:<14}"

    # Create query against information_schema.COLUMNS
    session = table.get_session()
    i_s = session.get_schema("information_schema")
    i_s_columns = i_s.get_table("COLUMNS")
    query = i_s_columns.select(
        "COLUMN_NAME AS Field",
        "COLUMN_TYPE AS Type",
        "IS_NULLABLE AS `Null`",
        "COLUMN_KEY AS Key",
        "COLUMN_DEFAULT AS Default",
        "EXTRA AS Extra"
    )
    query = query.where("TABLE_SCHEMA = :schema AND TABLE_NAME = :table")
    query = query.order_by("ORDINAL_POSITION")
    query = query.bind("schema", table.schema.name)
    query = query.bind("table", table.name)
    result = query.execute()

    # Print the column names
    column_names = [column.column_name for column in result.get_columns()]
    print(fmt.format(*column_names))
    print("-" * 67)

    for row in result.fetch_all():
        print(fmt.format(*row))

The describe function takes a Table object from which it works backwards to get the session object. It then queries the information_schema.COLUMNS view to get the same information about the table as the DESC SQL command. Both the table and schema name can be found through the table object. Finally, the information is printed.

The example is overly simplified for general usage as it does not change the width of the output based on the length of the data, and there is no error handling whatsoever. However, this is on purpose to focus on the usage of the code from within MySQL Shell rather than on the code.
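One of the mentioned simplifications — the fixed column widths — is easy to lift. A sketch of computing the widths from the fetched rows instead; it assumes the rows have already been collected into a list of tuples with None values converted to strings:

```python
# Compute column widths from the data instead of hardcoding them.
# `rows` is assumed to be a list of tuples already fetched from the result.
def build_format(column_names, rows):
    widths = [len(name) for name in column_names]
    for row in rows:
        for i, value in enumerate(row):
            widths[i] = max(widths[i], len(str(value)))
    return " ".join("{%d:<%d}" % (i, w) for i, w in enumerate(widths))

names = ["Field", "Type", "Null"]
rows = [("ID", "int(11)", "NO"), ("CountryCode", "char(3)", "NO")]
fmt = build_format(names, rows)
print(fmt.format(*names))
for row in rows:
    print(fmt.format(*row))
```

Inside describe(), the hardcoded fmt line could be replaced by a call like this once the rows have been fetched.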

Note: The same code works in a MySQL Connector/Python script except that the rows are returned as mysqlx.result.Row objects. So, the loop printing the rows look a little different:
for row in result.fetch_all():
    values = [row[name] or "" for name in column_names]
    print(fmt.format(*values))

With the function ready, it is time to look at how you can import it into MySQL Shell.

Importing Modules Into MySQL Shell

In order to be able to import a module into MySQL Shell, it must be in the path searched by Python. If you have saved table_tools.py into a location already searched, then that is it. However, a likely more common scenario is that you have saved the file in a custom location. In that case, you need to tell Python where to look for the files.

You modify the search path in MySQL Shell just as you would in a regular Python program. If, for example, you have saved the file to D:\MySQL\Shell\Python, then you can add that to the path using the following code:

import sys
sys.path.append(r"D:\MySQL\Shell\Python")

If this is something you need as a one off, then it is fine just to modify the path directly in MySQL Shell. However, if you are working on some utilities that you want to reuse, it becomes tedious. MySQL Shell has support for configuration files where commands can be executed. The one for Python is named mysqlshrc.py (and mysqlshrc.js for JavaScript).

MySQL Shell searches for the mysqlshrc.py file in four locations including global locations as well as user specific locations. You can see the full list and the search order in the MySQL Shell User Guide. The user specific file is %APPDATA%\MySQL\mysqlsh\mysqlshrc.py on Microsoft Windows and $HOME/.mysqlsh/mysqlshrc.py on Linux and macOS.

You can do more than just changing the search path in the mysqlshrc.py file. However, for this example nothing else is needed.

Using the Module

Now that MySQL Shell has been set up to search in the path where your module is saved, you can use it in MySQL Shell. For example to get the description of the world.city table, you can use the following commands:

mysql-py> import table_tools
mysql-py> \use world
Default schema `world` accessible through db.
mysql-py> table_tools.describe(db.city)
Field       Type     Null Key Default   Extra
-------------------------------------------------------------------
ID          int(11)  NO   PRI None      auto_increment
Name        char(35) NO
CountryCode char(3)  NO   MUL
District    char(20) NO
Population  int(11)  NO       0

The \use world command sets the default schema to the world database. As a side effect, it also makes the tables in the world database available as properties of the db object. So, it is possible to pass an object for the world.city table as db.city to the table_tools.describe() function.

That is it. Now it is your turn to explore the possibilities that have been opened with MySQL Shell.

Categories: Web Technologies

Hibernate database catalog multitenancy

Planet MySQL - Thu, 08/16/2018 - 00:15

Introduction

As I explained in this article, multitenancy is an architectural pattern which allows you to isolate customers even if they are using the same hardware or software components. There are multiple ways you can achieve multitenancy, and in this article, we are going to see how you can implement a multitenancy architecture using the … Continue reading Hibernate database catalog multitenancy →

The post Hibernate database catalog multitenancy appeared first on Vlad Mihalcea.

Categories: Web Technologies

PHP 7.3.0.beta2 Released - PHP: Hypertext Preprocessor

Planet PHP - Wed, 08/15/2018 - 17:00
The PHP team is glad to announce the release of the sixth PHP 7.3.0 version, PHP 7.3.0beta2. The rough outline of the PHP 7.3 release cycle is specified in the PHP Wiki. For source downloads of PHP 7.3.0beta2 please visit the download page. Windows sources and binaries can be found on windows.php.net/qa/. Please carefully test this version and report any issues found in the bug reporting system. THIS IS A DEVELOPMENT PREVIEW - DO NOT USE IT IN PRODUCTION! For more information on the new features and other changes, you can read the NEWS file, or the UPGRADING file for a complete list of upgrading notes. Internal changes are listed in the UPGRADING.INTERNALS file. These files can also be found in the release archive. The next release would be Beta 3, planned for August 30th. The signatures for the release can be found in the manifest or on the QA site. Thank you for helping us make PHP better.
Categories: Web Technologies

How to reset your `root` password on your MySQL server

Planet MySQL - Wed, 08/15/2018 - 14:48

You don’t need this tutorial if you have access to the root user or another one with SUPER and GRANT privileges.

The following instructions work for MySQL 5.7. You will need to stop the MySQL server and start it with mysqld_safe using the skip-grant-tables option:

sudo service mysql stop
sudo mysqld_safe --skip-grant-tables &
mysql -u root mysql

If you get an error on start, chances are the directory the mysqld_safe executable needs to run does not exist. In my tests I was able to solve this by doing:

sudo mkdir /var/run/mysqld
sudo chown -R mysql:mysql /var/run/mysqld

And then trying to start the mysqld_safe process again.

After this, the MySQL console will pop up, and you need to set a new password for root. The second statement is necessary due to MySQL bug #79027:

UPDATE mysql.user SET authentication_string=PASSWORD('mypassword') WHERE User='root';
UPDATE mysql.user SET plugin="mysql_native_password" WHERE User='root';
FLUSH PRIVILEGES;

Once finished, kill all MySQL processes and start the service again:

ps aux | grep mysql
sudo kill -9 [pid]
sudo service mysql start

Done, you have reset the root password! Make sure to keep it safe this time around!

See something wrong in this tutorial? Please don’t hesitate to message me through the comments or the contact page.

Categories: Web Technologies

Understanding why Semantic HTML is important, as told by TypeScript

CSS-Tricks - Wed, 08/15/2018 - 12:43

What a great technological analogy by Mandy Michael. A reminder that TypeScript...

makes use of static typing so, for example, you can give your variables a type when you write your code and then TypeScript checks the types at compile time and will throw an error if the variable is given a value of a different type.

In other words, you have a variable age that you declare to be a number, the value for age has to stay a number otherwise TypeScript will yell at you. That type checking is a valuable thing that helps thwart bugs and keep code robust.

This is the same with HTML. If you use the <div> everywhere, you aren’t making the most of the language. Because of this, it’s important that you actively choose what the right element is and don’t just use the default <div>.

And hey, if you're into TypeScript, it's notable it just went 3.0.

Direct Link to ArticlePermalink

The post Understanding why Semantic HTML is important, as told by TypeScript appeared first on CSS-Tricks.

Categories: Web Technologies

Creating the “Perfect” CSS System

CSS-Tricks - Wed, 08/15/2018 - 12:19

My pal Lindsay Grizzard wrote about creating a CSS system that works across an organization and all of the things to keep in mind when starting a new project:

Getting other developers and designers to use the standardized rules is essential. When starting a project, get developers onboard with your CSS, JS and even HTML conventions from the start. Meet early and often to discuss every library, framework, mental model, and gem you are interested in using and take feedback seriously. Simply put, if they absolutely hate BEM and refuse to write it, don’t use BEM. You can explore working around this with linters, but forcing people to use a naming convention they hate isn’t going to make your job any easier. Hopefully, you will be able to convince them why the extra underscores are useful, but finding a middle ground where everyone will participate in some type of system is the priority.

I totally agree and the important point I noted here is that all of this work is a collaborative process and compromise is vital when making a system of styles that are scalable and cohesive. In my experience, at least, it’s real easy to walk into a room with all the rules written down and new guidelines ready to be enforced, but that never works out in the end.

Ethan Marcotte riffed on Lindsay’s post in an article called Weft and described why that’s not always a successful approach:

Broad strokes here, but I feel our industry tends to invert Lindsay’s model: we often start by identifying a technical solution to a problem, before understanding its broader context. Put another way, I think we often focus on designing or building an element, without researching the other elements it should connect to—without understanding the system it lives in.

Direct Link to ArticlePermalink

The post Creating the “Perfect” CSS System appeared first on CSS-Tricks.

Categories: Web Technologies

Capturing Data Evolution in a Service-Oriented Architecture

Planet MySQL - Wed, 08/15/2018 - 08:01
Building Airbnb’s Change Data Capture system (SpinalTap), to enable propagating & reacting to data mutations in real time. The dining hall in the San Francisco Office is always gleaming with natural sunlight!

Labor day weekend is just around the corner! Karim is amped for a well-deserved vacation. He logs in to Airbnb to start planning a trip to San Francisco, and stumbles upon a great listing hosted by Dany. He books it. A moment later, Dany receives a notification that his home has been booked. He checks his listing calendar and sure enough, those dates are reserved. He also notices the recommended daily price has increased for that time period. “Hmm, must be a lot of folks looking to visit the city over that time” he mumbles. Dany marks his listing as available for the rest of that week... All the way on the east coast, Sara is sipping tea in her cozy Chelsea apartment in New York, preparing for a business trip to her company’s HQ in San Francisco. She’s been out of luck for a while and about to take a break, when Dany’s listing pops up on her search map. She checks out the details and it looks great! She starts writing a message: “Dear Dany, I’m traveling to San Francisco and your place looks perfect for my stay…”

Adapting to data evolution has presented itself as a recurrent need for many emerging applications at Airbnb over the last few years. The above scenario depicts examples of that, where dynamic pricing, availability, and reservation workflows need to react to changes from different components in our system in near real-time. From an infrastructure perspective, designing our architecture to scale is a necessity as we continually grow both in terms of data and number of services. Yet, as part of striving towards a service-oriented architecture, an efficient manner of propagating meaningful data model mutations between microservices while maintaining a decoupled architecture that preserved data ownership boundaries was just as important.

In response, we created SpinalTap: a scalable, performant, reliable, lossless Change Data Capture service capable of detecting data mutations with low latency across different data source types, and propagating them as standardized events to downstream consumers. SpinalTap has become an integral component in Airbnb’s infrastructure and derived data processing platform, on which several critical pipelines rely. In this blog, we will present an overview of the system architecture, use cases, guarantees, and how it was designed to scale.


Change Data Capture (CDC) is a design pattern that enables capturing changes to data and notifying actors so they can react accordingly. This follows a publish-subscribe model where change to a data set is the topic of interest.


Certain high-level requirements for the system were desirable to accommodate our use cases:

  • Lossless: Zero tolerance to data loss, a requirement for on-boarding critical applications such as a stream-based accounting audit pipeline
  • Scalable: Horizontally scalable with increased load and data cluster size, to avoid recurrent re-design of the system with incremental growth
  • Performant: Changes are propagated to subscribed consumers in near real-time (sub-second)
  • Consistent: Ordering and timeline consistency are enforced to retain sequence of changes for a specific data record
  • Fault Tolerant: Highly available with a configurable degree of redundancy to be resilient to failure
  • Extensible: A generic framework that can accommodate different data source and sink types
Solutions Considered

There are several solutions promoted in literature for building a CDC system, the most referenced of which are:

  • Polling: A time-driven strategy can be used to periodically check whether any changes have been committed to the records of a data store, by keeping track of a status attribute (such as last updated or version)
  • Triggers: For storage engines that support database triggers (ex: MySQL), stored procedures triggered on row-based operations can be employed to propagate changes to other data tables in a seamless manner
  • Dual Writes: Data changes can be communicated to subscribed consumers in the application layer during the request, such as by emitting an event or scheduling an RPC after write commit
  • Audit Trail: Most data storage solutions maintain a transaction log (or changelog) to record and track changes committed to the database. This is commonly used for replication between cluster nodes, and recovery operations (such as unexpected server shutdown or failover).

Employing the database changelog for detecting changes has several desirable features: reading from the logs allows for an asynchronous, non-intrusive approach to capturing changes, as compared to trigger and polling strategies. It also supports strong consistency and ordering guarantees on commit time, and retains transaction boundary information, neither of which is achievable with dual writes. The changelog also makes it possible to replay events from a certain point in time. With this in mind, SpinalTap was designed based on this approach.
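The replay property of the changelog approach can be sketched with a toy model (illustrative only, not SpinalTap code): an append-only log of offset-stamped events, where a consumer resumes from a checkpointed offset:

```python
# Minimal sketch of a changelog: an append-only sequence of (offset, event)
# pairs. Replaying from a checkpoint is just iterating from that offset on.
class Changelog:
    def __init__(self):
        self.entries = []  # ordered by commit time

    def append(self, event):
        offset = len(self.entries)
        self.entries.append((offset, event))
        return offset

    def replay(self, from_offset):
        # Return events in commit order, starting at a checkpointed offset.
        return self.entries[from_offset:]

log = Changelog()
for event in ["insert:row1", "update:row1", "delete:row2"]:
    log.append(event)

# A consumer that checkpointed at offset 1 resumes without re-reading earlier events:
resumed = log.replay(1)
```

Because every event carries a stable position, a consumer that crashes can pick up exactly where it left off, which is the basis for the checkpointing described later.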

Architecture

High level workflow overview

At a high level, SpinalTap was designed to be a general-purpose solution that abstracts the change capture workflow, enough to be easily adaptable to different infrastructure dependencies (data stores, event bus, consumer services). The architecture comprises three main components that provide sufficient abstraction to achieve these qualities:

Source

Left: Source component diagram; Right: Source event workflow

The source represents the origin of the change event stream from a specific data store. The source abstraction can be easily extended with different data source types, as long as there is an accessible changelog to stream events from. Events parsed from the changelog are filtered, processed, and transformed to corresponding mutations. A mutation is an application layer construct that represents a single change (insert, update, or delete) to a data entity. It includes the entity values before & after the change, a globally unique identifier, transaction information, and metadata derived from the originating source event. The source is also responsible for detecting data schema evolution, and propagating the schema information accordingly with the corresponding mutations. This is important to ensure consistency when deserializing the entity values on the client side or replaying events from an earlier state.
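The mutation construct described above might be sketched as follows (a minimal illustration; field names are assumptions, not SpinalTap’s actual schema):

```python
import uuid
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Mutation:
    """A single change (insert, update, or delete) to a data entity."""
    kind: str                # "INSERT", "UPDATE", or "DELETE"
    before: Optional[dict]   # entity values before the change (None for inserts)
    after: Optional[dict]    # entity values after the change (None for deletes)
    id: str = field(default_factory=lambda: str(uuid.uuid4()))  # globally unique
    metadata: dict = field(default_factory=dict)  # txn info, source position, ...

m = Mutation(kind="UPDATE",
             before={"price": 100},
             after={"price": 120},
             metadata={"table": "listings", "source_position": 4711})
```

Carrying both the before and after images lets consumers compute diffs without a round trip back to the source database.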

Destination

Left: Destination component diagram; Right: Destination event workflow

The destination represents a sink for mutations, after they are processed and converted to a standardized event. The destination also keeps track of the last successfully published mutation, which is employed to derive the source state position to checkpoint on. The component abstracts away the transport medium and format used. At Airbnb, we employ Apache Kafka as the event bus, given its wide usage within our infrastructure. Apache Thrift is used as the data format to offer a standardized mutation schema definition and cross-language support (Ruby & Java).

A major performance bottleneck identified through benchmarking the system was mutation publishing. The situation was aggravated given our system settings were chosen to favor strong consistency over latency. To relieve the situation, we incorporated a few optimizations:

Buffered Destination: To avoid the source being blocked while waiting for mutations to be published, we employ an in-memory bounded queue to buffer events emitted from the source (consumer-producer pattern). The source would add events to the buffer while the destination is publishing the mutations. Once available, the destination would drain the buffer and process the next batch of mutations.
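The buffered-destination idea can be sketched as follows (a simplified single-threaded illustration of the consumer-producer pattern; class and method names are assumptions):

```python
from collections import deque

class BufferedDestination:
    """Bounded buffer between a source (producer) and a publisher (consumer)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = deque()
        self.published = []  # stand-in for the downstream event bus

    def add(self, mutation):
        # The source enqueues without waiting on the publish path.
        if len(self.buffer) >= self.capacity:
            raise RuntimeError("buffer full; source must back off")
        self.buffer.append(mutation)

    def drain(self):
        # Once the publisher is free, it drains and publishes the next batch.
        batch = list(self.buffer)
        self.buffer.clear()
        self.published.extend(batch)
        return batch

dest = BufferedDestination(capacity=10)
for m in ["m1", "m2", "m3"]:
    dest.add(m)
batch = dest.drain()
```

The bound on the queue is what keeps memory in check: when the buffer fills, the source must stop reading from the changelog until the publisher catches up.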

Destination Pool: For sources that display erratic, spiky behavior in incoming event rate, the in-memory buffer occasionally gets saturated, causing intermittent degradation in performance. To relieve the system from irregular load patterns, we employed application-level partitioning of the source events across a configurable set of buffered destinations managed by a thread pool. Events are multiplexed to the destinations while retaining the ordering scheme. This enabled us to achieve high throughput without compromising latency or consistency.
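The partitioning can be sketched like this (a sketch under assumed details: a stable hash routes each row key to a fixed destination, spreading load while preserving per-row order):

```python
import zlib

POOL_SIZE = 4

def partition_for(row_key: str, pool_size: int = POOL_SIZE) -> int:
    # Stable hash: the same row key always maps to the same destination.
    return zlib.crc32(row_key.encode()) % pool_size

pool = [[] for _ in range(POOL_SIZE)]  # each list stands in for one buffered destination
events = [("row1", "insert"), ("row2", "insert"), ("row1", "update"), ("row1", "delete")]
for key, op in events:
    pool[partition_for(key)].append((key, op))

# Every change to row1 landed on a single destination, in commit order:
row1_ops = [op for key, op in pool[partition_for("row1")] if key == "row1"]
```

Since all events for one row serialize through one destination, per-record ordering survives even though the pool as a whole publishes in parallel.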

Pipe

Left: Pipe component diagram; Right: Pipe manager coordinating the pipe lifecycles

The pipe coordinates the workflow between a given source and destination. It represents the basic unit of parallelism. It’s also responsible for periodically checkpointing source state, and managing the lifecycle of event streaming. In case of erroneous behavior, the pipe performs a graceful shutdown and initiates the failure recovery process. A keep-alive mechanism is employed to ensure source streaming is restarted in the event of failure, according to the last state checkpoint. This allows the system to auto-remediate intermittent failures while maintaining data integrity. The pipe manager is responsible for creating, updating, and removing pipes, as well as the pipe lifecycle (start/stop), on a given cluster node. It also ensures any changes to pipe configuration are propagated accordingly at run-time.

Cluster Management

Left: Cluster resource management; Middle: Node failure recovery; Right: Isolation with instance tagging

To achieve certain desirable architectural aspects — such as scalability, fault-tolerance, and isolation — we adopted a cluster management framework (Apache Helix) to coordinate distribution of stream processing across compute resources. This helped us achieve deterministic load balancing, and horizontal scaling with automatic redistribution of source processors across the cluster.

To promote high availability with configurable fault tolerance, each source is appointed a certain subset of cluster nodes to process event streaming. We use a Leader-Standby state model, where only one node streams events from a source at any given point, while the remaining nodes in the sub-cluster are on standby. If the leader goes down, one of the standby nodes assumes leadership.

To support isolation between source type processing, each node in the cluster is tagged with the source type(s) that can be delegated to it. Stream processing is distributed across cluster nodes while maintaining this isolation criteria.

For resolving inconsistencies from network partitions, in particular the case where more than one node assumes leadership over streaming from a specific source (split brain), we maintain a global leader epoch per source that is atomically incremented on leader transition. The leader epoch is propagated with each mutation, and inconsistencies are consequently mitigated with client-side filtering, by disregarding events that have a smaller epoch than the latest observed.
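The client-side epoch filtering might look roughly like this (an illustrative sketch, not the actual client library):

```python
def fence(stream):
    # Drop any mutation whose leader epoch is older than the highest seen,
    # so events from a stale leader (split brain) are ignored.
    latest_epoch = -1
    accepted = []
    for epoch, mutation in stream:
        if epoch < latest_epoch:
            continue  # stale leader: discard
        latest_epoch = epoch
        accepted.append(mutation)
    return accepted

# Epoch 2 takes over mid-stream; a late event from the old leader (epoch 1) is dropped.
stream = [(1, "a"), (1, "b"), (2, "c"), (1, "stale"), (2, "d")]
result = fence(stream)
```

Because the epoch only ever increases on legitimate leader transitions, a consumer needs no coordination with the cluster to detect and discard the stale leader’s events.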


Certain guarantees were essential for the system to uphold, to accommodate all downstream use cases.

Data Integrity: The system maintains an at-least-once delivery guarantee, where any change to the underlying data store is eventually propagated to clients. This dictates that no event present in the changelog is permanently lost, and each is delivered within the time window specified by our SLA. We also ensure there is no data corruption incurred, and that mutation content maintains parity with that of the source event.

Event Ordering: Ordering is enforced according to the defined partitioning scheme. We maintain ordering per data record (row), i.e. all changes to a specific row in a given database table will be received in commit order.

Timeline Consistency: Being consistent across a timeline demands that changes are received chronologically within a given time frame, i.e. two sequences of a given mutation set are not sent interleaved. A split brain scenario can potentially compromise this guarantee, but is mitigated with epoch fencing as explained earlier.

Validation

The SpinalTap validation framework

Arguing by design alone that SpinalTap’s guarantees cannot be breached was not sufficient; we wanted a more pragmatic, data-driven approach to validate our assumptions. To address this, we developed a continuous online end-to-end validation pipeline, responsible for validating the mutations received on the consumer side against the source of truth, and asserting that no erroneous behavior is detected in both pre-production and production environments.

To achieve a reliable validation workflow, consumed mutations are partitioned and stored on local disk, with the same partitioning scheme applied to source events. Once all mutations corresponding to events of a partition are received, the partition file is validated against the originating source partition through a list of tests that assert the guarantees described earlier. For MySQL specifically, the binlog file was considered a clean partition boundary.
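The parity check can be sketched as grouping both streams with the same partitioning scheme and comparing the partitions (a simplified illustration of the idea, not the actual framework):

```python
from collections import defaultdict

def group_by_partition(records, num_partitions=2):
    # Records are (row_id, value) pairs; the same scheme is applied to both
    # source events and consumed mutations.
    parts = defaultdict(list)
    for row_id, value in records:
        parts[row_id % num_partitions].append((row_id, value))
    return dict(parts)

def partitions_match(source_events, consumed_mutations, num_partitions=2):
    # Every partition's consumed mutations must equal its source events,
    # in order -- checking integrity, ordering, and absence of data loss.
    return (group_by_partition(source_events, num_partitions)
            == group_by_partition(consumed_mutations, num_partitions))

source = [(1, "insert"), (2, "insert"), (1, "update")]
```

A mismatch in any partition flags a breach of the guarantees and triggers the rollback-to-checkpoint remediation described below.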

We set up offline integration testing in a sandbox environment to prevent any regression from being deployed to production. The validator is also employed online in production, by consuming live events for each source stream. This acts as a safeguard to detect any breaches that are not caught within our testing pipeline, and to automatically remediate by rolling back source state to a previous checkpoint. This ensures that streaming does not proceed until any issues are resolved, eventually guaranteeing consistency and data integrity.

Model Mutations

Left: Sync vs Async app workflow; Right: Propagating model mutations to downstream consumers

A shortcoming of consumer services tapping directly into SpinalTap events for a given service’s database is that the data schema is leaked, creating unnecessary coupling. Furthermore, domain logic for processing data mutations, encapsulated in the owning service, needs to be replicated in consumer services as well.

To mitigate the situation, we built a model streaming library on top of SpinalTap, which allowed services to listen to events from a service’s data store, transform them to domain model mutations, and re-inject them into the message bus. This effectively allowed data model mutations to become part of the service’s interface, and segregated the request/response cycle from asynchronous data ingestion and event propagation. It also helped decouple domain dependencies, facilitate event-driven communication, and provide performance & fault tolerance improvements to services by isolating synchronous & asynchronous application workflows.

Use Cases

SpinalTap is employed for numerous use cases within our infrastructure, the most prominent of which are:

Cache Invalidation: A common application for CDC systems is cache invalidation, where changes to the backing data store are detected by a cache invalidator service or process that consequently evicts (or updates) the corresponding cache entries. Preferring an asynchronous approach allowed us to decouple our caching mechanism from the request path, and application code that serves production traffic. This pattern is widely used amongst services to maintain consistency between the source of truth data stores and our distributed cache clusters (e.g. Memcached, Redis).
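A minimal sketch of the invalidation pattern (the key scheme and field names are hypothetical, and a plain dict stands in for the distributed cache):

```python
cache = {"listing:42": {"price": 100}}  # stand-in for Memcached/Redis

def on_mutation(mutation):
    # Evict the entry for the changed row; the next read repopulates it
    # from the source-of-truth data store.
    key = f"{mutation['table']}:{mutation['row_id']}"
    cache.pop(key, None)

on_mutation({"table": "listing", "row_id": 42, "kind": "UPDATE"})
```

The eviction happens entirely off the request path: the service that wrote the row never has to know which caches exist.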

Search Indexing: There are multiple search products at Airbnb that use real-time indexing (e.g. review search, inbox search, support ticket search). SpinalTap proved to be a good fit for building the indexing pipeline from data stores to the search backends (e.g. ElasticSearch), particularly due to its in-order and at least once delivery semantics. Services can easily consume events for the corresponding topics and convert the mutations to update the indices, which helps ensure search freshness with low latency.

Offline Processing: SpinalTap is also employed to export the online datastores to our offline big data processing systems (e.g. Hive, Airstream) in a streaming manner, which requires high throughput, low latency, and proper scalability. The system was also used historically for our database snapshot pipeline, to continuously construct backups of our online database and store them in HBase. This dramatically reduced the time to land our daily backups, and allowed for taking snapshots at a finer time granularity (ex: hourly).

Signaling: Another recurrent use case for propagating data changes in a distributed architecture is as a signaling mechanism, where dependent services can subscribe and react to data changes from another service in near real time. For example, the Availability service would block a listing’s dates by subscribing to changes from the Reservation service to be notified when a booking was made. Risk, security, payments, search, and pricing workflows are a few examples of where this pattern is employed within our ecosystem.


SpinalTap has become an integral part of our infrastructure over the last few years, and a system fueling many of our core workflows. It can be particularly useful for platforms looking for a reliable general-purpose framework that can be easily integrated with their infrastructure. At Airbnb, SpinalTap is used to propagate data mutations from MySQL, DynamoDB, and our in-house storage solution. Kafka is currently the event bus of choice, but the system’s extensibility has allowed us to consider other mediums as well (ex: Kinesis).

Lastly, we have open-sourced several of our library components, and are in the process of reviewing the remaining modules for general release as well. Contributions are more than welcome!

Yours Truly,

The SpinalTap Team (Jad Abi-Samra, Litao Deng, Zuofei Wang)

Capturing Data Evolution in a Service-Oriented Architecture was originally published in Airbnb Engineering & Data Science on Medium, where people are continuing the conversation by highlighting and responding to this story.

Categories: Web Technologies

Practical CSS Scroll Snapping

CSS-Tricks - Wed, 08/15/2018 - 07:05

CSS scroll snapping allows you to lock the viewport to certain elements or locations after a user has finished scrolling. It’s great for building interactions like this one:

Live Demo

Browser support and basic usage

Browser support for CSS scroll snapping has improved significantly since it was introduced in 2016, with Google Chrome (69+), Firefox, Edge, and Safari all supporting some version of it.

This browser support data is from Caniuse, which has more detail. A number indicates that browser supports the feature at that version and up.

Desktop: Chrome 69, Opera No, Firefox 63, IE 11*, Edge 18*, Safari 11
Mobile / Tablet: iOS Safari 11.0-11.2, Opera Mobile No, Opera Mini No, Android No, Android Chrome No, Android Firefox 60

Scroll snapping is used by setting the scroll-snap-type property on a container element and the scroll-snap-align property on elements inside it. When the container element is scrolled, it will snap to the child elements you’ve defined. In its most basic form, it looks like this:

<div class='container'>
  <section class='child'></section>
  <section class='child'></section>
  <section class='child'></section>
  ...
</div>

.container {
  scroll-snap-type: y mandatory;
}

.child {
  scroll-snap-align: start;
}

This is different to the first version of the spec, which allowed you to set snap-points manually using the repeat keyword:

.container { scroll-snap-points-y: repeat(300px); }

This method is pretty limited. Since it only allows evenly-spaced snap points, you can’t really build an interface that snaps to different-sized elements. If elements change their shape across different screen sizes, you’re also bound to run into issues.

At the time of this writing, Firefox, Internet Explorer, and Edge support the older version of the spec, while Chrome (69+) and Safari support the newer, element-based method.

You can use both methods alongside each other (if your layout allows it) to support both groups of browsers:

.container {
  scroll-snap-type: mandatory;
  scroll-snap-points-y: repeat(300px);
  scroll-snap-type: y mandatory;
}

.child {
  scroll-snap-align: start;
}

I’d argue a more flexible option is to use the element-based syntax exclusively and load a polyfill to support browsers that don’t yet support it. This is the method I’m using in the examples below.

Unfortunately, the polyfill doesn't come with a browser bundle, so it's a bit tricky to use if you're not using a build process. The easiest way around this I've found is to link to the script on bundle.run and initialize it using cssScrollSnapPolyfill() once the DOM is loaded. It’s also worth pointing out that this polyfill only supports the element-based syntax, not the repeat-method.

Parent container properties

As with any property, it’s a good idea to get familiar with the values they accept. Scroll snap properties are applied to both parent and child elements, with specific values for each. Sort of the same way flexbox and grid do, where the parent becomes a "flex" or "grid" container. In this case, the parent becomes a snap container, if you will.

Here are the properties and values for the parent container and how they work.

scroll-snap-type “mandatory” vs. “proximity”

The mandatory value means the browser has to snap to a snap point whenever the user stops scrolling. The proximity value is less strict—it means the browser may snap to a snap point if it seems appropriate. In my experience, this tends to kick in when you stop scrolling within a few hundred pixels of a snap point.

See the Pen Scroll-snap-type "Mandatory" vs "Proximity" by Max Kohler (@maxakohler) on CodePen.

In my own work, I’ve found that mandatory makes for a more consistent user experience, but it can also be dangerous, as the spec points out. Picture a scenario where an element inside a scrolling container is taller than the viewport:

If that container is set to scroll-snap-type: mandatory, it will always snap either to the top of the element or the top of the one below, making the middle part of the tall element impossible to scroll to.


scroll-padding

By default, content will snap to the very edges of the container. You can change that by setting the scroll-padding property on the container. It follows the same syntax as the regular padding property.

This can be useful if your layout has elements that could get in the way of the content, like a fixed header.
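For instance, with a fixed header of height 60px (an illustrative value), reserving that space keeps snapped content from hiding underneath it:

```css
/* Sketch: offset snap positions so a 60px fixed header
   doesn't cover the snapped element. */
.container {
  scroll-snap-type: y mandatory;
  scroll-padding-top: 60px;
}

.child {
  scroll-snap-align: start;
}
```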

Properties on the children

Now let’s move on over to the properties for child elements.


scroll-snap-align

This lets you specify which part of the element is supposed to snap to the container. It has three possible values: start, center, and end.

These are relative to the scroll direction. If you’re scrolling vertically, start refers to the top edge of the element. If you’re scrolling horizontally, it refers to the left edge. center and end follow the same principle. You can set different values for each scroll direction, separated by a space.

scroll-snap-stop “normal” vs. “always”

By default, scroll snapping only kicks in when the user stops scrolling, meaning they can skip over several snap points before coming to a stop.

You can change this by setting scroll-snap-stop: always on any child element. This forces the scroll container to stop on that element before the user can continue to scroll.

At the time of this writing no browser supports scroll-snap-stop natively, though there is a tracking bug for Chrome.
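Once support lands, usage would look something like this (a sketch based on the spec, with a hypothetical class name):

```css
/* Force the scroller to stop on this slide instead of skipping past it. */
.important-slide {
  scroll-snap-align: start;
  scroll-snap-stop: always;
}
```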

Let’s look at some examples of scroll snap in use.

Example 1: Vertical list

To make a vertical list snap to each list element only takes a few lines of CSS. First, we tell the container to snap along its vertical axis:

.container { scroll-snap-type: y mandatory; }

Then, we define the snap points. Here, we're specifying that the top of each list element is going to be a snap point:

.child { scroll-snap-align: start; }

See the Pen Vertical List by Max Kohler (@maxakohler) on CodePen.

Example 2: Horizontal slider

To make a horizontal slider, we tell the container to snap along its x-axis. We’re also using scroll-padding to make sure the child elements snap to the center of the container.

.container { scroll-snap-type: x mandatory; scroll-padding: 50%; }

Then, we tell the container which points to snap to. To center the gallery, we define the center point of each element as a snap point.

.child { scroll-snap-align: center; }

See the Pen Horizontal, different sized images by Max Kohler (@maxakohler) on CodePen.

Example 3: Vertical full screen

We can set the snap points directly on the <body> element:

body { scroll-snap-type: y mandatory; }

Then, we make each section the size of the viewport and define the top edge as a snap point:

section { height: 100vh; width: 100vw; scroll-snap-align: start; }

See the Pen Vertical Full-Screen by Max Kohler (@maxakohler) on CodePen.

Example 4: Horizontal full screen

This is the same sort of concept as the vertical version, but with the snap point on the x-axis instead.

body { scroll-snap-type: x mandatory; } section { height: 100vh; width: 100vw; scroll-snap-align: start; }

See the Pen Horizontal Full-Screen by Max Kohler (@maxakohler) on CodePen.

Example 5: 2D image grid

Scroll snapping can work in two directions at the same time. This time, we set scroll-snap-type on the container element, using both axes:

.container { scroll-snap-type: both mandatory; }

Then, we define the top-left corner of each tile as a snap point:

.tile { scroll-snap-align: start; }

See the Pen 2d Snapping by Max Kohler (@maxakohler) on CodePen.

Some thoughts on user experience

Messing with scrolling is risky business. Since it’s such a fundamental part of interacting with the web, changing it in any way can feel jarring—the term scrolljacking used to get thrown around to describe that sort of experience.

The great thing about CSS-based scroll snapping is that you're not taking direct control over the scroll position. Instead, you're just giving the browser a list of positions to snap to in a way that is appropriate to the platform, input method, and user preferences. This means a scrolling interface you build is going to feel just like the native interface (i.e., using the same animations, etc.) on whatever platform it's viewed on.

To me, this is the key advantage of CSS scroll snapping over JavaScript libraries that offer similar functionality.

This works fairly well in my experience, especially on mobile. Maybe this is because scroll snapping is already part of the native UI on mobile platforms. (Picture the home screens on iOS and Android—they’re essentially horizontal sliders with snap points.) The interaction on Chrome on Android is particularly nice because it feels like regular scrolling, but the viewport always happens to come to a stop at a snap point:

There’s definitely some fancy maths going on to make this happen. Thanks to CSS scroll snapping, we’re getting it for free.

Of course, we shouldn’t start throwing snap points onto everything. Things like article pages do just fine without them. But I think they can be a nice enhancement in the right situation—image galleries, slideshows seem like good candidates, but maybe there’s potential beyond that.


If done thoughtfully, scroll snapping can be a useful design tool. CSS snap points allow you to hook into the browser’s native scrolling interaction, so your interface feels seamless and smooth. With a JavaScript API potentially on the horizon, these are going to become even more powerful. Still, a light touch is probably the way to go.

The post Practical CSS Scroll Snapping appeared first on CSS-Tricks.

Categories: Web Technologies

Interview with Adam Culp - Voices of the ElePHPant

Planet PHP - Wed, 08/15/2018 - 05:00

@AdamCulp Show Notes


This episode is sponsored by


The post Interview with Adam Culp appeared first on Voices of the ElePHPant.

Categories: Web Technologies


MySQL Connector/NET 6.10.8 has been released

Planet MySQL - Tue, 08/14/2018 - 23:26

Dear MySQL users,

MySQL Connector/NET 6.10.8 is the fifth GA release, with .NET Core
now supporting various connection-string options and MySQL 8.0 server features.

To download MySQL Connector/NET 6.10.8 GA, see the “Generally Available
(GA) Releases” tab at http://dev.mysql.com/downloads/connector/net/

Changes in Connector/NET 6.10.8 (2018-08-14, General Availability)

Functionality Added or Changed

* Optimistic locking for database-generated fields was
improved with the inclusion of the [ConcurrencyCheck,
attribute. Thanks to Tony Ohagan for the patch. (Bug
#28095165, Bug #91064)

* All recent additions to .NET Core 2.0 are now compatible
with the Connector/NET 6.10 implementation.

* With the inclusion of the Functions.Like extended method,
scalar-function mapping, and table-splitting
capabilities, Entity Framework Core 2.0 is fully
supported.
Bugs Fixed

* EF Core: An invalid syntax error was generated when a new
property (defined as numeric, has a default value, and is
not a primary key) was added to an entity that already
contained a primary-key column with the AUTO_INCREMENT
attribute. This fix validates that the entity property
(column) is a primary key first before adding the
attribute. (Bug #28293927)

* EF Core: The implementation of some methods required to
scaffold an existing database were incomplete. (Bug
#27898343, Bug #90368)

* The Entity Framework Core implementation did not render
accented characters correctly on bases with different
UTF-8 encoding. Thanks to Kleber kleberksms for the
patch. (Bug #27818822, Bug #90316)

* The Microsoft.EntityFrameworkCore assembly (with EF Core
2.0) was not loaded and the absence generated an error
when the application project was built with any version
of .NET Framework. This fix ensures the following combinations:

+ EF Core 1.1 with .NET Framework 4.5.2 only

+ EF Core 2.0 with .NET Framework 4.6.1 or later
(Bug #27815706, Bug #90306)

* Attempts to create a new foreign key from within an
application resulted in an exception when the key was
generated by a server in the MySQL 8.0 release series.
(Bug #27715069)

* A variable of type POINT when used properly within an
application targeting MySQL 8.0 generated an SQL syntax
error. (Bug #27715007)

* The case-sensitive lookup of field ordinals was
initialized using case-insensitive comparison logic. This
fix removes the original case-sensitive lookup. (Bug
#27285641, Bug #88950)

* The TreatTinyAsBoolean connection option was ignored when
the MySqlCommand.Prepare() method was called. (Bug
#27113566, Bug #88472)

* The MySql.Data.Types.MySqlGeometry constructor called
with an array of bytes representing an empty geometry
collection generated an ArgumentOutOfRangeException
exception, rather than creating the type as expected.
Thanks to Peet Whittaker for the patch. (Bug #26421346,
Bug #86974)

* Slow connections made to MySQL were improved by reducing
the frequency and scope of operating system details
required by the server to establish and maintain a
connection. (Bug #22580399, Bug #80030)

* All columns of type TINYINT(1) stopped returning the
expected Boolean value after the connector encountered a
NULL value in any column of this type. Thanks to David
Warner for the patch. (Bug #22101727, Bug #78917)

Nuget packages are available at:


Enjoy and thanks for the support!

On Behalf of the MySQL/Oracle Release Engineering Team,
Hery Ramilison

Categories: Web Technologies