
MySQL 8.0 and keywords

Planet MySQL - Fri, 01/04/2019 - 12:58

As you know, MySQL uses a number of keywords, and some of them are also reserved.

Let’s have a look at how to deal with them:

mysql> create table WRITE (id int auto_increment primary key, varying varchar(10), than int);
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your
MySQL server version for the right syntax to use near 'WRITE (id int auto_increment primary key,
varying varchar(10), than int)' at line 1

OK, it seems WRITE is a keyword I cannot use as a table name. I then have two choices:

  • rename the table to something else like WRITE_TBL
  • use back-ticks (`) around the table name, like `WRITE`

Let’s use the first option:

mysql> create table WRITE_TBL (id int auto_increment primary key, varying varchar(10), than int);
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your
MySQL server version for the right syntax to use near 'varying varchar(10), than int)' at line 1

We get a second error; this time MySQL is not happy with “varying“.

Let’s modify it, using the second option:

mysql> create table WRITE_TBL (id int auto_increment primary key, `varying` varchar(10), than int);
Query OK, 0 rows affected (2.34 sec)

It worked! However, I am sure that “than” is also a keyword, but it is not reserved.

Of course, it’s not very convenient to check the manual each time you want to verify a keyword. Additionally, more keywords appear with new releases. That was the case with MySQL 8.0, where 70 new keywords were added!

That’s why MySQL also provides an Information_Schema table with all the keywords.

mysql> select count(*) from information_schema.keywords;
+----------+
| count(*) |
+----------+
|      679 |
+----------+
1 row in set (0.10 sec)

And we can check the number of reserved keywords:

mysql> select count(*) from information_schema.keywords where reserved;
+----------+
| count(*) |
+----------+
|      262 |
+----------+
1 row in set (0.01 sec)

And of course, we can verify for “than”:

mysql> select * from information_schema.keywords where word like 'than';
+------+----------+
| WORD | RESERVED |
+------+----------+
| THAN |        0 |
+------+----------+
1 row in set (0.03 sec)

Indeed, it’s a keyword but not reserved.

In summary, yes, there are many keywords in MySQL, and almost 40% of them are reserved. But it’s very easy to verify them using Information_Schema, or to quote them using back-ticks (though I don’t recommend that, and I encourage you to avoid keywords in your schemas).


MySQL Performance Cheat Sheet

Planet MySQL - Fri, 01/04/2019 - 09:32

MySQL is extensive and has lots of areas to optimize and tweak for the desired performance. Some changes can be performed dynamically, others require a server restart. It is pretty common to find a MySQL installation with the default configuration, even though the defaults may not be appropriate for your workload and setup.

Here are the key areas in MySQL, which I have taken from different expert sources in the MySQL world, as well as our own experiences here at Severalnines. This blog will serve as your cheat sheet to tune performance and make your MySQL great again :-)

Let’s take a look at these key areas in MySQL.

System Variables

MySQL has lots of variables you can consider changing. Some variables are dynamic, which means they can be set using the SET statement. Others require a server restart after they are set in the configuration file (e.g. /etc/my.cnf, /etc/mysql/my.cnf). Below, I’ll go over the variables that are most commonly tuned to optimize the server.
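
For example, here is a minimal sketch of both approaches (the variable choices and values are illustrative, not recommendations):

-- Dynamic variable: takes effect immediately, no restart required
SET GLOBAL slow_query_log = ON;
-- Session scope: affects only the current connection
SET SESSION sort_buffer_size = 2 * 1024 * 1024;
-- Verify what is currently in effect
SHOW VARIABLES LIKE 'sort_buffer_size';

Keep in mind that SET GLOBAL changes are lost on restart, so persist them in the configuration file as well.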

sort_buffer_size

This variable controls how large your filesort buffer is: whenever a query needs to sort rows, the value of this variable limits the memory that can be allocated for the sort. Take note that this variable is applied on a per-query (or per-connection) basis, which means it can become memory hungry if you set it high and have multiple connections that require sorting. You can monitor your needs by checking the global status variable Sort_merge_passes: if this value is large, you should consider increasing sort_buffer_size; otherwise, keep it at a moderate level. If you set this too low, or if you have large queries to process, sorting rows can be slower than expected because data is retrieved randomly, causing disk dives and performance degradation. However, it is best to fix your queries first. If your application is designed to pull large result sets that require sorting, it is more efficient to use a query-caching tool like Redis. By default, in MySQL 8.0, the value is 256 KiB. Raise it only when you have queries that make heavy use of sorting.
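
A quick way to do that check, as a sketch:

SHOW GLOBAL STATUS LIKE 'Sort_merge_passes';
SHOW VARIABLES LIKE 'sort_buffer_size';

If Sort_merge_passes grows steadily while the workload runs, sorts are spilling to disk and a moderately larger sort_buffer_size may help.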

read_buffer_size

The MySQL documentation mentions that for each request performing a sequential scan of a table, a read buffer is allocated. The read_buffer_size system variable determines the buffer size. It is primarily useful for MyISAM, but it affects other storage engines as well. For MEMORY tables, it is used to determine the memory block size.

Basically, each thread that does a sequential scan of a MyISAM table allocates a buffer of this size (in bytes) for each table it scans. It applies to other storage engines (including InnoDB) as well, so it’s helpful for queries that sort rows using ORDER BY and cache their results in a temporary file. If you do many sequential scans, bulk inserts into partitioned tables, or caching of results of nested queries, consider increasing its value. The value of this variable should be a multiple of 4KB; if it is set to a value that is not, it will be rounded down to the nearest multiple of 4KB. Take into account that setting this to a higher value will consume a large chunk of your server’s memory. I suggest not changing it without proper benchmarking and monitoring of your environment.

read_rnd_buffer_size

This variable deals with reading rows in sorted order following a key-sorting operation: the rows are read through this buffer to avoid disk seeks. Per the documentation, when reading rows in an arbitrary sequence, or from a MyISAM table in sorted order following a key-sorting operation, the rows are read through this buffer (whose size this variable determines) to avoid disk seeks. Setting the variable to a large value can improve ORDER BY performance by quite a lot. However, this buffer is allocated for each client, so you should not set the global variable to a large value. Instead, change the session variable only from within those clients that need to run large queries. Take into account that this does not apply the same way to MariaDB, especially when taking advantage of MRR: MariaDB uses mrr_buffer_size, while MySQL uses read_rnd_buffer_size.

join_buffer_size

By default, the value is 256K. This is the minimum size of the buffer used for plain index scans, range index scans, and joins that do not use indexes and thus perform full table scans. It is also used by the BKA optimization (which is disabled by default). Increase its value to get faster full joins when adding indexes is not possible, though be careful of memory issues if you set it too high. Remember that one join buffer is allocated for each full join between two tables; for a complex join between several tables for which indexes are not used, multiple join buffers might be necessary. It is best left low globally and set high in sessions (by using SET SESSION syntax) that require large full joins, as sketched below. On 64-bit platforms, Windows truncates values above 4GB to 4GB-1 with a warning.
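
A minimal sketch of that session-scoped approach (the 64M value is illustrative):

SET SESSION join_buffer_size = 64 * 1024 * 1024;
-- ... run the large full join here ...
SET SESSION join_buffer_size = DEFAULT;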

max_heap_table_size

This is the maximum size in bytes to which user-created MEMORY tables are permitted to grow. It is relevant when your application uses MEMORY storage engine tables. Setting the variable while the server is active has no effect on existing tables unless they are recreated or altered. This variable works in conjunction with tmp_table_size to limit the size of internal in-memory tables (which differ from tables created explicitly with Engine=MEMORY, to which only max_heap_table_size applies): the smaller of the two values is applied.

tmp_table_size

This is the largest size for internal in-memory temporary tables (not explicit MEMORY tables), although if max_heap_table_size is smaller, the lower limit applies. If an in-memory temporary table exceeds the limit, MySQL automatically converts it to an on-disk temporary table. Increase the value of tmp_table_size (and max_heap_table_size if necessary) if you do many advanced GROUP BY queries and have large available memory. You can compare the number of internal on-disk temporary tables created to the total number of internal temporary tables created by comparing the values of the Created_tmp_disk_tables and Created_tmp_tables status variables, as shown below. In ClusterControl, you can monitor this via Dashboard -> Temporary Objects graph.
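
For example, a sketch of that comparison:

SHOW GLOBAL STATUS WHERE Variable_name IN
  ('Created_tmp_disk_tables', 'Created_tmp_tables');
-- A high Created_tmp_disk_tables / Created_tmp_tables ratio suggests raising
-- tmp_table_size and max_heap_table_size (and reviewing the offending queries).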

table_open_cache

You can increase the value of this variable if you have a large number of tables that are frequently accessed in your data set. The value indicates the maximum number of tables the server can keep open in any one table cache instance. Increasing this value increases the number of file descriptors that mysqld requires, so you should also check your open_files_limit value, as well as the SOFT and HARD limits set in your *nix operating system. You can determine whether you need to increase the table cache by checking the Opened_tables status variable. If the value of Opened_tables is large and you do not use FLUSH TABLES often (which just forces all tables to be closed and reopened), then you should increase the value of table_open_cache. If you have a small value for table_open_cache and a high number of tables are frequently accessed, this can affect the performance of your server. If you notice many entries in the MySQL processlist with status “Opening tables” or “Closing tables”, then it’s time to adjust the value of this variable, but take note of the caveat mentioned earlier. In ClusterControl, you can check this under Dashboards -> Table Open Cache Status or Dashboards -> Open Tables.
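
A quick check, as a sketch:

SHOW GLOBAL STATUS LIKE 'Opened_tables';
SELECT @@table_open_cache, @@open_files_limit;
-- A steadily growing Opened_tables counter suggests the cache is too small.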

table_open_cache_instances

Setting this variable helps improve scalability and performance by reducing contention among sessions. The value you set here limits the number of open tables cache instances. The open tables cache can be partitioned into several smaller cache instances of size table_open_cache / table_open_cache_instances. A session needs to lock only one instance to access it for DML statements. This segments cache access among instances, permitting higher performance for operations that use the cache when there are many sessions accessing tables. (DDL statements still require a lock on the entire cache, but such statements are much less frequent than DML statements.) A value of 8 or 16 is recommended on systems that routinely use 16 or more cores.

table_definition_cache

This caches table definitions, i.e., where CREATE TABLE metadata is cached to speed up the opening of tables, with one entry per table. It is reasonable to increase the value if you have a large number of tables. The table definition cache takes less space and does not use file descriptors, unlike the normal table cache. Peter Zaitsev of Percona suggests trying the formula below:

The number of user-defined tables + 10% unless 50K+ tables

But take note that the default value is based on the following formula, capped to a limit of 2000:

MIN(400 + table_open_cache / 2, 2000)

So if you have a larger number of tables than the default covers, it’s reasonable to increase the value. Take into account that with InnoDB, this variable is used as a soft limit on the number of open table instances in the data dictionary cache; the LRU mechanism kicks in once the count exceeds the current value of this variable. The limit helps address situations in which significant amounts of memory would otherwise be used to cache rarely used table instances until the next server restart. However, parent and child table instances with foreign key relationships are not placed on the LRU list, so the total can exceed the limit defined by table_definition_cache; such instances are not subject to LRU eviction.

Additionally, table_definition_cache defines a soft limit on the number of InnoDB file-per-table tablespaces that can be open at one time, which is also controlled by innodb_open_files; if both are set, the higher setting is used. If neither variable is set, table_definition_cache, which has a higher default value, is used. If the number of open tablespace file handles exceeds the limit defined by table_definition_cache or innodb_open_files, the LRU mechanism searches the tablespace file LRU list for files that are fully flushed and are not currently being extended. This process is performed each time a new tablespace is opened. If there are no “inactive” tablespaces, no tablespace files are closed. So keep this in mind.
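
As a worked example of the default formula: with table_open_cache = 4000 (the MySQL 8.0 default), MIN(400 + 4000 / 2, 2000) = MIN(2400, 2000) = 2000. You can inspect the related settings on your own server like this:

SELECT @@table_open_cache, @@table_definition_cache, @@innodb_open_files;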

max_allowed_packet

This is the per-connection maximum size of an SQL query or returned row. The value was last increased in MySQL 5.6; in MySQL 8.0 (at least as of 8.0.3), the default value is 64 MiB. You might consider adjusting this if you have large BLOB rows that need to be pulled out (or read); otherwise, you can leave the default in 8.0. In older versions the default is 4 MiB, so take care of that in case you encounter an ER_NET_PACKET_TOO_LARGE error. The largest possible packet that can be transmitted to or from a MySQL 8.0 server or client is 1GB.
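
If you do hit ER_NET_PACKET_TOO_LARGE, raising the limit is straightforward; a sketch (the 256M value is illustrative):

SET GLOBAL max_allowed_packet = 256 * 1024 * 1024;
-- Existing sessions keep the old value; clients must reconnect to pick up the
-- new one. Persist the setting in my.cnf so it survives a restart.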

skip_name_resolve

The MySQL server handles incoming connections via hostname resolution. By default, MySQL does not disable hostname resolution, which means it performs DNS lookups; if DNS is slow, this can cause awful performance for your database. Consider turning this option on if you do not need DNS resolution, and take advantage of the performance improvement of skipping DNS lookups. Take into account that this variable is not dynamic, so a server restart is required if you set it in your MySQL config file. Alternatively, you can start the mysqld daemon with the --skip-name-resolve option to enable it.

max_connections

This is the number of permitted connections for your MySQL server. If you run into the ‘Too many connections’ error, you might consider setting this higher. The default value of 151 often isn’t enough, especially on a production database with ample server resources (do not waste your server resources, especially if it’s a dedicated MySQL server). However, you must have enough file descriptors, otherwise you will run out of them; in that case, consider adjusting the SOFT and HARD limits of your *nix operating system and setting a higher value for open_files_limit in MySQL (5000 is the default limit). Take into account that it is very common for applications not to close connections to the database correctly, and setting a high max_connections can result in an unresponsive server or high load. Using a connection pool at the application level can help resolve this.
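
A sketch of sizing this from observed usage:

SHOW GLOBAL STATUS LIKE 'Max_used_connections';
-- If Max_used_connections approaches max_connections, raise it (dynamic):
SET GLOBAL max_connections = 500;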

thread_cache_size

This is the cache to prevent excessive thread creation. When a client disconnects, the client's threads are put in the cache if there are fewer than thread_cache_size threads there. Requests for threads are satisfied by reusing threads taken from the cache if possible, and only when the cache is empty is a new thread created. This variable can be increased to improve performance if you have a lot of new connections. Normally, this does not provide a notable performance improvement if you have a good thread implementation. However, if your server sees hundreds of connections per second you should normally set thread_cache_size high enough so that most new connections use cached threads. By examining the difference between the Connections and Threads_created status variables, you can see how efficient the thread cache is. Using the formula stated in the documentation, 8 + (max_connections / 100) is good enough.
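
You can compute that efficiency yourself; a sketch:

SHOW GLOBAL STATUS WHERE Variable_name IN ('Connections', 'Threads_created');
-- Threads_created / Connections is the cache miss rate; close to zero is good.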

query_cache_size

For some setups, this variable is their worst enemy. For systems experiencing high load and busy with reads, this variable will bog you down. There have been well-tested benchmarks, e.g. by Percona. This variable must be set to 0, along with query_cache_type = 0, to turn it off. The good news is that in MySQL 8.0 the MySQL team has removed it entirely, as this variable can cause real performance issues. I have to agree with their blog that it is unlikely to improve the predictability of performance. If you need query caching, I suggest using Redis or ProxySQL.
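
On MySQL 5.7 and earlier (the query cache is gone in 8.0), turning it off looks like this:

SET GLOBAL query_cache_size = 0;
SET GLOBAL query_cache_type = 0;
-- Also set both to 0 in my.cnf so the cache stays disabled after a restart.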

Storage Engine - InnoDB

InnoDB is an ACID-compliant storage engine with many features to offer, along with foreign key support (Declarative Referential Integrity). There is a lot that could be said here, but these are the key variables to consider for tuning:

innodb_buffer_pool_size

This variable acts like the key buffer of MyISAM, but it has much more to offer. Since InnoDB relies heavily on the buffer pool, consider setting this value to typically 70%-80% of your server’s memory. It is also favorable to have more memory than your data set, so you can set the buffer pool larger than your data, but not by too much. In ClusterControl, this can be monitored using our Dashboards -> InnoDB Metrics -> InnoDB Buffer Pool Pages graph. You may also monitor this with SHOW GLOBAL STATUS using the Innodb_buffer_pool_pages* variables.
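
Since MySQL 5.7 the buffer pool can even be resized online; a sketch (the 8G value is illustrative):

SET GLOBAL innodb_buffer_pool_size = 8 * 1024 * 1024 * 1024;
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_resize_status';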

innodb_buffer_pool_instances

For concurrent workloads, setting this variable can improve concurrency and reduce contention, as different threads read from and write to cached pages. innodb_buffer_pool_instances must lie between 1 (minimum) and 64 (maximum). Each page that is stored in or read from the buffer pool is assigned to one of the buffer pool instances randomly, using a hashing function. Each buffer pool manages its own free lists, flush lists, LRUs, and all other data structures connected to a buffer pool, and is protected by its own buffer pool mutex. Take note that this option takes effect only when innodb_buffer_pool_size >= 1GiB, and its size is divided among the buffer pool instances.

innodb_log_file_size

This variable sets the size of each log file in a log group. The combined size of log files (innodb_log_file_size * innodb_log_files_in_group) cannot exceed a maximum value that is slightly less than 512GB. According to Vadim, a bigger log file size is better for performance, but it has a drawback (a significant one) that you need to worry about: the recovery time after a crash. You need to balance recovery time in the rare event of a crash against maximizing throughput during peak operations; a much bigger log file can translate to a 20x longer crash recovery process!

To elaborate: a larger value is good for the InnoDB transaction logs and is crucial for good, stable write performance. The larger the value, the less checkpoint flush activity is required in the buffer pool, saving disk I/O. However, the recovery process is pretty slow if your database was shut down abnormally (crashed or killed, whether by OOM or by accident). Ideally, you can use 1-2GiB in production, but of course you can adjust this. Benchmarking these changes can be a great advantage to see how they perform, especially after a crash.

innodb_log_buffer_size

To save disk I/O, InnoDB writes change data into its log buffer, whose size is set by innodb_log_buffer_size, with a default value of 8MiB. This is beneficial especially for large transactions, as the log of changes does not need to be written to disk before transaction commit. If your write traffic is very high (inserts, deletes, updates), making the buffer larger saves disk I/O.

innodb_flush_log_at_trx_commit

When innodb_flush_log_at_trx_commit is set to 1, the log buffer is flushed to the log file on disk on every transaction commit; this provides maximum data integrity, but it also has a performance impact. Setting it to 2 means the log buffer is flushed to the OS file cache on every transaction commit. The value 2 is optimal and improves performance if you can relax your ACID requirements and can afford to lose the transactions of the last second or two in case of an OS crash.
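
For example, relaxing durability is a one-liner; only do this where losing up to a second or two of transactions is acceptable:

SET GLOBAL innodb_flush_log_at_trx_commit = 2;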

innodb_thread_concurrency

With improvements to the InnoDB engine, it is recommended to let the engine control concurrency by keeping this at its default value (which is zero). If you see concurrency issues, you can tune this variable; a recommended value is 2 times the number of CPUs plus the number of disks. It is a dynamic variable, which means it can be set without restarting the MySQL server.

innodb_flush_method

This variable must be tried and tested to determine what fits your hardware best. If you are using a RAID controller with a battery-backed cache, O_DIRECT helps relieve I/O pressure: direct I/O is not cached, so it avoids double buffering between the buffer pool and the filesystem cache. If your disks are on a SAN, O_DSYNC might be faster for a read-heavy workload with mostly SELECT statements.

innodb_file_per_table

innodb_file_per_table is ON by default since MySQL 5.6. This is usually recommended, as it avoids a huge shared tablespace and allows you to reclaim space when you drop or truncate a table. Separate tablespaces also benefit Xtrabackup’s partial backup scheme.

innodb_max_dirty_pages_pct

This attempts to keep the percentage of dirty pages under control; before the InnoDB plugin, this was really the only way to tune dirty buffer flushing. However, I have seen servers with 3% dirty buffers that are hitting their max checkpoint age. The way this increases dirty buffer flushing also doesn’t scale well on high I/O subsystems; it effectively just doubles the dirty buffer flushing per second when the percentage of dirty pages exceeds this amount.

innodb_io_capacity

This setting, in spite of all our grand hopes that it would allow InnoDB to make better use of our I/O in all operations, simply controls the amount of dirty page flushing per second (and other background tasks like read-ahead). Make this bigger and you flush more per second. This does not adapt; it simply performs that many IOPS every second if there are dirty buffers to flush. It will effectively eliminate any optimization of I/O consolidation if you have a low enough write workload (that is, dirty pages get flushed almost immediately; we might be better off without a transaction log in this case). It can also quickly starve data reads and writes to the transaction log if you set it too high.

innodb_write_io_threads

This controls how many threads can have writes in progress to the disk. I’m not sure why this is still useful if you can use Linux native AIO. It can also be rendered useless by filesystems that don’t allow parallel writes to the same file by more than one thread (particularly if you have relatively few tables and/or use the global tablespaces).

innodb_adaptive_flushing

This specifies whether to dynamically adjust the rate of flushing dirty pages in the InnoDB buffer pool based on the workload. Adjusting the flush rate dynamically is intended to avoid bursts of I/O activity. It is enabled by default. When enabled, this variable tries to be smarter about flushing more aggressively based on the number of dirty pages and the rate of transaction log growth.

innodb_dedicated_server

This variable is new in MySQL 8.0; it is applied globally and requires a MySQL restart, since it’s not a dynamic variable. As the documentation states, it should be enabled only if your MySQL is running on a dedicated server; do not enable it on a shared host or one that shares system resources with other applications. When this is enabled, InnoDB automatically configures innodb_buffer_pool_size, innodb_log_file_size, and innodb_flush_method based on the amount of memory detected. The only downside is that you cannot apply your own values to those variables.

Storage Engine - MyISAM

key_buffer_size

Since InnoDB is now the default storage engine of MySQL, the default for key_buffer_size can probably be decreased unless you are using MyISAM productively as part of your application (but who uses MyISAM in production now?). I would suggest setting it to perhaps 1% of RAM, or 256 MiB at the start if you have larger memory, and dedicating the remaining memory to your OS cache and the InnoDB buffer pool.

Other Provisions For Performance

slow_query_log

Of course, this variable does not by itself boost your MySQL server, but it helps you analyze slow-performing queries. The value can be set to 0 or OFF to disable logging, or to 1 or ON to enable it. The default value depends on whether the --slow_query_log option is given. The destination for log output is controlled by the log_output system variable; if that value is NONE, no log entries are written even if the log is enabled. You can set the filename or destination of the slow query log file with the slow_query_log_file variable.

long_query_time

If a query takes longer than this many seconds, the server increments the Slow_queries status variable. If the slow query log is enabled, the query is logged to the slow query log file. This value is measured in real time, not CPU time, so a query that is under the threshold on a lightly loaded system might be above it on a heavily loaded one. The minimum and default values of long_query_time are 0 and 10, respectively. Note also that if min_examined_row_limit is set > 0, queries that examine fewer rows than that value are not logged, even if they take too long.
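
Putting the slow query log settings together, a sketch (the threshold and file path are illustrative):

SET GLOBAL slow_query_log = ON;
SET GLOBAL long_query_time = 1;
SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log';
SET GLOBAL min_examined_row_limit = 0;  -- log regardless of rows examined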

For more info on tuning your slow query logging, check the documentation here.

sync_binlog

This variable controls how often MySQL syncs binlogs to disk. By default (>=5.7.7), it is set to 1, which means the binary log is synced to disk before transactions are committed. However, this imposes a negative impact on performance due to the increased number of writes. It is the safest setting if you want strict ACID compliance along with your slaves. Alternatively, you can set it to 0 to disable disk synchronization and just rely on the OS to flush the binary log to disk from time to time. Setting it higher than 1 means the binlog is synced to disk after N binary log commit groups have been collected, where N > 1.

Dump/Restore Buffer Pool

It is a pretty common thing that your production database needs to warm up after a cold start/restart. Dumping the current buffer pool before a restart saves its contents, and once the server is back up, the contents are loaded back into the buffer pool. This avoids the need to warm up your database caches from scratch. Note that this feature was introduced in MySQL 5.6, though Percona Server 5.5 already had it, just in case you wonder. To enable it, set both innodb_buffer_pool_dump_at_shutdown = ON and innodb_buffer_pool_load_at_startup = ON.
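
Besides the shutdown/startup pair, you can also trigger a dump or load on demand, which is handy before a planned restart; a sketch:

SET GLOBAL innodb_buffer_pool_dump_now = ON;
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_dump_status';
-- After the restart (or to warm a replica), load it back:
SET GLOBAL innodb_buffer_pool_load_now = ON;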

Hardware

We’re now in 2019, and there have been a lot of new hardware improvements. Typically, MySQL does not require any specific hardware; it depends on what you need the database to do. I would expect that you are not reading this blog because you are testing whether it runs on an Intel Pentium 200 MHz.

For the CPU, faster processors with multiple cores will be optimal for MySQL, at least for recent versions since 5.6. Intel’s Xeon/Itanium processors can be expensive, but they are tested, scalable, and reliable computing platforms. Amazon has been shipping EC2 instances running on the ARM architecture. Though I personally haven’t tried running MySQL on ARM, there are benchmarks that were made years ago. Modern CPUs can scale their frequencies up and down based on temperature, load, and OS power-saving policies. However, there’s a chance that the CPU settings in your Linux OS are set to a different governor. You can check that, or set the “performance” governor, by doing the following:

echo performance | sudo tee /sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor

For memory, it is very important that it is large enough to match the size of your dataset. Ensure that you have swappiness = 1. You can check it via sysctl or the corresponding file in procfs:

$ sysctl -e vm.swappiness
vm.swappiness = 1

Or set it to a value of 1 as follows:

$ sudo sysctl vm.swappiness=1
vm.swappiness = 1

Another great thing to consider for memory management is turning off THP (Transparent Huge Pages). In the past, I recall we encountered weird issues with CPU utilization and thought they were due to disk I/O. It turned out the problem was the kernel’s khugepaged thread, which allocates memory dynamically at runtime. Moreover, when the kernel performs defragmentation, your memory can be quickly tied up as it is passed to THP. Standard HugePages memory is pre-allocated at startup and does not change during runtime. You can verify and disable THP by doing the following:

$ cat /sys/kernel/mm/transparent_hugepage/enabled
$ echo "never" > /sys/kernel/mm/transparent_hugepage/enabled

For disk, good throughput is important. RAID 10 with a battery backup unit is the best setup for a database. With the advent of flash drives offering high throughput and high read/write IOPS, make sure your setup can handle the expected disk utilization and disk I/O.

Operating System

Most production systems running MySQL run on Linux, because MySQL has been widely tested and benchmarked on Linux, and it is effectively the de facto standard for a MySQL installation. However, there’s nothing stopping you from using it on a Unix or Windows platform. It is easier if your platform has been well tested and has a wide community to help in case you run into trouble. Most setups run on RHEL/CentOS/Fedora and Debian/Ubuntu systems. In AWS, Amazon has its Amazon Linux, which I have also seen used in production.

Most important to consider in your setup is that your file system is either XFS or Ext4. There are certainly pros and cons between these two file systems, but I won’t go into the details here. Some say XFS outperforms Ext4, but there are reports as well that Ext4 outperforms XFS. ZFS is also coming into the picture as a good candidate for an alternative file system. Jervin Real (from Percona) has a great resource on this; you can check out his presentation from the ZFS conference.

External Links

https://developer.okta.com/blog/2015/05/22/tcmalloc

https://www.percona.com/blog/2012/07/05/impact-of-memory-allocators-on-mysql-performance/

https://www.percona.com/live/18/sessions/benchmark-noise-reduction-how-to-configure-your-machines-for-stable-results

https://zfs.datto.com/2018_slides/real.pdf

https://docs.oracle.com/en/database/oracle/oracle-database/12.2/ladbi/disabling-transparent-hugepages.html#GUID-02E9147D-D565-4AF8-B12A-8E6E9F74BEEA


2018 Staff Favorites

CSS-Tricks - Fri, 01/04/2019 - 08:39

Last year, the team here at CSS-Tricks compiled a list of our favorite posts, trends, topics, and resources from around the world of front-end development. We had a blast doing it and found it to be a nice recap of the industry as we saw it over the course of the year. Well, we're doing it again this year!

With that, here's everything that Sarah, Robin, Chris and I saw and enjoyed over the past year.

Sarah

Good code review

There are a few themes that cross languages, and one of them is good code review. Even though Nina Zakharenko gives talks and makes resources about Python, her talk about code review skills is especially notable because it applies across many disciplines. She’s got a great arc to this talk and I think her deck is an excellent resource, but you can take this a step even further and think critically about your own team, what works for it, and what practices might need to be reconsidered.

I also enjoyed this sarcastic tweet that brings up a good point:

When reviewing a PR, it’s essential that you leave a comment. Any comment. Even if the PR looks great and you have no substantial feedback, find something trivial to nitpick or question. This communicates intelligence and mastery, and is widely appreciated by your colleagues.

— Andrew Clark (@acdlite) May 19, 2018

I've been guilty myself of commenting on a really clean pull request just to say something, and it’s healthy for us as a community to revisit why we do things like this.

Sophie Alpert, manager of the React core team, also wrote a great post along these lines right at the end of the year called Why Review Code. It’s a good resource to turn to when you'd like to explain the need for code reviews in the development process.

The year of (creative) code

So many wonderful creative coding resources were made this year. Creative coding projects might seem frivolous but you can actually learn a ton from making and playing with them. Matt DesLauriers recently taught a course called Creative Coding with Canvas & WebGL for Frontend Masters that serves as a good example.

CodePen is always one of my favorite places to check out creative work because it provides a way to reverse-engineer the work of other people and learn from their source code. CodePen has also started coding challenges, adding yet another way to motivate creative experiments and collective learning opportunities. Marie Mosley did a lot of work to make that happen, and her work on CodePen's great newsletter is equally awesome.

You should also consider checking out Monica Dinculescu's work because she has been sharing some amazing work. There's not one, not two, but three (!) that use machine learning alone. Go see all of her Glitch projects. And, for what it's worth, Glitch is a great place to explore creative code and remix your own as well.

GitHub Actions

I think hands-down one of the most game-changing developments this year is GitHub Actions. The fact that you can manage all of your testing, deployments, and project issues as containers chained in a unified workflow is quite amazing.

Containers are a great fit for actions because of their flexibility — you’re not limited to a single kind of compute, and so much is possible! I did a writeup about GitHub Actions covering the feature in full. And, if you're digging into containers, you might find the dive repo helpful because it provides a way to explore a docker image and layer contents.

Actions are still in beta but you can request access — they’re slowly rolling out now.

UI property generators

I really like that we’re automating some of the code that we need to make beautiful front-end experiences these days. In terms of color, there’s color by Adobe, coolors, and uiGradients. There are even generators for other things, like gradients, clip-path, font pairings, and box-shadow. I am very much here for all of this. These are the kinds of tools that speed up development and allow us to use advanced effects, no matter the skill level.

Robin

Ire Aderinokun’s blog

Ire has been writing a near constant stream of wondrous articles about front-end development on her blog, Bits of Code, over the past year, and it’s been super exciting to keep up with her work. It seems like she's posting something I find useful almost every day, from basic stuff like when hover, focus and active states apply to accessibility tips like the aria-live attribute.

"The All Powerful Front-end Developer"

Chris gave a talk this year about the ways the role of front-end development is changing... and for the better. It was perhaps the most inspiring talk I saw this year. Talks about front-end stuff are sometimes pretty dry, but Chris does something else here. He covers a host of new tools we can use today to do things that previously required a ton of back-end skills. Chris even made a website all about these new tools, which are often categorized as "Serverless."

Even if none of these tools excite you, I would recommend checking out the talk – Chris’s enthusiasm is electric and made me want to roll up my sleeves and get to work on something fun, weird and exciting.

Future Fonts

The Future Fonts marketplace turned out to be a great place to find new and experimental typefaces this year. The typeface Obviously is a good example of that. But the difference between Future Fonts and other marketplaces is that you can buy fonts that are in beta and still currently under development. If you get in on the ground floor and buy a font for $10, then that shows the developer the interest in a particular font, which may spur more features for it, like new weights, widths or even OpenType features.

It’s a great way to support type designers while getting a ton of neat and experimental typefaces at the same time.

React Conf 2018

The talks from React Conf 2018 will get you up to speed with the latest React news. It’s interesting to see how React Hooks let you "use state and other React features without writing a class."

It's also worth calling out that a lot of folks really improved our Guide to React here on CSS-Tricks so that it now contains a ton of advice about how to get started and how to level up on both basic and advanced practices.

The Victorian Internet

This is a weird recommendation because The Victorian Internet is a book and it wasn’t published this year. But! It’s certainly the best book I've read this year, even if it’s only tangentially related to web stuff. It made me realize that the internet we’re building today is one that’s much older than I first expected. The book focuses on the laying of the Transatlantic submarine cables, the design of codes and the codebreakers, fraudsters that used the telegraph to find their marks, and those that used it to find the person they’d marry. I really can’t recommend this book enough.


Figma

The browser-based design tool Figma continued to release a wave of new features that makes building design systems and UI kits easier than ever before. I’ve been doing a ton of experiments with it to see how it helps designers communicate, as well as how to build more resilient components. It’s super impressive to see how much the tools have improved over the past year and I’m excited to see it improve in the new year, too.

Geoff

Buzz about third party scripts

It seems there was a lot of chatter this year about the impact of third party scripts. Whether it’s the growing ubiquity of all-things-JavaScript or whatever, this topic covers a wide and interesting ground, including performance, security and even hard costs, to name a few.

My personal favorite post about this was Paulo Mioni’s deep dive into the anatomy of a malicious script. Sure, the technical bits are a great learning opportunity, but what really makes this piece is the way it reads like a true crime novel.

Gutenberg, Gutenberg and more Gutenberg

There was so much noise leading up to the new WordPress editor that the release of WordPress 5.0 containing it felt anti-climactic. No one was hurt or injured amid plenty of concerns, though there is indeed room for improvement.

Lara Schneck and Andy Bell teamed up for a hefty seven-part series aimed at getting developers like us primed for the changes, and it’s incredible. No stone is left unturned, and it’s perfectly suitable for beginners and experts alike.

Solving real life issues with UX

I like to think that I care a lot about users in the work I do and that I do my best to empathize so that I can anticipate needs or feelings as they interact with the site or app. That said, my mind was blown away by a study Lucas Chae did on the search engine experience of people looking for a way to kill themselves. I mean, depression and suicide are topics that are near and dear to my heart, but I never thought about finding a practical solution for handling it in an online experience.

So, thanks for that, Lucas. It inspired me to piggyback on his recommendations with a few of my own. Hopefully, this is a conversation that goes well beyond 2018 and sparks meaningful change in this department.

The growing gig economy

Freelancing is one of my favorite things to talk about at great length with anyone and everyone who is willing to talk shop and that’s largely because I’ve learned a lot about it in the five years I’ve been in it.

But if you take my experience and quadruple it, then you get a treasure trove of wisdom like Adam Coti shared in his collection of freelancing lessons learned over 20 years of service.

Freelancing isn’t for everyone. Neither is remote work. Adam’s advice is what I wish I had going into this five years ago.

Browser ecology

I absolutely love the way Rachel Nabors likens web browsers to a biological ecosystem. It’s a stellar analogy and leads into the long and winding history of browser evolution.

Speaking of history, Jason Hoffman’s telling of the history about browsers and web standards is equally interesting and a good chunk of context to carry in your back pocket.

These posts were timely because this year saw a lot of movement in the browser landscape. Microsoft is dropping EdgeHTML for Blink and Google ramped up its AMP product. 2018 felt like a dizzying year of significant changes for industry giants!

Chris

All the best buzzwords: JAMstack, Serverless, & Headless

"Don’t tell me how to build a front end!" we, front-end developers, cry out. We are very powerful now. We like to bring our own front-end stack, then use your back-end data and APIs. As this is happening, we’re seeing healthy things happen like content management systems evolving to headless frameworks and focus on what they are best at: content management. We’re seeing performance and security improvements through the power of static and CDN-backed hosting. We’re seeing hosting and server usage cost reductions.

But we’re also seeing unhealthy things we need to work through, like front-end developers being spread too thin. We have JavaScript-focused engineers failing to write clean, extensible, performant, accessible markup and styles, and, on the flip side, we have UX-focused engineers feeling left out, left behind, or asked to do development work suddenly quite far away from their current expertise.

GraphQL

Speaking of powerful front-end developers, giving us front-end developers a well-oiled GraphQL setup is extremely empowering. No longer do we need to be roadblocked by waiting for an API to be finished or data to be massaged into some needed format. All the data you want is available at your fingertips, so go get and use it as you will. This makes building and iterating on the front end faster, easier, and more fun, which will lead us to building better products. Apollo GraphQL is the thing to look at here.

While front-end is having a massive love affair with JavaScript, there are plenty of front-end developers happily focused elsewhere

This is what I was getting at in my first section. There is a divide happening. It’s always been there, but with JavaScript being absolutely enormous right now and showing no signs of slowing down, people are starting to fall through the schism. Can I still be a front-end developer if I’m not deep into JavaScript? Of course. I’m not going to tell you that you shouldn’t learn JavaScript, because it’s pretty cool and powerful and you just might love it, but if you’re focused on UX, UI, animation, accessibility, semantics, layout, architecture, design patterns, illustration, copywriting, and any combination of that and whatever else, you’re still awesome and useful and always will be. Hugs. 🤗

Just look at the book Refactoring UI or the course Learn UI Design as proof there is lots to know about UI design and being great at it requires a lot of training, practice, and skill just like any other aspect of front-end development.

Shamelessly using grid and custom properties everywhere

I remember when I first learned flexbox, it was all I reached for to make layouts. I still love flexbox, but now that we have grid and the browser support is nearly just as good, I find myself reaching for grid even more. Not that it’s a competition; they are different tools useful in different situations. But admittedly, there were things I would have used flexbox for a year ago that I use grid for now and grid feels more intuitive and more like the right tool.

I'm still swooning over the amazing illustrations Lynn Fisher did for both our grid and flexbox guides.

Massive discussions around CSS-in-JS and approaches, like Tailwind

These discussions can get quite heated, but there is no ignoring the fact that the landscape of CSS-in-JS is huge, has a lot of fans, and seems to be hitting the right notes for a lot of folks. But it’s far from settled down. Libraries like Vue and Angular have their own framework-prescribed way of handling it, whereas React has literally dozens of options and a fast-moving landscape with libraries popping up and popular ones spinning down in favor of others. It does seem like the feature set is starting to settle down a little, so this next year will be interesting to watch.

Then there is the concept of atomic CSS on the other side of the spectrum, which is interesting in that it doesn’t seem to have slowed down at all either. Tailwind CSS is perhaps the hottest framework out there, gaining enough traction that Adam is going full time on it.

What could really shake this up is if the web platform itself decides to get into solving some of the problems that gave rise to these solutions. The shadow DOM already exists in Web Components Land, so perhaps there are answers there? Maybe the return of <style scoped>? Maybe new best practices will evolve that employ a single-stylesheet-per-component? Who knows.

Design systems becoming a core deliverable

There are whole conferences around them now!

I’ve heard of multiple agencies where design systems are literally what they make for their clients. Not websites, design systems. I get it. If you give a team a really powerful and flexible toolbox to build their own site with, they will do just that. Giving them some finished pages, as polished as they might be, leaves them needing to dissect those themselves and figure out how to extend and build upon them when that need inevitably arrives. I think it makes sense for agencies, or special teams, to focus on extensible component-driven libraries that are used to build sites.

Machine Learning

Stuff like this blows me away:

I made a music sequencer! In JavaScript! It even uses Machine Learning to try to match drums to a synth melody you create!

✨🎧 https://t.co/FGlCxF3W9p pic.twitter.com/TTdPk8PAwP

— Monica Dinculescu (@notwaldorf) June 28, 2018

Having open source libraries that help with machine learning and that are actually accessible for regular ol’ developers to use is a big deal.

Stuff like this will have real world-bettering implications:

🔥 I think I used machine learning to be nice to people! In this proof of concept, I’m creating dynamic alt text for screenreaders with Azure’s Computer Vision API. 💫 https://t.co/Y21AHbRT4Y pic.twitter.com/KDfPZ4Sue0

— Sarah Drasner (@sarah_edo) November 13, 2017

And this!

Well that's impressive and dang useful. https://t.co/99tspvk4lo Cool URL too.

(Remove Image Background 100% automatically – in 5 seconds – without a single click) pic.twitter.com/k9JTHK91ff

— CSS-Tricks (@css) December 17, 2018

OK, OK. One more

You gotta check out the Unicode Pattern work (more) that Yuan Chuan does. He even shared some of his work and how he does it right here on CSS-Tricks. And follow that name link to CodePen for even more. This <css-doodle> thing they have created is fantastic.

See the Pen Seeding by yuanchuan (@yuanchuan) on CodePen.


The Most Hearted of 2018

CSS-Tricks - Fri, 01/04/2019 - 08:38

We've released the Most Hearted Pens, Posts, and Collections on CodePen for 2018! Just absolutely incredible work on here — it's well worth exploring.

Remember CodePen has a three-tiered hearting system, so while the number next to the heart reflects the number of users who hearted the item, each of those could be worth 1, 2, or 3 hearts total. This list is a great place to find awesome people to follow on CodePen as well, and we're working on ways to make following people a lot more interesting in 2019.


Amazon RDS Aurora MySQL – Differences Among Editions

Planet MySQL - Fri, 01/04/2019 - 07:51

Amazon Aurora with MySQL Compatibility comes in three editions which, at the time of writing, have quite a few differences in the features they support. Make sure you don’t assume the newer Aurora 2.x supports everything in Aurora 1.x; on the contrary, right now Aurora 1.x (MySQL 5.6 based) supports most Aurora features. The serverless option was launched for this version, and it’s not based on the latest MySQL 5.7. However, the serverless option, too, has its own set of limitations.

I found a concise comparison of what is available in which Amazon Aurora edition hard to come by so I’ve created one.  The table was compiled based mostly on documentation research, so if you spot some mistakes please let me know and I’ll make a correction.

Please keep in mind, this is expected to change over time. For example Amazon Aurora 2.x was initially released without Performance_Schema support, which was enabled in later versions.

There seems to be a lag in porting Aurora features from the MySQL 5.6 compatible edition to the MySQL 5.7 compatible one – the current 2.x release does not include features introduced in Aurora 1.16 or later, as per this document.

A comparison table

| Feature                          | MySQL 5.6 Based | MySQL 5.7 Based | Serverless MySQL 5.6 Based |
|----------------------------------|-----------------|-----------------|----------------------------|
| Compatible to MySQL              | MySQL 5.6.10a   | MySQL 5.7.12    | MySQL 5.6.10a              |
| Aurora Engine Version            | 1.18.0          | 2.03.01         | 1.18.0                     |
| Parallel Query                   | Yes             | No              | No                         |
| Backtrack                        | Yes             | No              | No                         |
| Aurora Global Database           | Yes             | No              | No                         |
| Performance Insights             | Yes             | No              | No                         |
| SELECT INTO OUTFILE S3           | Yes             | Yes             | Yes                        |
| Amazon Lambda – Native Function  | Yes             | No              | No                         |
| Amazon Lambda – Stored Procedure | Yes             | Yes             | Yes                        |
| Hash Joins                       | Yes             | No              | Yes                        |
| Fast DDL                         | Yes             | Yes             | Yes                        |
| LOAD DATA FROM S3                | Yes             | Yes             | No                         |
| Spatial Indexing                 | Yes             | Yes             | Yes                        |
| Asynchronous Key Prefetch (AKP)  | Yes             | No              | Yes                        |
| Scan Batching                    | Yes             | No              | Yes                        |
| S3 Backed Based Migration        | Yes             | No              | No                         |
| Advanced Auditing                | Yes             | Yes             | No                         |
| Aurora Replicas                  | Yes             | Yes             | No                         |
| Database Cloning                 | Yes             | Yes             | No                         |
| IAM database authentication      | Yes             | Yes             | No                         |
| Cross-Region Read Replicas       | Yes             | Yes             | No                         |
| Restoring Snapshot from MySQL DB | Yes             | Yes             | No                         |
| Enhanced Monitoring              | Yes             | Yes             | No                         |
| Log Export to Cloudwatch         | Yes             | Yes             | No                         |
| Minor Version Upgrade Control    | Yes             | Yes             | Always On                  |
| Data Encryption Configuration    | Yes             | Yes             | Always On                  |
| Maintenance Window Configuration | Yes             | Yes             | No                         |

Hope this helps with selecting the Amazon Aurora edition that is right for you when it comes to supported features.


Photo by Nathan Dumlao on Unsplash


Percona XtraDB Cluster 5.6.42-28.30 Is Now Available

Planet MySQL - Fri, 01/04/2019 - 07:12

Percona announces the release of Percona XtraDB Cluster 5.6.42-28.30 (PXC) on January 4, 2019. Binaries are available from the downloads section or our software repositories.

Percona XtraDB Cluster 5.6.42-28.30 is now the current release, based on the following:

All Percona software is open-source and free.

Fixed Bugs
  • PXC-2281: Debug symbols were missing in Debian dbg packages.
  • PXC-2220: Starting two instances of Percona XtraDB Cluster on the same node could cause writing transactions to a page store instead of a galera.cache ring buffer, resulting in huge memory consumption because of retaining already applied write-sets.
  • PXC-2230: Although gcs.fc_limit=0 is not allowed as a dynamic setting (to avoid generating flow control on every message), it was still possible to set it in my.cnf due to an inconsistent check.
  • PXC-2238: Setting read_only=1 caused a race condition.

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system. As always, thanks for your continued support of Percona!


WordCamp US 2018

CSS-Tricks - Fri, 01/04/2019 - 06:43

I recently attended and had the chance to speak at WordCamp US 2018 in Nashville. I had a great time. I love conferences that bring people together around a tight theme because it's very likely you'll have something to talk about with every person there. Plus, I rather like WordPress and its community. The vibe was very centered around Gutenberg, as it was released in WordPress 5.0 just as the conference started.

Matt's State of the Word gets into all that:

I took the opportunity to give a brand new talk I've been working on called “Thinking Like a Front-End Developer”:

There were loads of wonderful people there and loads of wonderful talks. Here's a playlist!


The Elements of UI Engineering

CSS-Tricks - Fri, 01/04/2019 - 06:42

I really enjoyed this post by Dan Abramov. He defines his work as a UI engineer and I especially like what he writes about his learning experience:

My biggest learning breakthroughs weren’t about a particular technology. Rather, I learned the most when I struggled to solve a particular UI problem. Sometimes, I would later discover libraries or patterns that helped me. In other cases, I’d come up with my own solutions (both good and bad ones).

It’s this combination of understanding the problems, experimenting with the solutions, and applying different strategies that led to the most rewarding learning experiences in my life. This post focuses on just the problems.

He then breaks those problems down into a dozen different areas: consistency, responsiveness, latency, navigation, staleness, entropy, priority, accessibility, internationalization, delivery, resilience, and abstraction. This is a pretty good list of what a front-end developer has to be concerned about on a day-to-day basis, but I also feel like this is perhaps the best description of what I believe my own skills are besides being “the person who cares about component design and CSS.”

I also love what Dan has to say about accessibility:

Inaccessible websites are not a niche problem. For example, in UK disability affects 1 in 5 people. (Here’s a nice infographic.) I’ve felt this personally too. Though I’m only 26, I struggle to read websites with thin fonts and low contrast. I try to use the trackpad less often, and I dread the day I’ll have to navigate poorly implemented websites by keyboard. We need to make our apps not horrible to people with difficulties — and the good news is that there’s a lot of low-hanging fruit. It starts with education and tooling. But we also need to make it easy for product developers to do the right thing. What can we do to make accessibility a default rather than an afterthought?

This is a good reminder that front-end development is not a problem to be solved, except I reckon Dan’s post is more helpful and less snarky than my take on it.

Anywho, we all want accessible interfaces so that every browser can access our work, making use of beautiful and consistent mobile interactions, instantaneous performance, and a design system teams can utilize to click-clack components together with little-to-no effort. But these things are only possible if others recognize that UI and front-end development are worthy fields.

Direct Link to Article · Permalink

The post The Elements of UI Engineering appeared first on CSS-Tricks.

Categories: Web Technologies

Percona XtraDB Cluster 5.7.24-31.33 Is Now Available

Planet MySQL - Fri, 01/04/2019 - 05:13

Percona is glad to announce the release of Percona XtraDB Cluster 5.7.24-31.33 (PXC) on January 4, 2019. Binaries are available from the downloads section or from our software repositories.

Percona XtraDB Cluster 5.7.24-31.33 is now the current release, based on the following:

Deprecated

The following variables are deprecated starting from this release:

  • wsrep_preordered was used to turn on transparent handling of preordered replication events, applied locally first before being replicated to the other nodes in the cluster. It is no longer needed thanks to a performance fix that eliminated the lag in the asynchronous replication channel and cluster replication.
  • The use of innodb_disallow_writes to make InnoDB avoid writes during SST is deprecated in favor of the innodb_read_only variable.
  • wsrep_drupal_282555_workaround avoided the duplicate value creation caused by buggy auto-increment logic, but the corresponding bug has since been fixed.
  • Setting the session-level variable binlog_format=STATEMENT is now permitted only for pt-table-checksum; this will be addressed in upcoming releases of Percona Toolkit.

Fixed Bugs
  • PXC-2220: Starting two instances of Percona XtraDB Cluster on the same node could cause transactions to be written to a page store instead of the galera.cache ring buffer, resulting in huge memory consumption because already applied write-sets were retained.
  • PXC-2230: Although gcs.fc_limit=0 is not allowed as a dynamic setting (to avoid generating flow control on every message), it could still be set in my.cnf due to an inconsistent check.
  • PXC-2238: Setting read_only=1 caused a race condition.
  • PXC-1131: mysqld-systemd threw an error at MySQL restart if the error log did not exist on CentOS/RHEL 7.
  • PXC-2269: Although not dynamic, the pxc_encrypt_cluster_traffic variable could erroneously be changed with a SET GLOBAL statement.
  • PXC-2275: Checking the wsrep_node_address value in the wsrep_sst_common command-line parser caused the wrong variable to be parsed.

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system. As always, thanks for your continued support of Percona!


Categories: Web Technologies

One year in San Francisco as a Software Engineer - Evert Pot

Planet PHP - Thu, 01/03/2019 - 07:51

In 2017 the company I worked for in Toronto got acquired by Yelp. The software engineers in my company (including myself) were asked to move to San Francisco. At the end of 2017 we moved, and I spent most of 2018 there. With the year coming to an end, I thought it might be a good time to reflect on my time there.

As a software engineer, you’re often told that you haven’t really made it to the top unless you work in the Bay Area. I disagree with this idea, but it’s easy to see why some people feel this way.

My first trip to SF was already quite telling. Flying from Toronto airport, I noticed a higher than usual number of technology-themed tees, stickers on laptops, and black terminals. Pretty exciting! By chance, my Uber driver to downtown told me he was applying to a free machine learning course run by Google. On the way, I noticed that the billboards next to the highways were directly targeting developers.

SF felt like the Mecca of tech, but also the center of capitalism. There is a lot of money, but not a lot of wealth. Salaries are the highest I’ve personally seen, but so is the cost of living.

As a software engineer this more or less evens out (compared to Toronto, where I’ve lived and worked for a long time), but if you’re not in the business it’s rough.

Before I moved to SF, I never had the intention of moving to the Bay; it just wasn’t a goal for me. But when the opportunity arose, we felt that for the trip to be worth it we didn’t really want to lower our standard of living, and we wanted a 2-bedroom apartment and a reasonable commute. Ultimately this meant that our rent was $4,250 USD per month, and a larger portion of our salary went towards rent. Had we stayed longer, we would definitely have tried to find a cheaper place to live and save more.

Median Monthly Rent Price of a 2BD Rental. (source).

You can imagine that at those prices, it’s very difficult for many people to live in San Francisco. The cost of living has exploded in the last 30 years, and many people blame the tech industry for this.

Every now and then you’re confronted with the fact that there are people who ‘hate us’. Take, for instance, the attacks on the Google commuter bus.

Graffiti on the street in the Mission District

Personally, I can empathize with the sentiment. I don’t think that 20-something programmers in Google buses are personally responsible for the disparity, but the brandless, tinted Google buses are a powerful symbol of a new class system.

I’ve never seen so much poverty and homelessness before. There are many major streets where wearing open shoes would be a big no-no, because of used needles lying around in plain sight. Seeing people shooting up on Market Street is pretty normal.

I don’t think this is necessarily a bad thing. I imagine that in many cities this addiction and poverty is more contained to certain neighbourhoods, and it’s much easier to pretend it doesn’t exist if you don’t see it. One of the silver linings in SF is that there were lots of places to safely do drugs.

But it’s a weird juxtaposition. There were times when our engineering team would have lunch in Yerba Buena Gardens, and if you looked one way you would see electric scooters and onewheels zooming by; I was actually reminded of the Star Trek episodes that feature Starfleet Academy. I’m not a cynic, and it felt like a futuristic place to be. But look the other way and you might see someone defecating on the street. This is absolutely not hyperbole. San Francisco has a

Truncated by Planet PHP, read more at the original (another 4866 bytes)

Categories: Web Technologies

Multi-Line Inline Gradient

CSS-Tricks - Thu, 01/03/2019 - 07:17

Came across this thread:

CSS superfriends! Have you seen examples of how to do multi-line padded text like this article on @css (https://t.co/2j8p4jmaT4), but with a gradient that doesn't reset for each line? pic.twitter.com/MVPdAjxt1W

— Dan Mall (@danmall) December 3, 2018

My first thought process was:

But it turns out we need a little extra trickery to make it happen.

If a solid color is fine, then some padding combined with box-decoration-break should get the basic framework:

See the Pen Multiline Padding with box-decoration-break by Chris Coyier (@chriscoyier) on CodePen.

But a gradient on there is gonna get weird on multiple lines:

See the Pen Multiline Padding with box-decoration-break by Chris Coyier (@chriscoyier) on CodePen.

I'm gonna credit Matthias Ott, from that thread, with what looks like the perfect answer to me:

See the Pen Multiline background gradient with mix-blend-mode by Matthias Ott (@matthiasott) on CodePen.

The trick there is to set up the padded multi-line background just how you want it with pure white text and a black background. Then, a pseudo-element is set over the whole area with the gradient in the black area. Throw in mix-blend-mode: lighten; to make the gradient only appear on the black area. Nice one.
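
If you want to poke at the idea without opening the Pens, here is a rough, self-contained sketch of the technique. The class names and gradient colors are placeholders of mine, not Matthias's actual code, and the CSS is the interesting part; the few lines of JavaScript only inject it so the snippet stands alone:

// Sketch of the blend-mode trick described above (placeholder names).
const css = `
  .gradient-boxes {
    /* Assumes a light page background, so the overlay stays
       invisible outside the black padded boxes. */
    position: relative;
  }
  .gradient-boxes span {
    background: black;            /* black boxes... */
    color: white;                 /* ...with pure white text */
    padding: 0.25em 0.5em;
    -webkit-box-decoration-break: clone;
    box-decoration-break: clone;  /* keep the padding on every wrapped line */
  }
  .gradient-boxes::before {
    content: "";
    position: absolute;
    top: 0; right: 0; bottom: 0; left: 0;
    background: linear-gradient(to right, rebeccapurple, deeppink);
    /* "lighten" keeps the white text white and swaps only the
       black background for the gradient. */
    mix-blend-mode: lighten;
    pointer-events: none;
  }
`;
const style = document.createElement('style');
style.textContent = css;
document.head.appendChild(style);

With markup like <p class="gradient-boxes"><span>Multi-line padded text</span></p> on a light page, the gradient runs continuously across every wrapped line instead of restarting on each one.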

The post Multi-Line Inline Gradient appeared first on CSS-Tricks.

Categories: Web Technologies

Jetpack

CSS-Tricks - Thu, 01/03/2019 - 07:14

My favorite way to think about Jetpack is that it's a WordPress plugin that brings a whole heap of features to your site. I've documented the features that we use here on CSS-Tricks, which isn't even all of them (yet).

Some of Jetpack's features essentially connect it to the powers of WordPress.com. For example, WordPress.com has some amazing ways to optimize and serve images. They can build a service that millions of sites on WordPress.com benefit from, which really benefits everyone, including them, because optimized images reduce bandwidth costs. Then Jetpack steps in and offers that same power to you on your self-hosted WordPress site. Here's a video I did showing how that works.

Other features are things like real-time backups of your site to VaultPress, which is incredibly important to me knowing I have every bit of this site backed up and under my control.

Because your site now lives within your WordPress.com dashboard, you get features there. I quite like the analytics dashboards, which seem more accessible to me than trying to poke around Google Analytics.

Another one I really like is the ability to manage WordPress plugins from there. I'm happy doing that in the admin of my own site as well, but from here, I can tell certain plugins to auto-update, which saves me the minor hassle of doing it myself.

The post Jetpack appeared first on CSS-Tricks.

Categories: Web Technologies

Quicklink

CSS-Tricks - Thu, 01/03/2019 - 06:59

We're in the future now so, of course, we're working on ways to speed up the web with fancy new tactics above and beyond the typical make-pages-slimmer-and-cached-like-crazy techniques.

One tactic, from years ago, was InstantClick:

Before visitors click on a link, they hover over that link. Between these two events, 200 ms to 300 ms usually pass by (test yourself here). InstantClick makes use of that time to preload the page, so that the page is already there when you click.

Clever, but not as advanced as what can be done in these modern times. For instance, InstantClick doesn't take into account the fact that someone might not want to preload stuff they didn't explicitly ask for, especially if they are on a slow network.
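
For flavor, here is a minimal sketch of that hover-then-preload idea — not InstantClick's actual source, and the prefetch helper name is mine. It watches for hovers on same-origin links and adds a link rel="prefetch" during that 200-300 ms window:

// Minimal hover-preload sketch (not InstantClick's real code).
const prefetched = new Set();

function prefetch(url) {
  if (prefetched.has(url)) return; // only prefetch each URL once
  prefetched.add(url);
  const link = document.createElement('link');
  link.rel = 'prefetch';
  link.href = url;
  document.head.appendChild(link);
}

document.addEventListener('mouseover', (event) => {
  const anchor = event.target.closest('a[href]');
  // Use the 200-300 ms between hover and click to start fetching.
  if (anchor && anchor.origin === location.origin) {
    prefetch(anchor.href);
  }
});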

Addy Osmani wrote up a document calling this "predictive fetching":

... given an arbitrary entry-page, a solution could calculate the likelihood a user will visit a given next page or set of pages and prefetch resources for them while the user is still viewing their current page. This has the possibility of improving page-load performance for subsequent page visits as there's a strong chance a page will already be in the user's cache.

Just think: we could feed analytics data into the mix and let machine learning chew away at it. Addy also points to other prior attempts, like Gatsby's Link and a WordPress plugin.
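
As a toy illustration of that idea — the URLs, counts, and the 30% threshold below are all invented, nothing from Addy's document — the core is just ranking likely next pages and prefetching the probable ones:

// Invented analytics data: how often visitors of the current page
// navigated to each possible next page.
const nextPageCounts = {
  '/pricing': 120,
  '/docs': 60,
  '/about': 20,
};

const total = Object.values(nextPageCounts).reduce((a, b) => a + b, 0);

// Prefetch any page with, say, a better than 30% chance of being
// visited next (the threshold is arbitrary).
for (const [url, count] of Object.entries(nextPageCounts)) {
  if (count / total > 0.3) {
    const link = document.createElement('link');
    link.rel = 'prefetch';
    link.href = url;
    document.head.appendChild(link);
  }
}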

Another contender is Quicklink by Google:

Quicklink attempts to make navigations to subsequent pages load faster. It:

  • Detects links within the viewport (using Intersection Observer)
  • Waits until the browser is idle (using requestIdleCallback)
  • Checks if the user isn't on a slow connection (using navigator.connection.effectiveType) or has data-saver enabled (using navigator.connection.saveData)
  • Prefetches URLs to the links (using <link rel=prefetch> or XHR). Provides some control over the request priority (can switch to fetch() if supported).

No machine learning or analytics usage there, but perhaps the most clever yet. I really like the spirit of prefetching only when there is a high enough likelihood of usage; the browser is idle anyway, and the network can handle it.
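
Putting those four steps together, a stripped-down sketch might look something like the following. This follows the strategy as described rather than Quicklink's actual source, and the helper names are mine; requestIdleCallback and navigator.connection weren't supported in every browser at the time, so the sketch falls back gracefully:

// Stripped-down sketch of the Quicklink strategy described above.
const prefetched = new Set();
const idle = window.requestIdleCallback
  ? (cb) => window.requestIdleCallback(cb)
  : (cb) => setTimeout(cb, 1); // fallback where the API is missing

function prefetch(url) {
  if (prefetched.has(url)) return;
  prefetched.add(url);
  const link = document.createElement('link');
  link.rel = 'prefetch';
  link.href = url;
  document.head.appendChild(link);
}

function connectionIsFast() {
  const conn = navigator.connection;
  if (!conn) return true;                // API unavailable: assume OK
  if (conn.saveData) return false;       // respect data-saver mode
  return !/2g/.test(conn.effectiveType || ''); // skip 2g and slow-2g
}

// 1. Detect links as they enter the viewport.
const observer = new IntersectionObserver((entries) => {
  for (const entry of entries) {
    if (!entry.isIntersecting) continue;
    observer.unobserve(entry.target);
    const { href } = entry.target;
    // 2. Wait until the browser is idle, 3. check the connection,
    // 4. then prefetch.
    idle(() => {
      if (connectionIsFast()) prefetch(href);
    });
  }
});

document.querySelectorAll('a[href]').forEach((a) => observer.observe(a));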

Direct Link to Article · Permalink

The post Quicklink appeared first on CSS-Tricks.

Categories: Web Technologies

NetBeans 10 adds support for latest Java and PHP

InfoWorld JavaScript - Thu, 01/03/2019 - 03:00

Apache NetBeans 10, the latest version of the open source IDE for Java SE, PHP, and JavaScript development, is now available as a production release.

Where to download NetBeans 10

You can download NetBeans 10 from Apache’s NetBeans project page.

What’s new in NetBeans 10

Key to NetBeans 10 is enhanced support for Java Development Kit (JDK) 11 as well as capabilities for PHP and the JUnit 5 testing framework for Java.


Categories: Web Technologies

JavaScript tutorial: Get started with generative art and P5.js

InfoWorld JavaScript - Thu, 01/03/2019 - 03:00

For the last few years, I've been running into presentation after presentation on generative art, meaning art created with code. Two talks at the Strange Loop 2018 conference in September were the last pushes I needed to dig into it. When I did, though, I stumbled on a few setup issues that left me scratching my head and slowed me down. Below, I'll briefly describe what P5.js is, what some of the initial roadblocks were, and how you can jump right into making some art using P5.js and ES6. Next week, we'll look at some of the API basics and attempt to make a watercolor effect. But first, a note on creativity.

Categories: Web Technologies
