posted by Sakila The Librarian
on Mon 07 Sep 2009 12:11 UTC
A friend of mine asked me:

"I'm hoping you can help me out with something -- I'm trying to optimize a search feature. Since it uses a MySQL database, the search already uses the "LIKE" operator to get matches for a search query, but we might need something more flexible. I found a mention on MySQL's website of something called the "SOUNDS LIKE" expression that can be more flexible than "LIKE". Do you know anything about this? If you do, can you point me in a direction where I might be able to learn more about it? Thanks in advance for your help!"

My response:
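For readers following along: `SOUNDS LIKE` is real MySQL syntax, and `expr1 SOUNDS LIKE expr2` is simply shorthand for `SOUNDEX(expr1) = SOUNDEX(expr2)`. A minimal sketch (the `customers` table and `name` column are hypothetical):

```sql
-- SOUNDS LIKE matches phonetically similar strings,
-- so a misspelling such as 'Smyth' still finds 'Smith'.
SELECT name
FROM customers
WHERE name SOUNDS LIKE 'Smith';

-- Equivalent long form using SOUNDEX() directly:
SELECT name
FROM customers
WHERE SOUNDEX(name) = SOUNDEX('Smith');
```

Note that SOUNDEX is tuned for English-like words, so this is a complement to "LIKE", not a replacement.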
posted by Sakila The Librarian
on Mon 07 Sep 2009 12:11 UTC
The listing of Dynamic System Variables in the MySQL Reference Manual is supposed to show those variables you can change on the fly. "innodb_data_home_dir" is listed there as a dynamic variable, but it is not one, and trying to change it on the fly doesn't work:

mysql> set global innodb_data_home_dir="/data/mysql/data";
ERROR 1193 (HY000): Unknown system variable 'innodb_data_home_
posted by Sakila The Librarian
on Mon 07 Sep 2009 12:11 UTC
It felt like the right time for us to look back at some useful commands for table maintenance that some of us may not have mastered as well as we might like to think.

In my post about gathering index statistics, I referred to "OPTIMIZE TABLE", "ANALYZE TABLE", and "REPAIR TABLE" -- but I never explained in depth what the different commands do, and what the differences between them are. That is what I thought I would do with this post, focusing on InnoDB and MyISAM, and the differences in how they treat those commands. I will also look at different cases and see which one is right in each case.
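As a quick refresher before the details, the three statements look like this (the table name is hypothetical, and behavior varies by storage engine):

```sql
ANALYZE TABLE my_table;   -- refresh the index statistics the optimizer uses
OPTIMIZE TABLE my_table;  -- reclaim unused space and defragment the data file
REPAIR TABLE my_table;    -- attempt to fix a corrupted table (MyISAM only)
```

Each returns a small result set with a Msg_text column reporting what was done.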
posted by Sakila The Librarian
on Mon 07 Sep 2009 12:11 UTC
The Maatkit toolkit is a real blessing for the MySQL DBA. And while its documentation is pretty good, in some cases it's necessary to read it carefully a second and third time to make sure you are not missing an important piece of information. In this article I will comment on mk-table-checksum and mk-table-sync. My comments are mostly aimed at those DBAs who are considering using these utilities with medium or larger-sized databases.

--replicate

This option allows you to store the checksum results on the master, in a table that will get replicated to the slaves. Although it might seem like overhead for a simple check, it really simplifies your life, especially when used in combination with "mk-table-sync". I always use it, for the following reasons:
- You only need to run "mk-table-checksum" on the master.
- A simple query will tell you the slave status.
- When used with "--chunksize", it divides the checking
posted by Sakila The Librarian
on Mon 07 Sep 2009 12:11 UTC
After I moved back to Europe and Malta in order to set up our operations here, I was approached by an old friend of mine who wanted to know how to add a "UNIQUE" constraint and remove duplicates on a table, while keeping the newest records. He had been trying with "ALTER TABLE" but ran into problems as the older values were taken. So, to help him out, I first solved it based on his original idea, and then figured I would post a small note about the solution here.

So, let's say we have the following structure . . .

sql01 blogs> SHOW CREATE TABLE post1164\G
*************************** 1. row ***************************
       Table: post1164
Create Table: CREATE TABLE `post1164` (
  `a` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `b` varchar(5) DEFAULT NULL,
  `c` varchar(5) DEFAULT NULL,
  PRIMARY KEY (`a`)
) ENGINE=MyISAM AUTO_INCREMENT=7 DEFAULT CHARSET=latin1
1 row in set (0.00
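One common way to attack this problem (a sketch only, not necessarily the exact solution from the post; column names follow the table above) is to first delete every row for which a newer duplicate exists, then add the unique key:

```sql
-- For each (b, c) pair, keep only the row with the highest
-- auto-increment value `a` -- that is, the newest record.
DELETE t1
FROM post1164 t1
JOIN post1164 t2
  ON t1.b = t2.b
 AND t1.c = t2.c
 AND t1.a < t2.a;

-- With the older duplicates gone, the constraint adds cleanly.
ALTER TABLE post1164 ADD UNIQUE KEY uniq_b_c (b, c);
```

Caveat: the self-join treats NULLs as non-matching, so rows with NULL in `b` or `c` are left alone; handle those separately if they matter.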
posted by Sakila The Librarian
on Mon 07 Sep 2009 12:11 UTC
Let me first say that the PBXT storage engine has some great people behind it. At the users conference last April, I had a chance to meet Paul McCullagh, who created PBXT, and some of the people who work on it. They are dedicated individuals who are creating something unique.

Like the InnoDB storage engine, which is backed by the Innobase company, PBXT has a company that backs it, Primebase Technologies. This means that if needed, support can be obtained from the company that created the product. For enterprise companies this might be important.

The basic characteristics of PBXT:
- MVCC: multi-version concurrency control enables reading without locking.
- Transactional: support for "BEGIN", "COMMIT", and "ROLLBACK", and recovery on startup.
- ACID compliant: Atomic, Consistent, Isolated, Durable (once committed, changes cannot be lost).
- Row-level locking: updates use row-level locking, allowing for maximum
posted by Sakila The Librarian
on Mon 07 Sep 2009 12:11 UTC
Over the last few weeks I've been looking at several customers' slow query logs, and in many of them I found an odd type of query: "SELECT" statements that contain an "IN" clause with dozens, sometimes hundreds of values. These statements often end up in the slow query log. I'm not sure if these queries are this way by design or if they are generated by a specific database development tool.

I did some tests in one of my own databases, one with only around 10K rows in its largest table. The database corresponds to the Amarok media player. For example, I queried for songs by B. B. King (spelled "BB King", "B.B. King", etc., or with other artists: "B. B. King & Eric Clapton"). The first query used a "JOIN" and an "IN" clause with all the spellings in my db; the second used the same "JOIN" and "WHERE ... name LIKE
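To give a feel for the two query shapes being compared, here is a sketch (the `tags`/`artist` schema below is a simplified stand-in, not Amarok's exact schema):

```sql
-- Variant 1: an IN clause enumerating every known spelling.
SELECT t.title
FROM tags t
JOIN artist a ON t.artist = a.id
WHERE a.name IN ('BB King', 'B.B. King', 'B. B. King',
                 'B. B. King & Eric Clapton');

-- Variant 2: a single pattern match covering the same rows.
SELECT t.title
FROM tags t
JOIN artist a ON t.artist = a.id
WHERE a.name LIKE 'B%B%King%';
```

The IN list can use an index on `a.name` for each value, while a LIKE pattern with a leading literal prefix can only use the index up to the first wildcard, so neither shape wins universally.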
posted by Sakila The Librarian
on Mon 07 Sep 2009 12:11 UTC
So, you have a binlog. You want to find out something specific that happened inside of it. What to do? "mysqlbinlog" has some neat features, which I thought we would look at here. I should first explain what "mysqlbinlog" really is. It is a tool that lets you analyze and view the binlogs/relay logs from MySQL, which are stored in binary format. This tool converts them to plain text, so that they're human-readable.

For the first tip, let's start with the "--read-from-remote-server" option, which allows you to examine a binlog on a master server in order, perhaps, to dump it onto your slave and compare master/slave logs for potential problems*.

$ mysqlbinlog --read-from-remote-server -uwesterlund -p mysql-bin.000001 -h 127.0.0.1 -P 3306 | head -5
Enter password:
/*!40019 SET @@session.max_insert_delayed_threads=0*/;
/*!50003 SET @OLD_COMPLETION_TYPE=@@COMPLETION_TYPE,COMPLETION_TYPE=0*/;
DELIMITER /*!*/;
#
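A related trick that stays inside the server (no shell access needed): the "SHOW BINLOG EVENTS" statement can also peek into a binlog, though it prints event metadata rather than the full statement text that mysqlbinlog produces:

```sql
-- Inspect the first few events of a given binlog file.
-- Omit the IN clause to read the oldest binlog on the server.
SHOW BINLOG EVENTS IN 'mysql-bin.000001' LIMIT 5;
```

Be careful with this on large binlogs: without a LIMIT, the server materializes the whole result.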
posted by Sakila The Librarian
on Mon 07 Sep 2009 12:11 UTC
Or, "Missing information in the MySQL Manual".

Just earlier today, I was using "POW()", which I've grown quite fond of, simply because it makes life easier. I prefer using it like "SELECT 512*POW(1024,2)" to find out the number of bytes to put in a variable, for example.

First, let's take a look at the "POW" function:

Name: 'POW'
Description:
Syntax:
POW(X,Y)
Returns the value of X raised to the power of Y.

Okay, so it gives us a value; but what about the data type? Let's take "512*POW(1024,2)" as an example.

5067 (blogs) > SELECT 512*POW(1024,2) AS example;
+-----------+
| example   |
+-----------+
| 536870912 |
+-----------+
1 row in set (0.00 sec)

What is that? Well, it sure does look like an "INT" at this point, doesn't it?
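One way to see the actual type for yourself (a sketch; `POW()` in fact returns a double-precision value, so the whole expression is a DOUBLE even though it prints like an integer):

```sql
-- CREATE TABLE ... AS SELECT materializes the expression's
-- result type as a real column definition we can inspect.
CREATE TABLE pow_type AS SELECT 512*POW(1024,2) AS example;
SHOW COLUMNS FROM pow_type;  -- the `example` column comes out as a double
DROP TABLE pow_type;
```

This matters when you assign the result to an integer column or variable: the value is converted, and with large exponents you can silently lose precision.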
posted by Sakila The Librarian
on Mon 07 Sep 2009 12:11 UTC
Every so often we have a client worrying about unauthenticated users. For example, as part of the output of "SHOW PROCESSLIST" they will see:

+-----+----------------------+--------------------+------+---------+------+-------+------------------+
| Id  | User                 | Host               | db   | Command | Time | State | Info             |
+-----+----------------------+--------------------+------+---------+------+-------+------------------+
| 235 | unauthenticated user | 10.10.2.74:53216   | NULL | Connect | NULL | login | NULL             |
| 236 | unauthenticated user | 10.120.61.10:51721 | NULL | Connect | NULL | login | NULL             |
| 237 | user                 | localhost          | NULL | Query   | 0    | NULL  | show processlist |
+-----+----------------------+--------------------+------+---------+------+-------+------------------+

Who are these unauthenticated users, how do they get there, and why aren't
posted by Sakila The Librarian
on Mon 07 Sep 2009 12:11 UTC
In two words: online operations.

In a paragraph: Forget partitioning, row-based replication, and events. The big reasons most people are going to salivate over 5.1, and probably start plans to upgrade now, are the online operations:
- Online ALTER TABLE for column rename, column default value change, and adding values to the end of an ENUM/SET.
- Online, table-based logging. No more need to restart your server to enable or change the general or slow query logs. You can have the standard file-based output or choose a table format...which you can query.
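The table-based logging mentioned above can be toggled at runtime, roughly like this (5.1 syntax):

```sql
-- Route the general query log to a table instead of a file,
-- and switch it on without restarting the server.
SET GLOBAL log_output = 'TABLE';
SET GLOBAL general_log = 'ON';

-- The log is now queryable like any other table.
SELECT event_time, user_host, argument
FROM mysql.general_log
ORDER BY event_time DESC
LIMIT 10;
```

The same `log_output` setting also governs the slow query log, which lands in `mysql.slow_log` when table output is selected.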
posted by Sakila The Librarian
on Mon 07 Sep 2009 12:11 UTC
Do you "have_community_features"? I do!

SHOW GLOBAL VARIABLES LIKE 'have_community_features';
+-------------------------+-------+
| Variable_name           | Value |
+-------------------------+-------+
| have_community_features | YES   |
+-------------------------+-------+
1 row in set (0.00 sec)

I am pretty sure this is one of those variables that MySQL has put in as an unused placeholder, but for now, it is not even documented as unused (as are table_lock_wait_timeout,
posted by Sakila The Librarian
on Mon 07 Sep 2009 12:11 UTC
So there's a whole bunch of MySQL scripts included with any release; who can say they know them all? In this blog post, we will take a look at some of these and show usage examples that may help you in your environment.

mysql_access

Let's say you want to manage your users' rights, and you have this user 'john.doe' who can connect from 'host1', but you aren't sure what he can do to your databases, company-staging as well as company-dev. What do you do? Well, for example, you could use "mysql_access" and take a look:
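If all you need is the raw privilege list for that account, plain SQL gets you most of the way there as well (the account name follows the example above):

```sql
-- Print every GRANT statement that applies to this account.
SHOW GRANTS FOR 'john.doe'@'host1';
```

The script adds value on top of this by showing the *effective* access after wildcard and anonymous-user rules are applied, which SHOW GRANTS alone does not do.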
posted by Sakila The Librarian
on Mon 07 Sep 2009 12:11 UTC
Wheel of protocol, turn turn turn.
Tell us the lesson that we should learn.
(with apologies to the original source)

Writing a book comes with many challenges. For me, writing a MySQL book for MySQL beginners, who may or may not be database beginners, has fed my compulsion to research and test bizarre interactions.

Today's lesson is on what protocol is used when connecting to a local "mysqld" instance on a non-Windows machine. The TCP/IP protocol is used by default when connecting on a Windows machine, and when connecting from any operating system to a non-local "mysqld" instance. I am assuming the connections are being made by a command-line client such as "mysql", "mysqladmin", or "mysqldump". Connections made via connectors such as Connector/J, an ODBC connector, DBD::mysql, etc., are not covered in this post.

If you seem to be having trouble connecting, check your options against your
posted by Sakila The Librarian
on Mon 07 Sep 2009 12:11 UTC
Had an interesting situation come up today with a client: a server crashed because it ran out of memory. The calculation we used to monitor memory usage did not take all factors into account. When looking at this, I noticed a couple of things:
- There are numerous calculations available online.
- It seems that none of them take everything into account.

Wouldn't it be nice if there were a total-possible-memory-consumption status value? It could be a valuable piece of information. (And while I am requesting things, what about peak memory usage by the mysqld server since it has been online?) Realistically, there is almost no case where you would actually use the maximum amount of memory. But knowing how much memory the MySQL server could possibly use would still be a very useful bit of information.

Second, why isn't there an "official" formula that actually accounts for everything? It's easy to forget things.
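A rough version of the kind of formula being asked for can even be computed in SQL (a sketch only: it deliberately ignores several allocations, such as per-connection network and binlog buffers, and the right per-thread terms vary by workload):

```sql
-- Approximate worst case: global buffers, plus max_connections
-- times the main per-thread buffers.
SELECT ( @@key_buffer_size
       + @@query_cache_size
       + @@innodb_buffer_pool_size
       + @@innodb_log_buffer_size
       + @@max_connections * ( @@read_buffer_size
                             + @@read_rnd_buffer_size
                             + @@sort_buffer_size
                             + @@join_buffer_size
                             + @@thread_stack ) )
       / 1024 / 1024 AS max_memory_mb;
```

This is exactly the kind of formula that is "easy to forget things" in, which is the post's point: treat the result as a lower bound on the true worst case.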
posted by Sakila The Librarian
on Mon 07 Sep 2009 12:11 UTC
A while ago, Singer Wang (another DBA here at Pythian) and I were working on a problem where the slave's lag went from 800M to 1200 to 2700 to 3200 to ~800M again. I have to admit I hadn't seen this before.

The specific setup was a 5.0.45 slave that also acted as a master, and the instance that experienced the problems was running 5.0.50. After upgrading to 5.0.74, it worked again. But here are a few of the SHOW SLAVE STATUS outputs from while it was experiencing these problems:
posted by Sakila The Librarian
on Mon 07 Sep 2009 12:11 UTC
The other day, a client mentioned they were getting strange results when running "ALTER TABLE". The episode involved modifying an existing primary key to add an "auto_increment" "primary key": it was "shifting" values. Say what?! As it turns out, it was a very special value getting changed: "zero". Some fiddling revealed the underlying reason. Care to join me?

To understand what's going on, follow the example below as we start with an empty database, create a table, and insert a few rows:

mysql> use test;
Database changed
mysql> show tables;
Empty set (0.00 sec)
mysql> create table test_table (id int not null primary key) engine=innodb;
Query OK, 0 rows affected (0.01 sec)
mysql> desc test_table;
+-------+---------+------+-----+---------+-------+
| Field | Type    | Null | Key | Default | Extra |
+-------+---------+------+-----+---------+-------+
| id    | int(11) |
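For context on why zero is special (hedged: this is the standard MySQL behavior that usually explains such "shifting"; the truncated post may reach its conclusion differently): when a column becomes AUTO_INCREMENT, any stored 0 is treated as "generate a new value" unless the NO_AUTO_VALUE_ON_ZERO SQL mode is in effect:

```sql
-- With this mode set, an explicit 0 stays 0 instead of being
-- replaced by the next auto-increment value during the ALTER.
SET SESSION sql_mode = 'NO_AUTO_VALUE_ON_ZERO';
ALTER TABLE test_table MODIFY id INT NOT NULL AUTO_INCREMENT;
```

The same mode also matters for restoring dumps of tables that legitimately contain 0 in an AUTO_INCREMENT column, which is why mysqldump output sets it.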
posted by Sakila The Librarian
on Mon 07 Sep 2009 12:11 UTC
Recently, I had an opportunity to participate in a stress-testing exercise. By stress-testing, I mean simulating an expected peak load on the database and observing how the database performs. The objective is to make sure the particular hardware can handle the expected load, and also to test where the physical limit of the machine lies if we keep increasing the number of concurrent users.

I searched for the tools available out there. On forums, most people were talking about Hammerora, and although some people recommended high-end tools for big bucks, I decided to try this free tool that promised to do exactly what I needed it to do. The open-source Hammerora turned out to be an excellent tool for simulating a typical transaction load across any number of threads. It's built with the Tcl scripting language, and it can simulate a real workload with as many client user sessions as your system can handle.

So if you're looking for
posted by Sakila The Librarian
on Mon 07 Sep 2009 12:11 UTC
First off, let me wish you all a Happy 2009, and my apologies for being a bit slack with blogging in the last two months of 2008. It's been a bunch of busy weeks, but I'm fully expecting to remedy that this year.

Second, let me welcome you to this series on refactoring MySQL applications. I will cover best practices and useful tips, as well as show working examples of potential pitfalls and solutions.

So, you are a DBA or a developer, and you're faced with a problem. Your years-old application (perhaps you inherited it from a former co-worker) is now starting to keel over, and your choice of responses is limited. Let's start by saying that there are other ways to improve performance, but in some cases refactoring may be the only one it is possible to pursue, for one reason or another. Let's take the refactoring route, focusing on SQL rather than applications, as there's a lot that can be done on this side.

But first, why
posted by Sakila The Librarian
on Mon 07 Sep 2009 12:11 UTC
Welcome back to my series of blog posts, "Back to Basics". Today I wanted to share a few commands that can be useful in various situations, such as delaying replication or stopping the slave (or parts of it). I'm sorry, but I will not be going through how to set up replication, so my assumption is that you already have replication working and that you want to know some of the details surrounding it.

Let's begin by taking a look at "STOP SLAVE".

STOP SLAVE stops the slave threads
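The "parts of it" mentioned above refers to the two replication threads, which can be stopped independently:

```sql
-- Stop only the I/O thread: stop fetching new events from the
-- master while the SQL thread keeps applying the relay log.
STOP SLAVE IO_THREAD;

-- Stop only the SQL thread: keep downloading events but pause
-- applying them (a simple way to make the slave lag on purpose).
STOP SLAVE SQL_THREAD;

-- Stop both threads at once.
STOP SLAVE;
```

Stopping just the SQL thread is the crude building block behind the "delaying replication" use case the post mentions, since MySQL 5.0/5.1 had no built-in delayed replication.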