Showing entries 1 to 11

Tags Filter: Keith Murphy

Articles
Over the weekend, I worked on a client's two computers, trying to get a slave in sync with the master. It was during this time that I began thinking about:

a) how this never should have happened in the first place.
b) how "slave drift" could be kept from happening.
c) how this is probably keeping some businesses from using MySQL.
d) how MySQL DBAs must spend thousands of hours a year fixing replication issues.

I'll be the first person to tell you that replication under MySQL is pretty much dead-simple to set up. My only complaint is that it is annoying to type in the two-line "CHANGE MASTER" command to set up a new slave. Even so, it makes sense. It is also very easy, however, for a slave to end up with different data than the master server has. This can be caused by replication bugs, hardware problems, or by using non-deterministic functions. Without proper permissions, a user/developer/DBA can log […]
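For reference, the two-line "CHANGE MASTER" setup mentioned above looks roughly like this (host, credentials, and binary log coordinates below are placeholders, not values from the client's setup):

```sql
-- On the new slave: point it at the master, then start replication.
CHANGE MASTER TO
    MASTER_HOST='master.example.com',
    MASTER_USER='repl',
    MASTER_PASSWORD='secret',
    MASTER_LOG_FILE='mysql-bin.000042',
    MASTER_LOG_POS=106;

START SLAVE;
```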
Articles
A few days ago I had a new idea for a blog post. A post about what it really takes to be a good database administrator. I began by researching what others had done on the topic. At the end of this post you will find links to six of the posts I found that provided some insight into this question. Even after uncovering this information, I thought I could add something to the mix from my own experiences. So here we go!
Articles
Recently, a customer wondered if they should start using the "innodb_file_per_table" option, or if they should continue to use the large InnoDB tablespace files created by the "innodb_data_file_path" option in the "my.cnf" option file. Many people still use the older "innodb_data_file_path" option because it is the default for the MySQL server. So, what are the benefits of using "innodb_file_per_table" instead?

The "innodb_file_per_table" option makes for easier-to-manage files. With this option, each InnoDB table has its own data and index file under the database directory. As an example, if you had table "foo" located in database "xyz", the InnoDB data file for table "foo" would be "/var/lib/mysql/data/xyz/foo.ibd". Each table would have its own ".ibd" file in the appropriate database directory. This is in contrast to using the […]
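Turning the option on is a one-line change in "my.cnf" (a sketch; note that it only affects tables created or rebuilt after the change, since existing tables stay in the shared tablespace):

```ini
# my.cnf -- store each new InnoDB table in its own .ibd file
[mysqld]
innodb_file_per_table
```

An existing table can then be moved out of the shared tablespace by rebuilding it, for example with ALTER TABLE foo ENGINE=InnoDB;.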
Articles
InnoDB is a storage engine that uses MVCC (described shortly) to provide ACID-compliant transactional data storage using row-level locking. MVCC stands for Multi-Version Concurrency Control. It is how InnoDB allows multiple transactions to look at a data set of one or more tables and have a consistent view of the data. MVCC keeps a virtual snapshot of the dataset for each transaction. An example will make this clear. Let's assume you have two transactions (and only two transactions) running on a system. If transaction A starts at 10:45:56 and ends at 10:46:29, it gets a consistent view of the dataset during the time that the transaction runs. If transaction B starts at 10:46:05, it would see exactly the same view of the dataset that transaction A saw when it began its transaction. If transaction B instead started at 10:46:35, it would see the modified dataset after transaction A made its modifications. During the duration of each […]
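The snapshot behavior described above can be sketched with two sessions. The table and values here are hypothetical, and the behavior assumes InnoDB's default REPEATABLE READ isolation level:

```sql
-- Hypothetical table: accounts(id INT PRIMARY KEY, balance INT),
-- where the row with id 1 currently has balance 100.

-- Transaction B starts first and takes its snapshot:
START TRANSACTION;                          -- session B
SELECT balance FROM accounts WHERE id = 1;  -- returns 100

-- Meanwhile transaction A changes the row and commits:
START TRANSACTION;                          -- session A
UPDATE accounts SET balance = 0 WHERE id = 1;
COMMIT;

-- B still reads from its own snapshot, not A's committed change:
SELECT balance FROM accounts WHERE id = 1;  -- still returns 100 (session B)
COMMIT;

-- Any transaction started after A's commit would see balance = 0.
```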
Articles
It's time to continue our series on the transactional storage engines for MySQL. Some might question why I even include Falcon, because it is very much beta at this time. MySQL, however, has made quite an investment in Falcon, and while it is currently beta, the code is improving and it looks like it will be production-worthy when MySQL Server 6.0 hits GA. If this is the case, it is important to begin to understand what Falcon was designed for and how it differs from other transactional engines such as InnoDB. I am going to concentrate quite a bit on the Falcon/InnoDB comparison, as that is what everyone wants to talk about. This is despite my having heard MySQL employees repeatedly make statements to the effect of, "Falcon is not going to replace InnoDB," or "Falcon is not competing with InnoDB." Well, take that with a grain of salt. It certainly seems to me that they are competing for the same […]
Articles
Or, There is a Use for GUI Tools!

Recently I was working on a problem for a customer. They are converting a server with two InnoDB data files to "innodb_file_per_table":

$ ls -l ibdata*
-rwxr-xr-x   1 mysql    mysql       2.0G Oct 14 13:10 ibdata1
-rwxr-xr-x   1 mysql    mysql       350G Oct 14 13:10 ibdata2

I honestly don't recall ever seeing (or hearing about) so large a data file. The method chosen for conversion boils down to this:

- stop and start the server to enable "innodb_file_per_table"
- alter all tables to MyISAM
- stop the server and delete the ibdata files
- restart the server
- convert the tables back to InnoDB
- add the foreign keys

This post isn't about how we did it, or about whether or not it was the best way, or anything like that. This post is really about the last two steps. There were over 500 InnoDB tables and something like 375 foreign keys. So I needed to somehow create the "ALTER TABLE" statements for […]
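One way to mass-produce that kind of statement without a GUI tool is to have MySQL write the statements for you from "information_schema" (a sketch, not necessarily what was done here; the database name "xyz" is hypothetical):

```sql
-- Emit one ALTER statement per MyISAM table in database xyz;
-- save the output to a file and replay it with the mysql client.
SELECT CONCAT('ALTER TABLE `', table_schema, '`.`', table_name,
              '` ENGINE=InnoDB;') AS stmt
FROM information_schema.tables
WHERE table_schema = 'xyz'
  AND engine = 'MyISAM';
```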
Articles
I was doing some research over the weekend on how transactions work "under the hood" in MySQL. I thought it might be enlightening if I wrote about what I found.

The database system must resolve the situation where, for whatever reason, the database server stops working in the middle of a transaction. Perhaps the power goes out, the hard drive crashes, or the CPU disappears in a cloud of smoke. Maybe the MySQL Server itself crashes for some reason. What does the MySQL Server do when operation resumes? Potentially, there are transactions in an inconsistent state. How are inconsistent transactions resolved? They are resolved through a recovery process involving log files that are called transactional logs. There are two kinds of transactional logs: redo logs and undo logs.
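In rough terms, the two logs divide the recovery work like this (a simplified sketch of InnoDB's behavior, not the exact internal sequence, with a hypothetical table "t"):

```sql
START TRANSACTION;
UPDATE t SET c = c + 1 WHERE id = 7;
-- Before the change: the old row version is recorded in the undo log
--   (used to roll the transaction back, and to serve MVCC snapshots).
-- With the change: the new row version is described in the redo log
--   (used to replay committed work after a crash).
COMMIT;

-- Crash before COMMIT -> recovery uses the undo log to roll the change back.
-- Crash after COMMIT  -> recovery uses the redo log to reapply it if needed.
```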
Articles
I recently needed to set up multiple MySQL servers on a test computer to simulate a master-slave setup. I had never done this before, so I think it might be useful for others if I documented what occurred. This setup was done on a Linux server; however, other platforms should operate similarly. I chose to use the "mysqld_multi" script to manage these instances. There is a way to compile the MySQL server to listen on different network interfaces, but in my opinion, using "mysqld_multi" to manage everything is much easier. So shall we begin?
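The heart of a "mysqld_multi" setup is a set of numbered "[mysqldN]" groups in "my.cnf", one per instance (a sketch; the paths, ports, and sockets below are hypothetical):

```ini
[mysqld_multi]
mysqld     = /usr/bin/mysqld_safe
mysqladmin = /usr/bin/mysqladmin

[mysqld1]
port       = 3306
socket     = /var/lib/mysql1/mysql.sock
datadir    = /var/lib/mysql1

[mysqld2]
port       = 3307
socket     = /var/lib/mysql2/mysql.sock
datadir    = /var/lib/mysql2
```

With this in place, "mysqld_multi start 1,2" starts both instances and "mysqld_multi report" shows which ones are running.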
Articles
Let me first say that the PBXT storage engine has some great people behind it. At the users conference last April, I had a chance to meet Paul McCullagh, who created PBXT, and some of the people who work on it. They are dedicated individuals who are creating something unique.

Like the InnoDB storage engine, which is backed by the Innobase company, PBXT has a company that backs it, Primebase Technologies. This means that, if needed, support can be obtained from the company that created the product. For enterprise companies this might be important.

The basic characteristics of PBXT:

- MVCC: multi-version concurrency control, which enables reading without locking.
- Transactional: support for "BEGIN", "COMMIT" and "ROLLBACK", plus recovery on startup.
- ACID compliant: Atomic, Consistent, Isolated, Durable (once committed, changes cannot be lost).
- Row-level locking: updates use row-level locking, allowing for maximum […]
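Assuming the plugin is installed, using PBXT looks like using any other transactional engine; the table here is hypothetical:

```sql
CREATE TABLE orders (
    id    INT PRIMARY KEY,
    total DECIMAL(10,2)
) ENGINE=PBXT;

BEGIN;
INSERT INTO orders VALUES (1, 19.99);
ROLLBACK;  -- transactional: the insert is undone, nothing is committed
```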
Articles
Had an interesting situation come up today with a client. A server crashed because it ran out of memory; the calculation we used to monitor memory usage did not take all factors into account. When looking at this, I noticed a couple of things:

- There are numerous calculations available online.
- It seems that none of them take everything into account.

Wouldn't it be nice if there was a total-possible-memory-consumption status value? It could be a valuable piece of information. (And while I am requesting things, what about peak memory usage by the mysqld server since it came online?) Realistically, there is almost no case where you would actually use the maximum amount of memory, but knowing how much memory the MySQL server could possibly use would still be very useful. Second, why isn't there an "official" formula that actually covers everything? It's easy to forget things. […]
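For reference, the calculation most of those online formulas boil down to looks something like this. It is a commonly cited approximation only, and it omits several allocations, which is exactly the problem described above:

```sql
-- Rough worst-case estimate: global buffers, plus per-connection
-- buffers multiplied by max_connections.
SELECT ( @@key_buffer_size
       + @@innodb_buffer_pool_size
       + @@innodb_log_buffer_size
       + @@query_cache_size
       + @@max_connections * ( @@sort_buffer_size
                             + @@read_buffer_size
                             + @@read_rnd_buffer_size
                             + @@join_buffer_size
                             + @@thread_stack
                             + @@tmp_table_size )
       ) / (1024 * 1024 * 1024) AS max_memory_gb;
```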
Articles
I had the chance to review the new Query Analyzer program from Sun over the last few days. I am very interested in how it performs, as I have not previously had a chance to see the program in action (or the Enterprise Monitor program, for that matter). So, before getting into what the Query Analyzer can (and cannot) do, let's look at what Sun says it does. From the November 19th, 2008 press release:

The MySQL Query Analyzer saves time and effort in finding and fixing problem queries by providing:

- Aggregated view into query execution counts, run time, and result sets across all MySQL servers, with no dependence on MySQL logs or "SHOW PROCESSLIST"
- Sortable views by all monitored statistics
- Searchable and sortable queries by query type, content, server, database, date/time, interval range, and "when first seen"
- Historical and real-time analysis of all queries across all servers
- Drill-downs into sampled query […]