Today, just like many times before, I needed to configure a monitoring server for MySQL using Cacti and the awesome Percona Monitoring Templates. The only difference was that this time I wanted it to run with 1-minute resolution (using Ganglia and Graphite, both with 10-second resolution, for all the rest of our monitoring at Swiftype has really spoiled me!). And that's where the usual pain-in-the-ass Cacti configuration gets amplified by the million things you need to change to make it work. So, this is a short checklist post for those who need to configure a Cacti server with 1-minute resolution and set up the Percona Monitoring Plugins on it.
Read the rest of this entry →
Back in November 2009 I was working on a project to port the Scribd.com code base to Rails 2.2 and noticed that some old plugins we had been using with 2.1 had been abandoned by their authors. Some of them were simply removed from the code base, but one needed a replacement – an old plugin called acts_as_readonlyable that helped us distribute our queries among a cluster of MySQL slaves. There were some alternatives, but we didn't like them for one reason or another, so we decided to create our own ActiveRecord plugin that would help us scale our databases out. That's the story behind the first release of DbCharmer.
Today, six months after the first release, the gem has been moved to Gemcutter (which is now the official gem hosting service) and we're already at version 1.6.11. The gem has been downloaded more than 2000 times. There are at least ten large users that rely on this gem to scale their products out. And (this is the most exciting part) we've added tons of new features to the product.
Here are the main features added since the first release:
- Much better multi-database migration support, including the ability to change the default migration connection.
- We’ve added ActiveRecord associations preload support, which makes it possible to move eager loading queries to the same connection your finder queries go to.
- We’ve improved ActiveRecord’s query logging, and now you can see which connections your queries were executed on (and yes, all those improvements are colorized :-)).
- We’ve added the ability to temporarily remap any ActiveRecord connection to any other connection for a block of code – really useful when you need to make sure all your queries go to some non-default slave and you do not want to mess with all your models (see the sketch after this list).
- The most interesting change: we’ve implemented some basic sharding functionality in ActiveRecord, which is currently being used in production in our application.
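Here is a minimal sketch of what the block-level connection switching and remapping look like, based on DbCharmer’s documented API as I understand it. The connection names are hypothetical (they would have to be defined in database.yml) and the exact method signatures may differ between versions, so check the README before copying anything:

```ruby
# A sketch only: :slave01, :logs and :logs_slave are hypothetical
# connection names from database.yml, LogEntry is a made-up model.
require 'db_charmer'

# Run a block of queries on a non-default slave:
User.on_db(:slave01) do
  User.find(:first, :conditions => { :login => 'kovyrin' }) # goes to slave01
end

# Temporarily remap a whole connection for everything inside a block
# (every model that normally uses :logs will use :logs_slave here):
DbCharmer.with_remapped_databases(:logs => :logs_slave) do
  LogEntry.count
end
```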
As you can see, DbCharmer now helps you with three major scalability tasks in your Rails projects (there is a short configuration sketch after this list):
- Master-slave clusters to scale out your Rails models’ reads.
- Vertical sharding by moving some of your models to separate (maybe even dedicated) servers while still being able to use AR associations.
- Horizontal sharding by slicing your models’ data into pieces and placing those pieces in different databases and/or servers.
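And here is a rough sketch of how the first two setups might look with db_magic. All connection names are assumptions (they would need to exist in database.yml); the horizontal sharding configuration is more involved and is best taken from the docs:

```ruby
# A sketch under assumed connection names, not a drop-in configuration.
class Post < ActiveRecord::Base
  # Master-slave cluster: reads can be spread across the slaves,
  # writes keep going to the default (master) connection.
  db_magic :slaves => [:slave01, :slave02]
end

class User < ActiveRecord::Base
  # Vertical sharding: this model lives on a separate (dedicated) server,
  # while AR associations to other models keep working.
  db_magic :connection => :users_master
end
```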
So, if you haven’t checked DbCharmer out yet and you’re working on a large Rails project that is (or is going to be) facing scalability problems, go read the docs, download/install the gem, and prove that Rails CAN scale!
Today I’m proud to announce the first public release of our ActiveRecord database connection magic plugin: DbCharmer.
DB Charmer – ActiveRecord Connection Magic Plugin
DbCharmer is a simple yet powerful plugin for ActiveRecord that does a few things (there is a short usage sketch after this list):
- Allows you to easily manage AR models’ connections (the switch_connection_to method)
- Allows you to switch AR models’ default connections to separate servers/databases
- Allows you to easily choose where your query should go (the on_* methods family)
- Allows you to automatically send read queries to your slaves while the master handles all the updates.
- Adds multiple-database migrations support to ActiveRecord
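For a taste of the first and last items, here is a small sketch of runtime connection switching and a multi-database migration. It is only an illustration: the :logs connection, the model and the table are made up, and the exact db_magic options inside migrations may differ between DbCharmer versions, so treat them as assumptions:

```ruby
# Switch a model to another connection at runtime (the :logs connection
# is a hypothetical entry from database.yml):
LogEntry.switch_connection_to(:logs)

# A migration whose statements run on the :logs connection instead of the
# default one (option name per my reading of the docs; verify before use).
class CreateLogEntries < ActiveRecord::Migration
  db_magic :connection => :logs

  def self.up
    create_table :log_entries do |t|
      t.string :message
      t.timestamps
    end
  end

  def self.down
    drop_table :log_entries
  end
end
```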
Read the rest of this entry →
- Posted in: Admin-tips, Databases, Development, My Projects
- Tags: activemq, database, development, MySQL, performance, queue, Ruby, scalability, scribd, stomp
30 Oct 2008
A few months ago I switched one of our internal projects from synchronous database saves of analytics data to asynchronous processing using Starling plus a pool of workers. That was the day I really understood the power of specialized queue servers. I had been using databases (mostly MySQL) for this kind of task for years, and sometimes (especially under highly concurrent load) they were not that fast… A few times I had worked with queue servers, but those were either small tasks or I did not have the time to really grasp the idea that specialized queue servers were created precisely to do these tasks quickly and efficiently.
All this time (a few months now) that I was using Starling, I noticed a really bad thing about how it works: if your workers die (really die, or block on something for a long time, or just start lagging) and the queue starts growing, it can kill your server and there is nothing you can do about it – it just eats all your memory and that’s it. Since then I’ve been looking for a better solution for our queuing; the technology was too cool to give up. I’ve tried 5 or 6 different popular solutions and all of them sucked… They ALL had the same problem – if your queue grows, it is your problem and not the queue broker’s :-/ The last solution I tested was ActiveMQ, and either I wasn’t able to push it to its limits or it really is that cool, but it looks like it does not have this memory problem. So, we’ve started using it recently.
In this small post I’d like to describe a few things that took me quite a while to figure out in the Ruby Stomp client: how to make queues persistent (really!) and how to process elements one by one with client acknowledgments.
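Here is a minimal sketch of the kind of setup this post is about, based on the stomp gem’s public API as I know it. The broker address, credentials, queue name and the process method are all assumptions, and the activemq.prefetchSize header is ActiveMQ-specific:

```ruby
# A sketch, not the exact code from the post. Assumes a local ActiveMQ
# broker with the STOMP connector on port 61613 and a hypothetical
# /queue/analytics destination. Newer versions of the stomp gem use
# `publish` (older ones used `send`).
require 'rubygems'
require 'stomp'

client = Stomp::Client.new('guest', 'guest', 'localhost', 61613)

# Producer: the :persistent header asks the broker to store the message
# on disk so it survives a broker restart.
client.publish('/queue/analytics', 'some analytics event', :persistent => true)

# Consumer: with :ack => 'client' the broker re-delivers a message unless we
# explicitly acknowledge it; prefetchSize 1 makes processing one-by-one.
client.subscribe('/queue/analytics',
                 :ack => 'client',
                 'activemq.prefetchSize' => 1) do |msg|
  process(msg.body)          # hypothetical processing method
  client.acknowledge(msg)    # tell the broker we are done with this message
end

client.join # block while the subscription callback keeps running
```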
Read the rest of this entry →
- Posted in: Databases, Development, My Projects, Networks
- Tags: caching, http, memcache, MySQL, performance, scalability, scribd, squid
25 Oct 2008
Since day one at Scribd, I have been thinking about the fact that 90+% of our traffic goes to the document view pages, which is a single action in our documents controller. I was wondering how we could improve this action’s responsiveness and make our users happier.
A few times I created git branches and hacked on this action, trying to implement some sort of page-level caching to make things faster. But every time the results weren’t as good as I’d like them to be. So, the branches were sitting there, waiting for a better idea.
Read the rest of this entry →