If you take a look at almost any web site, you will notice that most of its pages are pretty static in nature. Of course, the site may have some dynamic elements: a login field or link in the header, some customized menu elements and so on. But in many cases the page as a whole can be considered static.
When I started thinking about my sites from this point of view, I realized how great it would be to cache entire pages somewhere (in memcache, for example) and send them to users without any requests to my application, which is pretty slow (compared to memcache 😉 ) at content generation. Then I came up with the simple but really powerful idea I’ll describe in this article: cache entire pages of the site and use the application only to generate small partials of each page. This idea allows me to handle hundreds of queries with one server running a pretty slow Ruby on Rails application (yeah, it is slow even after all the optimizations on the MySQL side and lots of tweaks in the site’s code). Of course, the idea is fairly universal and could be used with any back-end language, technology or framework, because all of them are slower than memcache at content “generation”.
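To make this more concrete, here is a minimal sketch of the serving path in Ruby, assuming the memcache-client gem; render_full_page and the cache key scheme are hypothetical placeholders for your application’s own rendering:

require 'memcache'

CACHE = MemCache.new('localhost:11211')

# Serve a page straight from memcache when possible and fall back to
# the (slow) application only on a cache miss.
def serve_page(request_path)
  key = "page:#{request_path}"
  html = CACHE.get(key)
  unless html
    html = render_full_page(request_path)  # hypothetical slow app rendering
    CACHE.set(key, html, 15 * 60)          # keep the whole page for 15 minutes
  end
  # Small dynamic partials (login box, customized menu) are injected into
  # the cached page afterwards, so the application only renders those.
  html
end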
Read the rest of this entry →
A few months ago I heard about Jan Kneschke’s initial work on the MySQL Proxy software, and I thought about implementing some kind of MySQL replication aggregator based on it. The idea was to create a piece of software that could take many replication streams, merge them and feed the result to a MySQL slave. Such software could be used for backups and many other interesting things. But back then the mysqlproxy distribution had been suspended (AFAIK by MySQL AB, because of some legal issues).
At last, the MySQL Proxy project has been released to the public today, and it has become much more flexible, so I think we should take a look at it and try to implement such replication aggregator patches for it.
In our Perl projects we quite often use third-party modules (from CPAN or elsewhere), and sometimes we find a bug in one of them. What are our options in such cases? This small article describes some useful hints for Perl developers who run into this problem.
Read the rest of this entry →
- Posted in: Databases, Development, My Projects, Networks
- Tags: manager, master, mmm, MMM-Cluster, MySQL, release, replication, solaris, timeouts
16 May 2007
New alpha release 1.0-pre4 of the MySQL Master-Master Replication Manager is out. This release has lots of major fixes, and I’m glad to announce the first sponsored port of mmm to a non-Linux platform: it has been ported to Solaris 10. Here are the changes in this version:
- Real check timeouts – I’ve found and fixed lots of problems in the check timeout code, so now if you specify in your mmm_mon.conf that some check should time out in 5 seconds, it will time out correctly on all supported platforms.
- Use of external third-party tools – On all supported non-Linux platforms mmm will use system binaries for fping and arp_ping, so porting to other platforms will be much easier.
- Agent notification fixes – We no longer try to notify dead agents about cluster changes, and we now have a 10-second timeout on notification sends to prevent the monitor from lagging on network connection timeouts.
- Bundled fping and send_arp – Both third-party tools used by mmm are now bundled in our distribution as separate build trees (you can find and build/install them from the contrib directory).
- Flexible perl binary location – We use “#!/usr/bin/env perl” as the shebang line in our perl scripts, so you can use any perl interpreter just by placing it earlier in your PATH.
Notice: Before installing this version, try to run bin/sys/fping and bin/sys/send_arp on your server. If you notice any errors, feel free to build the binaries for your platform from the contrib/* sources (you’ll need gcc and libnet installed).
So, as you can see, mmm development is moving forward and we’re fixing problems to make this software mature. If you want to help us, you can send your comments to the mmm-devel mailing list, post bug reports to our bug tracker or sponsor any changes you need 😉
Sometimes you may need to create a set of rspec specifications with a pretty similar structure and only small differences. I ran into such a situation in my project and decided to use Ruby’s dynamic code generation features to make my spec file shorter.
I have a multiplexing helper in my templates which allows me to use the same template for several similar pages. This helper builds a URL from a set of params and a URL type. It accepts 5 different URL types and raises an exception when the requested type is invalid. Without dynamic code generation I would need to write 5 separate specifications (one for each URL type) to see each URL type as its own line in the test results log. But with this simple technique my code now looks like the following:
describe VideoHelper, 'when profile_video_url method called' do
  before do
    @user = mock('user')
  end

  # Map each supported url type to the named route it should resolve to
  url_types = {
    'personal_feed' => 'personal_feed',
    'favorites'     => 'favorites',
    'voted'         => 'voted_videos',
    'posted'        => 'posted_videos',
    'commented'     => 'commented_videos'
  }

  # Generate one specification per url type so each shows up as a
  # separate line in the test results log
  url_types.each do |url_type, route|
    it "should return #{route}_url for #{url_type} type urls" do
      @user.should_receive(:login).at_least(1).times.and_return('login')
      profile_video_url(url_type, @user, 2, 'expert').should == send("#{route}_url", @user, 2, 'expert')
    end
  end

  it 'should raise ArgumentError("Invalid feed type") on invalid url_types' do
    lambda { profile_video_url('crap', @user, 2, 'expert') }.should raise_error(ArgumentError, 'Invalid feed type')
  end
end
This technique can even be used to generate entire describe sections, though I don’t want to dump tons of code here. Anyway, the idea is pretty simple: wrap a describe section in a loop and use send() calls to construct your code dynamically.
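For illustration, here is a minimal sketch of that variant. It reuses the url_types mapping and the profile_video_url helper from the example above (only two types are shown to keep it short):

{
  'personal_feed' => 'personal_feed',
  'favorites'     => 'favorites'
}.each do |url_type, route|
  # Each pass through the loop defines a whole describe section
  describe VideoHelper, "when profile_video_url is called with #{url_type}" do
    before do
      @user = mock('user')
    end

    it "should return #{route}_url" do
      @user.should_receive(:login).at_least(1).times.and_return('login')
      profile_video_url(url_type, @user, 2, 'expert').should == send("#{route}_url", @user, 2, 'expert')
    end
  end
end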