Mailing List Archive

Re: [tlug] Making better use of SSDs?



Hi Kalin,

2012/05/28 18:55, Kalin KOZHUHAROV wrote:
On Mon, May 28, 2012 at 6:15 PM, <satoshi.nagayasu@example.com> wrote:
A few days ago, I published a technical report that examines
SSD performance compared with HDD when executing DWH (or analytics)
workloads on PostgreSQL. The SSD was used for the data directory.

Will the report be available somewhere to read?

Unfortunately, it's written in Japanese only (for now).
Translating it into English is worth considering, though...

If you can read Japanese, you can purchase the report
from my web site.

http://www.uptime.jp/ja/resources/techdocs/2012/05/pgsql_dbt3_hdd_ssd/

According to the experiments, an SSD can improve query performance
dramatically when the workload generates massive random-access
I/O operations. With some workloads, more than 20-times-faster
performance was observed.

Yep, that is to be expected, more or less. Actually, the more random
the access and the more processes reading from an HDD, the greater the
improvement you'll see with databases. 10-100 times is not uncommon.
No idea what the size of your tables is, but have you tried to fit
them in memory?

Not yet.
The database was about 5x larger than the physical memory
of the database server; I intended to generate lots of
I/O operations.
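(For reference, here is a rough sketch of checking system-wide I/O wait on a Linux box by reading the cumulative iowait counter from /proc/stat; this is just an illustration, and tools like iostat or vmstat report the same counters continuously.)

```shell
# The aggregate "cpu" line in /proc/stat lists jiffies per state;
# field 6 is cumulative time spent waiting on I/O (iowait).
grep '^cpu ' /proc/stat | awk '{print "cumulative iowait jiffies:", $6}'
# Sampling it twice and diffing the values gives the iowait spent
# over the interval, which shows whether a run was I/O-bound.
```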

> It will be a good indication of how much I/O wait there is, better
> than measuring it.
> If you don't want to change anything, just mount a tmpfs and put the
> data directory there and `mount -o bind` it to the old place (when the
> db is down).

It seems worth trying. Thanks.
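For anyone following along, the tmpfs idea could look roughly like this (the paths and the 32G size are hypothetical, assuming a PostgreSQL data directory at /var/lib/pgsql/data that fits in RAM, with the server stopped; run as root):

```shell
# Hypothetical layout: PostgreSQL data directory at /var/lib/pgsql/data.
# Stop the database server before doing any of this.
mkdir -p /mnt/pgram
mount -t tmpfs -o size=32G tmpfs /mnt/pgram   # size must exceed the data dir
cp -a /var/lib/pgsql/data/. /mnt/pgram/       # copy the data into RAM
mount -o bind /mnt/pgram /var/lib/pgsql/data  # overlay it at the original path
# ...start PostgreSQL and run the benchmark; all data I/O now hits RAM...
# Afterwards (with the db stopped again):
umount /var/lib/pgsql/data
umount /mnt/pgram
```

Note that everything in the tmpfs disappears on unmount or reboot, so this is only useful for benchmarking, not for durable operation.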

Regards,
--
NAGAYASU Satoshi <satoshi.nagayasu@example.com>

