
Summary

Doing your own hardware benchmarking serves two complementary purposes. First, knowing how fast your current systems are relative to one another, and being able to evaluate a candidate new server with the same measurements, is extremely valuable for nailing down where the bottlenecks in your hardware are.

Second, the difference between reality and your hardware vendor's claims or reputation can be quite large. It is not safe to assume that your system is fast because you bought it from a reputable vendor. You should not assume a SAN is properly configured when delivered simply because it's a very expensive item and you were told it's already optimized for you. Systems are complicated, odd hardware interactions are inevitable, and not everyone involved in sales is going to be completely honest with you.

At the same time, you don't need to be a benchmarking expert to do useful hardware validation tests. In fact, running really complicated tests is counterproductive: if they don't give the expected results, it will be hard to get your vendor to acknowledge the problem and replicate it somewhere it can be resolved. Stick with the simplest possible, industry-standard benchmarks rather than attempting elaborate ones. If it takes a complicated test requiring hours of custom application setup to show a performance problem, the odds that your vendor will help resolve that issue are low; much more likely, your application will be blamed. If, on the other hand, you can replicate the problem in a few minutes using the standard UNIX dd command, it's difficult to refute that the lowest levels of the hardware and software are to blame.
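For example, a basic sequential write and read pass with dd might look like the following. This is only a sketch: the file path and size are illustrative, the test file should be at least twice the size of RAM so the operating system cache can't absorb the whole test, and conv=fdatasync requires GNU dd.

    # Write roughly 16 GB sequentially in 8 kB blocks (a typical database page size),
    # forcing the data to disk so write caching doesn't inflate the result
    dd if=/dev/zero of=/tmp/ddtest bs=8k count=2000000 conv=fdatasync
    # Read the file back sequentially and note the MB/s figure dd reports
    dd if=/tmp/ddtest of=/dev/null bs=8k

If even those simple numbers fall well short of what the hardware should deliver, you have a problem your vendor can reproduce in minutes.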

Finally, doing some heavy benchmarking work when a new server arrives will do one additional thing: put some early stress on the hardware while it's still new. It's always better to deploy a system that's already gotten a workout to prove itself.

  • Always run your own basic hardware benchmarks on any system you intend to put a database on.
  • Simpler tests your vendor can replicate if you run into a problem are better than complicated ones.
  • memtest86+, STREAM, hdtune, dd, bonnie++, and sysbench are all useful tools for measuring various aspects of system performance.
  • Disk drive testing needs to be sensitive to how disk speed changes across the surface of the drive.
  • IOPS is a common way to measure disk and disk array performance, but it's not very well matched to the requirements of database applications.
  • Speeds on a seek-heavy workload can be much slower than a disk's sequential read/write performance would suggest (see the sketch after this list).
  • Commit rate needs to be measured to confirm that the caching levels you believe are active really are, since write caching impacts database reliability.
  • Complicated tests are better done using benchmarks of real database applications, rather than focusing on synthetic disk tests.
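To illustrate the seek-heavy point, a rough sysbench comparison of sequential against random reads on the same set of test files might look like this. The 16 GB total size is only an example (it should comfortably exceed RAM), and the options follow the older --test=fileio style; newer sysbench releases spell some of these flags differently.

    # Create the test files once
    sysbench --test=fileio --file-total-size=16G prepare
    # Sequential read pass
    sysbench --test=fileio --file-total-size=16G --file-test-mode=seqrd \
        --max-time=60 --max-requests=0 run
    # Random (seek-heavy) read pass; expect far lower throughput than the sequential run
    sysbench --test=fileio --file-total-size=16G --file-test-mode=rndrd \
        --max-time=60 --max-requests=0 run
    # Remove the test files
    sysbench --test=fileio --file-total-size=16G cleanup

Comparing the throughput figures from the two run passes shows directly how much of the drive's sequential bandwidth survives a random access pattern.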