http://slash.solidot.org/article.pl?sid=07/10/27/1244202&from=rss

Solidot suffers from frequent small glitches, such as the recent severe lag in its comment counter. It is worth studying how Slashdot, which runs the same slashcode, operates. Its Alexa rank is around 800 (Digg is now around 100, and the gap keeps widening), and its daily traffic is enormous. For the site's 10th anniversary, Slashdot's engineers described the site's overall architecture, in two parts: hardware and software.
Hardware: Slashdot now belongs to SourceForge, Inc., and its basic hardware setup matches SourceForge's other sites such as SourceForge.net, Thinkgeek.com, Freshmeat.net, and Linux.com: a single data center with raised floors, generators, UPS, 24x7 security, and so on, much like any other data center.
Bandwidth and network: a pair of Cisco 7301 routers, a pair of Foundry BigIron 8000 switches, and a pair of Rackable Systems 1U servers acting as load-balancing firewalls: P4 Xeon 2.66GHz, 2GB RAM, 2x80GB IDE, running CentOS and LVS.
16 web servers, all running Red Hat 9. Two serve static content: scripts, images, and the front page shown to non-registered users; four serve the front page for registered users; the remaining ten handle comment pages. The servers are Rackable 1U machines: two Xeon 2.66GHz processors, 2GB of RAM, and 2x80GB IDE drives.
7 database servers, all running CentOS 4, each configured with 2 dual-core Opteron 270s, 16GB RAM, and 4x36GB 15K RPM SCSI drives. One is a write-only database; the rest are read/write databases, and they can be swapped dynamically at any time.

Software: HTTP requests pass through the pound servers. Pound is a reverse proxy: it picks a web server to answer each request. Slashdot runs six pounds in total, one for encrypted HTTPS access (offered to subscribers) and five for standard HTTP. The web servers run Apache, and the database is MySQL. Slash 1.0 was finished in early 2000; the current version is 2.2.6.
___________________________________________________
http://meta.slashdot.org/article.pl?sid=07/10/22/145209

Today we have Part 2 in our exciting 2 part series about the infrastructure that powers Slashdot. Last week Uriah told us all about the hardware powering the system. This week, Jamie McCarthy picks up the story and tells us about the software... from pound to memcached to mysql and more. Hit that link and read on.

The software side of Slashdot takes over at the point where our load balancers -- described in Friday's hardware story -- hand off your incoming HTTP request to our pound servers.

Pound is a reverse proxy, which means it doesn't service the request itself, it just chooses which web server to hand it off to. We run 6 pounds, one for HTTPS traffic and the other 5 for regular HTTP. (Didn't know we support HTTPS, did ya? It's one of the perks for subscribers: you get to read Slashdot on the same webhead that admins use, which is always going to be responsive even during a crush of traffic -- because if it isn't, Rob's going to breathe down our necks!)

The pounds send traffic to one of the 16 apaches on our 16 webheads -- 15 regular, and the 1 HTTPS. Now, pound itself is so undemanding that we run it side-by-side with the apaches. The HTTPS pound handles SSL itself, handing off a plaintext HTTP request to its machine's apache, so the apache it redirects traffic to doesn't need mod_ssl compiled in. One less headache! Of our other 15 webheads, 5 also run a pound, not to distribute load but just for redundancy.

(Trivia: pound normally adds an X-Forwarded-For header, which Slash::Apache substitutes for the (internal) IP of pound itself. But sometimes if you use a proxy on the internet to do something bad, it will send us an X-Forwarded-For header too, which we use to try to track abuse. So we patched pound to insert a special X-Forward-Pound header, so it doesn't overwrite what may come from an abuser's proxy.)
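
A minimal mod_perl 1 (Apache 1.3) sketch of how a handler might read the two headers side by side; the package name and logic are invented for illustration, not Slash's actual code:

```perl
package My::PoundHeaders;   # name invented for this sketch
use strict;
use Apache::Constants qw(OK);

sub handler {
    my $r = shift;

    # Header a client-side proxy may have sent -- kept for abuse tracking:
    my $proxy_xff = $r->header_in('X-Forwarded-For');

    # Header inserted by the patched pound; never clobbers the one above:
    my $pound_hdr = $r->header_in('X-Forward-Pound');

    $r->notes(pound_addr => $pound_hdr) if defined $pound_hdr;
    return OK;
}
1;
```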

The other 15 webheads are segregated by type. This segregation is mostly what pound is for. We have 2 webheads for static (.shtml) requests, 4 for the dynamic homepage, 6 for dynamic comment-delivery pages (comments, article, pollBooth.pl), and 3 for all other dynamic scripts (ajax, tags, bookmarks, firehose). We segregate partly so that if there's a performance problem or a DDoS on a specific page, the rest of the site will remain functional. We're constantly changing the code and this sets up "performance firewalls" for when us silly coders decide to write infinite loops.
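
For illustration, here is what this kind of URL-based routing can look like in a pound config (pound 2.x-style syntax; the addresses, pool sizes, and patterns below are invented, and the real config surely differs):

```
# Hypothetical pound.cfg sketch -- all addresses invented
ListenHTTP
    Address 0.0.0.0
    Port    80

    # Static .shtml pages go to the static webheads
    Service
        URL ".*\.shtml"
        BackEnd
            Address 10.0.0.11
            Port    80
        End
        BackEnd
            Address 10.0.0.12
            Port    80
        End
    End

    # Comment-delivery scripts get their own pool
    Service
        URL "/(comments|article|pollBooth)\.pl.*"
        BackEnd
            Address 10.0.0.21
            Port    80
        End
    End
End
```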

But we also segregate for efficiency reasons like httpd-level caching and MaxClients tuning. Our webhead bottleneck is CPU, not RAM. We run MaxClients that might seem absurdly low (5-15 for dynamic webheads, 25 for static) but our philosophy is if we're not turning over requests quickly anyway, something's wrong, and stacking up more requests won't help the CPU chew through them any faster.
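
Concretely, that tuning is a single Apache 1.3 directive per webhead (values here taken from the ranges quoted above):

```apache
# httpd.conf sketch -- cap concurrent children so the CPU finishes
# requests instead of juggling a long queue of them
MaxClients 10     # a dynamic webhead (the article quotes 5-15)
# MaxClients 25   # a static webhead
```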

All the webheads run the same software, which they mount from a /usr/local exported by a read-only NFS machine. Everyone I've ever met outside of this company gives an involuntary shudder when NFS is mentioned, and yet we haven't had any problems since shortly after it was set up (2002-ish). I attribute this to a combination of our brilliant sysadmins and the fact that we only export read-only. The backend task that writes to /usr/local (to update index.shtml every minute, for example) runs on the NFS server itself.
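
A read-only export like that is one line in /etc/exports on the NFS server (the subnet below is invented for illustration):

```
# /etc/exports sketch -- read-only export of the shared software tree
/usr/local 10.0.0.0/24(ro,sync)
```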

The apaches are version 1.3, because there's never been a reason for us to switch to 2.0. We compile in mod_perl, and lingerd to free up RAM during delivery, but the only other nonstandard module we use is mod_auth_useragent to keep unfriendly bots away. Slash does make extensive use of each phase of the request loop (largely so we can send our 403's to out-of-control bots using a minimum of resources, and so your page is fully on its way while we write to the logging DB).
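
A minimal sketch of the early-phase bot rejection idea under mod_perl 1; the package name, patterns, and hook choice are assumptions, not Slash's real handler:

```perl
package My::BotGate;
# Hypothetical PerlAccessHandler for Apache 1.3 + mod_perl 1: refuse
# abusive user agents during the access phase, before any page generation.
use strict;
use Apache::Constants qw(OK FORBIDDEN);

sub handler {
    my $r  = shift;
    my $ua = $r->header_in('User-Agent') || '';
    return FORBIDDEN if $ua =~ /BadBot|EvilCrawler/i;   # invented patterns
    return OK;
}
1;
```

Wired up in httpd.conf with a line like `PerlAccessHandler My::BotGate`.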

Slash, of course, is the open-source perl code that runs Slashdot. If you're thinking of playing around with it, grab a recent copy from CVS: it's been years since we got around to a tarball release. The various scripts that handle web requests access the database through Slash's SQL API, implemented on top of DBD::mysql (now maintained, incidentally, by one of the original Slash 1.0 coders) and of course DBI.pm. The most interesting parts of this layer might be:

(a) We don't use Apache::DBI. We use connect_cached, but actually our main connection cache is the global objects that hold the connections. Some small chunks of data are so frequently used that we keep them around in those objects.
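
A sketch of the connect_cached-plus-object pattern using plain DBI; the DSN, credentials, and cached query are illustrative:

```perl
use strict;
use DBI;

# DBI's built-in connect_cached returns the same live handle for
# identical connection arguments instead of reconnecting each time.
my $dbh = DBI->connect_cached(
    'DBI:mysql:database=slash;host=db-reader',   # DSN invented
    'slash_user', 'secret',
    { RaiseError => 1 },
);

# The pattern described above: a long-lived object holds the handle
# plus small, hot chunks of data cached alongside it.
my $db_object = { dbh => $dbh, cache => {} };
$db_object->{cache}{vars} ||= $dbh->selectall_hashref(
    'SELECT name, value FROM vars', 'name');
```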

(b) We almost never use statement handles. We have eleven ways of doing a SELECT and the differences are mostly how we massage the results into the perl data structure they return.
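
Stock DBI already hints at what those variations look like: each convenience method hands back a differently-shaped Perl structure, no statement handle in sight (assumes the $dbh from the sketch above; table names illustrative):

```perl
# scalar(s) from one row
my ($count) = $dbh->selectrow_array('SELECT COUNT(*) FROM comments');
# one row as a hashref
my $row  = $dbh->selectrow_hashref('SELECT * FROM stories LIMIT 1');
# all rows as an arrayref of arrayrefs
my $rows = $dbh->selectall_arrayref('SELECT sid, title FROM stories');
# first column of every row as an arrayref
my $cols = $dbh->selectcol_arrayref('SELECT sid FROM stories');
```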

(c) We don't use placeholders. Originally because DBD::mysql didn't take advantage of them, and now because we think any speed increase in a reasonably-optimized web app should be a trivial payoff for non-self-documenting argument order. Discuss!
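
The trade-off in miniature, assuming an illustrative users table and the $dbh from earlier sketches:

```perl
my $uid = 12345;

# With placeholders: the argument order is positional, not self-evident.
my $sth = $dbh->prepare('SELECT nickname FROM users WHERE uid = ?');
$sth->execute($uid);
my ($nick1) = $sth->fetchrow_array;

# Without placeholders: quote and interpolate, so each value sits next
# to the column it belongs to, at the cost of doing your own quoting.
my $q_uid = $dbh->quote($uid);
my ($nick2) = $dbh->selectrow_array(
    "SELECT nickname FROM users WHERE uid = $q_uid");
```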

(d) We built in replication support. A database object requested as a reader picks a random slave to read from for the duration of your HTTP request (or the backend task). We can weight them manually, and we have a task that reweights them automatically. (If we do something stupid and wedge a slave's replication thread, every Slash process, across 17 machines, starts throttling back its connections to that machine within 10 seconds. This was originally written to handle slave DBs getting bogged down by load, but with our new faster DBs, that just never happens, so if a slave falls behind, one of us probably typed something dumb at the mysql> prompt.)
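
A hedged sketch of what weighted random slave selection can look like; hostnames and weights are invented, and the automatic reweighting and throttling described above are omitted:

```perl
use strict;

my %slave_weight = (      # host => weight, all invented
    'db-s1' => 3,
    'db-s2' => 3,
    'db-s3' => 1,         # slower box gets fewer reads
);

# Roll once across the total weight, then walk the hosts until the
# roll lands inside one host's share.
sub pick_slave {
    my $total = 0;
    $total += $_ for values %slave_weight;
    my $roll = rand($total);
    for my $host (sort keys %slave_weight) {
        $roll -= $slave_weight{$host};
        return $host if $roll < 0;
    }
}

# One pick per HTTP request, then reuse it for the request's duration.
my $reader_host = pick_slave();
```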

(e) We bolted on memcached support. Why bolted-on? Because back when we first tried memcached, we got a huge performance boost by caching our three big data types (users, stories, comment text) and we're pretty sure additional caching would provide minimal benefit at this point. Memcached's main use is to get and set data objects, and Slash doesn't really bottleneck that way.
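
A sketch of that get/set usage with Cache::Memcached, the classic perl client; the servers, key scheme, expiry, and loader are invented:

```perl
use strict;
use Cache::Memcached;

my $memd = Cache::Memcached->new({
    servers => [ '10.0.0.31:11211', '10.0.0.32:11211' ],   # invented
});

sub load_user_from_db { my ($uid) = @_; return { uid => $uid } }  # stand-in

# Cache one of the three big data types named above by primary key.
my $uid  = 12345;
my $user = $memd->get("user:$uid");
unless ($user) {
    $user = load_user_from_db($uid);
    $memd->set("user:$uid", $user, 300);   # 5-minute expiry, invented
}
```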

Slash 1.0 was written way back in early 2000 with decent support for get and set methods to abstract objects out of a database (getDescriptions, subclassed _wheresql) -- but over the years we've only used them a few times. Most data types that are candidates to be objectified either are processed in large numbers (like tags and comments), in ways that would be difficult to do efficiently by subclassing, or have complicated table structures and pre- and post-processing (like users) that would make any generic objectification code pretty complicated. So most data access is done through get and set methods written custom for each data type, or, just as often, through methods that perform one specific update or select.

Overall, we're pretty happy with the database side of things. Most tables are fairly well normalized, not fully but mostly, and we've found this improves performance in most cases. Even on a fairly large site like Slashdot, with modern hardware and a little thinking ahead, we're able to push code and schema changes live quickly. Thanks to running multiple-master replication, we can keep the site fully live even during blocking queries like ALTER TABLE. After changes go live, we can find performance problem spots and optimize (which usually means caching, caching, caching, and occasionally multi-pass log processing for things like detecting abuse and picking users out of a hat who get mod points).

In fact, I'll go further than "pretty happy." Writing a database-backed web site has changed dramatically over the past seven years. The database used to be the bottleneck: centralized, hard to expand, slow. Now even a cheap DB server can run a pretty big site if you code defensively, and thanks to Moore's Law, memcached, and improvements in open-source database software, that part of the scaling issue isn't really a problem until you're practically the size of eBay. It's an exciting time to be coding web applications.
