MySQL Server Memory Usage
Every so often people ask me how they should estimate the memory consumption of a MySQL Server in a given configuration, and what formula they could use.
The reasons to worry about memory usage are quite understandable. If you configure MySQL Server to use too little memory, it will likely perform suboptimally. If, however, you configure it to consume too much memory, it may crash, fail to execute queries, or start swapping, slowing down drastically. On now-legacy 32-bit platforms you could also run out of address space, so that had to be watched as well.
Having said that, I do not think looking for a secret formula to compute your possible memory usage is the right approach to this problem. The reasons are that this formula is very complex nowadays and, even more importantly, the "theoretically possible" maximum it provides has nothing to do with real memory consumption. In fact, a typical server with 8GB of memory will often run with a theoretical maximum memory usage of 100GB or more. Furthermore, there is no easy "overcommit factor" you can use; it really depends on the application and configuration. Some applications will drive the server to 10% of the theoretical memory consumption, others to only 1%.
So what could you do instead? First, take a look at the global buffers which are allocated at start and always there: key_buffer_size, innodb_buffer_pool_size, innodb_additional_mem_pool_size, innodb_log_buffer_size, and query_cache_size. If you're using MyISAM seriously, you can also add the size of the operating system cache you would like MySQL to use for your tables. Take this number, add to it the amount of memory the operating system and other applications need, and add perhaps 32MB more for the MySQL Server code and various small static buffers. This is the memory you can consider used when you just start MySQL Server. The rest of the memory is available for connections. For example, on an 8GB server everything listed might add up to 6GB, leaving 2GB for your threads.
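As a rough illustration (a sketch, not an official formula), you can pull these globals straight from the server and sum them. This assumes a local mysql client that can connect, and a server recent enough to support SHOW VARIABLES WHERE:

# Sketch: sum the always-allocated global buffers (values are in bytes).
mysql -N -e "SHOW VARIABLES WHERE Variable_name IN
  ('key_buffer_size','innodb_buffer_pool_size','innodb_additional_mem_pool_size',
   'innodb_log_buffer_size','query_cache_size')" |
awk '{ total += $2 } END { printf "global buffers: %.0f MB\n", total/1024/1024 }'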
Each thread connecting to MySQL Server needs its own buffers. About 256KB is allocated at once even if the thread is idle; this covers the default thread stack, the net buffer, etc. If a transaction is started, some more space can add up. Running small queries might only barely increase memory consumption for a given thread; however, if the thread performs complex operations such as full table scans or sorts, or needs temporary tables, as much as read_buffer_size, sort_buffer_size, read_rnd_buffer_size, and tmp_table_size of memory might be allocated. These buffers are only allocated on demand and freed once a given stage of the query is done. Some of them are allocated as a single chunk at once; others, for example tmp_table_size, are rather a maximum amount of memory MySQL will allocate for the operation. Note it is more complicated than one may think: multiple buffers of the same type might be allocated, for example to handle subqueries. For some special queries memory usage might be even larger: bulk inserts into MyISAM tables may allocate bulk_insert_buffer_size bytes of memory, and myisam_sort_buffer_size is used for ALTER TABLE, OPTIMIZE TABLE, and REPAIR TABLE commands.
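As a hypothetical illustration (the table and column names below are made up), note that a session can raise its own buffers above the server-wide default, so global settings alone do not bound per-thread memory:

# The sort below may allocate up to 16MB for this one statement alone.
mysql -e "SET SESSION sort_buffer_size = 16*1024*1024;
          SELECT * FROM test.t ORDER BY some_unindexed_col LIMIT 10;"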
For OLTP applications with simple queries, memory consumption is often less than 1MB per thread with default buffers, and you really do not need to increase per-thread buffers unless you have complex queries. Sorting 10 rows will be as fast with a 1MB sort buffer as with 16MB (actually, 16MB might even be slower, but that is another story).
Another approach you may take is to come up with the amount of memory you want MySQL Server to consume at peak. This can easily be computed from the memory needed for the OS, the file cache, and other applications. For a 32-bit environment you should also take 32-bit limits into account and probably limit the "mysqld" size to about 2.5GB (the exact number depends on a lot of factors). Now you can use "ps aux" to see the VSZ, the virtual memory allocated by the MySQL process. You can also look at the resident memory, but I find it less helpful as it may go down because of swapping, which is not what you would like to see. Monitor how the value changes so you know the memory requirements with the current settings, and increase/decrease values appropriately.
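For example, a trivial way to watch the trend (a sketch; adjust the sampling interval to taste):

# Print mysqld's virtual size (VSZ) and resident size (RSS), in KB, once a minute.
while true; do ps -o vsz=,rss=,comm= -C mysqld; sleep 60; done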
Some may say: hey, we want a 100% guarantee that our server will never run out of memory, no matter which queries or users decide to run. Unfortunately, this is close enough to impossible to be impractical. Here is why:
List of rarely considered MySQL Server Memory Requirements
Thread buffers can be allocated more than once for each thread. Consider, for example, subqueries: each layer may need its own read_buffer, sort_buffer, tmp_table_size, etc.
Many variables can be set per connection. So you can't rely on global values if developers may use their own session values to run some queries.
There can be multiple key caches. Multiple key caches can be created to accommodate query execution.
Query parsing and optimization needs memory. This is usually small enough to be ignored, but certain queries can have very large memory requirements for this step, especially specially crafted ones.
Stored procedures. Complex stored procedures may require a lot of memory.
Prepared statements and cursors. A single connection may have many prepared statements and cursors. Their number can be limited, but each of them can still consume a very large amount of memory.
InnoDB table cache. InnoDB has its own table cache in which metadata about each table accessed since startup is stored. It is never purged and may be large if you have a lot of tables. It also means a user with the CREATE TABLE privilege may be able to run MySQL Server out of memory.
MyISAM buffers. MyISAM may allocate a buffer large enough to contain the largest record in the given table, which is held until the table is closed.
Federated storage engine. This may have unbounded memory requirements when retrieving result sets from remote queries.
BLOBs may require 3x the memory. This is important if you're dealing with large BLOBs (your max_allowed_packet is large): processing a 256MB blob may require 768MB of memory.
Storage engines. In general, storage engines may have their own per-thread or global memory allocations which are not tuned as buffers. Watch for these, especially now that many storage engines are being released for MySQL by various parties.
I do not pretend this is a complete list. On the contrary, I'm quite sure I've missed something (drop me a note if you have something to add). But the main point is: there are a lot of memory consumers out there, and trying to find the peak possible usage for each is impractical. So my advice would be to measure what you get in practice and how memory consumption reacts to changing various variables. For example, you may find that increasing sort_buffer_size from 1MB to 4MB with max_connections of 1000 increases peak memory consumption by just 30MB, not the 3000MB you might have calculated.
Making a Debian or Ubuntu Machine an LDAP Authentication Client
This document describes how to make an Ubuntu or Debian Linux machine authenticate logins against LDAP.
So you've got an LDAP server floating around and you'd like to have your Ubuntu or Debian client authenticate against it. It's assumed here that you already have an LDAP server and you or your admin can provide the answers to some of the questions asked upon configuration. Firstly, you'll need to open up your favourite package manager and install libpam-ldap and libnss-ldap:
$ apt-get install libpam-ldap libnss-ldap
This command will bring down all the required libraries to enable you to have your machine authenticating against the LDAP server of your dreams. Once the packages start being unpacked you'll be hit up for a few questions:
- IP address / hostname of the LDAP server. ie: ldap.my.domain
- The search base of your LDAP domain. ie: dc=my,dc=domain
- You'll be asked the version of LDAP server you're connecting to, "Version 3" ought to be safe in most cases.
- A screen titled "Configuring LIBNSS-LDAP" will appear with only the "OK" option. Select it :)
- On the next screen you'll be asked if you want to make root the DB admin. The best answer is "yes".
- Now you'll be asked whether the DB requires logging in, say "No"
- You'll be asked for the root login account for LDAP. It is often something like: cn=manager,dc=my,dc=domain
- Then you'll need to enter the LDAP password for the aforementioned LDAP account
That will see all the packages installed and the base configuration satisfied. If your LDAP server is already populated with content, then at this point you should be able to run commands such as "getent passwd <username>"; if that username is unique to LDAP and you get a response, then you answered all the questions correctly. Now you need to customise PAM to make it use LDAP for authentication. You'll need to run the following command:
$ sudo vi /etc/pam.d/sudo
Once deep in the bowels of the sudo file, you need to add one line above the existing line, something like this:
auth sufficient pam_ldap.so
auth required pam_unix.so
(Note: From Ubuntu 5.10 (Breezy) and Debian 3.1 (Sarge) you no longer need to edit /etc/pam.d/sudo.) This process now gets repeated for four more files, so I'll show the vi command and then the changes required:
$ sudo vi /etc/pam.d/common-account
account sufficient pam_ldap.so
account required pam_unix.so
$ sudo vi /etc/pam.d/common-auth
auth sufficient pam_ldap.so
auth required pam_unix.so nullok_secure
$ sudo vi /etc/pam.d/common-password
password sufficient pam_ldap.so
password required pam_unix.so nullok obscure min=4 max=8 md5
$ sudo vi /etc/pam.d/common-session
session sufficient pam_ldap.so
session required pam_unix.so
Last but not least we need to edit nsswitch.conf:
$ sudo vi /etc/nsswitch.conf
and once you're in that file, run this command:
:%s/compat/ldap files/g
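After that substitution, the relevant entries should look roughly like this (a sketch; on Debian/Ubuntu the passwd, group, and shadow lines are the ones that use compat):

passwd: ldap files
group: ldap files
shadow: ldap files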
Tada! If you've entered in all your local configuration information correctly, you'll have a living breathing LDAP authentication client. Enjoy :)
Notes:
1. You no longer need to edit the files under pam.d by hand; installing libpam-ldap configures the files under pam.d automatically based on your answers.
2. The updated /etc/pam.d/sudo may cause sudo to prompt for the password several times, and the online updater may also fail; the workaround is to comment out the pam_ldap.so and pam_unix.so lines in /etc/pam.d/sudo.
3. If an LDAP user needs sudo, add the user to the admin group; by default, all members of the admin group can sudo. Alternatively, edit /etc/sudoers to allow the group the LDAP user belongs to to sudo.
4. To give an LDAP group multiple members, add memberUid attributes to the group entry; the attribute can have multiple values, each of which is a username, not a user ID.
Installing Eclipse CDT 6.0 on Ubuntu 9.10
First of all, the latest CDT package downloaded directly from the Eclipse website seems to have a problem: the Finish button can never be clicked, which is very odd, though pressing Enter works as a substitute. Since that is unpleasant to use, I decided to try the deb packages instead.
Original article by 鱼漂 (www.eit.name); please credit when reposting.
The installation steps are as follows:
1. Install the JRE and JDK
sudo apt-get install sun-java6-jre
sudo apt-get install sun-java6-jdk
2. Install Eclipse
sudo apt-get install eclipse
After installing eclipse, you also need to install the following two packages, otherwise the CDT installation will fail:
sudo apt-get install eclipse-pde
sudo apt-get install eclipse-jdt
3. Install CDT
Open the plugin installation screen via Eclipse's Help -> Install New Software;
First, add a CDT download site, filling in the CDT download URL:
http://download.eclipse.org/tools/cdt/releases/galileo
(The download URL above can be found at http://www.eclipse.org/cdt/downloads.php)
Then install the CDT package from this site.
The differences between exec, source, and shell scripts
exec and source are both bash builtin commands; typing man exec or man source under bash shows the documentation for all the builtin commands.
Bash commands fall into two classes: external commands and builtin commands. External commands are implemented as separate programs invoked via system calls, such as sed, awk, and so on. Builtin commands are implemented inside the shell from a special file format (.def), such as cd, history, exec, and so on.
Before explaining the difference between exec and source, let's first explain the concept of fork.
fork is a Linux system call used to create a child process. The child process is a copy of the parent process: it receives a share of resources from the parent and inherits the parent's environment. The only thing that differs between child and parent is the pid (process ID).
Environment variables (the variables passed on to child processes; this heritability is the fundamental difference between local variables and environment variables) are passed only one way, from parent to child. No matter how the child's environment variables change, they never affect the parent's.
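A quick way to see this one-way inheritance in action:

FOO=local_only                     # local variable: not inherited
export BAR=exported                # environment variable: inherited
sh -c 'echo "FOO=$FOO BAR=$BAR"'   # the child prints: FOO= BAR=exported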
Shell scripts:
There are two ways to execute a shell script: one spawns a new shell and runs the script inside it; the other runs it in the current shell without starting another one.
To spawn a new shell and run the script in it, add the following line at the beginning of the script:
#!/bin/sh
Ordinary script files (.sh) use this method. It first starts a new sub-shell (a new child process) and then executes the commands inside it.
The other method is the source command mentioned above, which does not spawn a new shell but executes every command in the current shell.
source:
The source command is the same as the dot (.) command.
Typing man source under bash and finding the entry for source, you can see the explanation "Read and execute commands from filename in the current shell environment and ...". From this we know the source command executes the commands in the given file within the current process, rather than spawning a child process (or sub-shell).
exec:
Typing man exec under bash and finding the entry for exec, you can see the words "No new process is created.", which tell us that exec does not spawn a new child process either. So what is the difference between exec and source?
When exec runs, it terminates the current shell process and replaces it with the command that follows, which then continues executing in its place.
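A minimal sketch that makes the three behaviours visible (test.sh is a throwaway script created just for this demonstration):

cat > test.sh <<'EOF'
#!/bin/sh
echo "script pid: $$"
EOF
chmod +x test.sh
echo "shell pid: $$"
./test.sh        # new sub-shell: prints a different pid, then returns
. ./test.sh      # source: prints the caller's own pid, then returns
exec ./test.sh   # replaces the shell: same pid, and the caller never returns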
Encrypting shell scripts with shc
The power of the shell on Linux systems is plain for all to see, and every administrator writes a set of shell scripts to simplify their own work. Sometimes we let others use the scripts we wrote, but there is an awkward problem: a script may contain sensitive information such as accounts and passwords. In that case we need a tool, shc, to encrypt our shell scripts.
shc is a script compiler that uses the RC4 encryption algorithm; it can turn a shell script into a binary executable (supporting both static and dynamic linking).
shc homepage:
http://www.datsi.fi.upm.es/%7Efrosal/
Installing shc
Most systems provide a binary package of shc in their own repositories; if your system unfortunately does not, you can download the source package from the project homepage and install it yourself.
Installing from source:
wget http://www.datsi.fi.upm.es/%7Efrosal/sources/shc-3.8.6.tgz
tar vxf shc-3.8.6.tgz
cd shc-3.8.6
make test
make strings
make install
How to use shc:
shc -v -f myshell.sh   # -f specifies the shell script to encrypt
Running this produces two files: myshell.sh.x and myshell.sh.x.c
myshell.sh.x is the executable binary, and myshell.sh.x.c is the generated C source for it.
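shc also has a few commonly used extras; a sketch (the expiry date format is dd/mm/yyyy):

# -r relaxes security so the binary can run on other machines with the same OS;
# -e sets an expiry date and -m the message shown once it has expired.
shc -r -e 31/12/2020 -m "this script has expired" -v -f myshell.sh
./myshell.sh.x   # the compiled binary runs just like the original script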
Using PHP sessions across subdomains
By default, PHP sets the session cookie for the exact host that issued it. Example: www.domain.com
The downside to this is that the session data can't travel with you to other subdomains. So if you started a session on www.domain.com, the session data would become unavailable on forums.domain.com. The solution is to change the domain PHP uses when it sets the 'PHPSESSID' cookie.
Assuming you have an init file that you include at the top of every PHP page, you can use the ini_set() function. Just add this to the top of your init page:
ini_set('session.cookie_domain', substr($_SERVER['SERVER_NAME'],strpos($_SERVER['SERVER_NAME'],"."),100));
This line of code takes the domain and lops off the subdomain.
Example: forums.domain.com -> .domain.com
Now, every time PHP sets the 'PHPSESSID' cookie, the cookie will be available to all subdomains!
Notes from 鱼漂:
1. The code above can be optimized as follows:
//sessions across subdomains
if (strpos($_SERVER['SERVER_NAME'],".")!==false) {
ini_set('session.cookie_domain',substr($_SERVER['SERVER_NAME'],strpos($_SERVER['SERVER_NAME'],".")));
}
2. Besides setting the session cookie domain, you also need to share the session storage across all the servers. If sessions are stored in files, all servers should share the session directory, e.g. via NFS or a SAN. Another approach is to have all servers save their sessions in the same database.
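For the file-storage case in note 2, the sharing itself is ordinary NFS; a sketch with a hypothetical host and paths (session.save_path in php.ini must point at the mounted directory):

# On each web server, mount the session directory exported by sessions-host.
mount -t nfs sessions-host:/export/php_sessions /var/lib/php5/sessions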
GRUB 2 Guide (Chinese)
In this guide, GRUB 2 means version 1.96 or newer. Legacy GRUB (version 0.97) is referred to simply as GRUB.
The main features GRUB 2 adds over the original GRUB include:
- Scripting support, including conditional statements and functions
- Dynamic module loading
- A rescue mode
- Customizable menus
- Themes
- A graphical boot menu and improved splash-screen support
- Booting directly from a bootable CD image file stored on the hard disk
- A new configuration file structure
- Support for non-x86 platforms (such as PowerPC)
- Full UUID support (not only in Ubuntu)
Note: openSUSE and Fedora have not yet adopted GRUB 2.
Full PDF download:
[file]attachment/200911/grub2.rar[/file]
[Repost] Debugging with core files on Linux
When a program exits abnormally, the kernel generates a core file in the current working directory (a memory image, together with debugging information). Examining the core file with gdb points to the file and line of the code that caused the crash.
1. The switch for generating core files, and the size limit
1) Use the ulimit -c command to check whether core file generation is enabled. If the result is 0, the feature is off and no core file will be generated.
2) The ulimit -c filesize command limits the size of core files (filesize is in kilobytes); ulimit -c unlimited removes the size limit. If the dump exceeds the limit it is truncated, yielding an incomplete core file, and gdb will report an error when you debug it.
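For example:

ulimit -c            # show the current limit; 0 means core dumps are disabled
ulimit -c 1024       # allow core files up to 1024 KB
ulimit -c unlimited  # no size limit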
2. Core file name and output path
Core file output path:
The directory from which the executable was launched (the process's working directory).
If the system generates core files without any extension, they are all named core, and a newly generated core file overwrites the old one.
1) /proc/sys/kernel/core_uses_pid controls whether the pid is appended to the core file name. If the file contains 1, the pid is added as an extension and core files are named core.xxxx; if 0, they are all simply named core.
This file can be changed with the following command:
echo "1" > /proc/sys/kernel/core_uses_pid
2) /proc/sys/kernel/core_pattern controls where core files are saved and how they are named.
This file can be changed with the following command:
echo "/corefile/core-%e-%p-%t" > /proc/sys/kernel/core_pattern
This sends all core files to the /corefile directory, named core-<command>-<pid>-<timestamp>.
The format specifiers are:
%p - insert pid into filename
%u - insert current uid into filename
%g - insert current gid into filename
%s - insert signal that caused the coredump into the filename
%t - insert UNIX time that the coredump occurred into filename
%h - insert hostname where the coredump happened into filename
%e - insert coredumping executable name into filename
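The same setting can also be applied through sysctl, since kernel.core_pattern maps onto /proc/sys/kernel/core_pattern:

sysctl -w kernel.core_pattern=/corefile/core-%e-%p-%t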
3. Examining the core file
Core files are examined with gdb.
gdb ./a.out
core-file core.xxxx
Then the bt command shows where the program went wrong.
The following two command forms have the same effect, but in some environments they do not work, so the form above is recommended.
1)gdb -core=core.xxxx
file ./a.out
bt
2)gdb -c core.xxxx
file ./a.out
bt
4. Using core files to debug on a development board
If the development board also runs Linux, the same core-file debugging method applies. If gdb is not available on the board, copy the board's environment (the dependent libraries), the executable, and the core file to a PC running Linux.
To debug a core file produced on the board from the PC, use the gdb bundled with the cross-compiler, and set the solib-absolute-prefix and solib-search-path variables inside gdb so that it can find the executable's dependent libraries. To avoid setting these variables every time gdb starts, you can create a configuration file: put a .gdbinit file in the directory from which gdb will be run.
Configuration file contents:
set solib-absolute-prefix YOUR_CROSS_COMPILE_PATH
set solib-search-path YOUR_CROSS_COMPILE_PATH
set solib-search-path YOUR_DEVELOPER_TOOLS_LIB_PATH
handle SIG32 nostop noprint pass
Note: the executable being debugged must be compiled with -g, or the core file will not yield proper error information! Sometimes the core dump is very large, exceeding the board's storage limit, and the resulting core file is truncated and unusable; you can work around this by mounting storage from the PC.
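Putting it together, a minimal sketch (crash.c is a hypothetical test program):

gcc -g -o crash crash.c   # -g keeps the debug info gdb needs
ulimit -c unlimited
./crash                   # crashes and leaves a core file behind
gdb ./crash core          # or core.<pid>, depending on core_uses_pid
(gdb) bt                  # shows the faulting stack with file and line numbers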
This article comes from a CSDN blog; please credit the source when reposting: http://blog.csdn.net/shaovey/archive/2008/07/31/2744487.aspx
Installing Adobe Reader 9.1 on Ubuntu 9.04 Beta
2. After running the installer, it installs into the /opt/Adobe/Reader9 directory
3. Download the Simplified Chinese font pack: FontPack910_chs_i486-linux.tar.bz2
4. Download the Traditional Chinese font pack: FontPack910_cht_i486-linux.tar.bz2
5. Extract the font pack archives into the CHSKIT and CHTKIT directories (commands sketched below): tar jxvf <archive>
6. Run the INSTALL script in the CHSKIT and CHTKIT directories: sudo CHTKIT/INSTALL
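Steps 5 and 6 as commands, assuming the archives unpack into CHSKIT and CHTKIT as described:

tar jxvf FontPack910_chs_i486-linux.tar.bz2
tar jxvf FontPack910_cht_i486-linux.tar.bz2
sudo CHSKIT/INSTALL
sudo CHTKIT/INSTALL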