Gitee: https://gitee.com/fastdfs100
Download: FastDFS, libfastcommon, fastdfs-nginx-module
1. Install libfastcommon

tar -xf libfastcommon-V1.0.55.tar.gz  # extract
mv libfastcommon-V1.0.55 libfastcommon
cd libfastcommon/  # enter the directory
./make.sh && ./make.sh install  # compile and install
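To verify the install, make.sh install normally places the shared library under /usr/lib64 and /usr/lib (a quick check; paths may differ on your system):

ls -l /usr/lib64/libfastcommon.so /usr/lib/libfastcommon.so  # should exist after a successful install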

2. Install FastDFS

tar -xf fastdfs-V6.07.tar.gz  # extract
mv fastdfs-V6.07 fastdfs
cd fastdfs/  # enter the directory
./make.sh && ./make.sh install  # compile and install
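The FastDFS binaries are installed into /usr/bin; a quick check that the tracker, storage and client tools are present:

ls /usr/bin/fdfs_*  # fdfs_trackerd, fdfs_storaged, fdfs_monitor, fdfs_upload_file, ...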

Create the directories

mkdir -p /data/fastdfs/
mkdir /data/fastdfs/conf
mkdir /data/fastdfs/data
mkdir /data/fastdfs/logs
mkdir /data/fastdfs/storage

Create the configuration files under /data/fastdfs/conf
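make.sh install also drops sample config files into /etc/fdfs; if you prefer, copy those samples here and edit them instead of writing the files from scratch (filenames assume the V6.07 samples):

cp /etc/fdfs/storage.conf.sample /data/fastdfs/conf/storage.conf
cp /etc/fdfs/tracker.conf.sample /data/fastdfs/conf/tracker.conf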
storage.conf

# is this config file disabled
# false for enabled
# true for disabled
disabled=false

# the name of the group this storage server belongs to
#
# comment or remove this item for fetching from tracker server,
# in this case, use_storage_id must set to true in tracker.conf,
# and storage_ids.conf must be configed correctly.
group_name=mmopen

# bind an address of this host
# empty for bind all addresses of this host
bind_addr=

# if bind an address of this host when connect to other servers
# (this storage server as a client)
# true for binding the address configed by above parameter: "bind_addr"
# false for binding any address of this host
client_bind=true

# the storage server port
port=23000

# connect timeout in seconds
# default value is 30s
connect_timeout=10

# network timeout in seconds
# default value is 30s
network_timeout=60

# heart beat interval in seconds
heart_beat_interval=30

# disk usage report interval in seconds
stat_report_interval=60

# the base path to store data and log files
# base path
base_path=/data/fastdfs

# max concurrent connections the server supported
# default value is 256
# more max_connections means more memory will be used
# you should set this parameter larger, eg. 10240
max_connections=1024

# the buff size to recv / send data
# this parameter must more than 8KB
# default value is 64KB
# since V2.00
buff_size = 256KB

# accept thread count
# default value is 1
# since V4.07
accept_threads=1

# work thread count, should <= max_connections
# work thread deal network io
# default value is 4
# since V2.00
work_threads=4

# if disk read / write separated
##  false for mixed read and write
##  true for separated read and write
# default value is true
# since V2.00
disk_rw_separated = true

# disk reader thread count per store base path
# for mixed read / write, this parameter can be 0
# default value is 1
# since V2.00
disk_reader_threads = 1

# disk writer thread count per store base path
# for mixed read / write, this parameter can be 0
# default value is 1
# since V2.00
disk_writer_threads = 1

# when no entry to sync, try read binlog again after X milliseconds
# must > 0, default value is 200ms
sync_wait_msec=50

# after sync a file, usleep milliseconds
# 0 for sync successively (never call usleep)
sync_interval=0

# storage sync start time of a day, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
sync_start_time=00:00

# storage sync end time of a day, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
sync_end_time=23:59

# write to the mark file after sync N files
# default value is 500
write_mark_file_freq=500

# path(disk or mount point) count, default value is 1
store_path_count=1

# store_path#, based 0, if store_path0 not exists, it's value is base_path
# the paths must be exist
# NOTE: the store paths' order is very important, don't mess up.
# path where the files are stored
store_path0=/data/fastdfs/storage
#store_path1=/home/yuqing/fastdfs2

# subdir_count  * subdir_count directories will be auto created under each
# store_path (disk), value can be 1 to 256, default value is 256
subdir_count_per_path=256

# tracker_server can ocur more than once for multi tracker servers.
# the value format of tracker_server is "HOST:PORT",
#   the HOST can be hostname or ip address,
#   and the HOST can be dual IPs or hostnames seperated by comma,
#   the dual IPS must be an intranet IP and an extranet IP.
#   such as: 192.168.2.100,122.244.141.46
# local host ip:22122
tracker_server=192.168.0.1:22122

#standard log level as syslog, case insensitive, value list:
### emerg for emergency
### alert
### crit for critical
### error
### warn for warning
### notice
### info
### debug
log_level=info

#unix group name to run this program,
#not set (empty) means run by the group of current user
run_by_group=

#unix username to run this program,
#not set (empty) means run by current user
run_by_user=

# allow_hosts can ocur more than once, host can be hostname or ip address,
# "*" (only one asterisk) means match all ip addresses
# we can use CIDR ips like 192.168.5.64/26
# and also use range like these: 10.0.1.[0-254] and host[01-08,20-25].domain.com
# for example:
# allow_hosts=10.0.1.[1-15,20]
# allow_hosts=host[01-08,20-25].domain.com
# allow_hosts=192.168.5.64/26
allow_hosts=*

# the mode of the files distributed to the data path
# 0: round robin(default)
# 1: random, distributted by hash code
file_distribute_path_mode=0

# valid when file_distribute_to_path is set to 0 (round robin),
# when the written file count reaches this number, then rotate to next path
# default value is 100
file_distribute_rotate_count=100

# call fsync to disk when write big file
# 0: never call fsync
# other: call fsync when written bytes >= this bytes
# default value is 0 (never call fsync)
fsync_after_written_bytes=0

# sync log buff to disk every interval seconds
# must > 0, default value is 10 seconds
sync_log_buff_interval=10

# sync binlog buff / cache to disk every interval seconds
# default value is 60 seconds
sync_binlog_buff_interval=1

# sync storage stat info to disk every interval seconds
# default value is 300 seconds
sync_stat_file_interval=300

# thread stack size, should >= 512KB
# default value is 512KB
thread_stack_size=512KB

# the priority as a source server for uploading file.
# the lower this value, the higher its uploading priority.
# default value is 10
upload_priority=10

# the NIC alias prefix, such as eth in Linux, you can see it by ifconfig -a
# multi aliases split by comma. empty value means auto set by OS type
# default values is empty
if_alias_prefix=

# if check file duplicate, when set to true, use FastDHT to store file indexes
# 1 or yes: need check
# 0 or no: do not check
# default value is 0
check_file_duplicate=0

# file signature method for check file duplicate
## hash: four 32 bits hash code
## md5: MD5 signature
# default value is hash
# since V4.01
file_signature_method=hash

# namespace for storing file indexes (key-value pairs)
# this item must be set when check_file_duplicate is true / on
key_namespace=FastDFS

# set keep_alive to 1 to enable persistent connection with FastDHT servers
# default value is 0 (short connection)
keep_alive=0

# you can use "#include filename" (not include double quotes) directive to
# load FastDHT server list, when the filename is a relative path such as
# pure filename, the base path is the base path of current/this config file.
# must set FastDHT server list when check_file_duplicate is true / on
# please see INSTALL of FastDHT for detail
##include /home/yuqing/fastdht/conf/fdht_servers.conf

# if log to access log
# default value is false
# since V4.00
use_access_log = false

# if rotate the access log every day
# default value is false
# since V4.00
rotate_access_log = false

# rotate access log time base, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
# default value is 00:00
# since V4.00
access_log_rotate_time=00:00

# if rotate the error log every day
# default value is false
# since V4.02
rotate_error_log = false

# rotate error log time base, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
# default value is 00:00
# since V4.02
error_log_rotate_time=00:00

# rotate access log when the log file exceeds this size
# 0 means never rotates log file by log file size
# default value is 0
# since V4.02
rotate_access_log_size = 0

# rotate error log when the log file exceeds this size
# 0 means never rotates log file by log file size
# default value is 0
# since V4.02
rotate_error_log_size = 0

# keep days of the log files
# 0 means do not delete old log files
# default value is 0
log_file_keep_days = 0

# if skip the invalid record when sync file
# default value is false
# since V4.02
file_sync_skip_invalid_record=false

# if use connection pool
# default value is false
# since V4.05
use_connection_pool = false

# connections whose the idle time exceeds this time will be closed
# unit: second
# default value is 3600
# since V4.05
connection_pool_max_idle_time = 3600

# if compress the binlog files by gzip
# default value is false
# since V6.01
compress_binlog = false

# try to compress binlog time, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
# default value is 01:30
# since V6.01
compress_binlog_time=01:30

# use the ip address of this storage server if domain_name is empty,
# else this domain name will ocur in the url redirected by the tracker server
http.domain_name=

# the port of the web server on this storage server
http.server_port=8888

tracker.conf

# is this config file disabled
# false for enabled
# true for disabled
disabled = false

# bind an address of this host
# empty for bind all addresses of this host
bind_addr =

# the tracker server port
port = 22122

# connect timeout in seconds
# default value is 30
# Note: in the intranet network (LAN), 2 seconds is enough.
connect_timeout = 5

# network timeout in seconds for send and recv
# default value is 30
network_timeout = 60

# the base path to store data and log files
# base path
base_path = /data/fastdfs

# max concurrent connections this server support
# you should set this parameter larger, eg. 10240
# default value is 256
max_connections = 1024

# accept thread count
# default value is 1 which is recommended
# since V4.07
accept_threads = 1

# work thread count
# work threads to deal network io
# default value is 4
# since V2.00
work_threads = 4

# the min network buff size
# default value 8KB
min_buff_size = 8KB

# the max network buff size
# default value 128KB
max_buff_size = 128KB

# the method for selecting group to upload files
# 0: round robin
# 1: specify group
# 2: load balance, select the max free space group to upload file
store_lookup = 2

# which group to upload file
# when store_lookup set to 1, must set store_group to the group name
store_group = mmopen

# which storage server to upload file
# 0: round robin (default)
# 1: the first server order by ip address
# 2: the first server order by priority (the minimal)
# Note: if use_trunk_file set to true, must set store_server to 1 or 2
store_server = 0

# which path (means disk or mount point) of the storage server to upload file
# 0: round robin
# 2: load balance, select the max free space path to upload file
store_path = 0

# which storage server to download file
# 0: round robin (default)
# 1: the source storage server which the current file uploaded to
download_server = 0

# reserved storage space for system or other applications.
# if the free(available) space of any stoarge server in
# a group <= reserved_storage_space, no file can be uploaded to this group.
# bytes unit can be one of follows:
### G or g for gigabyte(GB)
### M or m for megabyte(MB)
### K or k for kilobyte(KB)
### no unit for byte(B)
### XX.XX% as ratio such as: reserved_storage_space = 10%
reserved_storage_space = 20%

#standard log level as syslog, case insensitive, value list:
### emerg for emergency
### alert
### crit for critical
### error
### warn for warning
### notice
### info
### debug
log_level = info

#unix group name to run this program,
#not set (empty) means run by the group of current user
run_by_group=

#unix username to run this program,
#not set (empty) means run by current user
run_by_user =

# allow_hosts can ocur more than once, host can be hostname or ip address,
# "*" (only one asterisk) means match all ip addresses
# we can use CIDR ips like 192.168.5.64/26
# and also use range like these: 10.0.1.[0-254] and host[01-08,20-25].domain.com
# for example:
# allow_hosts=10.0.1.[1-15,20]
# allow_hosts=host[01-08,20-25].domain.com
# allow_hosts=192.168.5.64/26
allow_hosts = *

# sync log buff to disk every interval seconds
# default value is 10 seconds
sync_log_buff_interval = 1

# check storage server alive interval seconds
check_active_interval = 120

# thread stack size, should >= 64KB
# default value is 256KB
thread_stack_size = 256KB

# auto adjust when the ip address of the storage server changed
# default value is true
storage_ip_changed_auto_adjust = true

# storage sync file max delay seconds
# default value is 86400 seconds (one day)
# since V2.00
storage_sync_file_max_delay = 86400

# the max time of storage sync a file
# default value is 300 seconds
# since V2.00
storage_sync_file_max_time = 300

# if use a trunk file to store several small files
# default value is false
# since V3.00
use_trunk_file = false

# the min slot size, should <= 4KB
# default value is 256 bytes
# since V3.00
slot_min_size = 256

# the max slot size, should > slot_min_size
# store the upload file to trunk file when it's size <=  this value
# default value is 16MB
# since V3.00
slot_max_size = 1MB

# the alignment size to allocate the trunk space
# default value is 0 (never align)
# since V6.05
# NOTE: the larger the alignment size, the less likely of disk
#       fragmentation, but the more space is wasted.
trunk_alloc_alignment_size = 256

# if merge contiguous free spaces of trunk file
# default value is false
# since V6.05
trunk_free_space_merge = true

# if delete / reclaim the unused trunk files
# default value is false
# since V6.05
delete_unused_trunk_files = false

# the trunk file size, should >= 4MB
# default value is 64MB
# since V3.00
trunk_file_size = 64MB

# if create trunk file advancely
# default value is false
# since V3.06
trunk_create_file_advance = false

# the time base to create trunk file
# the time format: HH:MM
# default value is 02:00
# since V3.06
trunk_create_file_time_base = 02:00

# the interval of create trunk file, unit: second
# default value is 38400 (one day)
# since V3.06
trunk_create_file_interval = 86400

# the threshold to create trunk file
# when the free trunk file size less than the threshold,
# will create he trunk files
# default value is 0
# since V3.06
trunk_create_file_space_threshold = 20G

# if check trunk space occupying when loading trunk free spaces
# the occupied spaces will be ignored
# default value is false
# since V3.09
# NOTICE: set this parameter to true will slow the loading of trunk spaces
# when startup. you should set this parameter to true when neccessary.
trunk_init_check_occupying = false

# if ignore storage_trunk.dat, reload from trunk binlog
# default value is false
# since V3.10
# set to true once for version upgrade when your version less than V3.10
trunk_init_reload_from_binlog = false

# the min interval for compressing the trunk binlog file
# unit: second, 0 means never compress
# FastDFS compress the trunk binlog when trunk init and trunk destroy
# recommand to set this parameter to 86400 (one day)
# default value is 0
# since V5.01
trunk_compress_binlog_min_interval = 86400

# the interval for compressing the trunk binlog file
# unit: second, 0 means never compress
# recommand to set this parameter to 86400 (one day)
# default value is 0
# since V6.05
trunk_compress_binlog_interval = 86400

# compress the trunk binlog time base, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
# default value is 03:00
# since V6.05
trunk_compress_binlog_time_base = 03:00

# max backups for the trunk binlog file
# default value is 0 (never backup)
# since V6.05
trunk_binlog_max_backups = 7

# if use storage server ID instead of IP address
# if you want to use dual IPs for storage server, you MUST set
# this parameter to true, and configure the dual IPs in the file
# configured by following item "storage_ids_filename", such as storage_ids.conf
# default value is false
# since V4.00
use_storage_id = false

# specify storage ids filename, can use relative or absolute path
# this parameter is valid only when use_storage_id set to true
# since V4.00
storage_ids_filename = storage_ids.conf

# id type of the storage server in the filename, values are:
## ip: the ip address of the storage server
## id: the server id of the storage server
# this paramter is valid only when use_storage_id set to true
# default value is ip
# since V4.03
id_type_in_filename = id

# if store slave file use symbol link
# default value is false
# since V4.01
store_slave_file_use_link = false

# if rotate the error log every day
# default value is false
# since V4.02
rotate_error_log = false

# rotate error log time base, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
# default value is 00:00
# since V4.02
error_log_rotate_time = 00:00

# if compress the old error log by gzip
# default value is false
# since V6.04
compress_old_error_log = false

# compress the error log days before
# default value is 1
# since V6.04
compress_error_log_days_before = 7

# rotate error log when the log file exceeds this size
# 0 means never rotates log file by log file size
# default value is 0
# since V4.02
rotate_error_log_size = 0

# keep days of the log files
# 0 means do not delete old log files
# default value is 0
log_file_keep_days = 0

# if use connection pool
# default value is false
# since V4.05
use_connection_pool = true

# connections whose the idle time exceeds this time will be closed
# unit: second
# default value is 3600
# since V4.05
connection_pool_max_idle_time = 3600

# HTTP port on this tracker server
http.server_port = 8080

# check storage HTTP server alive interval seconds
# <= 0 for never check
# default value is 30
http.check_alive_interval = 30

# check storage HTTP server alive type, values are:
#   tcp : connect to the storge server with HTTP port only,
#        do not request and get response
#   http: storage check alive url must return http status 200
# default value is tcp
http.check_alive_type = tcp

# check storage HTTP server alive uri/url
# NOTE: storage embed HTTP server support uri: /status.html
http.check_alive_uri = /status.html
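With both config files in place, the daemons can be started by hand to verify the setup before wiring up the init scripts below (ports 22122 and 23000 come from the configs above; the log names are the FastDFS defaults under base_path/logs):

/usr/bin/fdfs_trackerd /data/fastdfs/conf/tracker.conf   # start the tracker first
/usr/bin/fdfs_storaged /data/fastdfs/conf/storage.conf   # then the storage server
netstat -tlnp | grep -E '22122|23000'                    # both ports should be listening
tail /data/fastdfs/logs/trackerd.log /data/fastdfs/logs/storaged.log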

Go to the /etc/init.d directory and edit the
fdfs_storaged file to change the config file path

#!/bin/bash
#
# fdfs_storaged Starts fdfs_storaged
#
#
# chkconfig: 2345 99 01
# description: FastDFS storage server
### BEGIN INIT INFO
# Provides: $fdfs_storaged
### END INIT INFO

# Source function library.
if [ -f /etc/init.d/functions ]; then
    . /etc/init.d/functions
fi

PRG=/usr/bin/fdfs_storaged
# modified: path to the storage config file
CONF=/data/fastdfs/conf/storage.conf

if [ ! -f $PRG ]; then
    echo "file $PRG does not exist!"
    exit 2
fi

if [ ! -f $CONF ]; then
    echo "file $CONF does not exist!"
    exit 2
fi

CMD="$PRG $CONF"
RETVAL=0

start() {
    echo -n "Starting FastDFS storage server: "
    $CMD &
    RETVAL=$?
    echo
    return $RETVAL
}

stop() {
    $CMD stop
    RETVAL=$?
    return $RETVAL
}

rhstatus() {
    status fdfs_storaged
}

restart() {
    $CMD restart &
}

case "$1" in
    start)   start ;;
    stop)    stop ;;
    status)  rhstatus ;;
    restart|reload) restart ;;
    condrestart)    restart ;;
    *)  echo "Usage: $0 {start|stop|status|restart|condrestart}"
        exit 1
esac

exit $?

fdfs_trackerd file: change the config file path as well

#!/bin/bash
#
# fdfs_trackerd Starts fdfs_trackerd
#
#
# chkconfig: 2345 99 01
# description: FastDFS tracker server
### BEGIN INIT INFO
# Provides: $fdfs_trackerd
### END INIT INFO

# Source function library.
if [ -f /etc/init.d/functions ]; then
    . /etc/init.d/functions
fi

PRG=/usr/bin/fdfs_trackerd
# modified: path to the tracker config file
CONF=/data/fastdfs/conf/tracker.conf

if [ ! -f $PRG ]; then
    echo "file $PRG does not exist!"
    exit 2
fi

if [ ! -f $CONF ]; then
    echo "file $CONF does not exist!"
    exit 2
fi

CMD="$PRG $CONF"
RETVAL=0

start() {
    echo -n $"Starting FastDFS tracker server: "
    $CMD &
    RETVAL=$?
    echo
    return $RETVAL
}

stop() {
    $CMD stop
    RETVAL=$?
    return $RETVAL
}

rhstatus() {
    status fdfs_trackerd
}

restart() {
    $CMD restart &
}

case "$1" in
    start)   start ;;
    stop)    stop ;;
    status)  rhstatus ;;
    restart|reload) restart ;;
    condrestart)    restart ;;
    *)  echo $"Usage: $0 {start|stop|status|restart|condrestart}"
        exit 1
esac

exit $?

Configure start on boot

vim /etc/apt/sources.list  # edit the apt sources
deb http://archive.ubuntu.com/ubuntu/ trusty main universe restricted multiverse  # append this line at the end of the file
apt update  # refresh the package index
apt install sysv-rc-conf  # install sysv-rc-conf
sudo sysv-rc-conf  # open the runlevel editor
Select runlevels 2, 3, 4 and 5 for fdfs_stor$ and fdfs_trac$, then press Q to quit.
# now the two services can be enabled to start on boot
sudo sysv-rc-conf fdfs_trackerd on
sudo sysv-rc-conf fdfs_storaged on
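If you would rather not add the trusty repository just for sysv-rc-conf, Ubuntu's native update-rc.d should be able to register the same LSB init scripts (an untested alternative, assuming the scripts above sit in /etc/init.d and are executable):

chmod +x /etc/init.d/fdfs_trackerd /etc/init.d/fdfs_storaged
sudo update-rc.d fdfs_trackerd defaults
sudo update-rc.d fdfs_storaged defaults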

Extract fastdfs-nginx-module-V1.22.tar.gz

tar -xf fastdfs-nginx-module-V1.22.tar.gz  # extract
mv fastdfs-nginx-module-V1.22 fastdfs-nginx-module
cd /opt/fastdfs-nginx-module/src  # enter the directory and edit the mod_fastdfs.conf config file

mod_fastdfs.conf

# connect timeout in seconds
# default value is 30s
connect_timeout=2

# network recv and send timeout in seconds
# default value is 30s
network_timeout=30

# the base path to store log files
base_path=/tmp

# if load FastDFS parameters from tracker server
# since V1.12
# default value is false
load_fdfs_parameters_from_tracker=true

# storage sync file max delay seconds
# same as tracker.conf
# valid only when load_fdfs_parameters_from_tracker is false
# since V1.12
# default value is 86400 seconds (one day)
storage_sync_file_max_delay = 86400

# if use storage ID instead of IP address
# same as tracker.conf
# valid only when load_fdfs_parameters_from_tracker is false
# default value is false
# since V1.13
use_storage_id = false

# specify storage ids filename, can use relative or absolute path
# same as tracker.conf
# valid only when load_fdfs_parameters_from_tracker is false
# since V1.13
storage_ids_filename = storage_ids.conf

# FastDFS tracker_server can ocur more than once, and tracker_server format is
#  "host:port", host can be hostname or ip address
# valid only when load_fdfs_parameters_from_tracker is true
# set this to the local host ip:22122
tracker_server=192.168.0.1:22122

# the port of the local storage server
# the default value is 23000
storage_server_port=23000

# the group name of the local storage server
# change the group name; the same group name is referenced in the nginx config
group_name=xxxx

# if the url / uri including the group name
# set to false when uri like /M00/00/00/xxx
# set to true when uri like ${group_name}/M00/00/00/xxx, such as group1/M00/xxx
# default value is false
# whether the url carries the group name; if nginx returns 400, check that this is set to true
url_have_group_name = true

# path(disk or mount point) count, default value is 1
# must same as storage.conf
store_path_count=1

# store_path#, based 0, if store_path0 not exists, it's value is base_path
# the paths must be exist
# must same as storage.conf
# file directory
store_path0=/data/fastdfs/storage
#store_path1=/home/yuqing/fastdfs1

# standard log level as syslog, case insensitive, value list:
### emerg for emergency
### alert
### crit for critical
### error
### warn for warning
### notice
### info
### debug
log_level=info

# set the log filename, such as /usr/local/apache2/logs/mod_fastdfs.log
# empty for output to stderr (apache and nginx error_log file)
log_filename=

# response mode when the file not exist in the local file system
## proxy: get the content from other storage server, then send to client
## redirect: redirect to the original storage server (HTTP Header is Location)
response_mode=proxy

# the NIC alias prefix, such as eth in Linux, you can see it by ifconfig -a
# multi aliases split by comma. empty value means auto set by OS type
# this paramter used to get all ip address of the local host
# default values is empty
if_alias_prefix=

# use "#include" directive to include HTTP config file
# NOTE: #include is an include directive, do NOT remove the # before include
#include http.conf

# if support flv
# default value is false
# since v1.15
flv_support = true

# flv file extension name
# default value is flv
# since v1.15
flv_extension = flv

# set the group count
# set to none zero to support multi-group on this storage server
# set to 0  for single group only
# groups settings section as [group1], [group2], ..., [groupN]
# default value is 0
# since v1.14
group_count = 0

# group settings for group #1
# since v1.14
# when support multi-group on this storage server, uncomment following section
#[group1]
#group_name=group1
#storage_server_port=23000
#store_path_count=2
#store_path0=/home/yuqing/fastdfs
#store_path1=/home/yuqing/fastdfs1

# group settings for group #2
# since v1.14
# when support multi-group, uncomment following section as neccessary
#[group2]
#group_name=group2
#storage_server_port=23000
#store_path_count=1
#store_path0=/home/yuqing/fastdfs
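By default the nginx module reads /etc/fdfs/mod_fastdfs.conf, and it also expects http.conf and mime.types from the FastDFS source tree in /etc/fdfs, so copy them before building and starting nginx (source paths assume the packages were unpacked under /opt as above):

cp /opt/fastdfs-nginx-module/src/mod_fastdfs.conf /etc/fdfs/
cp /opt/fastdfs/conf/http.conf /opt/fastdfs/conf/mime.types /etc/fdfs/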

Install nginx

sudo apt-get install libpcre3-dev zlib1g-dev  # install build dependencies (Ubuntu's pcre development package is libpcre3-dev)
tar -xf nginx-1.20.2.tar.gz  # extract nginx
cd nginx-1.20.2/  # enter the directory
./configure --prefix=/opt/nginx --add-module=/opt/fastdfs-nginx-module/src  # --add-module points at the fastdfs-nginx-module source prepared above; --prefix sets the install location
make && make install  # compile and install
Add a location for the storage group to the nginx configuration:

server {
    location ^~ /mmopen/ {
        limit_conn one 30;
        # path where the files are stored
        root /data/fastdfs/storage/data;
        if ($arg_attname ~ "^(.+)") {
            # set the download filename
            add_header Content-Disposition "filename=$arg_attname";
        }
        ngx_fastdfs_module;
    }
}
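A quick end-to-end test once nginx is running (this assumes a client.conf whose tracker_server points at the same 192.168.0.1:22122; the nginx host and port depend on your listen directive):

fdfs_upload_file /data/fastdfs/conf/client.conf /tmp/test.jpg   # prints a file id such as mmopen/M00/00/00/xxx.jpg
curl -I http://<nginx-host>:<port>/mmopen/M00/00/00/xxx.jpg     # request that file id through the location above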

Go to the /usr/lib/systemd/system directory and create
nacos.service

[Unit]
Description=nacos
After=network.target

[Service]
Type=forking
ExecStart=/opt/nacos/bin/startup.sh
ExecReload=/opt/nacos/bin/shutdown.sh; /opt/nacos/bin/startup.sh;
ExecStop=/opt/nacos/bin/shutdown.sh
PrivateTmp=true

[Install]
WantedBy=multi-user.target
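After saving the unit file, reload systemd and enable the service:

systemctl daemon-reload
systemctl enable nacos
systemctl start nacos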

systemctl enable nginx  # enable start on boot
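Note that nginx built from source does not install a systemd unit, so systemctl enable nginx only works once a unit file exists. A minimal sketch for /usr/lib/systemd/system/nginx.service, with paths assumed from the --prefix=/opt/nginx used above:

[Unit]
Description=nginx
After=network.target

[Service]
Type=forking
ExecStart=/opt/nginx/sbin/nginx
ExecReload=/opt/nginx/sbin/nginx -s reload
ExecStop=/opt/nginx/sbin/nginx -s quit
PrivateTmp=true

[Install]
WantedBy=multi-user.target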
