Table of Contents

  • Configuring MySQL
    • Pulling the MySQL image
    • Creating and running the MySQL container
    • Entering MySQL and configuring it
    • Errors
  • Configuring Nginx
    • Pulling the Nginx image
    • Creating and running the Nginx container
    • Modifying the Nginx configuration
    • Errors
  • Configuring Redis
    • Pulling the Redis image
    • Creating and running the Redis container
    • Entering Redis, testing, etc.
    • Writing a configuration file
  • Project deployment workflow
    • Front-end deployment
    • Back-end deployment

When creating these containers, it helps to set up a project folder on the host first and map everything into it in one place.
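For example, a possible host-side layout (a minimal sketch; the /home/test/... paths used later in this article follow this idea, with "test" as the assumed project name):

# create the host folders that the containers below will mount (example layout)
mkdir -p /home/test/mysql/{data,logs}
mkdir -p /home/test/nginx/{html,conf,logs,certificate}
mkdir -p /home/test/redis/{data,conf}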

Configuring MySQL

Pulling the MySQL image

docker pull mysql:[version]   # pull a specific version
docker pull mysql:latest      # pull the latest version; plain docker pull mysql defaults to latest

Creating and running the MySQL container

docker run \
--name mysql \
-d \
-p 3306:3306 \
--restart unless-stopped \
-v /home/test/mysql/my.cnf:/etc/mysql/my.cnf \
-v /home/test/mysql/data:/var/lib/mysql \
-v /home/test/mysql/logs/error.log:/var/log/mysql/error.log \
-e MYSQL_ROOT_PASSWORD=123456 \
mysql:[version]

# Explanation of the options:
# --name mysql                     container name
# -d                               run in the background
# -p 3306:3306                     map the container's port 3306 to port 3306 on the host
# --restart unless-stopped         container restart policy
# -v $PWD/logs:/logs               mount the log folder onto the host
# -v $PWD/data:/var/lib/mysql      mount the MySQL data folder onto the host
# -v $PWD/conf:/etc/mysql/conf.d   mount the config folder onto the host (host dir:container dir; use $PWD with care)
# -e MYSQL_ROOT_PASSWORD=123456    set the root user's password
# mysql:[version]                  which MySQL version (local image tag) to start

Entering MySQL and configuring it

docker exec -it mysql bash
mysql -u root -p
ALTER USER 'root'@'localhost' IDENTIFIED BY 'password';

# Add a user that is allowed to log in remotely
CREATE USER 'username'@'%' IDENTIFIED WITH mysql_native_password BY 'password';
GRANT ALL PRIVILEGES ON *.* TO 'username'@'%';
# GRANT: the command that grants privileges
# ALL PRIVILEGES: all privileges of the current user
# ON: keyword
# *.*: the privileges apply to all databases and all tables
# TO: keyword
# 'username'@'%': grant to this user; '%' means connections from any IP are allowed
# ... plus whatever other configuration you need
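To verify the new account from another machine (a minimal check, assuming a mysql client is installed there and the placeholders are filled in):

# from another machine with a mysql client installed
mysql -h <server-ip> -P 3306 -u <username> -p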

Errors

1. Error when entering MySQL

Error response from daemon: Container XXX is restarting, wait until the container is running
# Check the logs to see what the error actually is; a mapped directory may not exist
docker logs <container-name>

2. If a GUI tool such as Navicat reports error 2003 when connecting, it may be a MySQL configuration problem, or the firewall port may not be open; check whether the port is reachable (see the firewall sketch after error 3 below).

3. Error 1130 occurs because, for security reasons, the MySQL server by default only allows logins from the local machine.

# After entering the mysql container and logging in
use mysql;
select host,user from user;
# You will see a table like this:
#   +-----------+------------------+
#   | host      | user             |
#   +-----------+------------------+
#   | localhost | mysql.infoschema |
#   | localhost | mysql.session    |
#   | localhost | mysql.sys        |
#   | localhost | root             |
#   +-----------+------------------+

# Change host to allow non-local access
update user set host='%' where host='localhost' and user='root';

# After creating a user or changing a password you need to run flush privileges
# to reload MySQL's privilege tables, otherwise access will be denied.
# Alternatively, restart the MySQL server so the new settings take effect.
flush privileges;
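For error 2003 above, a quick way to check the server firewall (a sketch assuming firewalld, e.g. on CentOS; 3306 is the MySQL port mapped earlier):

# list the ports that are already open
firewall-cmd --list-ports

# open port 3306 and reload the firewall
firewall-cmd --zone=public --add-port=3306/tcp --permanent
firewall-cmd --reload

# on Ubuntu with ufw the equivalent would be:
# ufw allow 3306/tcp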

Configuring Nginx

Pulling the Nginx image

docker pull nginx:[version]   # pull a specific version
docker pull nginx:latest      # pull the latest version; plain docker pull nginx defaults to latest

Creating and running the Nginx container

docker run -d -p 443:443 -p 80:80 --name nginx \
-v /home/test/nginx/html:/usr/share/nginx/html \
-v /home/test/nginx/conf:/etc/nginx \
-v /home/test/nginx/logs:/var/log/nginx \
-v /home/test/nginx/certificate:/home/test/nginx/certificate \
nginx
# test here is the project name
# make sure the ports you use are open on the server (firewall / security group)

Modifying the Nginx configuration

upstream mycsdn {
    server 172.17.0.1:8090;
}

server {
    # SSL; the default HTTPS port is 443
    listen 443 ssl;
    # Fill in the domain the certificate is bound to, i.e. your own domain
    server_name www.lovli.top;
    server_name lovli.top;
    client_max_body_size 20m;
    #access_log /logs/access.log main;
    # Fill in the relative or absolute path of the certificate file
    ssl_certificate /certificate/lovli.top_bundle.crt;
    # Fill in the relative or absolute path of the private key file
    ssl_certificate_key /certificate/lovli.top.key;
    ssl_session_timeout 5m;
    # Configure the cipher suites as below; the syntax follows the openssl standard
    ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE:ECDH:AES:HIGH:!NULL:!aNULL:!MD5:!ADH:!RC4;
    # Configure the protocols as below
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_prefer_server_ciphers on;
    #access_log  /var/log/nginx/host.access.log  main;

    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://mycsdn;
        index index.html index.htm;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}

server {
    listen 80;
    # Fill in the domain the certificate is bound to
    server_name www.lovli.top;
    server_name lovli.top;
    # Redirect HTTP requests for the domain to HTTPS
    return 301 https://www.lovli.top;
}
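After editing the mounted configuration you can check the syntax and reload it without restarting the container (a small sketch, using the container name nginx created above):

# test the configuration for syntax errors
docker exec nginx nginx -t

# reload nginx inside the running container
docker exec nginx nginx -s reload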

Errors

1. WARNING: Published ports are discarded when using host network mode

The cause is setting the network mode with --net=host; just use the default bridge mode.
# bridge mode: Docker's default  (other modes: host, none, container, overlay)
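To check which network mode a container ended up with, something like this should work (nginx is the container name assumed here):

# list docker networks; bridge is the default
docker network ls

# show the network mode of an existing container
docker inspect -f '{{.HostConfig.NetworkMode}}' nginx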

2. If the nginx container exits right after starting, it is probably because the nginx.conf configuration file was mounted

# view the logs with: docker logs <container id / container name>
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: /etc/nginx/conf.d/default.conf is not a file or does not exist
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
nginx: [emerg] open() "/etc/nginx/nginx.conf" failed (2: No such file or directory)

This means a nginx.conf configuration file must already exist on the host before it can be mounted.

Solution:

# Start an nginx container without any mounts
docker run -d -p 80:80 --name nginx-test nginx

# Use docker cp to copy the config out of the container into the target folder;
# if you get an error that /home/test/nginx does not exist, create it first
docker cp nginx-test:/etc/nginx /home/test/nginx/conf

# After that the temporary container can be removed
docker stop nginx-test
docker rm nginx-test

# Now the earlier command with the volume mounts works without errors
docker run -d -p 80:80 --name nginxTest \
-v /home/test/nginx/html:/usr/share/nginx/html \
-v /home/test/nginx/conf:/etc/nginx \
-v /home/test/nginx/logs:/var/log/nginx \
nginx

# Because /usr/share/nginx/html is now mounted from the (empty) html folder,
# visiting nginx at this point returns a 403
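To get rid of the 403, put a page into the mounted html folder and check it, for example:

# drop a simple test page into the mounted html directory
echo 'hello from nginx' > /home/test/nginx/html/index.html

# check from the server itself
curl http://localhost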

Reference blog: https://blog.csdn.net/six_teen/article/details/112602577

Configuring Redis

Pulling the Redis image

docker pull redis:[version]   # pull a specific version
docker pull redis:latest      # pull the latest version; plain docker pull redis defaults to latest

Creating and running the Redis container

docker run -itd \
-p 6379:6379 \
-v /home/[project-name]/redis/data:/data \
-v /home/[project-name]/redis/conf/redis.conf:/etc/redis/redis.conf \
--name redisTest \
redis redis-server /etc/redis/redis.conf --appendonly yes

# redis here is the image name; give it a tag, otherwise latest is used by default
# redis-server /etc/redis/redis.conf: start the redis-server process with the given config file
# --appendonly yes: enable data persistence (AOF)

Entering Redis, testing, etc.

docker exec -it redisTest /bin/bash
redis-cli   # test the connection
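A few quick commands to confirm Redis is working (run inside redis-cli; AUTH is only needed if requirepass is configured):

ping            # should return PONG
set foo bar     # write a test key
get foo         # should return "bar"
# auth <password>   # only if requirepass is set in the config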

Writing a configuration file

Go to the mapped redis.conf location, create the configuration file, and fill in the following content (adjust as needed):

# Sample Redis configuration file

# Note on units: when memory size is needed, it is possible to specify
# it in the usual form of 1k 5GB 4M and so forth:
#
# 1k => 1000 bytes
# 1kb => 1024 bytes
# 1m => 1000000 bytes
# 1mb => 1024*1024 bytes
# 1g => 1000000000 bytes
# 1gb => 1024*1024*1024 bytes
#
# units are case insensitive so 1GB 1Gb 1gB are all the same.

# By default Redis does not run as a daemon. Use 'yes' if you need it.
# When daemonized, Redis will write a pid file in /var/run/redis.pid.
daemonize no

# When running daemonized, Redis writes the pid to /var/run/redis.pid by
# default. You can specify a custom location with pidfile.
pidfile /var/run/redis.pid

# Accept connections on the specified port, default is 6379.
# If port 0 is specified Redis will not listen on a TCP socket.
port 6379

# Bind address. You can bind a single interface; if nothing is bound, all
# interfaces will listen for incoming connections.
# bind 127.0.0.1

# Specify the path for the unix socket that will be used to listen for
# incoming connections. There is no default, so Redis will not listen
# on a unix socket when not specified.
#
# unixsocket /tmp/redis.sock
# unixsocketperm 755

# Close the connection after a client is idle for this many seconds (0 to disable).
timeout 0

# Set the log level. Redis supports four levels: debug, verbose, notice,
# warning. The default is verbose.
# debug (a lot of information, useful for development/testing)
# verbose (many rarely useful info, but not a mess like the debug level)
# notice (moderately verbose, what you want in production probably)
# warning (only very important / critical messages are logged)
loglevel verbose

# Specify the log file. 'stdout' logs to standard output; note that if Redis
# runs as a daemon and logs to standard output, the logs end up in /dev/null.
logfile stdout

# To enable logging to the system logger, just set 'syslog-enabled' to yes,
# and optionally update the other syslog parameters to suit your needs.
# syslog-enabled no

# Specify the syslog identity.
# syslog-ident redis

# Specify the syslog facility.  Must be USER or between LOCAL0-LOCAL7.
# syslog-facility local0

# Set the number of databases. The default database is DB 0; you can select
# a different one on a per-connection basis with SELECT <dbid>, where dbid
# is a number between 0 and 'databases'-1.
databases 16

################################ SNAPSHOTTING  #################################
# Sync the data to the data file when the given number of write operations
# happens within the given time window; multiple conditions can be combined.
# Save the DB on disk:
#
#   save <seconds> <changes>
#
#   Will save the DB if both the given number of seconds and the given
#   number of write operations against the DB occurred.
#
#   The following conditions will trigger a save:
#   1 change within 900 seconds (15 minutes)
#   10 changes within 300 seconds (5 minutes)
#   10000 changes within 60 seconds
#
#   Note: you can disable saving entirely by commenting out all "save" lines.
save 900 1
save 300 10
save 60 10000

# Compress string objects with LZF when dumping the database to disk. The
# default is yes. Disabling it saves some CPU time but makes the dump file
# much larger.
rdbcompression yes

# The filename of the local database dump. The default is dump.rdb.
dbfilename dump.rdb

# The working directory.
# The database dump will be written inside this directory, with the filename
# specified by the 'dbfilename' directive above.
#
# Also the Append Only File will be created inside this directory.
#
# Note that you must specify a directory here, not a file name.
dir ./

################################# REPLICATION #################################

# Master-Slave replication. Use slaveof to make a Redis instance a copy of
# another Redis server. Note that the configuration is local to the slave,
# so for example it is possible to configure the slave to save the DB with a
# different interval, or to listen to another port, and so on.
# When this instance is a slave, set the master's IP address and port here.
# On startup the slave will automatically synchronize data from the master.
# slaveof <masterip> <masterport>

# If the master is password protected (via the "requirepass" directive below),
# this is the password the slave uses to authenticate against the master.
# masterauth <master-password>

# When a slave lost the connection with the master, or when the replication
# is still in progress, the slave can act in two different ways:
#
# 1) if slave-serve-stale-data is set to 'yes' (the default) the slave will
#    still reply to client requests, possibly with out of data data, or the
#    data set may just be empty if this is the first synchronization.
#
# 2) if slave-serve-stale data is set to 'no' the slave will reply with
#    an error "SYNC with master in progress" to all the kind of commands
#    but to INFO and SLAVEOF.
#
slave-serve-stale-data yes

# Slaves send PINGs to server in a predefined interval. It's possible to change
# this interval with the repl_ping_slave_period option. The default value is 10
# seconds.
#
# repl-ping-slave-period 10

# The following option sets a timeout for both Bulk transfer I/O timeout and
# master data or ping response timeout. The default value is 60 seconds.
#
# It is important to make sure that this value is greater than the value
# specified for repl-ping-slave-period otherwise a timeout will be detected
# every time there is low traffic between the master and the slave.
#
# repl-timeout 60

################################## SECURITY ###################################

# Warning: since Redis is pretty fast an outside user can try up to
# 150k passwords per second against a good box. This means that you should
# use a very strong password otherwise it will be very easy to break.
# Require clients to issue AUTH <password> before processing any other
# commands. Disabled by default.
# requirepass foobared

# Command renaming.
#
# It is possible to change the name of dangerous commands in a shared
# environment. For instance the CONFIG command may be renamed into something
# of hard to guess so that it will be still available for internal-use
# tools but not available for general clients.
#
# Example:
#
# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
#
# It is also possible to completely kill a command renaming it into
# an empty string:
#
# rename-command CONFIG ""

################################### LIMITS ####################################

# Set the max number of clients connected at the same time. By default there
# is no limit: Redis can accept as many clients as the process can open file
# descriptors. A maxclients of 0 means no limit. Once the limit is reached,
# Redis closes new connections with a 'max number of clients reached' error.
# maxclients 128

# Don't use more memory than the specified amount of bytes.
# When the memory limit is reached Redis will try to remove keys with an
# EXPIRE set. It will try to start freeing keys that are going to expire
# in little time and preserve keys with a longer time to live.
# Redis will also try to remove objects from free lists if possible.
#
# If all this fails, Redis will start to reply with errors to commands
# that will use more memory, like SET, LPUSH, and so on, and will continue
# to reply to most read-only commands like GET.
#
# WARNING: maxmemory can be a good idea mainly if you want to use Redis as a
# 'state' server or cache, not as a real DB. When Redis is used as a real
# database the memory usage will grow over the weeks, it will be obvious if
# it is going to use too much memory in the long run, and you'll have the time
# to upgrade. With maxmemory after the limit is reached you'll start to get
# errors for write operations, and this may even lead to DB inconsistency.
# Set the maximum memory limit. Redis loads its data into memory on startup;
# once the limit is reached it first tries to remove keys that are expired or
# about to expire. If it is still over the limit after that, writes start to
# fail while reads keep working.
# (Redis's old VM mechanism keeps keys in memory and swaps values to disk.)
# maxmemory <bytes>

# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
# is reached? You can select among five behavior:
#
# volatile-lru -> remove the key with an expire set using an LRU algorithm
# allkeys-lru -> remove any key accordingly to the LRU algorithm
# volatile-random -> remove a random key with an expire set
# allkeys->random -> remove a random key, any key
# volatile-ttl -> remove the key with the nearest expire time (minor TTL)
# noeviction -> don't expire at all, just return an error on write operations
#
# Note: with all the kind of policies, Redis will return an error on write
#       operations, when there are not suitable keys for eviction.
#
#       At the date of writing this commands are: set setnx setex append
#       incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
#       sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
#       zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
#       getset mset msetnx exec sort
#
# The default is:
#
# maxmemory-policy volatile-lru

# LRU and minimal TTL algorithms are not precise algorithms but approximated
# algorithms (in order to save memory), so you can select as well the sample
# size to check. For instance for default Redis will check three keys and
# pick the one that was used less recently, you can change the sample size
# using the following configuration directive.
#
# maxmemory-samples 3

############################## APPEND ONLY MODE ################################
# Note that you can have both the async dumps and the append only file if you
# like (you have to comment the "save" statements above to disable the dumps).
# Still if append only mode is enabled Redis will load the data from the
# log file at startup ignoring the dump.rdb file.
# Specify whether to log every write operation. By default Redis writes data
# to disk asynchronously according to the "save" conditions above, so some
# data may exist only in memory for a while and would be lost on a power
# failure. The default is no.
# IMPORTANT: Check the BGREWRITEAOF to check how to rewrite the append
# log file in background when it gets too big.
appendonly no

# The name of the append only file. The default is appendonly.aof.
# appendfilename appendonly.aof

# The fsync() call tells the Operating System to actually write data on disk
# instead to wait for more data in the output buffer. Some OS will really flush
# data on disk, some other OS will just try to do it ASAP.
#
# There are three options for how often to fsync the append only log:
# no: let the operating system flush the data when it wants (faster)
# always: fsync after every write operation (slow, safest)
# everysec: fsync once per second (a compromise, the default)
appendfsync everysec
# appendfsync no

# When the AOF fsync policy is set to always or everysec, and a background
# saving process (a background save or AOF log background rewriting) is
# performing a lot of I/O against the disk, in some Linux configurations
# Redis may block too long on the fsync() call. Note that there is no fix for
# this currently, as even performing fsync in a different thread will block
# our synchronous write(2) call.
#
# In order to mitigate this problem it's possible to use the following option
# that will prevent fsync() from being called in the main process while a
# BGSAVE or BGREWRITEAOF is in progress.
#
# This means that while another child is saving the durability of Redis is
# the same as "appendfsync none", that in practical terms means that it is
# possible to lose up to 30 seconds of log in the worst scenario (with the
# default Linux settings).
#
# If you have latency problems turn this to "yes". Otherwise leave it as
# "no" that is the safest pick from the point of view of durability.
no-appendfsync-on-rewrite no

# Automatic rewrite of the append only file.
# Redis is able to automatically rewrite the log file implicitly calling
# BGREWRITEAOF when the AOF log size will growth by the specified percentage.
#
# This is how it works: Redis remembers the size of the AOF file after the
# latest rewrite (or if no rewrite happened since the restart, the size of
# the AOF at startup is used).
#
# This base size is compared to the current size. If the current size is
# bigger than the specified percentage, the rewrite is triggered. Also
# you need to specify a minimal size for the AOF file to be rewritten, this
# is useful to avoid rewriting the AOF file even if the percentage increase
# is reached but it is still pretty small.
#
# Specify a percentage of zero in order to disable the automatic AOF
# rewrite feature.
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb

################################## SLOW LOG ###################################

# The Redis Slow Log is a system to log queries that exceeded a specified
# execution time. The execution time does not include the I/O operations
# like talking with the client, sending the reply and so forth,
# but just the time needed to actually execute the command (this is the only
# stage of command execution where the thread is blocked and can not serve
# other requests in the meantime).
#
# You can configure the slow log with two parameters: one tells Redis
# what is the execution time, in microseconds, to exceed in order for the
# command to get logged, and the other parameter is the length of the
# slow log. When a new command is logged the oldest one is removed from the
# queue of logged commands.

# The following time is expressed in microseconds, so 1000000 is equivalent
# to one second. Note that a negative number disables the slow log, while
# a value of zero forces the logging of every command.
slowlog-log-slower-than 10000

# There is no limit to this length. Just be aware that it will consume memory.
# You can reclaim memory used by the slow log with SLOWLOG RESET.
slowlog-max-len 1024

################################ VIRTUAL MEMORY ################################

### WARNING! Virtual Memory is deprecated in Redis 2.4
### The use of Virtual Memory is strongly discouraged.

# Virtual Memory allows Redis to work with datasets bigger than the actual
# amount of RAM needed to hold the whole dataset in memory.
# In order to do so very used keys are taken in memory while the other keys
# are swapped into a swap file, similarly to what operating systems do
# with memory pages.
# Whether to enable the virtual memory mechanism. The default is no.
# The VM mechanism stores data in pages; Redis swaps rarely-accessed (cold)
# pages out to disk, while frequently-accessed pages stay in memory.
# Set vm-enabled to yes and configure the three VM parameters below as needed.
vm-enabled no
# vm-enabled yes

# This is the path of the Redis swap file. As you can guess, swap files
# can't be shared by different Redis instances, so make sure to use a swap
# file for every redis process you are running. Redis will complain if the
# swap file is already in use.
#
# The best storage for the Redis swap file is an SSD.
# Swap file path. The default is /tmp/redis.swap; it cannot be shared by
# multiple Redis instances.
# *** WARNING *** if you are using a shared hosting the default of putting
# the swap file under /tmp is not secure. Create a dir with access granted
# only to Redis user and configure Redis to create the swap file there.
vm-swap-file /tmp/redis.swap

# With vm-max-memory 0 the system will swap everything it can. Not a good
# default, just specify the max amount of RAM you can in bytes, but it's
# better to leave some margin. For instance specify an amount of RAM
# that's more or less between 60 and 80% of your free RAM.
# Data beyond vm-max-memory is stored in virtual memory. Whatever value
# vm-max-memory is set to, all the index data (the keys) stays in memory.
# In other words, with vm-max-memory set to 0 all values live on disk.
# The default value is 0.
vm-max-memory 0

# The Redis swap file is split into many pages. An object can span multiple
# pages, but a page cannot be shared by multiple objects, so vm-page-size
# should be set according to the size of your data: 32 or 64 bytes is a good
# choice for many small objects, larger pages for big objects. If unsure,
# use the default.
vm-page-size 32

# Number of pages in the swap file. Since the page table (a bitmap marking
# pages as free or used) is kept in memory, every 8 pages on disk cost 1 byte
# of RAM. The total swap size is vm-page-size * vm-pages.
#
# With the default of 32-bytes memory pages and 134217728 pages Redis will
# use a 4 GB swap file, that will use 16 MB of RAM for the page table.
#
# It's better to use the smallest acceptable value for your application,
# but the default is large in order to work in most conditions.
vm-pages 134217728

# Max number of VM I/O threads running at the same time.
# This threads are used to read/write data from/to swap file, since they
# also encode and decode objects from disk to memory or the reverse, a bigger
# number of threads can help with big objects even if they can't help with
# I/O itself as the physical device may not be able to couple with many
# reads/writes operations at the same time.
# Number of I/O threads used to access the swap file; it is best not to exceed
# the number of CPU cores. If set to 0, all swap file operations are serialized,
# which can cause fairly long delays. The default value is 4.
vm-max-threads 4

############################### ADVANCED CONFIG ################################

# Hashes are encoded in a special way (much more memory efficient) when they
# have at max a given number of elements, and the biggest element does not
# exceed a given threshold. You can configure this limits with the following
# configuration directives.
# Use the special, memory-efficient hash encoding as long as the number of
# entries and the size of the largest element stay below these thresholds.
hash-max-zipmap-entries 512
hash-max-zipmap-value 64

# Similarly to hashes, small lists are also encoded in a special way in order
# to save a lot of space. The special representation is only used when
# you are under the following limits:
list-max-ziplist-entries 512
list-max-ziplist-value 64

# Sets have a special encoding in just one case: when a set is composed
# of just strings that happens to be integers in radix 10 in the range
# of 64 bit signed integers.
# The following configuration setting sets the limit in the size of the
# set in order to use this special memory saving encoding.
set-max-intset-entries 512

# Similarly to hashes and lists, sorted sets are also specially encoded in
# order to save a lot of space. This encoding is only used when the length and
# elements of a sorted set are below the following limits:
zset-max-ziplist-entries 128
zset-max-ziplist-value 64

# Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in
# order to help rehashing the main Redis hash table (the one mapping top-level
# keys to values). The hash table implementation redis uses (see dict.c)
# performs a lazy rehashing: the more operations you run into an hash table
# that is rehashing, the more rehashing "steps" are performed, so if the
# server is idle the rehashing is never complete and some more memory is used
# by the hash table.
#
# The default is to use this millisecond 10 times every second in order to
# active rehashing the main dictionaries, freeing memory when possible.
#
# If unsure:
# use "activerehashing no" if you have hard latency requirements and it is
# not a good thing in your environment that Redis can reply from time to time
# to queries with 2 milliseconds delay.
# Whether active rehashing is enabled. The default is yes.
activerehashing yes

################################## INCLUDES ####################################

# Include one or more other configuration files. This is useful when several
# Redis instances on the same host share a common configuration while each
# instance also keeps its own instance-specific settings.
# include /path/to/local.conf
# include /path/to/other.conf
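After saving the file, restart the container so redis-server picks up the new configuration (the container was started with this file via redis-server /etc/redis/redis.conf):

# restart the container to apply the configuration
docker restart redisTest

# confirm it is running and check the startup log
docker ps
docker logs redisTest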

Project deployment workflow

Front-end deployment

# Which configuration needs to be changed?
npm run build
# Afterwards you will find a dist folder in the project directory
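One way to publish the build is to copy the contents of dist into the html folder mounted by the nginx container; a rough sketch using the paths from earlier in this article (the server address is a placeholder):

# copy the built front-end into the folder mounted at /usr/share/nginx/html
scp -r dist/* root@<server-ip>:/home/test/nginx/html/

# or, if dist has already been uploaded to the server:
# cp -r dist/* /home/test/nginx/html/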

Back-end deployment

# Package the back-end project as a jar and upload it to the project folder on the server.
# There are currently two options: build a dedicated project image, or run the jar directly
# with docker and mount the folder. (A Dockerfile sketch for the first option follows the run command below.)

1. Create and run with docker directly, mounting the folder

docker run -d \
-v /projects/renren:/projects \
-p 8080:8080 \
--name renren-fast \
openjdk:8 \
sh -c "java -jar -Duser.timezone=GMT+08 /projects/renren-fast.jar > /projects/renren.log 2>&1"

# If the java process is killed, the container stops as well, but it is not removed
docker start renren-fast   # starting the container also starts the jar again
# Once it is running, open ip:port/renren-fast/swagger/index.html in a browser to see the Swagger UI
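2. Building a dedicated project image with a Dockerfile (a minimal sketch; the jar name renren-fast.jar, the /app path and the image tag are assumptions, adjust them to your project)

# Dockerfile (hypothetical example)
FROM openjdk:8
COPY renren-fast.jar /app/renren-fast.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "-Duser.timezone=GMT+08", "/app/renren-fast.jar"]

# build and run the image (run these in the folder containing the Dockerfile)
# docker build -t renren-fast:1.0 .
# docker run -d -p 8080:8080 --name renren-fast renren-fast:1.0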
