Official installation site
1. Install the prerequisites first: JDK 1.8+, MySQL, ZooKeeper, and psmisc (skip any that are already installed; on Ubuntu install psmisc with apt-get install psmisc). Quick checks follow the list below.
Install JDK
MySQL
ZooKeeper
psmisc
Install Hadoop and Hive
Install Python 2.7 (Linux usually ships with Python 2.x; if 2.7 is already present, no installation is needed)
Install DataX
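
A quick sanity check for each prerequisite (a sketch; it assumes zkServer.sh is on the PATH, and pstree comes from psmisc):
java -version
mysql --version
zkServer.sh status
pstree -V
python2.7 --version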
2. Download the installation tar.gz package.
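For example (the Apache archive URL below is an assumption; any official mirror works):
wget https://archive.apache.org/dist/dolphinscheduler/2.0.3/apache-dolphinscheduler-2.0.3-bin.tar.gz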

3. Create the user, configure passwordless login, and grant permissions. (In a cluster, the user and directories must be created on all servers first; install.sh copies the runtime files to the other servers, so the DolphinScheduler package only needs to be extracted on the one where you run it.)

#Extract the package; to rename/move it you can use: mv apache-dolphinscheduler-2.0.3-bin /opt/dolphinscheduler
tar -zxvf apache-dolphinscheduler-2.0.3-bin.tar.gz
mv apache-dolphinscheduler-2.0.3-bin /opt/dolphinscheduler
#Log in as root to create the user (replace leo with your own user; to change the user, delete it with userdel -r leo and create it again)
#On Ubuntu, check that scripts run under bash; dash will fail with errors. Check with: ll /bin/sh
#Run dpkg-reconfigure dash and choose No in the dialog; if no dialog appears, replace the symlink instead: ln -s /bin/bash /bin/sh --force
#Run ll /bin/sh again to confirm it now points to bash
#On Ubuntu, create the user with: useradd -m leo -d /home/leo, then set the password with: passwd leo
#In a cluster, every command below must be run once on each server
useradd leo
# Set the password (replace xionglang with your own password)
echo "xionglang" | passwd --stdin leo
# Configure passwordless sudo (replace leo with the user created above)
chmod u+w /etc/sudoers
vim /etc/sudoers
#Find the line "root ALL=(ALL) ALL" and add a passwordless entry below it
#youuser            ALL=(ALL)                ALL
#%youuser           ALL=(ALL)                ALL
#youuser            ALL=(ALL)                NOPASSWD: ALL
#%youuser           ALL=(ALL)                NOPASSWD: ALL
#Line 1: user youuser may run sudo commands (password required).
#Line 2: users in group youuser may run sudo commands (password required).
#Line 3: user youuser may run sudo commands without entering a password.
#Line 4: users in group youuser may run sudo commands without entering a password.
leo ALL=(ALL) NOPASSWD: ALL
#Save and exit
#Remove the write permission from sudoers again
chmod u-w /etc/sudoers
sed -i 's/Defaults    requiretty/#Defaults    requiretty/g' /etc/sudoers
#Passwordless sudo is now configured
#Set up passwordless SSH login for the leo user
#Switch to the user
su leo
#Generate the key pair without a passphrase
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
#Verify passwordless login
ssh localhost
#If no password prompt appears, the setup succeeded
#For a cluster, add the configuration below
#Configure the hosts file
vim /etc/hosts
10.0.0.195 ds1
10.0.0.196 ds2
10.0.0.198 ds3
10.0.0.199 ds4
#Save
#Distribute the SSH key to every node in the cluster (enter the password, then answer yes when prompted)
for ip in ds1 ds2 ds3 ds4; do
  ssh-copy-id $ip
done
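#Optional check: each node should now answer without a password prompt
for ip in ds1 ds2 ds3 ds4; do
  ssh $ip hostname
done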

4. Upload the package, rename it, and grant permissions

#Change ownership so that the deployment user can operate on the extracted apache-dolphinscheduler-*-bin directory
chown -R leo:leo dolphinscheduler
#Create the data directory
mkdir -p /home/data/adolphinscheduler
#Database driver jars may need to be added here later, so check the permissions too
#Use the same user as in the configuration file
chown -R leo:leo /home/data/adolphinscheduler
#Every directory must be owned by the created user; double-check this

5. Edit the configuration file (10.0.0.63 stands for the server, database, and ZooKeeper addresses you need to change) and update the username and password

#echo $PATH shows where the JDK lives; use the path without the trailing /bin
vim /opt/dolphinscheduler/conf/config/install_config.conf

Standalone configuration

#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# ---------------------------------------------------------
# INSTALL MACHINE
# ---------------------------------------------------------
# A comma separated list of machine hostname or IP would be installed DolphinScheduler,
# including master, worker, api, alert. If you want to deploy in pseudo-distributed
# mode, just write a pseudo-distributed hostname
# Example for hostnames: ips="ds1,ds2,ds3,ds4,ds5", Example for IPs: ips="192.168.8.1,192.168.8.2,192.168.8.3,192.168.8.4,192.168.8.5"
ips="10.0.0.63"

# Port of SSH protocol, default value is 22. For now we only support same port in all `ips` machine
# modify it if you use different ssh port
sshPort="22"

# A comma separated list of machine hostname or IP would be installed Master server, it
# must be a subset of configuration `ips`.
# Example for hostnames: masters="ds1,ds2", Example for IPs: masters="192.168.8.1,192.168.8.2"
masters="10.0.0.63"

# A comma separated list of machine <hostname>:<workerGroup> or <IP>:<workerGroup>. All hostname or IP must be a
# subset of configuration `ips`, And workerGroup have default value as `default`, but we recommend you declare behind the hosts
# Example for hostnames: workers="ds1:default,ds2:default,ds3:default", Example for IPs: workers="192.168.8.1:default,192.168.8.2:default,192.168.8.3:default"
workers="10.0.0.63:default"

# A comma separated list of machine hostname or IP would be installed Alert server, it
# must be a subset of configuration `ips`.
# Example for hostname: alertServer="ds3", Example for IP: alertServer="192.168.8.3"
alertServer="10.0.0.63"

# A comma separated list of machine hostname or IP would be installed API server, it
# must be a subset of configuration `ips`.
# Example for hostname: apiServers="ds1", Example for IP: apiServers="192.168.8.1"
apiServers="10.0.0.63"

# A comma separated list of machine hostname or IP would be installed Python gateway server, it
# must be a subset of configuration `ips`.
# Example for hostname: pythonGatewayServers="ds1", Example for IP: pythonGatewayServers="192.168.8.1"
pythonGatewayServers="10.0.0.63"

# The directory to install DolphinScheduler for all machine we config above. It will automatically be created by `install.sh` script if not exists.
# Do not set this configuration same as the current path (pwd)
# home directory of the user created earlier
installPath="/home/leo/dolphinscheduler"

# The user to deploy DolphinScheduler for all machine we config above. For now user must create by yourself before running `install.sh`
# script. The user needs to have sudo privileges and permissions to operate hdfs. If hdfs is enabled than the root directory needs
# to be created by this user
deployUser="leo"

# The directory to store local data for all machine we config above. Make sure user `deployUser` have permissions to read and write this directory.
# data directory; the deploy user needs read/write permission on it
dataBasedirPath="/data/dolphinscheduler"

# ---------------------------------------------------------
# DolphinScheduler ENV
# ---------------------------------------------------------
# JAVA_HOME, we recommend use same JAVA_HOME in all machine you going to install DolphinScheduler
# and this configuration only support one parameter so far.
javaHome="/usr/java/jdk1.8.0_301-amd64"

# DolphinScheduler API service port, also this is your DolphinScheduler UI component's URL port, default value is 12345
apiServerPort="12345"

# ---------------------------------------------------------
# Database
# NOTICE: If database value has special characters, such as `.*[]^${}\+?|()@#&`, Please add prefix `\` for escaping.
# ---------------------------------------------------------
# The type for the metadata database
# Supported values: ``postgresql``, ``mysql`, `h2``.
DATABASE_TYPE="mysql"

# Spring datasource url, following <HOST>:<PORT>/<database>?<parameter> format, If you using mysql, you could use jdbc
# string jdbc:mysql://127.0.0.1:3306/dolphinscheduler?useUnicode=true&characterEncoding=UTF-8 as example
SPRING_DATASOURCE_URL="jdbc:mysql://10.0.0.63:3306/dolphinscheduler?useUnicode=true&characterEncoding=UTF-8&useJDBCCompliantTimezoneShift=true&useLegacyDatetimeCode=false&serverTimezone=GMT%2b8"

# Spring datasource username
SPRING_DATASOURCE_USERNAME="xionglang"

# Spring datasource password
SPRING_DATASOURCE_PASSWORD="xionglang"

# ---------------------------------------------------------
# Registry Server
# ---------------------------------------------------------
# Registry Server plugin name, should be a substring of `registryPluginDir`, DolphinScheduler use this for verifying configuration consistency
registryPluginName="zookeeper"

# Registry Server address.
registryServers="10.0.0.63:2181"

# Registry Namespace
registryNamespace="dolphinscheduler"

# ---------------------------------------------------------
# Worker Task Server
# ---------------------------------------------------------
# Worker Task Server plugin dir. DolphinScheduler will find and load the worker task plugin jar package from this dir.
taskPluginDir="lib/plugin/task"

# resource storage type: HDFS, S3, NONE
# change to HDFS if resource files need to be uploaded to HDFS; otherwise leave unchanged
resourceStorageType="HDFS"

# resource store on HDFS/S3 path, resource file will store to this hdfs path, self configuration, please make sure the directory exists on hdfs and has read write permissions. "/dolphinscheduler" is recommended
# upload path; the leo user defined above must be able to create, delete, and modify files and directories here, e.g. chown -R dolphinscheduler:dolphinscheduler /data/dolphinscheduler
resourceUploadPath="/data/dolphinscheduler/upload"

# if resourceStorageType is HDFS, defaultFS write namenode address. For HA you need to put core-site.xml and hdfs-site.xml in the conf directory.
# if S3, write S3 address, for example: s3a://dolphinscheduler
# Note: for S3, be sure to create the root directory /dolphinscheduler
defaultFS="hdfs://mycluster:8020"

# if resourceStorageType is S3, the following three configuration is required, otherwise please ignore
s3Endpoint="http://192.168.xx.xx:9010"
s3AccessKey="xxxxxxxxxx"
s3SecretKey="xxxxxxxxxx"

# resourcemanager port, the default value is 8088 if not specified
resourceManagerHttpAddressPort="8088"

# if resourcemanager HA is enabled, please set the HA IPs; if resourcemanager is single node, keep this value empty
yarnHaIps="192.168.xx.xx,192.168.xx.xx"

# if resourcemanager HA is enabled or not use resourcemanager, please keep the default value; If resourcemanager is single node, you only need to replace 'yarnIp1' to actual resourcemanager hostname
singleYarnIp="yarnIp1"

# who has permission to create directory under HDFS/S3 root path
# Note: if kerberos is enabled, please config hdfsRootUser=
# must be able to create/delete/modify files on HDFS; same as the hadoop user
hdfsRootUser="hadoop"

# kerberos config
# whether kerberos starts, if kerberos starts, following four items need to config, otherwise please ignore
kerberosStartUp="false"
# kdc krb5 config file path
krb5ConfPath="$installPath/conf/krb5.conf"
# keytab username, watch out the @ sign should be followed by \\
keytabUserName="hdfs-mycluster\\@ESZ.COM"
# username keytab path
keytabPath="$installPath/conf/hdfs.headless.keytab"
# kerberos expire time, the unit is hour
kerberosExpireTime="2"

# use sudo or not
sudoEnable="true"

# worker tenant auto create
workerTenantAutoCreate="false"

Cluster configuration

#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# ---------------------------------------------------------
# INSTALL MACHINE
# ---------------------------------------------------------
# A comma separated list of machine hostname or IP would be installed DolphinScheduler,
# including master, worker, api, alert. If you want to deploy in pseudo-distributed
# mode, just write a pseudo-distributed hostname
# Example for hostnames: ips="ds1,ds2,ds3,ds4,ds5", Example for IPs: ips="192.168.8.1,192.168.8.2,192.168.8.3,192.168.8.4,192.168.8.5"
ips="ds1,ds2,ds3,ds4"

# Port of SSH protocol, default value is 22. For now we only support same port in all `ips` machine
# modify it if you use different ssh port
sshPort="22"

# A comma separated list of machine hostname or IP would be installed Master server, it
# must be a subset of configuration `ips`.
# Example for hostnames: masters="ds1,ds2", Example for IPs: masters="192.168.8.1,192.168.8.2"
masters="ds1,ds2"

# A comma separated list of machine <hostname>:<workerGroup> or <IP>:<workerGroup>. All hostname or IP must be a
# subset of configuration `ips`, And workerGroup have default value as `default`, but we recommend you declare behind the hosts
# Example for hostnames: workers="ds1:default,ds2:default,ds3:default", Example for IPs: workers="192.168.8.1:default,192.168.8.2:default,192.168.8.3:default"
workers="ds1:default,ds2:default,ds3:default,ds4:default"

# A comma separated list of machine hostname or IP would be installed Alert server, it
# must be a subset of configuration `ips`.
# Example for hostname: alertServer="ds3", Example for IP: alertServer="192.168.8.3"
alertServer="ds3"

# A comma separated list of machine hostname or IP would be installed API server, it
# must be a subset of configuration `ips`.
# Example for hostname: apiServers="ds1", Example for IP: apiServers="192.168.8.1"
apiServers="ds1"

# A comma separated list of machine hostname or IP would be installed Python gateway server, it
# must be a subset of configuration `ips`.
# Example for hostname: pythonGatewayServers="ds1", Example for IP: pythonGatewayServers="192.168.8.1"
pythonGatewayServers="ds1"

# The directory to install DolphinScheduler for all machine we config above. It will automatically be created by `install.sh` script if not exists.
# Do not set this configuration same as the current path (pwd)
installPath="/home/leo/dolphinscheduler"

# The user to deploy DolphinScheduler for all machine we config above. For now user must create by yourself before running `install.sh`
# script. The user needs to have sudo privileges and permissions to operate hdfs. If hdfs is enabled than the root directory needs
# to be created by this user
deployUser="leo"

# The directory to store local data for all machine we config above. Make sure user `deployUser` have permissions to read and write this directory.
dataBasedirPath="/data/dolphinscheduler"

# ---------------------------------------------------------
# DolphinScheduler ENV
# ---------------------------------------------------------
# JAVA_HOME, we recommend use same JAVA_HOME in all machine you going to install DolphinScheduler
# and this configuration only support one parameter so far.
javaHome="/opt/jdk1.8.0_101"

# DolphinScheduler API service port, also this is your DolphinScheduler UI component's URL port, default value is 12345
apiServerPort="12345"

# ---------------------------------------------------------
# Database
# NOTICE: If database value has special characters, such as `.*[]^${}\+?|()@#&`, Please add prefix `\` for escaping.
# ---------------------------------------------------------
# The type for the metadata database
# Supported values: ``postgresql``, ``mysql`, `h2``.
DATABASE_TYPE="mysql"

# Spring datasource url, following <HOST>:<PORT>/<database>?<parameter> format, If you using mysql, you could use jdbc
# string jdbc:mysql://127.0.0.1:3306/dolphinscheduler?useUnicode=true&characterEncoding=UTF-8 as example
SPRING_DATASOURCE_URL="jdbc:mysql://10.0.0.196:3306/dolphinscheduler?useUnicode=true&characterEncoding=UTF-8&useJDBCCompliantTimezoneShift=true&useLegacyDatetimeCode=false&serverTimezone=GMT%2b8"

# Spring datasource username
SPRING_DATASOURCE_USERNAME="leo"

# Spring datasource password
SPRING_DATASOURCE_PASSWORD="123456"

# ---------------------------------------------------------
# Registry Server
# ---------------------------------------------------------
# Registry Server plugin name, should be a substring of `registryPluginDir`, DolphinScheduler use this for verifying configuration consistency
registryPluginName="zookeeper"

# Registry Server address.
registryServers="10.0.0.198:2181"

# Registry Namespace
registryNamespace="dolphinscheduler"

# ---------------------------------------------------------
# Worker Task Server
# ---------------------------------------------------------
# Worker Task Server plugin dir. DolphinScheduler will find and load the worker task plugin jar package from this dir.
taskPluginDir="lib/plugin/task"

# resource storage type: HDFS, S3, NONE
# change to HDFS if resource files need to be uploaded to HDFS; otherwise leave unchanged
resourceStorageType="HDFS"

# resource store on HDFS/S3 path, resource file will store to this hdfs path, self configuration, please make sure the directory exists on hdfs and has read write permissions. "/dolphinscheduler" is recommended
# upload path; the leo user defined above must be able to create, delete, and modify files and directories here, e.g. chown -R dolphinscheduler:dolphinscheduler /data/dolphinscheduler
resourceUploadPath="/data/dolphinscheduler/upload"

# if resourceStorageType is HDFS, defaultFS write namenode address. For HA you need to put core-site.xml and hdfs-site.xml in the conf directory.
# if S3, write S3 address, for example: s3a://dolphinscheduler
# Note: for S3, be sure to create the root directory /dolphinscheduler
defaultFS="hdfs://mycluster:8020"

# if resourceStorageType is S3, the following three configuration is required, otherwise please ignore
s3Endpoint="http://192.168.xx.xx:9010"
s3AccessKey="xxxxxxxxxx"
s3SecretKey="xxxxxxxxxx"

# resourcemanager port, the default value is 8088 if not specified
resourceManagerHttpAddressPort="8088"

# if resourcemanager HA is enabled, please set the HA IPs; if resourcemanager is single node, keep this value empty
yarnHaIps="192.168.xx.xx,192.168.xx.xx"

# if resourcemanager HA is enabled or not use resourcemanager, please keep the default value; If resourcemanager is single node, you only need to replace 'yarnIp1' to actual resourcemanager hostname
singleYarnIp="yarnIp1"

# who has permission to create directory under HDFS/S3 root path
# Note: if kerberos is enabled, please config hdfsRootUser=
# must be able to create/delete/modify files on HDFS; same as the hadoop user
hdfsRootUser="hadoop"

# kerberos config
# whether kerberos starts, if kerberos starts, following four items need to config, otherwise please ignore
kerberosStartUp="false"
# kdc krb5 config file path
krb5ConfPath="$installPath/conf/krb5.conf"
# keytab username, watch out the @ sign should be followed by \\
keytabUserName="hdfs-mycluster\\@ESZ.COM"
# username keytab path
keytabPath="$installPath/conf/hdfs.headless.keytab"
# kerberos expire time, the unit is hour
kerberosExpireTime="2"

# use sudo or not
sudoEnable="true"

# worker tenant auto create
workerTenantAutoCreate="false"

6. Edit dolphinscheduler_env.sh

cd /opt/dolphinscheduler/conf/env
vim dolphinscheduler_env.sh
#Hadoop installation directory
export HADOOP_HOME=/opt/hadoop
export HADOOP_CONF_DIR=/opt/hadoop/etc/hadoop
export SPARK_HOME1=/opt/spark1
export SPARK_HOME2=/opt/spark2
#Python installation directory; make sure /usr/bin/python2.7 exists (a symlink works)
export PYTHON_HOME=/usr/
#JDK installation directory
export JAVA_HOME=/opt/jdk1.8.0_101
#Hive installation directory
export HIVE_HOME=/opt/hive
export FLINK_HOME=/opt/flink
#DataX installation directory
export DATAX_HOME=/opt/datax
export PATH=$HADOOP_HOME/bin:$SPARK_HOME1/bin:$SPARK_HOME2/bin:$PYTHON_HOME/bin:$JAVA_HOME/bin:$HIVE_HOME/bin:$FLINK_HOME/bin:$DATAX_HOME/bin:$PATH
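
A quick sanity check that every directory referenced in dolphinscheduler_env.sh exists (a sketch using the paths assumed above):
source /opt/dolphinscheduler/conf/env/dolphinscheduler_env.sh
for d in "$JAVA_HOME" "$HADOOP_HOME" "$HIVE_HOME" "$DATAX_HOME"; do
  [ -d "$d" ] && echo "OK      $d" || echo "MISSING $d"
done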

7. Create the database (the username and password configured above need privileges granted; a GUI tool such as Navicat works as well)
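From the command line this amounts to something like the following (a sketch using the standalone credentials from step 5; adjust user, password, and host to your setup):
mysql -uroot -p -e "CREATE DATABASE dolphinscheduler DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
CREATE USER 'xionglang'@'%' IDENTIFIED BY 'xionglang';
GRANT ALL PRIVILEGES ON dolphinscheduler.* TO 'xionglang'@'%';
FLUSH PRIVILEGES;"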
8. Download the MySQL 8.0.16 driver (mysql-connector-java)

9. Put the driver jar into the lib directory of the extracted package
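For example (assuming the jar was downloaded to the current directory):
cp mysql-connector-java-8.0.16.jar /opt/dolphinscheduler/lib/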

10. Run the initialization script create-dolphinscheduler.sh, found under script in the extracted package

sh /opt/dolphinscheduler/script/create-dolphinscheduler.sh

When it finishes, use a database tool to verify the script ran successfully.
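For example, a quick check from the command line (using the credentials configured in step 5; the metadata tables should now exist):
mysql -uxionglang -pxionglang -e "USE dolphinscheduler; SHOW TABLES;"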


11. Add a launcher script (adapt to your needs; I wanted a single startup script, but you can also run install.sh in the extracted package directly)

vim dolphinscheduler.sh
#Run the script as a specific user without switching the environment, so dolphinscheduler keeps its execute permissions
cd /opt/dolphinscheduler
#If you are prompted for a password here, passwordless sudo or the directory/file permissions are misconfigured
su leo -c "sh install.sh"
#On Ubuntu, if the script fails with "source: not found", check the shebang with cat xxx.sh; if it is /bin/bash, run it as sudo bash xxx.sh
#If java -version, javac -version, echo $PATH, and echo $JAVA_HOME all look correct but /bin/java is still reported missing,
#fix it with a symlink:
#sudo ln -s /usr/java/jdk1.8.0_301-amd64/bin/java /bin/java

Save (ESC, then :wq) and make it executable

chmod 777 dolphinscheduler.sh

12. Open port 12345 in the firewall and restart the firewall

firewall-cmd --zone=public --add-port=12345/tcp --permanent
systemctl restart firewalld
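#Confirm the port is open after the restart; the output should include 12345/tcp
firewall-cmd --zone=public --list-ports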

13. Start the system
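
install.sh already starts all services on its first run. For later restarts, a sketch (assuming the installPath from step 5; start-all.sh and stop-all.sh ship in the bin directory of the installed package):
su leo
cd /home/leo/dolphinscheduler
sh ./bin/start-all.sh
sh ./bin/stop-all.sh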

14. Open the UI in a browser
URL: http://10.0.0.63:12345/dolphinscheduler
Default username and password: admin / dolphinscheduler123

15. If the page fails to open, inspect the log for causes such as high CPU usage or low memory:
tail -f -n 300 <installPath>/logs/dolphinscheduler-master.log

16. Starting in a cluster environment
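
A sketch, assuming the cluster roles from the config in step 5. Running start-all.sh on one node starts the whole cluster over SSH; individual services can also be managed per node with the daemon script in the install directory:
# on the master nodes (ds1, ds2)
sh ./bin/dolphinscheduler-daemon.sh start master-server
# on the worker nodes (ds1..ds4)
sh ./bin/dolphinscheduler-daemon.sh start worker-server
# on the alert node (ds3)
sh ./bin/dolphinscheduler-daemon.sh start alert-server
# on the API node (ds1)
sh ./bin/dolphinscheduler-daemon.sh start api-server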


17. Tuning
(1) Hive executions can exceed the default 30s timeout, and low memory makes timeouts more likely; the relevant classes are:
org.apache.dolphinscheduler.plugin.task.sql.SqlTask
org.apache.dolphinscheduler.plugin.datasource.api.provider.JdbcDataSourceProvider


Configuration file template: https://download.csdn.net/download/xionglangs/86088059
