Reference link: https://www.bilibili.com/video/BV1iJ411c7Az?p=63

ELK is an acronym for three open-source projects: Elasticsearch, Logstash, and Kibana. A fourth component, Filebeat, has since been added to the stack: a lightweight log collection agent that uses very few resources and is well suited for gathering logs on individual servers and shipping them to Logstash.

1. Elasticsearch: data storage and search

Installing Elasticsearch on Linux

  • Download ES: https://www.elastic.co/cn/products/elasticsearch
  • Install: just extract the archive
#Create an elsearch user; Elasticsearch cannot be run as root
useradd elsearch
#Extract the archive
tar -xvf elasticsearch-6.5.4.tar.gz -C /itcast/es/
  • Edit the configuration
#Edit the config file
vim conf/elasticsearch.yml
network.host: 0.0.0.0 #bind address; reachable from any network
#Note: when network.host is not localhost or 127.0.0.1, Elasticsearch treats the node as a production
#deployment and enforces stricter bootstrap checks, which a test machine may not satisfy.
#Two settings usually need to be adjusted:
#1: JVM startup parameters
vim conf/jvm.options
-Xms128m #adjust to your machine
-Xmx128m
#2: maximum number of memory-mapped areas (VMAs) a process may create
vim /etc/sysctl.conf
vm.max_map_count=655360
sysctl -p #apply the setting (a verification command follows below)
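To confirm the kernel setting actually took effect, it can be read back (a small check added here, not part of the original notes):
sysctl vm.max_map_count   # should print vm.max_map_count = 655360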
  • Start and stop the ES service
su - elsearch
cd bin
./elasticsearch        #foreground
./elasticsearch -d     #background
#Test by sending a request to port 9200 (see the curl example after this block); a response like the
#following means ES started successfully
{
  "name": "ZO1vdaQ",
  "cluster_name": "elasticsearch",
  "cluster_uuid": "ibiBX0_uQgmRcYV4h55J1A",
  "version": {
    "number": "6.5.4",
    "build_flavor": "default",
    "build_type": "tar",
    "build_hash": "d2ef93d",
    "build_date": "2018-12-17T21:17:40.758843Z",
    "build_snapshot": false,
    "lucene_version": "7.5.0",
    "minimum_wire_compatibility_version": "5.6.0",
    "minimum_index_compatibility_version": "5.0.0"
  },
  "tagline": "You Know, for Search"
}
#Stop the service
root@itcast:~# jps
68709 Jps
68072 Elasticsearch
kill 68072 #kill the process
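The JSON shown above is the response of the ES root endpoint. A minimal way to fetch it from the shell, assuming the node binds to the address configured earlier (adjust host/port to your environment):
curl http://127.0.0.1:9200/
curl http://192.168.43.128:9200/   # from another machine, using the node's IP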

Installation errors

#Startup errors
[1]: max file descriptors [4096] for elasticsearch process is too low, increase to at
least [65536]
#Fix: switch to root and edit limits.conf, adding the lines below
vi /etc/security/limits.conf
#The change only takes effect after a new login: exit and run su - elsearch again
* soft nofile 65536
* hard nofile 131072
* soft nproc 2048
* hard nproc 4096
[2]: max number of threads [1024] for user [elsearch] is too low, increase to at least
[4096]
#Fix: switch to root and edit the config file in the limits.d directory
vi /etc/security/limits.d/[xx]-nproc.conf
#Change
* soft nproc 1024
#to
* soft nproc 4096
#(a command to verify the new limits follows below)
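After logging back in as elsearch, the new limits can be verified before restarting ES (a quick check, not from the original notes; the expected values follow the settings above):
su - elsearch
ulimit -n   # max open files, should now report 65536
ulimit -u   # max user processes, should now report at least 4096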
[3]: system call filters failed to install; check the logs and fix your configuration
or disable system call filters at your own risk
#Fix: CentOS 6 does not support seccomp, and ES 5.2.0+ sets bootstrap.system_call_filter to true by default
vim config/elasticsearch.yml
#Add:
bootstrap.system_call_filter: false

elasticsearch-head

ES ships only as a backend service; there is no official management UI. elasticsearch-head is a web-based client developed for ES, with its source code hosted on GitHub at https://github.com/mobz/elasticsearch-head

Four installation options:

  • From source, started with npm run start (not recommended)
  • Via docker (recommended)
  • As a Chrome extension (recommended)
    • https://chrome.google.com/webstore/detail/elasticsearch-head/ffmkiejjmecolpfloofpjologoblkegm/related?utm_source=chrome-ntp-icon
  • As an ES plugin (not recommended)

Note:

Because the front end and the ES back end are served separately, cross-origin requests are blocked by default; CORS has to be enabled on the ES side:

1. vim elasticsearch.yml

2. Add:
http.cors.enabled: true
http.cors.allow-origin: "*"

Installing it as a Chrome extension avoids this problem altogether.
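To confirm CORS is really active on the server side, the response headers can be inspected with curl; a hedged sketch (the host and the Origin value are assumptions):
curl -i -H "Origin: http://localhost:9100" http://192.168.43.128:9200/
# the response headers should include: access-control-allow-origin: *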

IK analyzer

Elasticsearch plugin: https://github.com/medcl/elasticsearch-analysis-ik

  • Installation:
#Install by extracting the downloaded elasticsearch-analysis-ik-6.5.4.zip into elasticsearch/plugins/ik under the ES installation directory.
mkdir plugins/ik
#Extract
unzip elasticsearch-analysis-ik-6.5.4.zip -d plugins/ik
#Restart (an analyzer test follows below)
./bin/elasticsearch
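After the restart, the analyzer can be exercised through the standard _analyze API; a minimal sketch (the host and the sample text are assumptions; ik_max_word and ik_smart are the two analyzers the plugin provides):
curl -X POST "http://192.168.43.128:9200/_analyze?pretty" -H 'Content-Type: application/json' -d '{"analyzer": "ik_max_word", "text": "中华人民共和国"}'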

Java client

  • Dependencies
<dependency>
    <groupId>org.elasticsearch</groupId>
    <artifactId>elasticsearch</artifactId>
    <version>6.5.4</version>
</dependency>
<dependency>
    <groupId>org.elasticsearch.client</groupId>
    <artifactId>elasticsearch-rest-client</artifactId>
    <version>6.5.4</version>
</dependency>
<dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.12</version>
</dependency>
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
    <version>2.9.4</version>
</dependency>
<!-- itcast: ES high-level REST client -->
<dependency>
    <groupId>org.elasticsearch.client</groupId>
    <artifactId>elasticsearch-rest-high-level-client</artifactId>
    <version>6.5.4</version>
</dependency>

REST low-level client

import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.http.HttpHost;
import org.apache.http.util.EntityUtils;
import org.elasticsearch.client.Node;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestClientBuilder;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

public class RestEsBase {

    private static final Logger LOGGER = LoggerFactory.getLogger(RestEsBase.class);
    private static final ObjectMapper MAPPER = new ObjectMapper();
    private RestClient restClient;

    // Initialize the low-level client
    @Before
    public void init() {
        RestClientBuilder restClientBuilder = RestClient.builder(
//                new HttpHost("192.168.43.128", 9200, "http"),
//                ... more hosts can be added for a cluster
                new HttpHost("192.168.43.128", 9200, "http"));
        restClientBuilder.setFailureListener(new RestClient.FailureListener() {
            @Override
            public void onFailure(Node node) {
                LOGGER.error("is error..." + node);
            }
        });
        this.restClient = restClientBuilder.build();
    }

    // Close the client
    @After
    public void after() throws IOException {
        restClient.close();
    }

    // Query the ES cluster state
    @Test
    public void testGetInfo() throws IOException {
        Request request = new Request("GET", "/_cluster/state");
        request.addParameter("pretty", "true");
        Response response = this.restClient.performRequest(request);
        System.out.println(response.getStatusLine());
        System.out.println(EntityUtils.toString(response.getEntity()));
    }

    // Index (create) a document
    @Test
    public void testCreateDate() throws IOException {
        Request request = new Request("POST", "/haoke/house");
        Map<String, Object> data = new HashMap<>();
        data.put("id", "2001");
        data.put("title", "张江高科");
        data.put("price", "3500");
        request.setJsonEntity(MAPPER.writeValueAsString(data));
        Response response = this.restClient.performRequest(request);
        System.out.println(response.getStatusLine());
        System.out.println(EntityUtils.toString(response.getEntity()));
    }

    // Delete a document by id
    @Test
    public void deleteDate() throws IOException {
        Request request = new Request("DELETE", "/haoke/house/s6Go-XwB6CaVutaqNdyL");
        Response response = this.restClient.performRequest(request);
        System.out.println(EntityUtils.toString(response.getEntity()));
    }

    // Get a document by id
    @Test
    public void testQueryData() throws IOException {
        Request request = new Request("GET", "/haoke/house/uqGm-nwB6CaVutaqcNw7");
        Response response = this.restClient.performRequest(request);
        System.out.println(response.getStatusLine());
        System.out.println(EntityUtils.toString(response.getEntity()));
    }
}

REST high-level client

import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.http.HttpHost;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.delete.DeleteRequest;
import org.elasticsearch.action.delete.DeleteResponse;
import org.elasticsearch.action.get.GetRequest;
import org.elasticsearch.action.get.GetResponse;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.action.index.IndexResponse;
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.action.update.UpdateRequest;
import org.elasticsearch.action.update.UpdateResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestClientBuilder;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.SearchHit;
import org.elasticsearch.search.SearchHits;
import org.elasticsearch.search.builder.SearchSourceBuilder;
import org.elasticsearch.search.fetch.subphase.FetchSourceContext;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.TimeUnit;

public class RestEsBaseHighLevel {

    private static final Logger LOGGER = LoggerFactory.getLogger(RestEsBaseHighLevel.class);
    private static final ObjectMapper MAPPER = new ObjectMapper();
    private RestHighLevelClient restHighLevelClient;

    @Before
    public void init() {
        RestClientBuilder restClientBuilder = RestClient.builder(
//                new HttpHost("192.168.43.128", 9200, "http"),
//                ... more hosts can be added for a cluster
                new HttpHost("192.168.43.128", 9200, "http"));
        this.restHighLevelClient = new RestHighLevelClient(restClientBuilder);
    }

    @After
    public void after() throws IOException {
        restHighLevelClient.close();
    }

    // Index a document (synchronous)
    @Test
    public void testCreate() throws IOException {
        Map<String, Object> data = new HashMap<>();
        data.put("id", "2002");
        data.put("title", "南京西路 拎包入住 一室一厅");
        data.put("price", "4500");
        IndexRequest indexRequest = new IndexRequest("haoke", "house").source(data);
        IndexResponse indexResponse = this.restHighLevelClient.index(indexRequest, RequestOptions.DEFAULT);
        System.out.println("id:" + indexResponse.getId());
        System.out.println("index:" + indexResponse.getIndex());
        System.out.println("type:" + indexResponse.getType());
        System.out.println("version:" + indexResponse.getVersion());
        System.out.println("result:" + indexResponse.getResult());
        System.out.println("shardInfo:" + indexResponse.getShardInfo());
    }

    // Index a document (asynchronous)
    @Test
    public void testCreateAsync() throws Exception {
        Map<String, Object> data = new HashMap<>();
        data.put("id", "2003");
        data.put("title", "南京东路 最新房源 二室一厅");
        data.put("price", "5500");
        IndexRequest indexRequest = new IndexRequest("haoke", "house").source(data);
        this.restHighLevelClient.indexAsync(indexRequest, RequestOptions.DEFAULT, new ActionListener<IndexResponse>() {
            @Override
            public void onResponse(IndexResponse indexResponse) {
                System.out.println("id:" + indexResponse.getId());
                System.out.println("index:" + indexResponse.getIndex());
                System.out.println("type:" + indexResponse.getType());
                System.out.println("version:" + indexResponse.getVersion());
                System.out.println("result:" + indexResponse.getResult());
                System.out.println("shardInfo:" + indexResponse.getShardInfo());
            }

            @Override
            public void onFailure(Exception e) {
                System.out.println(e);
            }
        });
        System.out.println("ok");
        Thread.sleep(20000);
    }

    // Get by id
    @Test
    public void testQuery() throws IOException {
        GetRequest getRequest = new GetRequest("haoke", "house", "vaHB-nwB6CaVutaq-tzA");
        // restrict the returned fields
        String[] includes = new String[]{"title", "id"};
        String[] excludes = Strings.EMPTY_ARRAY;
        FetchSourceContext fetchSourceContext = new FetchSourceContext(true, includes, excludes);
        getRequest.fetchSourceContext(fetchSourceContext);
        GetResponse response = this.restHighLevelClient.get(getRequest, RequestOptions.DEFAULT);
        System.out.println("data: " + response.getSource());
    }

    @Test
    public void testQuery2() throws IOException {
        // RestEsUtils is a custom helper class from the course (not shown here)
        RestEsUtils.init(new HttpHost("192.168.43.128", 9200, "http"));
        String[] includes = new String[]{"title", "id"};
        Map<String, Object> query = RestEsUtils.query("haoke", "house", "vaHB-nwB6CaVutaq-tzA", includes);
        System.out.println(query);
    }

    // Check whether a document exists
    @Test
    public void testExiste() throws IOException {
        GetRequest getRequest = new GetRequest("haoke", "house", "vaHB-nwB6CaVutaq-tzA");
        boolean exists = this.restHighLevelClient.exists(getRequest, RequestOptions.DEFAULT);
        System.out.println("exist:" + exists);
    }

    // Delete by id
    @Test
    public void testDelete() throws IOException {
        DeleteRequest deleteRequest = new DeleteRequest("haoke", "house", "vaHB-nwB6CaVutaq-tzA");
        DeleteResponse response = this.restHighLevelClient.delete(deleteRequest, RequestOptions.DEFAULT);
        System.out.println(response.status());
    }

    // Update a document
    @Test
    public void testUpdate() throws Exception {
        UpdateRequest updateRequest = new UpdateRequest("haoke", "house", "uqGm-nwB6CaVutaqcNw7");
        Map<String, Object> data = new HashMap<>();
        data.put("title", "张江高科2");
        data.put("price", "5000");
        updateRequest.doc(data);
        UpdateResponse response = this.restHighLevelClient.update(updateRequest, RequestOptions.DEFAULT);
        System.out.println("version:" + response.getVersion());
    }

    // Search
    @Test
    public void testSearch() throws Exception {
        SearchRequest searchRequest = new SearchRequest("haoke");
        searchRequest.types("house");
        SearchSourceBuilder sourceBuilder = new SearchSourceBuilder();
        sourceBuilder.query(QueryBuilders.matchQuery("title", "拎包入住"));
        sourceBuilder.from(0);
        sourceBuilder.size(5);
        sourceBuilder.timeout(new TimeValue(60, TimeUnit.SECONDS));
        searchRequest.source(sourceBuilder);
        SearchResponse search = this.restHighLevelClient.search(searchRequest, RequestOptions.DEFAULT);
        System.out.println("search data count:" + search.getHits().getTotalHits());
        SearchHits hits = search.getHits();
        for (SearchHit hit : hits) {
            System.out.println(hit.getSourceAsString());
        }
    }
}

2. Kibana: data visualization

Deployment and installation

  • Download the package (official site): https://www.elastic.co/cn/products/kibana
  • Upload and install
#Extract the archive
tar -xvf kibana-6.5.4-linux-x86_64.tar.gz
#Edit the config file
vim config/kibana.yml
server.host: "ip"                        #address the service is exposed on
elasticsearch.url: "http://<es ip>:9200" #Elasticsearch address
#Start
./bin/kibana
#Open it in a browser (a status check follows below)
http://192.168.40.133:5601/app/kibana
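Before opening the browser it can be worth checking that Kibana is up and can reach ES; a hedged sketch using Kibana's status endpoint (the IP matches the example URL above, adjust to your environment):
curl http://192.168.40.133:5601/api/status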

3. Filebeat: lightweight log shipper

Deployment and installation

  • Download: https://www.elastic.co/downloads/beats

  • Upload and extract

mkdir ./beats
tar -xvf filebeat-6.5.4-linux-x86_64.tar.gz
cd filebeat-6.5.4-linux-x86_64  # enter the installation directory
  • Add a config file: [...installation directory/test.yml]
# Inputs
filebeat.inputs:
#- type: stdin            # read from the console
- type: log               # read from log files
  enabled: true
  paths:
    - /home/elsearch/beats/*.log   # log path
  tags: ["haoke-im"]               # custom tag for downstream processing
  fields:                          # custom fields
    from: haoke-im
  fields_under_root: true          # true: add the fields at the document root; false: nest them under "fields"
# Index shards
setup.template.settings:
  index.number_of_shards: 3        # number of index shards
# Output to the console
#output.console:
#  pretty: true
#  enabled: true
# Output to Elasticsearch
output.elasticsearch:              # ES output settings
  hosts: ["192.168.43.128:9200"]
  • Start and feed input
#Start Filebeat from the installation directory [.../]
./filebeat -e -c test.yml
# ./filebeat -e -c test.yml -d "publish"
#Options
-e: log to stderr instead of the default syslog/logs output
-c: specify the config file
-d: enable debug output for the given selectors
#Based on the input path configured above (/home/elsearch/beats/*.log), create a.log, write some data into it, and save
  • View the data in ES (the same document can also be fetched with curl, as shown after this block)
{
  "_index": "filebeat-6.5.4-2021.11.08",
  "_type": "doc",
  "_id": "WXT4_3wBfzb1yMzuFiLV",
  "_version": 1,
  "_score": 1,
  "_source": {
    "@timestamp": "2021-11-08T14:33:39.569Z",
    "message": "123",
    "host": {
      "name": "localhost.localdomain"
    },
    "source": "/home/elsearch/beats/a.log",
    "offset": 12,
    "input": {
      "type": "log"
    },
    "from": "haoke-im",
    "beat": {
      "version": "6.5.4",
      "name": "localhost.localdomain",
      "hostname": "localhost.localdomain"
    },
    "tags": ["haoke-im"],
    "prospector": {
      "type": "log"
    }
  }
}
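The same document can be pulled straight out of ES without the head plugin; a minimal sketch (the host is an assumption, the index pattern follows the default filebeat-<version>-<date> naming visible above):
curl "http://192.168.43.128:9200/filebeat-*/_search?pretty&size=1"
curl "http://192.168.43.128:9200/_cat/indices/filebeat-*?v"   # or just list the Filebeat indices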

Reading nginx logs

  • Download nginx: http://nginx.org/en/download.html
  • Upload and extract
mkdir nginx
tar -xvf nginx-1.11.6.tar.gz
yum -y install pcre-devel zlib-devel
./configure
make install
#Start
cd [...installation directory/sbin/]
./nginx
#Open the page in a browser and watch the log
#URL: http://<server address>/
tail -f [...installation directory/logs/access.log]
  • Add a config file
# Inputs
filebeat.inputs:
#- type: stdin
- type: log
  enabled: true
  paths:
    - /usr/local/nginx/logs/*.log
  tags: ["nginx"]          # custom tag for downstream processing
  fields:                  # custom fields
    from: nginx-log
  fields_under_root: true  # true: add the fields at the document root; false: nest them under "fields"
# Index shards
setup.template.settings:
  index.number_of_shards: 3   # number of index shards
# Output to the console
#output.console:
#  pretty: true
#  enabled: true
# Output to Elasticsearch
output.elasticsearch:          # ES output settings
  hosts: ["192.168.43.128:9200"]
  • Start Filebeat, hit nginx in a browser, and the index and its documents will show up in Elasticsearch.

Module

So far, reading and processing the log data has been configured entirely by hand. Filebeat actually ships with a large number of modules that simplify this configuration and can be used directly, for example:

#Directory: [...installation directory/]
#Command
./filebeat modules list
#Output
Enabled:

Disabled:
apache2
auditd
elasticsearch
haproxy
icinga
iis
kafka
kibana
logstash
mongodb
mysql
nginx
osquery
postgresql
redis
suricata
system
traefik

Using the nginx module

  • Enable or disable a module, e.g.:
./filebeat modules enable nginx  #enable
./filebeat modules disable nginx #disable
  • View and edit the nginx module configuration
# enter the modules directory
cd modules.d/
# edit nginx.yml
- module: nginx
  # Access logs
  access:
    enabled: true
    var.paths: ["/usr/local/nginx/logs/access.log*"]
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:
  # Error logs
  error:
    enabled: true
    var.paths: ["/usr/local/nginx/logs/error.log*"]
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:
  • Add or modify the Filebeat startup configuration (a quick config check follows after the file)
# nginx-conf-module.yml
# Index shards
setup.template.settings:
  index.number_of_shards: 3     # number of index shards
# Output to Elasticsearch
output.elasticsearch:           # ES output settings
  hosts: ["192.168.43.128:9200"]
# Enable modules
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
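Before starting, the file can be sanity-checked; a hedged sketch assuming the test subcommand shipped with Filebeat 6.x:
./filebeat test config -c nginx-conf-module.yml   # validates the YAML
./filebeat test output -c nginx-conf-module.yml   # checks the connection to the configured ES host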
  • Start Filebeat
./filebeat -e -c nginx-conf-module.yml # fails with an error
2021-11-13T16:23:09.788+0800   ERROR   fileset/factory.go:142  Error loading pipeline: Error loading pipeline for fileset nginx/access: This module requires the following Elasticsearch plugins: ingest-user-agent, ingest-geoip. You can install them by running the following commands on all the Elasticsearch nodes:
sudo bin/elasticsearch-plugin install ingest-user-agent
sudo bin/elasticsearch-plugin install ingest-geoip
#Fix: install the ingest-user-agent and ingest-geoip plugins in Elasticsearch, either with the commands above or offline
#Offline install: three archives are needed: ingest-user-agent.tar, ingest-geoip.tar, ingest-geoip-conf.tar
#Extract ingest-user-agent.tar and ingest-geoip.tar into plugins/
#Extract ingest-geoip-conf.tar into config/
#Restart ES
  • Hit nginx again and check the data in Elasticsearch.

4. Logstash: data processing

Deployment and installation

  • Download: https://www.elastic.co/cn/downloads/logstash

  • Upload and install

#Check the JDK: JDK 1.8+ is required
java -version
#Extract the archive
tar -xvf logstash-6.5.4.tar.gz

Reading a custom log

  • Add a config file: [test-pipeline.conf]
# Input
input {
  file {
    path => "/home/elsearch/logstash/logs/app.log"
    start_position => "beginning"
  }
}
# Filter
filter {
  mutate {
    split => {"message" => "|"}
  }
}
# Output
output {
  stdout {
    codec => rubydebug
  }
}
  • Start and test
#Start (the config file name matches the one created above)
./bin/logstash -f ./test-pipeline.conf
#Append a line to the log file
cd /home/elsearch/logstash/logs
echo "2019-03-15 21:21:21|ERROR|读取数据出错|参数:id=1002" >> app.log
#Output
{
       "message" => [
        [0] "2019-03-15 21:21:21",
        [1] "ERROR",
        [2] "读取数据出错",
        [3] "参数:id=1002"
    ],
          "host" => "hadoop01",
    "@timestamp" => 2021-11-14T06:26:14.291Z,
          "path" => "/home/elsearch/logstash/logs/app.log",
      "@version" => "1"
}

Parsing the data and writing it to Elasticsearch

  • Add a config file
# Input
input {
  file {
    path => "/home/elsearch/logstash/logs/app.log"
    start_position => "beginning"
  }
}
# Filter
filter {
  mutate {
    split => {"message" => "|"}
  }
}
# Output
output {
#  stdout {
#    codec => rubydebug
#  }
  elasticsearch {
    hosts => ["192.168.43.128:9200"]
  }
}
  • Append a log line to the file to test the output (a query to verify the result in ES follows below)
cd /home/elsearch/logstash/logs
echo "2019-03-15 21:21:21|ERROR|读取数据出错|参数:id=1002" >> app.log

5. Putting it all together: collecting logs with ELK

Integration of Elasticsearch + Logstash + Beats + Kibana.

5-1. Prepare the project: test-elk

  • Dependencies
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter</artifactId>
        <exclusions>
            <exclusion>
                <groupId>ch.qos.logback</groupId>
                <artifactId>logback-classic</artifactId>
            </exclusion>
        </exclusions>
    </dependency>
    <dependency>
        <groupId>org.apache.commons</groupId>
        <artifactId>commons-lang3</artifactId>
        <version>3.3.2</version>
    </dependency>
    <dependency>
        <groupId>joda-time</groupId>
        <artifactId>joda-time</artifactId>
        <version>2.9.9</version>
    </dependency>
    <dependency>
        <groupId>org.slf4j</groupId>
        <artifactId>slf4j-log4j12</artifactId>
        <version>1.7.26</version>
    </dependency>
</dependencies>
  • log4j.properties
log4j.rootLogger=DEBUG,A1,A2

log4j.appender.A1=org.apache.log4j.ConsoleAppender
log4j.appender.A1.layout=org.apache.log4j.PatternLayout
log4j.appender.A1.layout.ConversionPattern=[%p] %-d{yyyy-MM-dd HH:mm:ss} [%c] - %m%n

log4j.appender.A2 = org.apache.log4j.DailyRollingFileAppender
log4j.appender.A2.File = /home/elsearch/logstash/logs/app.log
log4j.appender.A2.Append = true
log4j.appender.A2.Threshold = INFO
log4j.appender.A2.layout = org.apache.log4j.PatternLayout
log4j.appender.A2.layout.ConversionPattern =[%p] %-d{yyyy-MM-dd HH:mm:ss} [%c] - %m%n
  • Spring Boot application that simulates user activity
package cn.itcast.dashboard;

import org.apache.commons.lang3.RandomUtils;
import org.joda.time.DateTime;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class Main {

    private static final Logger LOGGER = LoggerFactory.getLogger(Main.class);

    public static final String[] VISIT = new String[]{"浏览页面", "评论商品", "加入收藏", "加入购物车", "提交订单", "使用优惠券", "领取优惠券", "搜索", "查看订单"};

    public static void main(String[] args) throws Exception {
        while (true) {
            // random pause between records
            Long sleep = RandomUtils.nextLong(200, 1000 * 5);
            Thread.sleep(sleep);
            // pick a random user and a random action
            Long maxUserId = 9999L;
            Long userId = RandomUtils.nextLong(1, maxUserId);
            String visit = VISIT[RandomUtils.nextInt(0, VISIT.length)];
            // pick a random time earlier today
            DateTime now = new DateTime();
            int maxHour = now.getHourOfDay();
            int maxMillis = now.getMinuteOfHour();
            int maxSeconds = now.getSecondOfMinute();
            String date = now.plusHours(-(RandomUtils.nextInt(0, maxHour)))
                    .plusMinutes(-(RandomUtils.nextInt(0, maxMillis)))
                    .plusSeconds(-(RandomUtils.nextInt(0, maxSeconds)))
                    .toString("yyyy-MM-dd HH:mm:ss");
            // one record per iteration: DAU|userId|action|time
            String result = "DAU|" + userId + "|" + visit + "|" + date;
            LOGGER.info(result);
            Thread.sleep(1 * 60 * 1000);
        }
    }
}
  • Build the jar, upload it to the Linux host and run it (since Filebeat does the log collection, the project has to run on the same machine as Filebeat). An example of the resulting log lines is shown after this block.
# Once it is running, log records are appended to app.log
java -jar test-elk-1.0-SNAPSHOT.jar
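Once the jar is running, the file configured in log4j.properties should start filling with one record per loop iteration; the line below is an illustrative example only (ids and timestamps will differ):
tail -f /home/elsearch/logstash/logs/app.log
# [INFO] 2021-11-14 15:20:31 [cn.itcast.dashboard.Main] - DAU|4387|加入购物车|2021-11-14 03:12:10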

5-2. Start Elasticsearch

# Start ES from its installation directory
./bin/elasticsearch

5-3. Configure and start Logstash

  • Logstash processes the incoming data and sends it to Elasticsearch.
# =====Add a config file under the Logstash installation directory: [test-elk.conf]
# Input
input {
  beats {
    port => "5044"
    codec => json
    client_inactivity_timeout => 36000
  }
}
# Filter
filter {
  mutate {
    split => {"message" => "|"}
  }
  mutate {
    add_field => {
      "userId" => "%{[message][1]}"
      "visit"  => "%{[message][2]}"
      "date"   => "%{[message][3]}"
    }
  }
  mutate {
    convert => {
      "userId" => "integer"
      "visit"  => "string"
      "date"   => "string"
    }
  }
}
# Output
output {
  elasticsearch {
    hosts => ["192.168.43.128:9200"]
    codec => "json"
  }
}
## =====Start
./bin/logstash -f test-elk.conf

5-4. Configure and start Filebeat

  • Filebeat collects the logs and ships them to Logstash.
## =====Add a config file under the Filebeat installation directory: [test-elk.yml]
# Inputs
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /home/elsearch/logstash/logs/*.log
# Index shards
setup.template.settings:
  index.number_of_shards: 3   # number of index shards
# Output to Logstash
output.logstash:
  hosts: ["192.168.43.129:5044"]
## =====Start (a connectivity check follows below)
./filebeat -e -c test-elk.yml
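Connectivity from Filebeat to Logstash on port 5044 can be verified before real data is shipped; a hedged sketch using the same test subcommand as earlier:
./filebeat test output -c test-elk.yml   # should report a successful connection to 192.168.43.129:5044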

5-5. View the data in Elasticsearch

# Start Kibana from its installation directory. If no data ends up in ES, delete the ES index in question and try again.
./bin/kibana
  • Create an index pattern

  • View the data

  • The data can also be viewed through the elasticsearch-head plugin, or queried directly with curl as shown below.
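If the index pattern does not show up in Kibana, listing the indices directly against ES helps narrow down where the pipeline stops (the host is an assumption):
curl "http://192.168.43.128:9200/_cat/indices?v"
curl "http://192.168.43.128:9200/logstash-*/_search?pretty&size=1"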
