Noisy Label Learning

1. File 'create_noisy_data.py'

  • Imported modules:
    numpy, random, matplotlib, pandas, sklearn (make_blobs, KMeans), os
  • Functions:
    1. class cluster_data_preprocess:

      • visualize_data: plot the distribution of the generated points
      • get_centroids: run sklearn's KMeans on the points generated by make_blobs (3 clusters) and return the cluster assignments together with the centroid coordinates
      • calculate_avg_matric: compute the mean per-axis distance between every point and the centroid of its cluster
      • calculate_score: unused in the final pipeline
      • generate_noisy_data: use the average distances to place noisy points around each centroid and give them a shifted (wrong) class label
    2. create_clear_data: unused in the final pipeline
    3. create_ecxel: split each coordinate tuple into its x and y parts so the coordinates can be written to a CSV file conveniently
    4. create_data_main: entry point of the data-generation script
import numpy as np
import random
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
import os

color = ['r', 'g', 'b', 'y', 'm', 'c']
path = r"F:\00PYTHON\noisy_label_learning\noisy_label.csv"


class cluster_data_preprocess():
    def __init__(self):
        # generate 300 2-D points grouped into 3 blobs
        self.data, self.target = make_blobs(n_samples=300, n_features=2, centers=3, random_state=1)
        self.data, self.target = list(self.data), list(self.target)

    def visualize_data(self):
        data = np.array(self.data)  # __init__ stores lists, so convert back for slicing
        plt.scatter(data[:, 0], data[:, 1], c=self.target)
        plt.show()

    def get_centroids(self):
        # cluster the generated points with KMeans and return assignments and centroids
        cluster = KMeans(n_clusters=3, random_state=8)
        cluster.fit(self.data)
        y_pred = cluster.fit_predict(self.data)
        centroid = cluster.cluster_centers_
        return list(y_pred), centroid

    def calculate_score(self, y_pred, target):
        correct = 0
        error = []
        for i in range(len(y_pred)):
            if (y_pred[i] == 1 and target[i] == 0) or \
               (y_pred[i] == 0 and target[i] == 1) or \
               (y_pred[i] == target[i]):
                correct += 1
            else:
                error.append(self.data[i])
        return (correct / len(y_pred)), error

    def calculate_avg_matric(self, data, centroid, y_pred):
        # mean absolute distance (per axis) between each point and its cluster centroid
        avg_matirc = [[0, 0], [0, 0], [0, 0]]
        for num in range(len(y_pred)):
            avg_matirc[y_pred[num]][0] += abs(float(centroid[y_pred[num]][0]) - data[num][0])
            avg_matirc[y_pred[num]][1] += abs(float(centroid[y_pred[num]][1]) - data[num][1])
        for i in range(3):
            avg_matirc[i][0] /= len(data)
            avg_matirc[i][1] /= len(data)
        return avg_matirc

    def generate_noisy_data(self, avg_matric, centroid, noisy_scale):
        # place `noisy_scale` points around every centroid and assign them a shifted (wrong) label
        noisy_data = []
        label = []
        for i in range(3):
            for num in range(noisy_scale):
                if i == 1:
                    label.append(0)
                elif i == 2:
                    label.append(1)
                elif i == 0:
                    label.append(2)
                noisy_data.append((centroid[i][0] + random.choice([-1, 1]) * (avg_matric[i][0] + float(np.random.randint(1, 100) / 100)),
                                   centroid[i][1] + random.choice([-1, 1]) * (avg_matric[i][1] + float(np.random.randint(1, 100) / 100))))
        return label, noisy_data


def create_clear_data():
    # unused helper: four quadrant-separated point clouds
    a_class_T, b_class_T, c_class_T, d_class_T = [], [], [], []
    for i in range(1000):
        a_class_T.append((np.random.randint(0, high=100) + np.random.random_sample(),
                          np.random.randint(0, high=100) + np.random.random_sample()))
        b_class_T.append((np.random.randint(-100, high=0) + np.random.random_sample(),
                          np.random.randint(0, high=100) + np.random.random_sample()))
        c_class_T.append((np.random.randint(-100, high=0) + np.random.random_sample(),
                          np.random.randint(-100, high=0) + np.random.random_sample()))
        d_class_T.append((np.random.randint(0, high=100) + np.random.random_sample(),
                          np.random.randint(-100, high=0) + np.random.random_sample()))
    return a_class_T, b_class_T, c_class_T, d_class_T


def create_ecxel(data, label, path):
    # split coordinate tuples into x and y rows and write them to CSV
    # (function name kept as in the original source, a misspelling of "excel")
    data_x, data_y, keys_x, keys_y = [], [], [], []
    for i in range(len(data)):
        data_x.append(data[i][0])
        data_y.append(data[i][1])
        keys_x.append(label[i])
        keys_y.append((label[i] + 1) * -1)
    divided_dic = {0: [], 1: [], 2: [], -1: [], -2: [], -3: []}
    for i in range(len(data_x)):
        divided_dic[keys_x[i]].append(data_x[i])
        divided_dic[keys_y[i]].append(data_y[i])
    df = pd.DataFrame(data=divided_dic.values(), columns=None, index=divided_dic.keys())
    df.to_csv(path, sep=',')


def create_data_main(noisy_scale):
    final_data, label_noise = {}, {0: [], 1: [], 2: []}
    plt.figure(figsize=(20, 8), dpi=100)
    get_data = cluster_data_preprocess()
    data, target = get_data.data, get_data.target
    y_pred, centroid = get_data.get_centroids()
    avg_matric = get_data.calculate_avg_matric(data, centroid, y_pred)
    label, noisy = get_data.generate_noisy_data(avg_matric, centroid, noisy_scale)
    for i in range(len(label)):
        label_noise[label[i]].append(noisy[i])
    for i in range(len(data)):
        plt.scatter(data[i][0], data[i][1], c=color[y_pred[i]])
        label_noise[y_pred[i]].append(tuple(data[i]))
    for i in range(len(noisy)):
        plt.scatter(noisy[i][0], noisy[i][1], c=color[label[i]], marker='p')
        data.append(noisy[i])
        y_pred.append(label[i])
    plt.savefig('./points_showing.png')
    plt.show()
    create_ecxel(data, y_pred, path)

2. File 'simulate_neural_network.py'

  • Purpose of the file:
    Because the PC does not have a GPU powerful enough to train ResNet-34 or ResNet-50 on CIFAR-10, we simulate the training process of a deep neural network while skipping the feature-extraction stage of the convolutional network. We generate enough clean data and noisy data at different scales in 'create_noisy_data.py', mix them together, and then use a single fully connected layer as the backbone network.
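
In formula form, the backbone used in the code below is just one softmax layer applied to the 2-D point coordinates (shapes taken from the weight tensors w1 and b in the code):

$$ f(x) = \mathrm{softmax}(xW + b), \qquad W \in \mathbb{R}^{2 \times 3},\; b \in \mathbb{R}^{3}. $$
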
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import axes3d
import warnings
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
import tensorflow as tf
import create_noisy_data
from matplotlib import rcParams

rcParams['font.family'] = 'simhei'  # use a Chinese font for plot labels
color = ['black', 'red', 'green', 'blue', 'mistyrose', 'cyan']
path = r"F:\00PYTHON\noisy_label_learning\noisy_label.csv"
warnings.filterwarnings('ignore')


def read_data(path):
    # read the CSV produced by create_noisy_data: rows 0-2 hold x coordinates per class,
    # rows 3-5 hold the matching y coordinates; columns 1-30 are used for testing, the rest for training
    df = pd.read_csv(path)
    x_train, y_train, x_test, y_test = [], [], [], []
    for i in range(3):
        for j in range(len(list(df.iloc[i, :].values[31:]))):
            y_train.append(i)
        x_train += list(zip(list(df.iloc[i, :].values[31:]), list(df.iloc[i + 3, :].values[31:])))
    for i in range(3):
        for j in range(len(list(df.iloc[i, :].values[1:31]))):
            y_test.append(i)
        x_test += list(zip(list(df.iloc[i, :].values[1:31]), list(df.iloc[i + 3, :].values[1:31])))
    return x_train, x_test, y_train, y_test


def shuffle_data(x_train, x_test, y_train, y_test):
    # reuse the same seed so features and labels stay aligned after shuffling
    np.random.seed(16)
    np.random.shuffle(x_train)
    np.random.seed(16)
    np.random.shuffle(y_train)
    np.random.seed(16)
    np.random.shuffle(x_test)
    np.random.seed(16)
    np.random.shuffle(y_test)
    return x_train, x_test, y_train, y_test


class CrossEntropy2d():
    # note: unused; tf.nn has no CrossEntropyLoss (this is PyTorch-style code)
    def __init__(self):
        super(CrossEntropy2d, self).__init__()
        self.criterion = tf.nn.CrossEntropyLoss(weight=None, size_average=True)

    def forward(self, out, target):
        n, c, h, w = out.size()         # n: batch size, c: number of classes
        out = out.view(-1, c)           # (n*h*w, c)
        target = target.view(-1)        # (n*h*w)
        loss = self.criterion(out, target)
        return loss


def Loss_o(x1, x2):
    # compatibility loss: cross entropy between the noisy labels x1 and the label distribution x2
    res = 0
    for i in range(np.array(x1).shape[0]):
        for j in range(np.array(x1).shape[1]):
            res += x1[i][j] * tf.math.log(x2[i][j])
    return -res / len(x1[0])


def Loss_c(x1, x2):
    # classification loss: KL divergence between the network prediction x1 and the label distribution x2
    res = 0
    for i in range(np.array(x1).shape[0]):
        for j in range(np.array(x1).shape[1]):
            res += x1[i][j] * tf.math.log(x1[i][j] / x2[i][j])
    return res / len(x1[0])


def Loss_e(x1):
    # entropy loss: entropy of the network prediction x1
    res = 0
    for i in range(np.array(x1).shape[0]):
        for j in range(np.array(x1).shape[1]):
            res += x1[i][j] * tf.math.log(x1[i][j])
    return -res / len(x1[0])


def fit_model_with_pencil(x_train, x_test, y_train, y_test):
    x_train = tf.cast(x_train, tf.float32)
    x_test = tf.cast(x_test, tf.float32)
    y_list = y_train
    x_list = x_train
    length = len(x_train)
    test_db = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(32)
    w1 = tf.Variable(tf.random.truncated_normal([2, 3], stddev=0.1, seed=1))
    b = tf.Variable(tf.random.truncated_normal([3], stddev=0.1, seed=1))
    # y_d = tf.Variable(tf.random.truncated_normal([32, 3], stddev=0.1, seed=1))
    loss_list, acc_list = [], []
    epoch_list = []
    epoches = 120 * 2
    loss_all, lr = 0, 0.3
    uu = [40 * x for x in range(1, 6)]
    Y_d = [0] * 40  # one label-distribution tensor per training batch
    train_db = tf.data.Dataset.from_tensor_slices((x_list, y_list)).batch(32)
    for epoch in range(epoches):
        if epoch in uu:
            lr *= 0.10
        num = 0
        if epoch == 121:
            # after the PENCIL stage, rebuild the dataset with the corrected labels
            while len(y_list) != length:
                y_list.pop()
            print(x_list, len(y_list))
            train_db = tf.data.Dataset.from_tensor_slices((x_list, y_list)).batch(32)
        for step, (x_train, y_train) in enumerate(train_db):
            with tf.GradientTape(persistent=True) as tape:
                y = tf.matmul(x_train, w1) + b
                y = tf.nn.softmax(y)
                y_ = tf.one_hot(y_train, depth=3)
                y_w = 3 * y_
                if epoch == 0:
                    print(num)
                    Y_d[num] = tf.nn.softmax(y_w)   # initialise the label distribution from K * one-hot label
                y_d = Y_d[num]
                tape.watch(y_d)
                # loss = tf.reduce_mean(tf.square(y_ - y))  # MSE loss is only suitable for regression
                cce = tf.keras.losses.CategoricalCrossentropy()
                loss = cce(y_, y)
                if epoch <= 120:
                    loss_sum_ = 1 / 3 * cce(y, y / y_d) - 0.1 * cce(y_, y_d) - (0.8 / 3) * cce(y, y)
                # loss = tf.reduce_mean(-tf.reduce_sum(y_ * tf.math.log(y) + (1 - y_) * tf.math.log(1 - y)))
            loss_all += loss.numpy()
            num += 1
            grad = tape.gradient(loss, [w1, b])
            if epoch <= 120:
                grad_pencil = tape.gradient(loss_sum_, y_d)
                # note: this builds a new tf.Variable from y_d, so the stored Y_d[num] itself is not modified
                tf.Variable(y_d, dtype=tf.float32).assign_sub(200 * grad_pencil)
                # y_d = tf.argmax(y_d, axis=1)
            w1.assign_sub(lr * grad[0])
            b.assign_sub(lr * grad[1])
            if epoch == 120:
                # freeze the corrected label distribution and write it back into the label list
                y_m = y_d
                y_x = tf.argmax(y_d, axis=1)
                y_d = y_m
                if num * 32 + 32 <= length:
                    y_list[num * 32:num * 32 + 32] = list(y_x.numpy())
                else:
                    y_list[num * 32:length] = list(y_x.numpy())[0:length - num * 32]
        loss_list.append(loss_all / 4)
        loss_all = 0
        total_correct = 0
        total_number = 0
        if epoch >= 121:
            for x_test, y_test in test_db:
                y = tf.matmul(x_test, w1) + b
                y = tf.nn.softmax(y)
                pred = tf.argmax(y, axis=1)
                pred = tf.cast(pred, dtype=y_test.dtype)
                correct = tf.cast(tf.equal(pred, y_test), dtype=tf.int32)
                correct = tf.reduce_sum(correct)
                total_correct += int(correct)
                total_number += x_test.shape[0]
            acc = total_correct / total_number
            print(f'accuracy = {acc}')
            acc_list.append(acc)
            epoch_list.append(epoch)
    return acc_list, loss_list, epoch_list


def fit_model(x_train, x_test, y_train, y_test):
    x_train = tf.cast(x_train, tf.float32)
    x_test = tf.cast(x_test, tf.float32)
    train_db = tf.data.Dataset.from_tensor_slices((x_train, y_train)).batch(32)
    test_db = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(32)
    w1 = tf.Variable(tf.random.truncated_normal([2, 3], stddev=0.1, seed=1))
    b = tf.Variable(tf.random.truncated_normal([3], stddev=0.1, seed=1))
    loss_list, acc_list = [], []
    epoch_list = []
    epoches = 120
    loss_all, lr = 0, 0.3
    uu = [40 * x for x in range(1, 4)]
    for epoch in range(epoches):
        if epoch in uu:
            lr *= 0.10
        for step, (x_train, y_train) in enumerate(train_db):
            with tf.GradientTape() as tape:
                y = tf.matmul(x_train, w1) + b
                y = tf.nn.softmax(y)
                y_ = tf.one_hot(y_train, depth=3)
                # loss = tf.reduce_mean(tf.square(y_ - y))  # MSE loss is only suitable for regression
                cce = tf.keras.losses.CategoricalCrossentropy()
                loss = cce(y_, y)
                # loss = tf.reduce_mean(-tf.reduce_sum(y_ * tf.math.log(y) + (1 - y_) * tf.math.log(1 - y)))
            loss_all += loss.numpy()
            grad = tape.gradient(loss, [w1, b])
            w1.assign_sub(lr * grad[0])
            b.assign_sub(lr * grad[1])
        loss_list.append(loss_all / 4)
        loss_all = 0
        total_correct = 0
        total_number = 0
        for x_test, y_test in test_db:
            y = tf.matmul(x_test, w1) + b
            y = tf.nn.softmax(y)
            pred = tf.argmax(y, axis=1)
            pred = tf.cast(pred, dtype=y_test.dtype)
            correct = tf.cast(tf.equal(pred, y_test), dtype=tf.int32)
            correct = tf.reduce_sum(correct)
            total_correct += int(correct)
            total_number += x_test.shape[0]
        acc = total_correct / total_number
        print(f'accuracy = {acc}')
        acc_list.append(acc)
        epoch_list.append(epoch)
    return acc_list, loss_list, epoch_list


def fit_model_api(x_train, x_test, y_train, y_test):
    model = tf.keras.models.Sequential([
        tf.keras.layers.Dense(3, activation='softmax', kernel_regularizer=tf.keras.regularizers.l2())
    ])
    model.compile(optimizer=tf.keras.optimizers.SGD(lr=0.3),
                  loss=tf.keras.losses.CategoricalCrossentropy(from_logits=False),
                  metrics=['sparse_categorical_accuracy'])
    history = model.fit(x_train, y_train, batch_size=32, epochs=120, validation_data=(x_test, y_test))
    model.summary()
    acc = history.history['sparse_categorical_accuracy']
    val_acc = history.history['val_sparse_categorical_accuracy']
    loss = history.history['loss']
    val_loss = history.history['val_loss']
    plt.subplot(1, 2, 1)
    plt.plot(acc, label='Training Accuracy')
    plt.plot(val_acc, label='Validation Accuracy')
    plt.title('Training and Validation Accuracy')
    plt.legend()
    plt.subplot(1, 2, 2)
    plt.plot(loss, label='Training Loss')
    plt.plot(val_loss, label='Validation Loss')
    plt.title('Training and Validation Loss')
    plt.legend()
    plt.show()


if __name__ == '__main__':
    choicen_scale = [1, 40, 80, 100, 100]
    legends, loss_lists, acc_lists = [], [[] for x in range(len(choicen_scale))], [[] for x in range(len(choicen_scale))]
    labels, labels_1 = [], []
    for i in range(len(choicen_scale)):
        noisy_scale = choicen_scale[i]
        create_noisy_data.create_data_main(noisy_scale)
        x_train, x_test, y_train, y_test = read_data(path)
        print(len(x_train), len(x_test), len(y_train), len(y_test), sep='\n')
        x_train, x_test, y_train, y_test = shuffle_data(x_train, x_test, y_train, y_test)
        # fit_model_api(np.array(x_train), np.array(x_test), np.array(y_train), np.array(y_test))
        acc_list, loss_list, epoch_list = fit_model_with_pencil(x_train, x_test, y_train, y_test)
        loss_lists[i] = loss_list
        acc_lists[i] = acc_list
        labels.append(f'loss with noise scale {choicen_scale[i]}')
        labels_1.append(f'accuracy with noise scale {choicen_scale[i]}')
    ln_1, = plt.plot(epoch_list, loss_lists[0][121:], color=color[0], linewidth=2.0)
    ln_2, = plt.plot(epoch_list, loss_lists[1][121:], color=color[1], linewidth=2.0)
    ln_3, = plt.plot(epoch_list, loss_lists[2][121:], color=color[2], linewidth=2.0)
    ln_4, = plt.plot(epoch_list, loss_lists[3][121:], color=color[3], linewidth=2.0)
    ln_5, = plt.plot(epoch_list, loss_lists[4][121:], color=color[4], linewidth=2.0)
    plt.title('loss under different noise scales')
    plt.legend(handles=[ln_1, ln_2, ln_3, ln_4, ln_5], labels=labels)
    plt.savefig('./different_noisy_loss_with_pencil.png')
    plt.show()
    ln_1, = plt.plot(epoch_list, acc_lists[0][121:], color=color[0], linewidth=2.0)
    ln_2, = plt.plot(epoch_list, acc_lists[1][121:], color=color[1], linewidth=2.0)
    ln_3, = plt.plot(epoch_list, acc_lists[2][121:], color=color[2], linewidth=2.0)
    ln_4, = plt.plot(epoch_list, acc_lists[3][121:], color=color[3], linewidth=2.0)
    ln_5, = plt.plot(epoch_list, acc_lists[4][121:], color=color[4], linewidth=2.0)
    plt.title('accuracy under different noise scales')
    plt.legend(handles=[ln_1, ln_2, ln_3, ln_4, ln_5], labels=labels_1)
    plt.savefig('./different_noisy_acc_with_pencil.png')
    plt.show()
    # ---------- visualize the point coordinates ----------
    '''
    for i in range(len(x_test)):
        plt.scatter(x_test[i][0], x_test[i][1], c=color[int(y_test[i])])
    plt.show()
    for i in range(len(x_train)):
        plt.scatter(x_train[i][0], x_train[i][1], c=color[y_train[i]])
    plt.show()
    '''

3. Results of the single FC layer

  • data distribution (figure)

  • loss of the backbone network (figure)

  • accuracy of the backbone network (figure)

4.Noisy data learning

  • reference paper

Probabilistic End-to-end Noise Correction for Learning with Noisy Labels

  • Backbone learning: The authors of the paper train ResNet-34 on CIFAR-10. Because of the limited resources available while doing this research at home during COVID-19, we instead generate data directly and skip the feature-extraction stage of ResNet: we generate 600 points that are automatically divided into 3 clusters and train them with a network that has only one fully connected layer. For the noisy data, we generate noise at different scales: scale one (approximately no noise), scale 120, scale 240, scale 300 and scale 400. All of it is symmetric noise, i.e. noisy points are added to every class with the same probability. Many papers find that the model cannot fit the data well with a relatively large learning rate, so we multiply the learning rate by 0.1 every 40 epochs (a minimal sketch of this schedule is given below); the effect can also be seen in the two figures above.
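
As a minimal illustration of the step decay described above (not the full training loop from the repository; decayed_lr is a hypothetical helper name, while the base rate 0.3, the 40-epoch interval and the factor 0.1 match the code):

def decayed_lr(base_lr, epoch, drop_every=40, factor=0.1):
    # multiply the learning rate by `factor` once every `drop_every` epochs
    return base_lr * (factor ** (epoch // drop_every))

# base learning rate 0.3: drops to 0.03 at epoch 40, 0.003 at epoch 80, ...
for epoch in range(120):
    lr = decayed_lr(0.3, epoch)
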

  • Pencil learning: In the DLDL paper, the authors put forward a method that updates the labels during back-propagation, and the PENCIL method is inspired by it. At the beginning we initialize each label (without knowing whether it is clean or not) as a one-hot encoding. In the forward computation we then calculate three kinds of loss.

    • Compatibility loss: For a possibly noisy original label, we multiply its one-hot encoding by a constant K, the number of classes, to initialize the label variable; this makes sure that the softmax of the label variable stays as close as possible to the original label. The compatibility loss then ties this softmaxed label distribution back to the original noisy label (the formula is given after this list).

    • Classification loss
      First of all we need the KL divergence; the classification loss is then the KL divergence between the network prediction and the label distribution (the formula is given after this list).

    • Entropy loss: an entropy regularizer on the network prediction that forces it to peak at one class (the formula is given after this list).

    • The overall PENCIL framework
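
The formula images are missing here, so the three losses and the total PENCIL objective are written out below in the standard PENCIL notation ($n$ samples, $c$ classes, network prediction $f(x;\theta)$, label distribution $y^d = \mathrm{softmax}(\tilde{y})$, noisy one-hot label $\hat{y}$, hyper-parameters $\alpha$, $\beta$):

$$ L_c = \frac{1}{n}\sum_{i=1}^{n} \mathrm{KL}\!\left(f(x_i;\theta)\,\|\,y_i^d\right) = \frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{c} f_j(x_i;\theta)\,\log\frac{f_j(x_i;\theta)}{y_{ij}^d} $$

$$ L_o = -\frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{c} \hat{y}_{ij}\,\log y_{ij}^d \qquad\qquad L_e = -\frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{c} f_j(x_i;\theta)\,\log f_j(x_i;\theta) $$

$$ L = \frac{1}{c}\,L_c + \alpha\,L_o + \frac{\beta}{c}\,L_e $$

The magnitudes of the coefficients 1/3, 0.1 and 0.8/3 in loss_sum_ in the code above appear to correspond to $1/c$, $\alpha$ and $\beta/c$ with $c = 3$, $\alpha = 0.1$, $\beta = 0.8$ (the signs in the code differ from the paper's formulation).
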

  • Update the parameters
    1) Network parameter update: nothing changes compared with a normal neural network.
    2) Label probability update: the label variable is updated by taking the gradient of the total loss with respect to the label, as sketched below.
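
A minimal TensorFlow sketch of the label probability update for one batch (illustrative names such as y_tilde and label_lr are my own; the repository stores the per-batch label tensors in Y_d and uses 200 as the label learning rate):

import tensorflow as tf

num_classes = 3
label_lr = 200.0                                   # label learning rate, as in the repository code
noisy_onehot = tf.one_hot([0, 2, 1], depth=num_classes)
y_tilde = tf.Variable(num_classes * noisy_onehot)  # initialise the label variable as K * one-hot label

logits = tf.random.normal([3, num_classes])        # stand-in for the network output of this batch
pred = tf.nn.softmax(logits)

with tf.GradientTape() as tape:
    y_d = tf.nn.softmax(y_tilde)                                                       # label distribution
    loss_c = tf.reduce_mean(tf.reduce_sum(pred * tf.math.log(pred / y_d), axis=1))     # KL(pred || y_d)
    loss_o = -tf.reduce_mean(tf.reduce_sum(noisy_onehot * tf.math.log(y_d), axis=1))   # compatibility loss
    loss = loss_c / num_classes + 0.1 * loss_o     # entropy term omitted: it does not depend on y_tilde

grad_label = tape.gradient(loss, y_tilde)          # gradient of the total loss w.r.t. the label variable
y_tilde.assign_sub(label_lr * grad_label)          # label probability update
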

5. Results with the PENCIL framework

  • loss of the PENCIL framework (figure)

  • accuracy of the PENCIL framework (figure)

  • loss of fine-tuning (figure)

  • accuracy of backbone learning (epochs 0 to 120) and fine-tuning (epochs 121 to 240) (figure)

6. Open questions

  • Asymmetric Noise:

As for asymmetric noise, following [16] the noisy labels were generated by mapping truck → automobile, bird → airplane, deer → horse and cat ↔ dog with probability r. These noise generation methods are in coincidence with confusions that often happen in the real world.

bird → airplane? These are two classes with similar visual features. If we do not add an attention mechanism and only extract features with a CNN, I believe their corresponding network parameters will be very close, so PENCIL can deal with this kind of easily confused images. But what about samples that are simply mislabelled as a class that is visually very far away? I think a model trained on this kind of asymmetric noisy data would perform worse on mislabelled samples whose classes are visually far apart. A sketch of how such asymmetric noise can be generated follows.
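
A minimal sketch of generating the asymmetric noise described in the quoted paragraph, assuming the standard CIFAR-10 class order (0 airplane, 1 automobile, 2 bird, 3 cat, 4 deer, 5 dog, 6 frog, 7 horse, 8 ship, 9 truck); add_asymmetric_noise is a hypothetical helper, not part of the repository:

import numpy as np

FLIP_MAP = {9: 1,   # truck -> automobile
            2: 0,   # bird -> airplane
            4: 7,   # deer -> horse
            3: 5,   # cat <-> dog
            5: 3}

def add_asymmetric_noise(labels, r, seed=0):
    """Flip each label to its confusable class with probability r."""
    rng = np.random.default_rng(seed)
    noisy = np.array(labels, copy=True)
    for i, y in enumerate(noisy):
        if int(y) in FLIP_MAP and rng.random() < r:
            noisy[i] = FLIP_MAP[int(y)]
    return noisy

# example: 30% asymmetric noise
clean = np.array([9, 2, 4, 3, 5, 0, 1])
print(add_asymmetric_noise(clean, r=0.3))
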

  • Gradient disappearance: When I use a single fully connected layer as the backbone network, the gradient vanishes when the dataset is too large and the learning rate is too big. I have not yet found a way to resolve this.

7. Solved questions

  1. When tf.GradientTape.gradient returns None, the tensor we differentiate with respect to has to be registered with tape.watch (or be a tf.Variable); a short example follows.
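
A small self-contained example of the watch issue (illustrative only, not repository code):

import tensorflow as tf

x = tf.constant([1.0, 2.0, 3.0])        # a plain tensor, not a tf.Variable

with tf.GradientTape() as tape:
    y = tf.reduce_sum(x * x)
print(tape.gradient(y, x))              # None: the tape did not track x

with tf.GradientTape() as tape:
    tape.watch(x)                       # explicitly tell the tape to track x
    y = tf.reduce_sum(x * x)
print(tape.gradient(y, x))              # tf.Tensor([2. 4. 6.], shape=(3,), dtype=float32)
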
