import pandas as pd
import numpy as np
path = r"C:\Users\yongx\Desktop\data"
Reshaping Between Long and Wide Tables
A table is "long" when the values of a feature are stored as a column of data, and "wide" when those values appear as column names. The two layouts are informationally equivalent: both contain the same height statistics.
# Long table
pd.DataFrame({'Gender':['F','F','M','M'],'Height' : [163,160,175,180]})
|   | Gender | Height |
|---|--------|--------|
| 0 | F      | 163    |
| 1 | F      | 160    |
| 2 | M      | 175    |
| 3 | M      | 180    |
# Wide table
pd.DataFrame({'Height : F':[163,160],'Height : M':[175,180]})
|   | Height : F | Height : M |
|---|------------|------------|
| 0 | 163        | 175        |
| 1 | 160        | 180        |
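The claimed equivalence can be checked directly; an illustrative sketch (not from the original text) rebuilds the wide table's columns from the long table:

```python
import pandas as pd

long_df = pd.DataFrame({'Gender': ['F', 'F', 'M', 'M'],
                        'Height': [163, 160, 175, 180]})

# Collecting the heights per gender reproduces the columns of the wide
# table, showing the two layouts carry identical information.
wide_cols = {g: s.tolist() for g, s in long_df.groupby('Gender')['Height']}
print(wide_cols)  # {'F': [163, 160], 'M': [175, 180]}
```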
pivot: a function for reshaping a long table into a wide one
A long-to-wide reshape has three ingredients: the row index of the result, the column to be moved into the column index, and the column supplying the cell values — the index, columns, and values arguments. In the resulting table, the column labels are the unique values of the columns column, the row labels are the unique values of the index column, and the cells are taken from the values column. When using pivot, mind the uniqueness requirement: each row combination of the index and columns columns must be unique.
df = pd.DataFrame({'Class':[1,1,2,2],'Name':['San Zhang','San Zhang','Si Li','Si Li'],'Subject':['Chinese','Math','Chinese','Math'],'Grade':[80,75,90,85]})
df.pivot(index= 'Name',columns = 'Subject',values = 'Grade')
| Name \ Subject | Chinese | Math |
|----------------|---------|------|
| San Zhang      | 80      | 75   |
| Si Li          | 90      | 85   |
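What the uniqueness requirement means in practice can be sketched quickly (an illustrative example, not from the original text): duplicating a (Name, Subject) pair makes pivot raise a ValueError.

```python
import pandas as pd

# Duplicate (Name, Subject) pairs violate pivot's uniqueness requirement.
dup = pd.DataFrame({'Name': ['San Zhang', 'San Zhang'],
                    'Subject': ['Chinese', 'Chinese'],
                    'Grade': [80, 90]})
try:
    dup.pivot(index='Name', columns='Subject', values='Grade')
except ValueError as e:
    msg = str(e)
print(msg)
```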
# Since pandas.__version__ == 1.1.0, pivot also accepts lists for all three parameters, returning a multi-level index
df = pd.DataFrame({'Class':[1, 1, 2, 2, 1, 1, 2, 2],'Name':['San Zhang', 'San Zhang', 'Si Li', 'Si Li','San Zhang', 'San Zhang', 'Si Li', 'Si Li'],'Examination': ['Mid', 'Final', 'Mid', 'Final','Mid', 'Final', 'Mid', 'Final'],'Subject':['Chinese', 'Chinese', 'Chinese', 'Chinese','Math', 'Math', 'Math', 'Math'],'Grade':[80, 75, 85, 65, 90, 85, 92, 88],'rank':[10, 15, 21, 15, 20, 7, 6, 2]})df.pivot(index = ['Class','Name'],columns = ['Subject', 'Examination'],values = ['Grade', 'rank'])
| Class | Name      | Grade Chinese Mid | Grade Chinese Final | Grade Math Mid | Grade Math Final | rank Chinese Mid | rank Chinese Final | rank Math Mid | rank Math Final |
|-------|-----------|-------------------|---------------------|----------------|------------------|------------------|--------------------|---------------|-----------------|
| 1     | San Zhang | 80                | 75                  | 90             | 85               | 10               | 15                 | 20            | 7               |
| 2     | Si Li     | 85                | 65                  | 92             | 88               | 21               | 15                 | 6             | 2               |

(column labels show the three levels values / Subject / Examination flattened into one row)
pivot_table
pivot requires the uniqueness condition to hold. When it does not, you can first collapse the duplicates into a single value with an aggregation (agg), or use pivot_table, which aggregates directly.
df = pd.DataFrame({'Name':['San Zhang', 'San Zhang','San Zhang', 'San Zhang','Si Li', 'Si Li', 'Si Li', 'Si Li'],'Subject':['Chinese', 'Chinese', 'Math', 'Math','Chinese', 'Chinese', 'Math', 'Math'],'Grade':[80, 90, 100, 90, 70, 80, 85, 95]})
df.pivot_table(index = 'Name',columns = 'Subject',values = 'Grade',aggfunc = 'mean' )
| Name \ Subject | Chinese | Math |
|----------------|---------|------|
| San Zhang      | 85      | 95   |
| Si Li          | 75      | 90   |
# aggfunc accepts any valid aggregation string, an anonymous function, or any function mapping a sequence to a scalar
# pivot_table can also produce marginal summaries with margins=True,
# here the per-subject means (Chinese, Math) and the per-student means (San Zhang, Si Li)
df.pivot_table(index = 'Name',columns = 'Subject',values = 'Grade',aggfunc = 'mean',margins = True)
| Name \ Subject | Chinese | Math | All   |
|----------------|---------|------|-------|
| San Zhang      | 85      | 95.0 | 90.00 |
| Si Li          | 75      | 90.0 | 82.50 |
| All            | 80      | 92.5 | 86.25 |
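Beyond 'mean', aggfunc can be an anonymous function; an illustrative sketch (not from the original text) using the same scores, aggregating by the spread between each student's best and worst grade:

```python
import pandas as pd

df_scores = pd.DataFrame({'Name': ['San Zhang', 'San Zhang', 'San Zhang', 'San Zhang',
                                   'Si Li', 'Si Li', 'Si Li', 'Si Li'],
                          'Subject': ['Chinese', 'Chinese', 'Math', 'Math',
                                      'Chinese', 'Chinese', 'Math', 'Math'],
                          'Grade': [80, 90, 100, 90, 70, 80, 85, 95]})

# The lambda receives the Series of duplicated grades for each
# (Name, Subject) cell and must return a scalar.
spread = df_scores.pivot_table(index='Name', columns='Subject', values='Grade',
                               aggfunc=lambda s: s.max() - s.min())
print(spread)
```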
melt
melt is the inverse of pivot: it reshapes a wide table back into a long one.
df = pd.DataFrame({'Class':[1,2],'Name':['San Zhang', 'Si Li'],'Chinese':[80, 90],'Math':[80, 75]})
df
|   | Class | Name      | Chinese | Math |
|---|-------|-----------|---------|------|
| 0 | 1     | San Zhang | 80      | 80   |
| 1 | 2     | Si Li     | 90      | 75   |
df_melt = df.melt(id_vars = ['Class', 'Name'],value_vars = ['Chinese', 'Math'],var_name = 'Subject',value_name = 'Grade')
df_melt
|   | Class | Name      | Subject | Grade |
|---|-------|-----------|---------|-------|
| 0 | 1     | San Zhang | Chinese | 80    |
| 1 | 2     | Si Li     | Chinese | 90    |
| 2 | 1     | San Zhang | Math    | 80    |
| 3 | 2     | Si Li     | Math    | 75    |
# Use pivot to restore the original table
df_melt.pivot(index = ['Class','Name'],columns = 'Subject',values = 'Grade').reset_index().rename_axis(columns={'Subject':''})
|   | Class | Name      | Chinese | Math |
|---|-------|-----------|---------|------|
| 0 | 1     | San Zhang | 80      | 80   |
| 1 | 2     | Si Li     | 90      | 75   |
wide_to_long
With melt, the column labels being compressed can only carry a single level of meaning. When the column names cross two categories that need to be unpacked separately (such as Chinese_Mid), use wide_to_long.
df = pd.DataFrame({'Class':[1,2],'Name':['San Zhang', 'Si Li'],'Chinese_Mid':[80, 75], 'Math_Mid':[90, 85],'Chinese_Final':[80, 75], 'Math_Final':[90, 85]})
df
|   | Class | Name      | Chinese_Mid | Math_Mid | Chinese_Final | Math_Final |
|---|-------|-----------|-------------|----------|---------------|------------|
| 0 | 1     | San Zhang | 80          | 90       | 80            | 90         |
| 1 | 2     | Si Li     | 75          | 85       | 75            | 85         |
'''
stubnames : the stem names that remain as columns after reshaping
i         : the id columns that stay fixed
j         : the name of the variable compressed into the rows
sep       : the separator between stem and suffix
suffix    : a regex matching the suffix
'''
pd.wide_to_long(df, stubnames = ['Chinese','Math'],i = ['Class','Name'],j = 'Examination',sep = '_',suffix = '.+')
| Class | Name      | Examination | Chinese | Math |
|-------|-----------|-------------|---------|------|
| 1     | San Zhang | Mid         | 80      | 90   |
| 1     | San Zhang | Final       | 80      | 90   |
| 2     | Si Li     | Mid         | 75      | 85   |
| 2     | Si Li     | Final       | 75      | 85   |
df = pd.DataFrame({'Class':[1, 1, 2, 2, 1, 1, 2, 2],'Name':['San Zhang', 'San Zhang', 'Si Li', 'Si Li','San Zhang', 'San Zhang', 'Si Li', 'Si Li'],'Examination': ['Mid', 'Final', 'Mid', 'Final','Mid', 'Final', 'Mid', 'Final'],'Subject':['Chinese', 'Chinese', 'Chinese', 'Chinese','Math', 'Math', 'Math', 'Math'],'Grade':[80, 75, 85, 65, 90, 85, 92, 88],'rank':[10, 15, 21, 15, 20, 7, 6, 2]})res = df.pivot(index = ['Class','Name'],columns = ['Subject', 'Examination'],values = ['Grade', 'rank']).copy()res.columns = res.columns.map(lambda x:'_'.join(x))
res = res.reset_index()
res = pd.wide_to_long(res,stubnames = ['Grade', 'rank'],i = ['Class','Name'],j = 'Subject_Examination',sep = '_',suffix = '.+')
res = res.reset_index()
res[['Subject', 'Examination']] = res['Subject_Examination'].str.split('_', expand = True)
res = res[['Class', 'Name', 'Examination', 'Subject','Grade','rank']].sort_values('Subject')
res = res.reset_index(drop = True)
res
|   | Class | Name      | Examination | Subject | Grade | rank |
|---|-------|-----------|-------------|---------|-------|------|
| 0 | 1     | San Zhang | Mid         | Chinese | 80    | 10   |
| 1 | 1     | San Zhang | Final       | Chinese | 75    | 15   |
| 2 | 2     | Si Li     | Mid         | Chinese | 85    | 21   |
| 3 | 2     | Si Li     | Final       | Chinese | 65    | 15   |
| 4 | 1     | San Zhang | Mid         | Math    | 90    | 20   |
| 5 | 1     | San Zhang | Final       | Math    | 85    | 7    |
| 6 | 2     | Si Li     | Mid         | Math    | 92    | 6    |
| 7 | 2     | Si Li     | Final       | Math    | 88    | 2    |
Index Reshaping
stack and unstack
Unlike the previous functions, which move data between one or more columns and the column index, stack and unstack convert between the row index and the column index.
unstack moves a row-index level into the column index. Its main parameter is the level number(s) to move; by default the innermost row level is moved to the innermost position of the column index, and several levels can be moved at once. Like pivot, unstack has a uniqueness requirement: the combination of the row-index levels being moved and the levels being kept must be unique.
stack does the opposite, pushing a column-index level down into the row index, with an entirely analogous interface.
df = pd.DataFrame(np.ones((4, 2)),index=pd.Index([('A', 'cat', 'big'), ('A', 'dog', 'small'),('B', 'cat', 'big'), ('B', 'dog', 'small')]),columns=['col_1', 'col_2'])
df
|   |     |       | col_1 | col_2 |
|---|-----|-------|-------|-------|
| A | cat | big   | 1.0   | 1.0   |
| A | dog | small | 1.0   | 1.0   |
| B | cat | big   | 1.0   | 1.0   |
| B | dog | small | 1.0   | 1.0   |
df.unstack()
|   |     | col_1 big | col_1 small | col_2 big | col_2 small |
|---|-----|-----------|-------------|-----------|-------------|
| A | cat | 1.0       | NaN         | 1.0       | NaN         |
| A | dog | NaN       | 1.0         | NaN       | 1.0         |
| B | cat | 1.0       | NaN         | 1.0       | NaN         |
| B | dog | NaN       | 1.0         | NaN       | 1.0         |
df.unstack(2)
|   |     | col_1 big | col_1 small | col_2 big | col_2 small |
|---|-----|-----------|-------------|-----------|-------------|
| A | cat | 1.0       | NaN         | 1.0       | NaN         |
| A | dog | NaN       | 1.0         | NaN       | 1.0         |
| B | cat | 1.0       | NaN         | 1.0       | NaN         |
| B | dog | NaN       | 1.0         | NaN       | 1.0         |
df.unstack([0,2])
|     | col_1 A big | col_1 A small | col_1 B big | col_1 B small | col_2 A big | col_2 A small | col_2 B big | col_2 B small |
|-----|-------------|---------------|-------------|---------------|-------------|---------------|-------------|---------------|
| cat | 1.0         | NaN           | 1.0         | NaN           | 1.0         | NaN           | 1.0         | NaN           |
| dog | NaN         | 1.0           | NaN         | 1.0           | NaN         | 1.0           | NaN         | 1.0           |
my_index = df.index.to_list()
my_index[1] = my_index[0]
df.index = pd.Index(my_index)
df
|   |     |       | col_1 | col_2 |
|---|-----|-------|-------|-------|
| A | cat | big   | 1.0   | 1.0   |
| A | cat | big   | 1.0   | 1.0   |
| B | cat | big   | 1.0   | 1.0   |
| B | dog | small | 1.0   | 1.0   |
# The index no longer satisfies uniqueness, so unstack cannot proceed:
try:
    df.unstack()
except Exception as e:
    Err_Msg = e
Err_Msg
ValueError('Index contains duplicate entries, cannot reshape')
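One illustrative way to restore uniqueness (not from the original text) is to drop the duplicated index entries before unstacking:

```python
import numpy as np
import pandas as pd

# Rebuild the frame with a duplicated first index entry, as above.
idx = pd.Index([('A', 'cat', 'big'), ('A', 'cat', 'big'),
                ('B', 'cat', 'big'), ('B', 'dog', 'small')])
df_dup = pd.DataFrame(np.ones((4, 2)), index=idx, columns=['col_1', 'col_2'])

# Keep only the first occurrence of each index label, restoring uniqueness,
# after which unstack works: 3 remaining rows, 2 columns x {big, small}.
dedup = df_dup[~df_dup.index.duplicated()]
result = dedup.unstack()
print(result.shape)  # (3, 4)
```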
stack
stack pushes a column-index level into the row index; its usage mirrors unstack.
df = pd.DataFrame(np.ones((4,2)),index = pd.Index([('A', 'cat', 'big'),('A', 'dog', 'small'),('B', 'cat', 'big'),('B', 'dog', 'small')]),columns=['index_1', 'index_2']).T
df
|         | A cat big | A dog small | B cat big | B dog small |
|---------|-----------|-------------|-----------|-------------|
| index_1 | 1.0       | 1.0         | 1.0       | 1.0         |
| index_2 | 1.0       | 1.0         | 1.0       | 1.0         |
df.stack()
|         |       | A cat | A dog | B cat | B dog |
|---------|-------|-------|-------|-------|-------|
| index_1 | big   | 1.0   | NaN   | 1.0   | NaN   |
| index_1 | small | NaN   | 1.0   | NaN   | 1.0   |
| index_2 | big   | 1.0   | NaN   | 1.0   | NaN   |
| index_2 | small | NaN   | 1.0   | NaN   | 1.0   |
|
df.stack([1,2])
|         |     |       | A   | B   |
|---------|-----|-------|-----|-----|
| index_1 | cat | big   | 1.0 | 1.0 |
| index_1 | dog | small | 1.0 | 1.0 |
| index_2 | cat | big   | 1.0 | 1.0 |
| index_2 | dog | small | 1.0 | 1.0 |
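As a quick check of the symmetry between the two operations, an illustrative sketch (not from the original text): unstacking a level and then stacking it back restores the original frame.

```python
import numpy as np
import pandas as pd

df_rt = pd.DataFrame(np.arange(4).reshape(2, 2),
                     index=pd.Index([('A', 'x'), ('A', 'y')]),
                     columns=['c1', 'c2'])

# Moving the inner row level to the columns and then stacking it back
# yields the original frame: on this complete index, the two are inverses.
round_trip = df_rt.unstack().stack()
print(round_trip.equals(df_rt))  # True
```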
The Relationship Between Aggregation and Reshaping
Apart from pivot_table, which aggregates, none of these functions change the number of values before and after reshaping: they only change how the values are presented. Grouped aggregation, by contrast, builds new row and column indexes and collapses several values into one, so the number of values does change.
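The distinction can be verified directly; a minimal sketch (illustrative, reusing the earlier grades table):

```python
import pandas as pd

grades = pd.DataFrame({'Class': [1, 1, 2, 2],
                       'Name': ['San Zhang', 'San Zhang', 'Si Li', 'Si Li'],
                       'Subject': ['Chinese', 'Math', 'Chinese', 'Math'],
                       'Grade': [80, 75, 90, 85]})

# pivot only rearranges the 4 grade values; none are created or destroyed.
wide = grades.pivot(index='Name', columns='Subject', values='Grade')
print(grades['Grade'].size, wide.size)  # 4 4

# Grouped aggregation collapses the 4 values down to 2.
agg = grades.groupby('Name')['Grade'].mean()
print(agg.size)  # 2
```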
Other Reshaping Functions
crosstab
crosstab is generally not recommended: everything it can do is also achievable with pivot_table, which is faster. By default crosstab counts how often each combination of elements occurs, i.e. a count operation. Note that crosstab is passed concrete sequences, while pivot_table is passed the column names of the calling table; passing column names to crosstab raises an error. Besides the default count, every aggregation string and every scalar-returning custom function can be used.
df = pd.read_csv(path + '\\learn_pandas.csv')
pd.crosstab(index = df.School, columns = df.Transfer)
| School \ Transfer             | N  | Y |
|-------------------------------|----|---|
| Fudan University              | 38 | 1 |
| Peking University             | 28 | 2 |
| Shanghai Jiao Tong University | 53 | 0 |
| Tsinghua University           | 62 | 4 |
# Equivalent to the following:
pd.crosstab(index = df.School, columns = df.Transfer,values = [0] * df.shape[0], aggfunc = 'count')
| School \ Transfer             | N    | Y   |
|-------------------------------|------|-----|
| Fudan University              | 38.0 | 1.0 |
| Peking University             | 28.0 | 2.0 |
| Shanghai Jiao Tong University | 53.0 | NaN |
| Tsinghua University           | 62.0 | 4.0 |
# pivot_table can express the same operation:
df.pivot_table(index = 'School',columns = 'Transfer',values = 'Name',aggfunc = 'count')
| School \ Transfer             | N    | Y   |
|-------------------------------|------|-----|
| Fudan University              | 38.0 | 1.0 |
| Peking University             | 28.0 | 2.0 |
| Shanghai Jiao Tong University | 53.0 | NaN |
| Tsinghua University           | 62.0 | 4.0 |
# Mean height for each combination:
pd.crosstab(index = df.School, columns = df.Transfer,values = df.Height, aggfunc = 'mean')
| School \ Transfer             | N          | Y      |
|-------------------------------|------------|--------|
| Fudan University              | 162.043750 | 177.20 |
| Peking University             | 163.429630 | 162.40 |
| Shanghai Jiao Tong University | 163.953846 | NaN    |
| Tsinghua University           | 163.253571 | 164.55 |
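crosstab also has a normalize parameter for turning counts into proportions; a small sketch (illustrative, with made-up data):

```python
import pandas as pd

transfers = pd.DataFrame({'School': ['A', 'A', 'B', 'B'],
                          'Transfer': ['N', 'Y', 'N', 'N']})

# normalize='index' divides each row of counts by its row total,
# giving the per-school proportion of transfers.
tab = pd.crosstab(transfers.School, transfers.Transfer, normalize='index')
print(tab)
```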
explode
explode expands the elements of a column vertically; the cells being expanded must hold one of list, tuple, Series, or np.ndarray (as the example shows, other values pass through unchanged).
df_ex = pd.DataFrame({'A': [[1, 2], 'my_str', {1, 2}, pd.Series([3, 4])], 'B': 1})
df_ex.explode('A')
|   | A      | B |
|---|--------|---|
| 0 | 1      | 1 |
| 0 | 2      | 1 |
| 1 | my_str | 1 |
| 2 | {1, 2} | 1 |
| 3 | 3      | 1 |
| 3 | 4      | 1 |
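explode also accepts ignore_index=True (available since pandas 1.1) to renumber the result instead of repeating the original row labels; a short sketch with hypothetical data:

```python
import pandas as pd

df_ex = pd.DataFrame({'A': [[1, 2], [3]], 'B': ['x', 'y']})

# ignore_index=True relabels the rows 0..n-1 rather than keeping the
# duplicated source labels seen in the table above.
res = df_ex.explode('A', ignore_index=True)
print(res.index.tolist())  # [0, 1, 2]
```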
get_dummies
get_dummies converts a categorical feature into indicator variables — one-hot features, commonly used in feature construction.
pd.get_dummies(df.Grade).head()
|   | Freshman | Junior | Senior | Sophomore |
|---|----------|--------|--------|-----------|
| 0 | 1        | 0      | 0      | 0         |
| 1 | 1        | 0      | 0      | 0         |
| 2 | 0        | 0      | 1      | 0         |
| 3 | 0        | 0      | 0      | 1         |
| 4 | 0        | 0      | 0      | 1         |
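get_dummies can also prefix the indicator columns so their origin stays visible in a larger feature matrix; a small sketch with hypothetical data:

```python
import pandas as pd

s = pd.Series(['Freshman', 'Senior', 'Freshman'])

# prefix='Grade' names the columns Grade_<category> instead of the
# bare category strings.
dummies = pd.get_dummies(s, prefix='Grade')
print(dummies.columns.tolist())  # ['Grade_Freshman', 'Grade_Senior']
```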
Exercises
df = pd.read_csv(path + '\\drugs.csv').sort_values(['State','COUNTY','SubstanceName'],ignore_index=True)
df.head(5)
|   | YYYY | State | COUNTY | SubstanceName | DrugReports |
|---|------|-------|--------|---------------|-------------|
| 0 | 2011 | KY    | ADAIR  | Buprenorphine | 3           |
| 1 | 2012 | KY    | ADAIR  | Buprenorphine | 5           |
| 2 | 2013 | KY    | ADAIR  | Buprenorphine | 4           |
| 3 | 2014 | KY    | ADAIR  | Buprenorphine | 27          |
| 4 | 2015 | KY    | ADAIR  | Buprenorphine | 5           |
df_pivot = df.pivot(index=['State', 'COUNTY', 'SubstanceName'],columns='YYYY',values='DrugReports').reset_index()
df_pivot.columns.name = ''
df_pivot.head(3)
|   | State | COUNTY | SubstanceName | 2010 | 2011 | 2012 | 2013 | 2014 | 2015 | 2016 | 2017 |
|---|-------|--------|---------------|------|------|------|------|------|------|------|------|
| 0 | KY    | ADAIR  | Buprenorphine | NaN  | 3.0  | 5.0  | 4.0  | 27.0 | 5.0  | 7.0  | 10.0 |
| 1 | KY    | ADAIR  | Codeine       | NaN  | NaN  | 1.0  | NaN  | NaN  | NaN  | NaN  | 1.0  |
| 2 | KY    | ADAIR  | Fentanyl      | NaN  | NaN  | 1.0  | NaN  | NaN  | NaN  | NaN  | NaN  |
df_melt = df_pivot.melt(id_vars=['State', 'COUNTY', 'SubstanceName'],value_vars=[i for i in range(2010, 2018)],var_name='YYYY',value_name='DrugReports').dropna(subset = ['DrugReports']).reset_index(drop = True)
df_melt[['YYYY','DrugReports']] = df_melt[['YYYY','DrugReports']].astype(int)
df_melt = df_melt[['YYYY','State','COUNTY','SubstanceName','DrugReports']]
df_melt.head(3)
|   | YYYY | State | COUNTY | SubstanceName | DrugReports |
|---|------|-------|--------|---------------|-------------|
| 0 | 2010 | KY    | ADAIR  | Hydrocodone   | 6           |
| 1 | 2010 | KY    | ADAIR  | Methadone     | 1           |
| 2 | 2010 | KY    | ALLEN  | Hydrocodone   | 10          |
df.sort_values(by=['YYYY','State','COUNTY','SubstanceName']).head(3)
|    | YYYY | State | COUNTY | SubstanceName | DrugReports |
|----|------|-------|--------|---------------|-------------|
| 14 | 2010 | KY    | ADAIR  | Hydrocodone   | 6           |
| 24 | 2010 | KY    | ADAIR  | Methadone     | 1           |
| 44 | 2010 | KY    | ALLEN  | Hydrocodone   | 10          |
df.pivot_table(values = 'DrugReports',index='State',columns='YYYY',aggfunc='sum').head(3)
| State \ YYYY | 2010  | 2011  | 2012  | 2013  | 2014  | 2015  | 2016  | 2017  |
|--------------|-------|-------|-------|-------|-------|-------|-------|-------|
| KY           | 10453 | 10289 | 10722 | 11148 | 11081 | 9865  | 9093  | 9394  |
| OH           | 19707 | 20330 | 23145 | 26846 | 30860 | 37127 | 42470 | 46104 |
| PA           | 19814 | 19987 | 19959 | 20409 | 24904 | 25651 | 26164 | 27894 |
df.groupby(['State','YYYY'])['DrugReports'].agg('sum').unstack(1).head(3)
| State \ YYYY | 2010  | 2011  | 2012  | 2013  | 2014  | 2015  | 2016  | 2017  |
|--------------|-------|-------|-------|-------|-------|-------|-------|-------|
| KY           | 10453 | 10289 | 10722 | 11148 | 11081 | 9865  | 9093  | 9394  |
| OH           | 19707 | 20330 | 23145 | 26846 | 30860 | 37127 | 42470 | 46104 |
| PA           | 19814 | 19987 | 19959 | 20409 | 24904 | 25651 | 26164 | 27894 |
df.groupby(['State','YYYY'])['DrugReports'].sum().to_frame().unstack(0).droplevel(0, axis = 1).T
| State \ YYYY | 2010  | 2011  | 2012  | 2013  | 2014  | 2015  | 2016  | 2017  |
|--------------|-------|-------|-------|-------|-------|-------|-------|-------|
| KY           | 10453 | 10289 | 10722 | 11148 | 11081 | 9865  | 9093  | 9394  |
| OH           | 19707 | 20330 | 23145 | 26846 | 30860 | 37127 | 42470 | 46104 |
| PA           | 19814 | 19987 | 19959 | 20409 | 24904 | 25651 | 26164 | 27894 |
| VA           | 8685  | 6749  | 7831  | 11675 | 9037  | 8810  | 10195 | 10448 |
| WV           | 2890  | 3271  | 3376  | 4046  | 3280  | 2571  | 2548  | 1614  |
df = pd.DataFrame({'Class':[1,2],'Name':['San Zhang','Si Li'],'Chinese':[80,90],'Math':[80,75]})
df
|   | Class | Name      | Chinese | Math |
|---|-------|-----------|---------|------|
| 0 | 1     | San Zhang | 80      | 80   |
| 1 | 2     | Si Li     | 90      | 75   |
df.melt(id_vars=['Class', 'Name'], value_vars = ['Chinese','Math'],var_name = 'Subject', value_name = 'Grade')
|   | Class | Name      | Subject | Grade |
|---|-------|-----------|---------|-------|
| 0 | 1     | San Zhang | Chinese | 80    |
| 1 | 2     | Si Li     | Chinese | 90    |
| 2 | 1     | San Zhang | Math    | 80    |
| 3 | 2     | Si Li     | Math    | 75    |
df = df.rename(columns={'Chinese': 'pre_Chinese', 'Math': 'pre_Math'})
df
|   | Class | Name      | pre_Chinese | pre_Math |
|---|-------|-----------|-------------|----------|
| 0 | 1     | San Zhang | 80          | 80       |
| 1 | 2     | Si Li     | 90          | 75       |
#参考答案df = df.rename(columns={'Chinese': 'pre_Chinese', 'Math': 'pre_Math'})
pd.wide_to_long(df,stubnames=['pre'],i=['Class', 'Name'],j='Subject',sep='_',suffix='.+').reset_index().rename(columns = {'pre':'Grade'})
|   | Class | Name      | Subject | Grade |
|---|-------|-----------|---------|-------|
| 0 | 1     | San Zhang | Chinese | 80    |
| 1 | 1     | San Zhang | Math    | 80    |
| 2 | 2     | Si Li     | Chinese | 90    |
| 3 | 2     | Si Li     | Math    | 75    |