sklearn -- Perceptron

Perceptron(penalty=None, alpha=0.0001, fit_intercept=True, max_iter=None, tol=None, shuffle=True, verbose=0, eta0=1.0, n_jobs=None, random_state=0, early_stopping=False, validation_fraction=0.1, n_iter_no_change=5, class_weight=None, warm_start=False, n_iter=None)

Parameters

penalty : None, ‘l2’ or ‘l1’ or ‘elasticnet’

The penalty (aka regularization term) to be used. Defaults to None.

The regularization term: 'l2', 'l1' or 'elasticnet'; see the general notes on L1/L2 regularization for background.

alpha : float

Constant that multiplies the regularization term if regularization is used. Defaults to 0.0001
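
A minimal sketch of how penalty and alpha are passed together; the make_classification toy data below is an assumed example, not part of the original post.

from sklearn.datasets import make_classification
from sklearn.linear_model import Perceptron

# assumed toy binary problem
X, y = make_classification(n_samples=200, n_features=4, random_state=0)

# no regularization vs. an l2 penalty scaled by alpha
plain = Perceptron(penalty=None, max_iter=1000, tol=1e-3, random_state=0).fit(X, y)
l2 = Perceptron(penalty='l2', alpha=0.001, max_iter=1000, tol=1e-3, random_state=0).fit(X, y)

print(plain.coef_)   # raw perceptron weights
print(l2.coef_)      # weights shrunk by the l2 penalty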

fit_intercept : bool

Whether the intercept should be estimated or not. If False, the data is assumed to be already centered. Defaults to True.

Whether the intercept needs to be estimated.

max_iter : int, optional

The maximum number of passes over the training data (aka epochs). It only affects the behavior of the fit method, not partial_fit. Defaults to 5; from version 0.21, or whenever tol is not None, it defaults to 1000.

tol : float or None, optional

The stopping criterion. If it is not None, the iterations will stop when (loss > previous_loss - tol). Defaults to None; from version 0.21 it defaults to 1e-3.

Tolerance used to decide when to stop iterating.
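
A small sketch of how max_iter and tol affect the number of epochs actually run; iris is used only as convenient demo data, and the exact n_iter_ values depend on the scikit-learn version.

from sklearn.datasets import load_iris
from sklearn.linear_model import Perceptron

X, y = load_iris(return_X_y=True)

# with a tolerance, the loss-based criterion usually stops training early
clf = Perceptron(max_iter=1000, tol=1e-3, random_state=0).fit(X, y)
print(clf.n_iter_)   # typically far fewer than 1000 epochs

# tol=None disables that criterion, so every epoch up to max_iter is run
clf = Perceptron(max_iter=20, tol=None, random_state=0).fit(X, y)
print(clf.n_iter_)   # expected 20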

shuffle : bool, optional, default True

Whether or not the training data should be shuffled after each epoch.

verbose : integer, optional

The verbosity level, i.e. how much progress information is printed during training.

eta0 : double

Constant by which the updates are multiplied. Defaults to 1.
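
Since each update on a mistake is eta0 * y * x, scaling eta0 with no regularization just scales the learned weights and does not change the decision boundary; a hedged sketch with a fixed number of epochs and a fixed seed so both runs see the same sequence of mistakes.

import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import Perceptron

X, y = load_iris(return_X_y=True)

# same seed, same fixed number of epochs, only eta0 differs
w1 = Perceptron(eta0=1.0, max_iter=50, tol=None, random_state=0).fit(X, y).coef_
w2 = Perceptron(eta0=2.0, max_iter=50, tol=None, random_state=0).fit(X, y).coef_
print(np.allclose(w2, 2 * w1))   # expected True: every update is simply scaled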

n_jobs : int or None, optional (default=None)

The number of CPUs to use to do the OVA (One Versus All, for multi-class problems) computation. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.

random_state : int, RandomState instance or None, optional, default None

The seed of the pseudo random number generator to use when shuffling the data. If int, random_state is the seed used by the random number generator; if RandomState instance, random_state is the random number generator; if None, the random number generator is the RandomState instance used by np.random. In short, the pseudo-random seed used for shuffling.
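
Because shuffle reorders the training data each epoch, fixing random_state makes repeated fits reproducible; a minimal sketch on the iris data.

from sklearn.datasets import load_iris
from sklearn.linear_model import Perceptron

X, y = load_iris(return_X_y=True)

# the same seed gives the same shuffling order, hence identical weights
a = Perceptron(shuffle=True, random_state=42, max_iter=1000, tol=1e-3).fit(X, y)
b = Perceptron(shuffle=True, random_state=42, max_iter=1000, tol=1e-3).fit(X, y)
print((a.coef_ == b.coef_).all())   # True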

early_stopping : bool, default=False

Whether to use early stopping to terminate training when the validation score is not improving. If set to True, it will automatically set aside a fraction of training data as validation and terminate training when the validation score is not improving by at least tol for n_iter_no_change consecutive epochs. New in version 0.20.

validation_fraction : float, default=0.1

The proportion of training data to set aside as validation set for early stopping. Must be between 0 and 1. Only used if early_stopping is True. New in version 0.20. In other words, the fraction of the training set held out for validation.

n_iter_no_change : int, default=5

Number of iterations with no improvement to wait before early stopping. New in version 0.20.
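
Putting early_stopping, validation_fraction and n_iter_no_change together; the make_classification data is only an assumed example.

from sklearn.datasets import make_classification
from sklearn.linear_model import Perceptron

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# hold out 10% of the training data as a validation set and stop once its
# score has not improved by at least tol for 5 consecutive epochs
clf = Perceptron(early_stopping=True, validation_fraction=0.1,
                 n_iter_no_change=5, max_iter=1000, tol=1e-3, random_state=0)
clf.fit(X, y)
print(clf.n_iter_)   # usually far fewer epochs than max_iter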

class_weight : dict, {class_label: weight} or “balanced” or None, optional

Preset for the class_weight fit parameter.

Weights associated with classes. If not given, all classes are supposed to have weight one.

The “balanced” mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as n_samples / (n_classes * np.bincount(y))
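
The “balanced” formula can be checked by hand; compute_class_weight is a scikit-learn helper not mentioned in the original post, used here only to confirm the numbers, and the imbalanced labels are an assumed toy example.

import numpy as np
from sklearn.linear_model import Perceptron
from sklearn.utils.class_weight import compute_class_weight

# assumed imbalanced toy labels: 90 samples of class 0, 10 of class 1
y = np.array([0] * 90 + [1] * 10)

# "balanced" weights = n_samples / (n_classes * np.bincount(y))
manual = len(y) / (len(np.unique(y)) * np.bincount(y))
print(manual)                                                        # [0.5555... 5.0]
print(compute_class_weight('balanced', classes=np.unique(y), y=y))   # same values

clf = Perceptron(class_weight='balanced')   # or an explicit dict such as {0: 0.56, 1: 5.0}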

warm_start : bool, optional

When set to True, reuse the solution of the previous call to fit as initialization, otherwise, just erase the previous solution. See the Glossary.
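
A short sketch of what warm_start changes between two calls to fit; tol=None is used so both calls run a fixed number of epochs.

from sklearn.datasets import load_iris
from sklearn.linear_model import Perceptron

X, y = load_iris(return_X_y=True)

# warm_start=True keeps the coefficients between calls to fit, so the second
# call continues from the previous solution instead of restarting from zeros
clf = Perceptron(warm_start=True, max_iter=5, tol=None, random_state=0)
clf.fit(X, y)
first = clf.coef_.copy()
clf.fit(X, y)
print((clf.coef_ == first).all())   # usually False: training continued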

n_iter : int, optional

The number of passes over the training data (aka epochs). Defaults to None. Deprecated, will be removed in 0.21.

Attributes

coef_ : array, shape = [1, n_features] if n_classes == 2 else [n_classes, n_features]

Weights assigned to the features (the weight vector w).

intercept_ : array, shape = [1] if n_classes == 2 else [n_classes]

Constants in the decision function (the intercept b).

n_iter_ : int

The actual number of iterations to reach the stopping criterion. For multiclass fits, it is the maximum over every binary fit.
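
The shapes are easiest to see on a multiclass fit, for example with the 3-class, 4-feature iris data.

from sklearn.datasets import load_iris
from sklearn.linear_model import Perceptron

X, y = load_iris(return_X_y=True)   # 3 classes, 4 features

clf = Perceptron(max_iter=1000, tol=1e-3, random_state=0).fit(X, y)
print(clf.coef_.shape)       # (3, 4): one weight row per class (one-vs-all)
print(clf.intercept_.shape)  # (3,): one intercept per class
print(clf.n_iter_)           # epochs actually run (max over the binary fits)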

Code example

import pandas as pd
import numpy as np
from sklearn.datasets import load_iris
import matplotlib.pyplot as plt
from sklearn.linear_model import Perceptron
iris = load_iris()
df = pd.DataFrame(iris.data, columns=iris.feature_names)
df['label'] = iris.target
df.columns=['sepal length', 'sepal width', 'petal length', 'petal width', 'label']
# print(df.label.value_counts())
data = np.array(df.iloc[:100, [0, 1, -1]])
X, y = data[:, :-1], data[:, -1]
y = np.array([1 if i == 1 else -1 for i in y])
# hand-written perceptron from the companion tutorial; its Model class is not
# defined in this snippet, so the calls are kept only as commented-out reference
# perceptron = Model()
# perceptron.fit(X, y)
# print(perceptron.w)
# print(perceptron.b)

# the same data fitted with sklearn's Perceptron
clf = Perceptron(fit_intercept=False, max_iter=1000, shuffle=False)
clf.fit(X, y)
print(clf.coef_)
print(clf.intercept_)
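
As a follow-up sketch, the fitted model's predictions can be reproduced from coef_ and intercept_ by taking the sign of w·x + b; this reuses clf, X and y from the snippet above and is not part of the original post.

import numpy as np

# predictions agree with the sign of the decision function w·x + b
pred = clf.predict(X)
manual = np.where(X @ clf.coef_.ravel() + clf.intercept_ > 0, 1, -1)
print((pred == manual).all())   # expected True
print(clf.score(X, y))          # training accuracy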
