ML/NB: Classification prediction and evaluation with the Naive Bayes (NB) algorithm (CountVectorizer/TfidfVectorizer + stop-word removal)

一个处女座的程序猿, published 2019-03-03 12:23:17

Contents

Output

Design approach

Core code


Output

Design approach

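The design described in the title is: vectorize the text with CountVectorizer or TfidfVectorizer (removing stop words), train a multinomial Naive Bayes classifier, then predict and evaluate. The post's original screenshots did not survive, so the following is a minimal sketch of that pipeline using an invented toy corpus in place of the post's dataset:

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical toy corpus standing in for the post's dataset
train_texts = [
    "the match was a great win for the team",
    "the team lost the final match badly",
    "new budget cuts announced by the government",
    "the government passed a new tax law",
]
train_labels = ["sports", "sports", "politics", "politics"]

# Pipeline 1: raw token counts, with English stop words removed
count_nb = make_pipeline(CountVectorizer(stop_words="english"), MultinomialNB())
count_nb.fit(train_texts, train_labels)

# Pipeline 2: TF-IDF weighting, same stop-word filtering
tfidf_nb = make_pipeline(TfidfVectorizer(stop_words="english"), MultinomialNB())
tfidf_nb.fit(train_texts, train_labels)

test_texts = ["the team won the match", "a new law on tax"]
print(count_nb.predict(test_texts))
print(tfidf_nb.predict(test_texts))
```

On a real dataset one would follow `predict` with `sklearn.metrics.classification_report` to get the precision/recall/F1 evaluation the post refers to.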
Core code
# class CountVectorizer, found at: sklearn.feature_extraction.text

class CountVectorizer(BaseEstimator, VectorizerMixin):
    """Convert a collection of text documents to a matrix of token counts
    
    This implementation produces a sparse representation of the counts using
    scipy.sparse.csr_matrix.
    
    If you do not provide an a-priori dictionary and you do not use an analyzer
    that does some kind of feature selection then the number of features will
    be equal to the vocabulary size found by analyzing the data.
    
    Read more in the :ref:`User Guide <text_feature_extraction>`.
    
    Parameters
    ----------
    input : string {'filename', 'file', 'content'}
    If 'filename', the sequence passed as an argument to fit is
    expected to be a list of filenames that need reading to fetch
    the raw content to analyze.
    
    If 'file', the sequence items must have a 'read' method (file-like
    object) that is called to fetch the bytes in memory.
    
    Otherwise the input is expected to be a sequence of string or
    bytes items, which are analyzed directly.
    
    encoding : string, 'utf-8' by default.
    If bytes or files are given to analyze, this encoding is used to
    decode.
    
    decode_error : {'strict', 'ignore', 'replace'}
    Instruction on what to do if a byte sequence is given to analyze that
    contains characters not of the given `encoding`. By default, it is
    'strict', meaning that a UnicodeDecodeError will be raised. Other
    values are 'ignore' and 'replace'.
    
    strip_accents : {'ascii', 'unicode', None}
    Remove accents during the preprocessing step.
    'ascii' is a fast method that only works on characters that have
    a direct ASCII mapping.
    'unicode' is a slightly slower method that works on any characters.
    None (default) does nothing.
    
    analyzer : string, {'word', 'char', 'char_wb'} or callable
    Whether the feature should be made of word or character n-grams.
    Option 'char_wb' creates character n-grams only from text inside
    word boundaries; n-grams at the edges of words are padded with space.
    
    If a callable is passed it is used to extract the sequence of features
    out of the raw, unprocessed input.
    
    preprocessor : callable or None (default)
    Override the preprocessing (string transformation) stage while
    preserving the tokenizing and n-grams generation steps.
    
    tokenizer : callable or None (default)
    Override the string tokenization step while preserving the
    preprocessing and n-grams generation steps.
    Only applies if ``analyzer == 'word'``.
    
    ngram_range : tuple (min_n, max_n)
    The lower and upper boundary of the range of n-values for different
    n-grams to be extracted. All values of n such that min_n <= n <= max_n
    will be used.
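To illustrate the parameters documented above, a small sketch (the two-document corpus is invented for demonstration): with `analyzer='word'` and `ngram_range=(1, 2)`, the vocabulary is learned from the data and the result is a sparse count matrix.

```python
from sklearn.feature_extraction.text import CountVectorizer

docs = ["machine learning is fun", "learning machine learning"]

# Word analyzer extracting both unigrams and bigrams (ngram_range=(1, 2));
# input='content' means the sequence items are the raw strings themselves
vec = CountVectorizer(input="content", analyzer="word", ngram_range=(1, 2))
X = vec.fit_transform(docs)  # scipy.sparse CSR matrix of token counts

# No a-priori dictionary was given, so the vocabulary comes from the data
print(sorted(vec.vocabulary_))
print(X.shape)  # (n_documents, n_features)
```

Here the 4 unigrams and 4 distinct bigrams yield 8 features; passing `vocabulary=` instead would fix the feature set in advance, as the docstring notes.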