Source: libstring-tokenizer-perl
Section: perl
Priority: optional
Build-Depends: debhelper (>= 9)
Build-Depends-Indep: perl
Maintainer: Debian Perl Group <pkg-perl-maintainers@lists.alioth.debian.org>
Uploaders: Ben Webb <bjwebb67@googlemail.com>
Standards-Version: 3.9.7
Homepage: https://metacpan.org/release/String-Tokenizer
Vcs-Git: https://anonscm.debian.org/git/pkg-perl/packages/libstring-tokenizer-perl.git
Vcs-Browser: https://anonscm.debian.org/cgit/pkg-perl/packages/libstring-tokenizer-perl.git
Testsuite: autopkgtest-pkg-perl

Package: libstring-tokenizer-perl
Architecture: all
Depends: ${misc:Depends},
         ${perl:Depends}
Description: simple string tokenizer
 String::Tokenizer is a simple string tokenizer which takes a string and splits
 it on whitespace. It can also optionally take a string of characters to use as
 delimiters, and returns those delimiters as part of the token set. This allows
 the string to be split in many different ways.
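 .
 A minimal usage sketch in Perl (the input string and delimiter characters
 below are illustrative examples, not specific to this package):
 .
  use String::Tokenizer;
  my $tokenizer = String::Tokenizer->new("((5 + 5) * 10)", '+*()');
  my @tokens = $tokenizer->getTokens();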
 .
 This is a very basic tokenizer, so more complex needs should be addressed
 either with a custom-written tokenizer or by post-processing the output
 generated by this module. It will not fill everyone's needs, but it spans the
 gap between a simple split / /, $string and the other options, which involve
 much larger and more complex modules.
 .
 Also note that this is not a lexical analyzer. Many people confuse
 tokenization with lexical analysis. A tokenizer merely splits its input into
 specific chunks; a lexical analyzer classifies those chunks. Sometimes these
 two steps are combined, but not here.
