- Name: perl-String-Tokenizer
- Version: 0.60.0
- Release: 3.mga7
- Epoch:
- Group: Development/Perl
- License: GPL+ or Artistic
- Url: http://search.cpan.org/dist/String-Tokenizer
- Summary: A simple string tokenizer
- Architecture: noarch
- Size: 45314
- Distribution: Mageia
- Vendor: Mageia.Org
- Packager: umeabot <umeabot>
Description:
A simple string tokenizer which takes a string and splits it on whitespace.
It can also optionally take a string of characters to use as delimiters;
those delimiters are returned as part of the token set as well. This allows
the string to be split in many different ways.
This is a very basic tokenizer, so more complex needs should be addressed
either with a custom-written tokenizer or by post-processing the output
generated by this module. Basically, this will not fill everyone's needs,
but it spans the gap between a simple 'split / /, $string' and the other
options that involve much larger and more complex modules.
Also note that this is not a lexical analyzer. Many people confuse
tokenization with lexical analysis. A tokenizer merely splits its input
into specific chunks; a lexical analyzer classifies those chunks. Sometimes
these two steps are combined, but not here.
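The behavior described above — splitting on whitespace while returning any
user-supplied delimiter characters as tokens in their own right — can be
sketched in a few lines. This is a rough Python approximation of the idea,
not the module's own Perl API; the function name and signature are
illustrative only:

```python
import re

def tokenize(text, delimiters=""):
    # Split on runs of whitespace (discarded) and, if given, on any of
    # the delimiter characters, which are kept as tokens themselves --
    # a sketch of the behavior the description attributes to the module.
    if delimiters:
        pattern = "([" + re.escape(delimiters) + r"])|\s+"
    else:
        pattern = r"()\s+"
    # The capturing group makes re.split() return matched delimiters;
    # whitespace matches yield None/'' entries, which we filter out.
    return [part for part in re.split(pattern, text) if part]

print(tokenize("(5 + 4)", "()+"))  # ['(', '5', '+', '4', ')']
print(tokenize("plain whitespace split"))
```

With a delimiter string, each delimiter character becomes its own token;
without one, the function degrades to a plain whitespace split, mirroring
the "spans the gap" role the description claims for the module.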
- OptFlags: -O2 -g -pipe -Wformat -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fstack-protector --param=ssp-buffer-size=4 -fomit-frame-pointer -march=i586 -mtune=generic -fasynchronous-unwind-tables
- Cookie: localhost 1537355239
- Buildhost: localhost