39 Commits

Author SHA1 Message Date
6713699ebe [fix] Update requirements and setup to be compatible with Python 3.11 2022-11-12 10:13:46 +01:00
Maxim Vladimirskiy
71d9b6eb78 Merge pull request #227 from mailgun/maxim/develop
PIP-1615: Operate on unicode data exclusively [python3]
2022-02-07 10:43:25 +03:00
Maxim Vladimirskiy
14f106ee76 Operate on unicode data exclusively 2022-02-04 17:31:53 +03:00
Maxim Vladimirskiy
a8c7e6a972 Merge pull request #226 from mailgun/maxim/develop
PIP-1562: Remove max tags limit [python3]
2022-01-06 15:24:57 +03:00
Maxim Vladimirskiy
b30c375c5b Expose extract_from_html_tree 2022-01-06 15:16:43 +03:00
Maxim Vladimirskiy
cec5acf58f Remove max tags limit 2022-01-06 14:18:11 +03:00
Maxim Vladimirskiy
24d0f2d00a Merge pull request #223 from mailgun/maxim/develop
PIP-1509: Optimise sender name check [python3]
2021-11-19 13:11:29 +03:00
Maxim Vladimirskiy
94007b0b92 Optimise sender name check 2021-11-19 11:12:26 +03:00
Maxim Vladimirskiy
1a5548f171 Merge pull request #222 from mailgun/maxim/develop
PIP-1409: Remove version pins from setup.py [python3]
2021-11-11 16:29:30 +03:00
Maxim Vladimirskiy
53c49b9121 Remove version pins from setup.py 2021-11-11 15:36:50 +03:00
Matt Dietz
bd50872043 Merge pull request #217 from mailgun/dietz/REP-1030
Drops Python 2 support [python3]
2021-06-15 09:46:29 -05:00
Matt Dietz
d37c4fd551 Drops Python 2 support
REP-1030

In addition to some python 2 => 3 fixes, this change bumps the scikit-learn
version to latest. The previously pinned version of scikit-learn failed to
compile all necessary C modules under python 3.7+ because its bundled header
files weren't compatible with the C API implemented in python 3.7+.

Given the restrictive compatibility range supported by scikit-learn, it
seemed prudent to drop python 2 support altogether. Otherwise, we'd be stuck
with python 3.4 as the newest version we could support.

With this change, tests are currently passing under 3.9.2.

Lastly, imports the original training data. At some point, a new version
of the training data was committed to the repo but no classifier was
trained from it. Using a classifier trained from this new data resulted
in most of the tests failing.
2021-06-10 14:03:25 -05:00
Sergey Obukhov
d9ed7cc6d1 Merge pull request #190 from yoks/master
Add __init__.py into data folder, add data files into MANIFEST.in
2019-07-02 18:56:47 +03:00
Sergey Obukhov
0a0808c0a8 Merge branch 'master' into master 2019-07-01 20:48:46 +03:00
Sergey Obukhov
16354e3528 Merge pull request #191 from mailgun/thrawn/develop
PIP-423: Now removing namespaces from parsed HTML
2019-05-12 11:54:17 +03:00
Derrick J. Wippler
1018e88ec1 Now removing namespaces from parsed HTML 2019-05-10 11:16:12 -05:00
Ivan Anisimov
2916351517 Update setup.py 2019-03-16 22:17:26 +03:00
Ivan Anisimov
46d4b02c81 Update setup.py 2019-03-16 22:15:43 +03:00
Ivan Anisimov
58eac88a10 Update MANIFEST.in 2019-03-16 22:03:40 +03:00
Ivan Anisimov
2ef3d8dfbe Update MANIFEST.in 2019-03-16 22:01:00 +03:00
Ivan Anisimov
7cf4c29340 Create __init__.py 2019-03-16 21:54:09 +03:00
Sergey Obukhov
cdd84563dd Merge pull request #183 from mailgun/sergey/date
fix text with Date: being misclassified as a quotation splitter
2019-01-18 17:32:10 +03:00
Sergey Obukhov
8138ea9a60 fix text with Date: being misclassified as a quotation splitter 2019-01-18 16:49:39 +03:00
Sergey Obukhov
c171f9a875 Merge pull request #169 from Savageman/patch-2
Use regex match to detect outlook 2007, 2010, 2013
2018-11-05 10:43:20 +03:00
Sergey Obukhov
3f97a8b8ff Merge branch 'master' into patch-2 2018-11-05 10:42:00 +03:00
Esperat Julian
1147767ff3 Fix regression: the Windows Mail format was forgotten
A | was missing at the end of the regex, so the next lines were not included in the global search.
2018-11-04 19:42:12 +01:00
Sergey Obukhov
6a304215c3 Merge pull request #177 from mailgun/obukhov-sergey-patch-1
Update Readme with how to retrain on your own data
2018-11-02 15:22:18 +03:00
Sergey Obukhov
31714506bd Update Readme with how to retrain on your own data 2018-11-02 15:21:36 +03:00
Sergey Obukhov
403d80cf3b Merge pull request #161 from glaand/master
Fix: Unicode strings with encoding declaration are not supported. Please use bytes input or XML fragments without declaration.
2018-11-02 15:03:02 +03:00
Sergey Obukhov
7cf20f2877 Merge branch 'master' into master 2018-11-02 14:52:38 +03:00
Sergey Obukhov
afff08b017 Merge branch 'master' into patch-2 2018-11-02 09:13:42 +03:00
Sergey Obukhov
685abb1905 Merge pull request #171 from gabriellima95/Add-Portuguese-Language
Add Portuguese language to quotations
2018-11-02 09:12:43 +03:00
Sergey Obukhov
41990727a3 Merge branch 'master' into Add-Portuguese-Language 2018-11-02 09:11:07 +03:00
Sergey Obukhov
b113d8ab33 Merge pull request #172 from ad-m/patch-1
Fix catastrophic backtracking in regexp
2018-11-02 09:09:49 +03:00
Adam Dobrawy
7bd0e9cc2f Fix catastrophic backtracking in regexp
Co-Author: @Nipsuli
2018-09-21 22:00:10 +02:00
gabriellima95
1e030a51d4 Add Portuguese language to quotations 2018-09-11 15:27:39 -03:00
Esperat Julian
238a5de5cc Use regex match to detect outlook 2007, 2010, 2013
I encountered a variant of the outlook quotations with a space after the semicolon.

To avoid multiplying the number of rules, I implemented a regex match instead (I found out how here: https://stackoverflow.com/a/34093801/211204).

I documented all the different variants as cleanly as I could.
2018-08-31 12:39:52 +02:00
André Glatzl
53b24ffb3d Cut out encoding declarations such as the xml and doctype tags first, to avoid conflicts with unicode decoding 2017-12-19 15:15:10 +01:00
Sergey Obukhov
a7404afbcb Merge pull request #155 from mailgun/sergey/appointment
fix appointments in text
2017-10-23 16:34:08 -07:00
20 changed files with 2952 additions and 2825 deletions

.build/Dockerfile Normal file

@@ -0,0 +1,20 @@
FROM python:3.9-slim-buster AS deps
RUN apt-get update && \
apt-get install -y build-essential git curl python3-dev libatlas3-base libatlas-base-dev liblapack-dev libxml2 libxml2-dev libffi6 libffi-dev musl-dev libxslt-dev
FROM deps AS testable
ARG REPORT_PATH
VOLUME ["/var/mailgun", "/etc/mailgun/ssl", ${REPORT_PATH}]
ADD . /app
WORKDIR /app
COPY wheel/* /wheel/
RUN mkdir -p ${REPORT_PATH}
RUN python ./setup.py build bdist_wheel -d /wheel && \
pip install --no-deps /wheel/*
ENTRYPOINT ["/bin/sh", "/app/run_tests.sh"]

.gitignore vendored

@@ -54,3 +54,6 @@ _trial_temp
# OSX
.DS_Store
# vim-backup
*.bak

MANIFEST.in

@@ -5,3 +5,10 @@ include classifier
include LICENSE
include MANIFEST.in
include README.rst
include talon/signature/data/train.data
include talon/signature/data/classifier
include talon/signature/data/classifier_01.npy
include talon/signature/data/classifier_02.npy
include talon/signature/data/classifier_03.npy
include talon/signature/data/classifier_04.npy
include talon/signature/data/classifier_05.npy

README.rst

@@ -129,6 +129,22 @@ start using it for talon.
.. _EDRM: http://www.edrm.net/resources/data-sets/edrm-enron-email-data-set
.. _forge: https://github.com/mailgun/forge
Training on your dataset
------------------------
talon comes with a pre-processed dataset and a pre-trained classifier. To retrain the classifier on your own dataset of raw emails, structure and annotate them in the same way the `forge`_ project does. Then do:
.. code:: python
from talon.signature.learning.dataset import build_extraction_dataset
from talon.signature.learning import classifier as c
build_extraction_dataset("/path/to/your/P/folder", "/path/to/talon/signature/data/train.data")
c.train(c.init(), "/path/to/talon/signature/data/train.data", "/path/to/talon/signature/data/classifier")
Note that for signature extraction you need only the folder of positive samples with annotated signature lines (the P folder).
.. _forge: https://github.com/mailgun/forge
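Once retrained, extraction works the same as with the bundled classifier. A minimal usage sketch (``email_body`` and the sender address are placeholders):

.. code:: python

    import talon
    from talon import signature

    talon.init()  # registers xpath extensions and loads the classifier
    text, extracted_signature = signature.extract(email_body, sender="john@example.com")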
Research
--------

requirements.txt Normal file

@@ -0,0 +1,11 @@
chardet>=1.0.1
# cchardet>=0.3.5
cssselect
html5lib
joblib
lxml>=2.3.3
numpy
regex>=1
scikit-learn>=1.0.0
scipy
six>=1.10.0

run_tests.sh Executable file

@@ -0,0 +1,4 @@
#!/usr/bin/env bash
set -ex
REPORT_PATH="${REPORT_PATH:-./}"
nosetests --with-xunit --with-coverage --cover-xml --cover-xml-file $REPORT_PATH/coverage.xml --xunit-file=$REPORT_PATH/nosetests.xml --cover-package=talon .

setup.py

@@ -5,10 +5,10 @@ from setuptools.command.install import install
class InstallCommand(install):
user_options = install.user_options + [
('no-ml', None, "Don't install without Machine Learning modules."),
("no-ml", None, "Don't install without Machine Learning modules."),
]
boolean_options = install.boolean_options + ['no-ml']
boolean_options = install.boolean_options + ["no-ml"]
def initialize_options(self):
install.initialize_options(self)
@@ -18,46 +18,47 @@ class InstallCommand(install):
install.finalize_options(self)
if self.no_ml:
dist = self.distribution
dist.packages=find_packages(exclude=[
'tests',
'tests.*',
'talon.signature',
'talon.signature.*',
])
for not_required in ['numpy', 'scipy', 'scikit-learn==0.16.1']:
dist.packages = find_packages(
exclude=[
"tests",
"tests.*",
"talon.signature",
"talon.signature.*",
]
)
for not_required in ["numpy", "scipy", "scikit-learn==0.24.1"]:
dist.install_requires.remove(not_required)
setup(name='talon',
version='1.4.5',
description=("Mailgun library "
"to extract message quotations and signatures."),
long_description=open("README.rst").read(),
author='Mailgun Inc.',
author_email='admin@mailgunhq.com',
url='https://github.com/mailgun/talon',
license='APACHE2',
cmdclass={
'install': InstallCommand,
},
packages=find_packages(exclude=['tests', 'tests.*']),
include_package_data=True,
zip_safe=True,
install_requires=[
"lxml>=2.3.3",
"regex>=1",
"numpy",
"scipy",
"scikit-learn>=0.16.1", # pickled versions of classifier, else rebuild
'chardet>=1.0.1',
'cchardet>=0.3.5',
'cssselect',
'six>=1.10.0',
'html5lib'
],
tests_require=[
"mock",
"nose>=1.2.1",
"coverage"
]
)
setup(
name="talon-o2w",
version="1.6.1",
description=(
"Mailgun library " "to extract message quotations and signatures."
),
long_description=open("README.rst").read(),
author="Mailgun Inc.",
author_email="admin@mailgunhq.com",
url="https://github.com/mailgun/talon",
license="APACHE2",
cmdclass={
"install": InstallCommand,
},
packages=find_packages(exclude=["tests", "tests.*"]),
include_package_data=True,
zip_safe=True,
install_requires=[
"lxml",
"regex",
"numpy",
"scipy",
"scikit-learn>=1.0.0",
"chardet",
# "cchardet",
"cssselect",
"six",
"html5lib",
"joblib",
],
tests_require=["mock", "nose", "coverage"],
)

talon/html_quotations.py

@@ -87,23 +87,24 @@ def cut_gmail_quote(html_message):
def cut_microsoft_quote(html_message):
''' Cuts splitter block and all following blocks. '''
#use EXSLT extensions to have a regex match() function with lxml
ns = {"re": "http://exslt.org/regular-expressions"}
#general pattern: @style='border:none;border-top:solid <color> 1.0pt;padding:3.0pt 0<unit> 0<unit> 0<unit>'
#outlook 2007, 2010 (international) <color=#B5C4DF> <unit=cm>
#outlook 2007, 2010 (american) <color=#B5C4DF> <unit=pt>
#outlook 2013 (international) <color=#E1E1E1> <unit=cm>
#outlook 2013 (american) <color=#E1E1E1> <unit=pt>
#also handles a variant with a space after the semicolon
splitter = html_message.xpath(
#outlook 2007, 2010 (international)
"//div[@style='border:none;border-top:solid #B5C4DF 1.0pt;"
"padding:3.0pt 0cm 0cm 0cm']|"
#outlook 2007, 2010 (american)
"//div[@style='border:none;border-top:solid #B5C4DF 1.0pt;"
"padding:3.0pt 0in 0in 0in']|"
#outlook 2013 (international)
"//div[@style='border:none;border-top:solid #E1E1E1 1.0pt;"
"padding:3.0pt 0cm 0cm 0cm']|"
#outlook 2013 (american)
"//div[@style='border:none;border-top:solid #E1E1E1 1.0pt;"
"padding:3.0pt 0in 0in 0in']|"
#outlook 2007, 2010, 2013 (international, american)
"//div[@style[re:match(., 'border:none; ?border-top:solid #(E1E1E1|B5C4DF) 1.0pt; ?"
"padding:3.0pt 0(in|cm) 0(in|cm) 0(in|cm)')]]|"
#windows mail
"//div[@style='padding-top: 5px; "
"border-top-color: rgb(229, 229, 229); "
"border-top-width: 1px; border-top-style: solid;']"
, namespaces=ns
)
if splitter:
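For reference, a standalone sketch of the EXSLT predicate used above (hypothetical input; lxml exposes the EXSLT regular-expression functions under the namespace bound to "re"):

    from lxml import html

    ns = {"re": "http://exslt.org/regular-expressions"}
    doc = html.fromstring(
        "<div style='border:none; border-top:solid #B5C4DF 1.0pt; "
        "padding:3.0pt 0cm 0cm 0cm'>quoted</div>")
    # one pattern covers both colors, both units and the optional space
    hits = doc.xpath(
        "//div[@style[re:match(., 'border:none; ?border-top:solid "
        "#(E1E1E1|B5C4DF) 1.0pt; ?padding:3.0pt 0(in|cm) 0(in|cm) 0(in|cm)')]]",
        namespaces=ns)
    assert len(hits) == 1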

talon/quotations.py

@@ -6,23 +6,22 @@ original messages (without quoted messages)
"""
from __future__ import absolute_import
import regex as re
import logging
from copy import deepcopy
from lxml import html, etree
from talon.utils import (get_delimiter, html_tree_to_text,
html_document_fromstring)
from talon import html_quotations
import regex as re
from lxml import etree, html
from six.moves import range
import six
from talon import html_quotations
from talon.utils import (get_delimiter, html_document_fromstring,
html_tree_to_text)
log = logging.getLogger(__name__)
RE_FWD = re.compile("^[-]+[ ]*Forwarded message[ ]*[-]+$", re.I | re.M)
RE_FWD = re.compile("^[-]+[ ]*Forwarded message[ ]*[-]+\s*$", re.I | re.M)
RE_ON_DATE_SMB_WROTE = re.compile(
u'(-*[>]?[ ]?({0})[ ].*({1})(.*\n){{0,2}}.*({2}):?-*)'.format(
@@ -38,6 +37,8 @@ RE_ON_DATE_SMB_WROTE = re.compile(
'Op',
# German
'Am',
# Portuguese
'Em',
# Norwegian
u'På',
# Swedish, Danish
@@ -64,6 +65,8 @@ RE_ON_DATE_SMB_WROTE = re.compile(
'schreef','verzond','geschreven',
# German
'schrieb',
# Portuguese
'escreveu',
# Norwegian, Swedish
'skrev',
# Vietnamese
@@ -90,7 +93,7 @@ RE_ON_DATE_WROTE_SMB = re.compile(
)
RE_QUOTATION = re.compile(
r'''
r"""
(
# quotation border: splitter line or a number of quotation marker lines
(?:
@@ -108,10 +111,10 @@ RE_QUOTATION = re.compile(
# after quotations should be text only or nothing at all
[te]*$
''', re.VERBOSE)
""", re.VERBOSE)
RE_EMPTY_QUOTATION = re.compile(
r'''
r"""
(
# quotation border: splitter line or a number of quotation marker lines
(?:
@@ -121,7 +124,7 @@ RE_EMPTY_QUOTATION = re.compile(
)
)
e*
''', re.VERBOSE)
""", re.VERBOSE)
# ------Original Message------ or ---- Reply Message ----
# With variations in other languages.
@@ -135,13 +138,17 @@ RE_ORIGINAL_MESSAGE = re.compile(u'[\s]*[-]+[ ]*({})[ ]*[-]+'.format(
'Oprindelig meddelelse',
))), re.I)
RE_FROM_COLON_OR_DATE_COLON = re.compile(u'(_+\r?\n)?[\s]*(:?[*]?{})[\s]?:[*]?.*'.format(
RE_FROM_COLON_OR_DATE_COLON = re.compile(u'((_+\r?\n)?[\s]*:?[*]?({})[\s]?:([^\n$]+\n){{1,2}}){{2,}}'.format(
u'|'.join((
# "From" in different languages.
'From', 'Van', 'De', 'Von', 'Fra', u'Från',
# "Date" in different languages.
'Date', 'Datum', u'Envoyé', 'Skickat', 'Sendt',
))), re.I)
'Date', '[S]ent', 'Datum', u'Envoyé', 'Skickat', 'Sendt', 'Gesendet',
# "Subject" in different languages.
'Subject', 'Betreff', 'Objet', 'Emne', u'Ämne',
# "To" in different languages.
'To', 'An', 'Til', u'À', 'Till'
))), re.I | re.M)
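The effect of the tightened pattern (the "Date: misclassified as quotation splitter" fix), sketched with a simplified two-keyword stand-in and illustrative strings: a lone header-like line no longer splits, while two or more consecutive ones still do.

    import regex as re

    # simplified stand-in for RE_FROM_COLON_OR_DATE_COLON: the trailing {2,}
    # requires at least two consecutive header lines to form a splitter
    hdr_block = re.compile(
        u'((_+\r?\n)?[\s]*:?[*]?(From|Date)[\s]?:([^\n$]+\n){1,2}){2,}',
        re.I | re.M)

    assert not hdr_block.search("Date: Wednesday 3, October 2011\nsee you then\n")
    assert hdr_block.search("From: bob@example.com\nDate: Wed, 16 May 2012\n")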
# ---- John Smith wrote ----
RE_ANDROID_WROTE = re.compile(u'[\s]*[-]+.*({})[ ]*[-]+'.format(
@@ -185,9 +192,6 @@ RE_PARENTHESIS_LINK = re.compile("\(https?://")
SPLITTER_MAX_LINES = 6
MAX_LINES_COUNT = 1000
# an extensive research shows that exceeding this limit
# leads to excessive processing time
MAX_HTML_LEN = 2794202
QUOT_PATTERN = re.compile('^>+ ?')
NO_QUOT_LINE = re.compile('^[^>].*[\S].*')
@@ -286,7 +290,7 @@ def process_marked_lines(lines, markers, return_flags=[False, -1, -1]):
# inlined reply
# use lookbehind assertions to find overlapping entries e.g. for 'mtmtm'
# both 't' entries should be found
for inline_reply in re.finditer('(?<=m)e*((?:t+e*)+)m', markers):
for inline_reply in re.finditer('(?<=m)e*(t[te]*)m', markers):
# long links could break sequence of quotation lines but they shouldn't
# be considered an inline reply
links = (
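The rewritten pattern drops the nested quantifiers that caused catastrophic backtracking while keeping the overlap behaviour described in the comment; a tiny sketch:

    import regex as re

    markers = "mtmtm"  # m: quotation marker line, t: text line, e: empty line
    # the zero-width lookbehind leaves each closing 'm' available as the next
    # match's opening context, so both 't' entries are found
    found = [m.group(1) for m in re.finditer('(?<=m)e*(t[te]*)m', markers)]
    assert found == ['t', 't']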
@@ -338,9 +342,6 @@ def _replace_link_brackets(msg_body):
Converts msg_body into a unicode
"""
if isinstance(msg_body, bytes):
msg_body = msg_body.decode('utf8')
def link_wrapper(link):
newline_index = msg_body[:link.start()].rfind("\n")
if msg_body[newline_index + 1] == ">":
@@ -380,8 +381,6 @@ def postprocess(msg_body):
def extract_from_plain(msg_body):
"""Extracts a non quoted message from provided plain text."""
stripped_text = msg_body
delimiter = get_delimiter(msg_body)
msg_body = preprocess(msg_body, delimiter)
# don't process too long messages
@@ -413,22 +412,27 @@ def extract_from_html(msg_body):
Returns a unicode string.
"""
if isinstance(msg_body, six.text_type):
msg_body = msg_body.encode('utf8')
elif not isinstance(msg_body, bytes):
msg_body = msg_body.encode('ascii')
if msg_body.strip() == "":
return msg_body
result = _extract_from_html(msg_body)
if isinstance(result, bytes):
result = result.decode('utf8')
msg_body = msg_body.replace("\r\n", "\n")
# Cut out xml and doctype tags to avoid conflict with unicode decoding.
msg_body = re.sub(r"\<\?xml.+\?\>|\<\!DOCTYPE.+]\>", "", msg_body)
html_tree = html_document_fromstring(msg_body)
if html_tree is None:
return msg_body
result = extract_from_html_tree(html_tree)
if not result:
return msg_body
return result
def _extract_from_html(msg_body):
def extract_from_html_tree(html_tree):
"""
Extract not quoted message from provided html message body
using tags and plain text algorithm.
Extract not quoted message from provided parsed html tree using tags and
plain text algorithm.
Cut out the 'blockquote', 'gmail_quote' tags.
Cut Microsoft quotations.
@@ -441,15 +445,6 @@ def _extract_from_html(msg_body):
then checking deleted checkpoints,
then deleting necessary tags.
"""
if msg_body.strip() == b'':
return msg_body
msg_body = msg_body.replace(b'\r\n', b'\n')
html_tree = html_document_fromstring(msg_body)
if html_tree is None:
return msg_body
cut_quotations = (html_quotations.cut_gmail_quote(html_tree) or
html_quotations.cut_zimbra_quote(html_tree) or
html_quotations.cut_blockquote(html_tree) or
@@ -467,7 +462,7 @@ def _extract_from_html(msg_body):
# Don't process too long messages
if len(lines) > MAX_LINES_COUNT:
return msg_body
return None
# Collect checkpoints on each line
line_checkpoints = [
@@ -486,7 +481,7 @@ def _extract_from_html(msg_body):
lines_were_deleted, first_deleted, last_deleted = return_flags
if not lines_were_deleted and not cut_quotations:
return msg_body
return None
if lines_were_deleted:
#collect checkpoints from deleted lines
@@ -500,9 +495,73 @@ def _extract_from_html(msg_body):
)
if _readable_text_empty(html_tree_copy):
return msg_body
return None
return html.tostring(html_tree_copy)
# NOTE: We remove_namespaces() because we are using an HTML5 Parser, HTML
# parsers do not recognize namespaces in HTML tags. As such the rendered
# HTML tags are no longer recognizable HTML tags. Example: <o:p> becomes
# <oU0003Ap>. When we port this to golang we should look into using an
# XML parser, NOT an HTML5 parser, since we do not know what input a
# customer will send us. Switching to a common XML parser in python
# opens us up to a host of vulnerabilities.
# See https://docs.python.org/3/library/xml.html#xml-vulnerabilities
#
# The downside to removing the namespaces is that customers might
# consider the XML namespaces important. If that is the case then support
# should encourage customers to perform XML parsing of the un-stripped
# body to get the full unmodified XML payload.
#
# Alternatives to this approach are
# 1. Ignore the U0003A in tag names and let the customer deal with it.
# This is not ideal, as most customers use stripped-html for viewing
# emails sent from a recipient, as such they cannot control the HTML
# provided by a recipient.
# 2. Perform a string replace of 'U0003A' to ':' on the rendered HTML
# string. While this would solve the issue simply, it runs the risk
# of replacing data outside the <tag> which might be essential to
# the customer.
remove_namespaces(html_tree_copy)
s = html.tostring(html_tree_copy, encoding="ascii")
if not s:
return None
return s.decode("ascii")
def remove_namespaces(root):
"""
Given the root of an HTML document iterate through all the elements
and remove any namespaces that might have been provided and remove
any attributes that contain a namespace
<html xmlns:o="urn:schemas-microsoft-com:office:office">
becomes
<html>
<o:p>Hi</o:p>
becomes
<p>Hi</p>
Start tags do NOT have a namespace; COLON characters have no special meaning.
If we don't remove the namespace the parser translates the tag name into a
unicode representation. For example <o:p> becomes <oU0003Ap>
See https://www.w3.org/TR/2011/WD-html5-20110525/syntax.html#start-tags
"""
for child in root.iter():
for key, value in child.attrib.items():
# If the attribute includes a colon
if key.rfind("U0003A") != -1:
child.attrib.pop(key)
# If the tag includes a colon
idx = child.tag.rfind("U0003A")
if idx != -1:
child.tag = child.tag[idx+6:]
return root
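A usage sketch (hypothetical input, using the helpers from this same change set):

    from lxml import html
    from talon.quotations import remove_namespaces
    from talon.utils import html_document_fromstring

    tree = html_document_fromstring(
        '<html xmlns:o="urn:schemas-microsoft-com:office:office">'
        '<body><o:p>Dear Sir,</o:p></body></html>')
    remove_namespaces(tree)
    # the foreign <o:p> (parsed by html5lib as <oU0003Ap>) is now a plain <p>
    print(html.tostring(tree))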
def split_emails(msg):
@@ -557,7 +616,6 @@ def _correct_splitlines_in_headers(markers, lines):
updated_markers = ""
i = 0
in_header_block = False
for m in markers:
# Only set in_header_block flag when we hit an 's' and line is a header
if m == 's':
@@ -586,10 +644,10 @@ def _readable_text_empty(html_tree):
def is_splitter(line):
'''
"""
Returns Matcher object if provided string is a splitter and
None otherwise.
'''
"""
for pattern in SPLITTER_PATTERNS:
matcher = re.match(pattern, line)
if matcher:
@@ -597,12 +655,12 @@ def is_splitter(line):
def text_content(context):
'''XPath Extension function to return a node text content.'''
"""XPath Extension function to return a node text content."""
return context.context_node.xpath("string()").strip()
def tail(context):
'''XPath Extension function to return a node tail text.'''
"""XPath Extension function to return a node tail text."""
return context.context_node.tail or ''
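With the bytes handling gone, the module is str-in/str-out end to end; a minimal usage sketch (the splitter line is borrowed from the test fixtures below):

    from talon import quotations
    from talon.utils import html_document_fromstring

    msg = ("Reply\n"
           "\n"
           "On 11-Apr-2011, at 6:54 PM, Bob <bob@example.com> wrote:\n"
           "> Quoted text\n")
    print(quotations.extract_from_plain(msg))  # -> "Reply"

    # the newly exposed entry point for callers that already hold a parsed
    # tree; returns None when no quotation was found
    tree = html_document_fromstring("<html><body>Reply"
                                    "<blockquote>quote</blockquote></body></html>")
    print(quotations.extract_from_html_tree(tree))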

talon/signature/__init__.py

@@ -23,17 +23,14 @@ trained against, don't forget to regenerate:
from __future__ import absolute_import
import os
from . import extraction
from . extraction import extract #noqa
from . learning import classifier
DATA_DIR = os.path.join(os.path.dirname(__file__), 'data')
EXTRACTOR_FILENAME = os.path.join(DATA_DIR, 'classifier')
EXTRACTOR_DATA = os.path.join(DATA_DIR, 'train.data')
from talon.signature import extraction
from talon.signature.extraction import extract
from talon.signature.learning import classifier
def initialize():
extraction.EXTRACTOR = classifier.load(EXTRACTOR_FILENAME,
EXTRACTOR_DATA)
data_dir = os.path.join(os.path.dirname(__file__), 'data')
extractor_filename = os.path.join(data_dir, 'classifier')
extractor_data_filename = os.path.join(data_dir, 'train.data')
extraction.EXTRACTOR = classifier.load(extractor_filename,
extractor_data_filename)
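Nothing changes for callers, who still run the loader once before extracting; a short sketch:

    from talon import signature

    signature.initialize()  # resolves data/classifier and data/train.data at
                            # call time rather than at import time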

talon/signature/data/__init__.py

@@ -0,0 +1 @@

Binary file not shown.

File diff suppressed because it is too large.

talon/signature/learning/classifier.py

@@ -8,7 +8,7 @@ body belongs to the signature.
from __future__ import absolute_import
from numpy import genfromtxt
from sklearn.externals import joblib
import joblib
from sklearn.svm import LinearSVC

talon/signature/learning/helpers.py

@@ -5,21 +5,17 @@
* regexp's constants used when evaluating signature's features
"""
from __future__ import absolute_import
import unicodedata
import regex as re
from talon.utils import to_unicode
from talon.signature.constants import SIGNATURE_MAX_LINES
rc = re.compile
RE_EMAIL = rc('\S@\S')
RE_RELAX_PHONE = rc('(\(? ?[\d]{2,3} ?\)?.{,3}?){2,}')
RE_URL = rc(r'''https?://|www\.[\S]+\.[\S]''')
RE_URL = rc(r"""https?://|www\.[\S]+\.[\S]""")
# Taken from:
# http://www.cs.cmu.edu/~vitor/papers/sigFilePaper_finalversion.pdf
@@ -55,7 +51,7 @@ BAD_SENDER_NAMES = [
def binary_regex_search(prog):
'''Returns a function that returns 1 or 0 depending on regex search result.
"""Returns a function that returns 1 or 0 depending on regex search result.
If regular expression compiled into prog is present in a string
the result of calling the returned function with the string will be 1
@@ -66,12 +62,12 @@ def binary_regex_search(prog):
1
>>> binary_regex_search(re.compile("12"))("34")
0
'''
"""
return lambda s: 1 if prog.search(s) else 0
def binary_regex_match(prog):
'''Returns a function that returns 1 or 0 depending on regex match result.
"""Returns a function that returns 1 or 0 depending on regex match result.
If a string matches regular expression compiled into prog
the result of calling the returned function with the string will be 1
@@ -82,7 +78,7 @@ def binary_regex_match(prog):
1
>>> binary_regex_match(re.compile("12"))("3 12")
0
'''
"""
return lambda s: 1 if prog.match(s) else 0
@@ -102,7 +98,7 @@ def flatten_list(list_to_flatten):
def contains_sender_names(sender):
'''Returns a functions to search sender\'s name or it\'s part.
"""Returns a functions to search sender\'s name or it\'s part.
>>> feature = contains_sender_names("Sergey N. Obukhov <xxx@example.com>")
>>> feature("Sergey Obukhov")
@@ -115,7 +111,7 @@ def contains_sender_names(sender):
1
>>> contains_sender_names("<serobnic@mail.ru>")("serobnic")
1
'''
"""
names = '( |$)|'.join(flatten_list([[e, e.capitalize()]
for e in extract_names(sender)]))
names = names or sender
@@ -135,20 +131,25 @@ def extract_names(sender):
>>> extract_names('')
[]
"""
sender = to_unicode(sender, precise=True)
# Remove non-alphabetical characters
sender = "".join([char if char.isalpha() else ' ' for char in sender])
# Remove too short words and words from "black" list i.e.
# words like `ru`, `gmail`, `com`, `org`, etc.
sender = [word for word in sender.split() if len(word) > 1 and
not word in BAD_SENDER_NAMES]
# Remove duplicates
names = list(set(sender))
names = list()
for word in sender.split():
if len(word) < 2:
continue
if word in BAD_SENDER_NAMES:
continue
if word in names:
continue
names.append(word)
return names
def categories_percent(s, categories):
'''Returns category characters percent.
"""Returns category characters percent.
>>> categories_percent("qqq ggg hhh", ["Po"])
0.0
@@ -160,9 +161,8 @@ def categories_percent(s, categories):
50.0
>>> categories_percent("s.s,5s", ["Po", "Nd"])
50.0
'''
"""
count = 0
s = to_unicode(s, precise=True)
for c in s:
if unicodedata.category(c) in categories:
count += 1
@@ -170,19 +170,18 @@ def categories_percent(s, categories):
def punctuation_percent(s):
'''Returns punctuation percent.
"""Returns punctuation percent.
>>> punctuation_percent("qqq ggg hhh")
0.0
>>> punctuation_percent("q,w.")
50.0
'''
"""
return categories_percent(s, ['Po'])
def capitalized_words_percent(s):
'''Returns capitalized words percent.'''
s = to_unicode(s, precise=True)
"""Returns capitalized words percent."""
words = re.split('\s', s)
words = [w for w in words if w.strip()]
words = [w for w in words if len(w) > 2]
@@ -208,20 +207,26 @@ def many_capitalized_words(s):
def has_signature(body, sender):
'''Checks if the body has signature. Returns True or False.'''
"""Checks if the body has signature. Returns True or False."""
non_empty = [line for line in body.splitlines() if line.strip()]
candidate = non_empty[-SIGNATURE_MAX_LINES:]
upvotes = 0
sender_check = contains_sender_names(sender)
for line in candidate:
# we check lines for sender's name, phone, email and url;
# such signature lines are no longer than 27 characters
if len(line.strip()) > 27:
continue
elif contains_sender_names(sender)(line):
if sender_check(line):
return True
elif (binary_regex_search(RE_RELAX_PHONE)(line) +
binary_regex_search(RE_EMAIL)(line) +
binary_regex_search(RE_URL)(line) == 1):
if (binary_regex_search(RE_RELAX_PHONE)(line) +
binary_regex_search(RE_EMAIL)(line) +
binary_regex_search(RE_URL)(line) == 1):
upvotes += 1
if upvotes > 1:
return True
return False
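The hoisted check matters because contains_sender_names() compiles a regex from the sender string; the optimisation builds it once per message instead of once per candidate line. Its behaviour, per the doctests above:

    from talon.signature.learning.helpers import contains_sender_names

    sender_check = contains_sender_names("Sergey N. Obukhov <xxx@example.com>")
    assert sender_check("Sergey Obukhov") == 1
    assert sender_check("Unrelated line") == 0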

talon/utils.py

@@ -1,110 +1,17 @@
# coding:utf-8
from __future__ import annotations
from __future__ import absolute_import
from random import shuffle
import cchardet
import chardet
import html5lib
import regex as re
import six
from html5lib import HTMLParser
from lxml.cssselect import CSSSelector
from lxml.etree import _Element
from lxml.html import html5parser
from talon.constants import RE_DELIMITER
def safe_format(format_string, *args, **kwargs):
"""
Helper: formats string with any combination of bytestrings/unicode
strings without raising exceptions
"""
try:
if not args and not kwargs:
return format_string
else:
return format_string.format(*args, **kwargs)
# catch encoding errors and transform everything into utf-8 string
# before logging:
except (UnicodeEncodeError, UnicodeDecodeError):
format_string = to_utf8(format_string)
args = [to_utf8(p) for p in args]
kwargs = {k: to_utf8(v) for k, v in six.iteritems(kwargs)}
return format_string.format(*args, **kwargs)
# ignore other errors
except:
return u''
def to_unicode(str_or_unicode, precise=False):
"""
Safely returns a unicode version of a given string
>>> utils.to_unicode('привет')
u'привет'
>>> utils.to_unicode(u'привет')
u'привет'
If `precise` flag is True, tries to guess the correct encoding first.
"""
if not isinstance(str_or_unicode, six.text_type):
encoding = quick_detect_encoding(str_or_unicode) if precise else 'utf-8'
return six.text_type(str_or_unicode, encoding, 'replace')
return str_or_unicode
def detect_encoding(string):
"""
Tries to detect the encoding of the passed string.
Defaults to UTF-8.
"""
assert isinstance(string, bytes)
try:
detected = chardet.detect(string)
if detected:
return detected.get('encoding') or 'utf-8'
except Exception as e:
pass
return 'utf-8'
def quick_detect_encoding(string):
"""
Tries to detect the encoding of the passed string.
Uses cchardet. Fallbacks to detect_encoding.
"""
assert isinstance(string, bytes)
try:
detected = cchardet.detect(string)
if detected:
return detected.get('encoding') or detect_encoding(string)
except Exception as e:
pass
return detect_encoding(string)
def to_utf8(str_or_unicode):
"""
Safely returns a UTF-8 version of a given string
>>> utils.to_utf8(u'hi')
'hi'
"""
if not isinstance(str_or_unicode, six.text_type):
return str_or_unicode.encode("utf-8", "ignore")
return str(str_or_unicode)
def random_token(length=7):
vals = ("a b c d e f g h i j k l m n o p q r s t u v w x y z "
"0 1 2 3 4 5 6 7 8 9").split(' ')
shuffle(vals)
return ''.join(vals[:length])
def get_delimiter(msg_body):
def get_delimiter(msg_body: str) -> str:
delimiter = RE_DELIMITER.search(msg_body)
if delimiter:
delimiter = delimiter.group()
@@ -114,7 +21,7 @@ def get_delimiter(msg_body):
return delimiter
def html_tree_to_text(tree):
def html_tree_to_text(tree: _Element) -> str:
for style in CSSSelector('style')(tree):
style.getparent().remove(style)
@@ -131,7 +38,7 @@ def html_tree_to_text(tree):
for el in tree.iter():
el_text = (el.text or '') + (el.tail or '')
if len(el_text) > 1:
if el.tag in _BLOCKTAGS:
if el.tag in _BLOCKTAGS + _HARDBREAKS:
text += "\n"
if el.tag == 'li':
text += " * "
@@ -142,29 +49,26 @@ def html_tree_to_text(tree):
if href:
text += "(%s) " % href
if el.tag in _HARDBREAKS and text and not text.endswith("\n"):
if (el.tag in _HARDBREAKS and text and
not text.endswith("\n") and not el_text):
text += "\n"
retval = _rm_excessive_newlines(text)
return _encode_utf8(retval)
text = _rm_excessive_newlines(text)
return text
def html_to_text(string):
def html_to_text(s: str) -> str | None:
"""
Dead-simple HTML-to-text converter:
>>> html_to_text("one<br>two<br>three")
>>> "one\ntwo\nthree"
<<< "one\ntwo\nthree"
NOTES:
1. the string is expected to contain UTF-8 encoded HTML!
2. returns utf-8 encoded str (not unicode)
3. if html can't be parsed returns None
"""
if isinstance(string, six.text_type):
string = string.encode('utf8')
s = _prepend_utf8_declaration(string)
s = s.replace(b"\n", b"")
s = _prepend_utf8_declaration(s)
s = s.replace("\n", "")
tree = html_fromstring(s)
if tree is None:
@@ -173,74 +77,46 @@ def html_to_text(string):
return html_tree_to_text(tree)
def html_fromstring(s):
def html_fromstring(s: str) -> _Element:
"""Parse html tree from string. Return None if the string can't be parsed.
"""
if isinstance(s, six.text_type):
s = s.encode('utf8')
try:
if html_too_big(s):
return None
return html5parser.fromstring(s, parser=_html5lib_parser())
except Exception:
pass
return html5parser.fromstring(s, parser=_html5lib_parser())
def html_document_fromstring(s):
def html_document_fromstring(s: str) -> _Element:
"""Parse html tree from string. Return None if the string can't be parsed.
"""
if isinstance(s, six.text_type):
s = s.encode('utf8')
try:
if html_too_big(s):
return None
return html5parser.document_fromstring(s, parser=_html5lib_parser())
except Exception:
pass
return html5parser.document_fromstring(s, parser=_html5lib_parser())
def cssselect(expr, tree):
def cssselect(expr: str, tree: str) -> list[_Element]:
return CSSSelector(expr)(tree)
def html_too_big(s):
if isinstance(s, six.text_type):
s = s.encode('utf8')
return s.count(b'<') > _MAX_TAGS_COUNT
def _contains_charset_spec(s):
def _contains_charset_spec(s: str) -> str:
"""Return True if the first 4KB contain charset spec
"""
return s.lower().find(b'html; charset=', 0, 4096) != -1
return s.lower().find('html; charset=', 0, 4096) != -1
def _prepend_utf8_declaration(s):
def _prepend_utf8_declaration(s: str) -> str:
"""Prepend 'utf-8' encoding declaration if the first 4KB don't have any
"""
return s if _contains_charset_spec(s) else _UTF8_DECLARATION + s
def _rm_excessive_newlines(s):
def _rm_excessive_newlines(s: str) -> str:
"""Remove excessive newlines that often happen due to tons of divs
"""
return _RE_EXCESSIVE_NEWLINES.sub("\n\n", s).strip()
def _encode_utf8(s):
"""Encode in 'utf-8' if unicode
"""
return s.encode('utf-8') if isinstance(s, six.text_type) else s
def _html5lib_parser():
def _html5lib_parser() -> HTMLParser:
"""
html5lib is a pure-python library that conforms to the WHATWG HTML spec
and is not vulnerable to certain attacks common for XML libraries
"""
return html5lib.HTMLParser(
return HTMLParser(
# build lxml tree
html5lib.treebuilders.getTreeBuilder("lxml"),
# remove namespace value from inside lxml.html.html5parser element tag
@@ -250,14 +126,10 @@ def _html5lib_parser():
)
_UTF8_DECLARATION = (b'<meta http-equiv="Content-Type" content="text/html;'
b'charset=utf-8">')
_UTF8_DECLARATION = ('<meta http-equiv="Content-Type" content="text/html;'
'charset=utf-8">')
_BLOCKTAGS = ['div', 'p', 'ul', 'li', 'h1', 'h2', 'h3']
_HARDBREAKS = ['br', 'hr', 'tr']
_RE_EXCESSIVE_NEWLINES = re.compile("\n{2,10}")
# extensive research shows that exceeding this limit
# might lead to excessive processing time
_MAX_TAGS_COUNT = 419
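Downstream, the text helpers now take and return str; sketched against the updated tests:

    from talon import utils as u

    assert u.html_to_text("one<br>two<br>three") == "one\ntwo\nthree"
    assert u.html_to_text(u"<b>привет!</b>") == u"привет!"  # no .decode() needed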

test-requirements.txt Normal file

@@ -0,0 +1,3 @@
coverage
mock
nose>=1.2.1

tests/html_quotations_test.py

@@ -4,13 +4,17 @@ from __future__ import absolute_import
# noinspection PyUnresolvedReferences
import re
from unittest.mock import Mock, patch
from nose.tools import assert_false, assert_true, eq_, ok_
from tests.fixtures import (OLK_SRC_BODY_SECTION,
REPLY_QUOTATIONS_SHARE_BLOCK,
REPLY_SEPARATED_BY_HR)
from talon import quotations, utils as u
from . import *
from .fixtures import *
RE_WHITESPACE = re.compile("\s")
RE_DOUBLE_WHITESPACE = re.compile("\s")
RE_WHITESPACE = re.compile(r"\s")
RE_DOUBLE_WHITESPACE = re.compile(r"\s")
def test_quotation_splitter_inside_blockquote():
@@ -165,7 +169,7 @@ def test_unicode_in_reply():
<blockquote>
Quote
</blockquote>""".encode("utf-8")
</blockquote>"""
eq_("<html><head></head><body>Reply&#160;&#160;Text<br><div><br></div>"
"</body></html>",
@@ -313,7 +317,6 @@ def extract_reply_and_check(filename):
msg_body = f.read()
reply = quotations.extract_from_html(msg_body)
plain_reply = u.html_to_text(reply)
plain_reply = plain_reply.decode('utf8')
eq_(RE_WHITESPACE.sub('', "Hi. I am fine.\n\nThanks,\nAlex"),
RE_WHITESPACE.sub('', plain_reply))
@@ -390,18 +393,6 @@ def test_gmail_forwarded_msg():
eq_(RE_WHITESPACE.sub('', msg_body), RE_WHITESPACE.sub('', extracted))
@patch.object(u, '_MAX_TAGS_COUNT', 4)
def test_too_large_html():
msg_body = 'Reply' \
'<div class="gmail_quote">' \
'<div class="gmail_quote">On 11-Apr-2011, at 6:54 PM, Bob &lt;bob@example.com&gt; wrote:' \
'<div>Test</div>' \
'</div>' \
'</div>'
eq_(RE_WHITESPACE.sub('', msg_body),
RE_WHITESPACE.sub('', quotations.extract_from_html(msg_body)))
def test_readable_html_empty():
msg_body = """
<blockquote>
@@ -424,3 +415,23 @@ def test_readable_html_empty():
def test_bad_html():
bad_html = "<html></html>"
eq_(bad_html, quotations.extract_from_html(bad_html))
def test_remove_namespaces():
msg_body = """
<html xmlns:o="urn:schemas-microsoft-com:office:office" xmlns="http://www.w3.org/TR/REC-html40">
<body>
<o:p>Dear Sir,</o:p>
<o:p>Thank you for the email.</o:p>
<blockquote>thing</blockquote>
</body>
</html>
"""
rendered = quotations.extract_from_html(msg_body)
assert_true("<p>" in rendered)
assert_true("xmlns" in rendered)
assert_true("<o:p>" not in rendered)
assert_true("<xmlns:o>" not in rendered)

tests/text_quotations_test.py

@@ -453,7 +453,8 @@ def test_link_closed_with_quotation_marker_on_new_line():
msg_body = '''8.45am-1pm
From: somebody@example.com
Date: Wed, 16 May 2012 00:15:02 -0600
<http://email.example.com/c/dHJhY2tpbmdfY29kZT1mMDdjYzBmNzM1ZjYzMGIxNT
> <bob@example.com <mailto:bob@example.com> >
@@ -494,7 +495,9 @@ def test_from_block_starts_with_date():
msg_body = """Blah
Date: Wed, 16 May 2012 00:15:02 -0600
To: klizhentas@example.com"""
To: klizhentas@example.com
"""
eq_('Blah', quotations.extract_from_plain(msg_body))
@@ -564,11 +567,12 @@ def test_mark_message_lines():
# next line should be marked as splitter
'_____________',
'From: foo@bar.com',
'Date: Wed, 16 May 2012 00:15:02 -0600',
'',
'> Hi',
'',
'Signature']
eq_('tessemet', quotations.mark_message_lines(lines))
eq_('tesssemet', quotations.mark_message_lines(lines))
lines = ['Just testing the email reply',
'',
@@ -807,7 +811,7 @@ def test_split_email():
>
>
"""
expected_markers = "stttttsttttetesetesmmmmmmssmmmmmmsmmmmmmmm"
expected_markers = "stttttsttttetesetesmmmmmmsmmmmmmmmmmmmmmmm"
markers = quotations.split_emails(msg)
eq_(markers, expected_markers)
@@ -822,4 +826,16 @@ The user experience was unparallelled. Please continue production. I'm sending p
that this line is intact."""
parsed = quotations.extract_from_plain(msg_body)
eq_(msg_body, parsed.decode('utf8'))
eq_(msg_body, parsed)
def test_appointment_2():
msg_body = """Invitation for an interview:
Date: Wednesday 3, October 2011
Time: 7 : 00am
Address: 130 Fox St
Please bring in your ID."""
parsed = quotations.extract_from_plain(msg_body)
eq_(msg_body, parsed)

tests/utils_test.py

@@ -2,9 +2,6 @@
from __future__ import absolute_import
import cchardet
import six
from talon import utils as u
from . import *
@@ -15,58 +12,6 @@ def test_get_delimiter():
eq_('\n', u.get_delimiter('abc'))
def test_unicode():
eq_(u'hi', u.to_unicode('hi'))
eq_(type(u.to_unicode('hi')), six.text_type)
eq_(type(u.to_unicode(u'hi')), six.text_type)
eq_(type(u.to_unicode('привет')), six.text_type)
eq_(type(u.to_unicode(u'привет')), six.text_type)
eq_(u"привет", u.to_unicode('привет'))
eq_(u"привет", u.to_unicode(u'привет'))
# some latin1 stuff
eq_(u"Versión", u.to_unicode(u'Versi\xf3n'.encode('iso-8859-2'), precise=True))
def test_detect_encoding():
eq_('ascii', u.detect_encoding(b'qwe').lower())
ok_(u.detect_encoding(
u'Versi\xf3n'.encode('iso-8859-2')).lower() in [
'iso-8859-1', 'iso-8859-2'])
eq_('utf-8', u.detect_encoding(u'привет'.encode('utf8')).lower())
# fallback to utf-8
with patch.object(u.chardet, 'detect') as detect:
detect.side_effect = Exception
eq_('utf-8', u.detect_encoding('qwe'.encode('utf8')).lower())
def test_quick_detect_encoding():
eq_('ascii', u.quick_detect_encoding(b'qwe').lower())
ok_(u.quick_detect_encoding(
u'Versi\xf3n'.encode('windows-1252')).lower() in [
'windows-1252', 'windows-1250'])
eq_('utf-8', u.quick_detect_encoding(u'привет'.encode('utf8')).lower())
@patch.object(cchardet, 'detect')
@patch.object(u, 'detect_encoding')
def test_quick_detect_encoding_edge_cases(detect_encoding, cchardet_detect):
cchardet_detect.return_value = {'encoding': 'ascii'}
eq_('ascii', u.quick_detect_encoding(b"qwe"))
cchardet_detect.assert_called_once_with(b"qwe")
# fallback to detect_encoding
cchardet_detect.return_value = {}
detect_encoding.return_value = 'utf-8'
eq_('utf-8', u.quick_detect_encoding(b"qwe"))
# exception
detect_encoding.reset_mock()
cchardet_detect.side_effect = Exception()
detect_encoding.return_value = 'utf-8'
eq_('utf-8', u.quick_detect_encoding(b"qwe"))
ok_(detect_encoding.called)
def test_html_to_text():
html = """<body>
<p>Hello world!</p>
@@ -80,11 +25,11 @@ Haha
</p>
</body>"""
text = u.html_to_text(html)
eq_(b"Hello world! \n\n * One! \n * Two \nHaha", text)
eq_(u"привет!", u.html_to_text("<b>привет!</b>").decode('utf8'))
eq_("Hello world! \n\n * One! \n * Two \nHaha", text)
eq_(u"привет!", u.html_to_text("<b>привет!</b>"))
html = '<body><br/><br/>Hi</body>'
eq_(b'Hi', u.html_to_text(html))
eq_('Hi', u.html_to_text(html))
html = """Hi
<style type="text/css">
@@ -104,60 +49,23 @@ font: 13px 'Lucida Grande', Arial, sans-serif;
}
</style>"""
eq_(b'Hi', u.html_to_text(html))
eq_('Hi', u.html_to_text(html))
html = """<div>
<!-- COMMENT 1 -->
<span>TEXT 1</span>
<p>TEXT 2 <!-- COMMENT 2 --></p>
</div>"""
eq_(b'TEXT 1 \nTEXT 2', u.html_to_text(html))
eq_('TEXT 1 \nTEXT 2', u.html_to_text(html))
def test_comment_no_parent():
s = b'<!-- COMMENT 1 --> no comment'
s = '<!-- COMMENT 1 --> no comment'
d = u.html_document_fromstring(s)
eq_(b"no comment", u.html_tree_to_text(d))
@patch.object(u.html5parser, 'fromstring', Mock(side_effect=Exception()))
def test_html_fromstring_exception():
eq_(None, u.html_fromstring("<html></html>"))
@patch.object(u, 'html_too_big', Mock())
@patch.object(u.html5parser, 'fromstring')
def test_html_fromstring_too_big(fromstring):
eq_(None, u.html_fromstring("<html></html>"))
assert_false(fromstring.called)
@patch.object(u.html5parser, 'document_fromstring')
def test_html_document_fromstring_exception(document_fromstring):
document_fromstring.side_effect = Exception()
eq_(None, u.html_document_fromstring("<html></html>"))
@patch.object(u, 'html_too_big', Mock())
@patch.object(u.html5parser, 'document_fromstring')
def test_html_document_fromstring_too_big(document_fromstring):
eq_(None, u.html_document_fromstring("<html></html>"))
assert_false(document_fromstring.called)
eq_("no comment", u.html_tree_to_text(d))
@patch.object(u, 'html_fromstring', Mock(return_value=None))
def test_bad_html_to_text():
bad_html = "one<br>two<br>three"
eq_(None, u.html_to_text(bad_html))
@patch.object(u, '_MAX_TAGS_COUNT', 3)
def test_html_too_big():
eq_(False, u.html_too_big("<div></div>"))
eq_(True, u.html_too_big("<div><span>Hi</span></div>"))
@patch.object(u, '_MAX_TAGS_COUNT', 3)
def test_html_to_text():
eq_(b"Hello", u.html_to_text("<div>Hello</div>"))
eq_(None, u.html_to_text("<div><span>Hi</span></div>"))