97 Commits

Author SHA1 Message Date
Sergey Obukhov
8441bc7328 Merge pull request #106 from mailgun/sergey/html5lib
use html5lib to parse html
2016-08-19 15:58:07 -07:00
Sergey Obukhov
37c95ff97b fallback untouched html if we can not parse html tree 2016-08-19 11:38:12 -07:00
Sergey Obukhov
5b1ca33c57 fix cssselect 2016-08-16 17:11:41 -07:00
Sergey Obukhov
ec8e09b34e fix 2016-08-15 20:31:04 -07:00
Sergey Obukhov
bcf97eccfa use html5lib to parse html 2016-08-15 19:36:21 -07:00
Sergey Obukhov
f53b5cc7a6 Merge pull request #105 from mailgun/sergey/fromstring
html with comment that has no parent crashes html_tree_to_text
2016-08-15 13:40:37 -07:00
Sergey Obukhov
27adde7aa7 bump version 2016-08-15 13:21:10 -07:00
Sergey Obukhov
a9719833e0 html with comment that has no parent crashes html_tree_to_text 2016-08-12 17:40:12 -07:00
Sergey Obukhov
7bf37090ca Merge pull request #101 from mailgun/sergey/empty-html
if html stripped off quotations does not have readable text fallback …
2016-08-12 12:18:50 -07:00
Sergey Obukhov
44fcef7123 bump version 2016-08-11 23:59:18 -07:00
Sergey Obukhov
69a44b10a1 Merge branch 'master' into sergey/empty-html 2016-08-11 23:58:11 -07:00
Sergey Obukhov
b085e3d049 Merge pull request #104 from mailgun/sergey/spaces
fixes mailgun/talon#103 keep newlines when parsing html quotations
2016-08-11 23:56:26 -07:00
Sergey Obukhov
4b953bcddc fixes mailgun/talon#103 keep newlines when parsing html quotations 2016-08-11 20:17:37 -07:00
Sergey Obukhov
315eaa7080 if html stripped off quotations does not have readable text fallback to unparsed html 2016-08-11 19:55:23 -07:00
Sergey Obukhov
5a9bc967f1 Merge pull request #100 from mailgun/sergey/restrict
do not parse html quotations if html is longer then certain threshold
2016-08-11 16:08:03 -07:00
Sergey Obukhov
a0d7236d0b bump version and add a comment 2016-08-11 15:49:09 -07:00
Sergey Obukhov
21e9a31ffe add test 2016-08-09 17:15:49 -07:00
Sergey Obukhov
4ee46c0a97 do not parse html quotations if html is longer then certain threshold 2016-08-09 17:08:58 -07:00
Sergey Obukhov
10d9a930f9 Merge pull request #99 from mailgun/sergey/capitalized
consider word capitilized only if it is camel case - not all upper case
2016-07-20 16:47:12 -07:00
Sergey Obukhov
a21ccdb21b consider word capitilized only if it is camel case - not all upper case 2016-07-19 17:37:36 -07:00
Sergey Obukhov
7cdd7a8f35 Merge pull request #98 from mailgun/sergey/1.2.11
version bump
2016-07-19 16:22:24 -07:00
Sergey Obukhov
01e03a47e0 version bump 2016-07-19 15:51:46 -07:00
Sergey Obukhov
1b9a71551a Merge pull request #97 from umairwaheed/strip-talon
Strip down Talon
2016-07-19 15:46:56 -07:00
Umair Khan
911efd1db4 Move encoding detection inside if condition. 2016-07-19 09:44:40 +05:00
Umair Khan
e61f0a68c4 Add six library to setup.py 2016-07-19 09:40:03 +05:00
Umair Khan
cefbcffd59 Make tests/text_quotations_test.py compatible with Python 3. 2016-07-13 14:45:26 +05:00
Umair Khan
622a98d6d5 Make utils compatible with Python 3. 2016-07-13 13:00:24 +05:00
Umair Khan
7901f5d1dc Convert msg_body into unicode in preprocess. 2016-07-13 11:18:10 +05:00
Umair Khan
555c34d7a8 Make sure html_to_text processes bytes 2016-07-13 11:18:10 +05:00
Umair Khan
dcc0d1de20 Convert msg_body to bytes in extract_from_html 2016-07-13 11:18:06 +05:00
Umair Khan
7bdf4d622b Only encode if str 2016-07-13 08:01:47 +05:00
Umair Khan
4a7207b0d0 Only convert to unicode if str 2016-07-13 08:01:47 +05:00
Umair Khan
ad9c2ca0e8 Upgrade quotations.py 2016-07-13 08:01:44 +05:00
Umair Khan
da998ddb60 Run modernizer on the code. 2016-07-12 17:25:46 +05:00
Umair Khan
07f68815df Allow installation of ML free version.
Add an option to the install script, `--no-ml`, that when given will
install Talon without ML support.

Fixes #96
2016-07-12 15:08:53 +05:00
Sergey Obukhov
35645f9ade Merge pull request #95 from mailgun/sergey/forge
open-sourcing email dataset
2016-06-10 15:45:29 -07:00
Sergey Obukhov
7c3d91301c open-sourcing email dataset 2016-06-10 14:10:53 -07:00
Sergey Obukhov
5bcf7403ad Merge pull request #94 from mailgun/obukhov-sergey-patch-1
Update README.rst
2016-05-31 20:16:13 -07:00
Sergey Obukhov
2d6c092b65 bump version 2016-05-31 18:42:47 -07:00
Sergey Obukhov
6d0689cad6 Update README.rst 2016-05-31 18:39:07 -07:00
Sergey Obukhov
3f80e93ee0 Merge pull request #93 from mailgun/sergey/version-bump
bump
2016-05-31 18:15:28 -07:00
Sergey Obukhov
1b18abab1d bump 2016-05-31 16:53:41 -07:00
Sergey Obukhov
03dd5af5ab Merge pull request #91 from KevinCathcart/patch-1
Support outlook 2007/2010 running in en-us locale
2016-05-31 16:50:35 -07:00
Sergey Obukhov
dfba82b07c Merge pull request #92 from mailgun/obukhov-sergey-kuntzcamera
Update README.rst
2016-05-31 15:42:34 -07:00
Sergey Obukhov
08ca02c87f Update README.rst 2016-05-31 15:14:32 -07:00
Kevin Cathcart
b61f4ec095 Support outlook 2007/2010 running in en-us locale
My American English copy of outlook 2007 is using inches in the reply separator rather than centimeters. The separator is otherwise Identical. What a strange thing to localize. I'm guessing it uses whatever it thinks the preferred units for page margins are.
2016-05-23 17:23:53 -04:00
Sergey Obukhov
9dbe6a494b Merge pull request #90 from mailgun/sergey/89
fixes mailgun/talon#89
2016-05-17 16:01:56 -07:00
Sergey Obukhov
44e70939d6 fixes mailgun/talon#89 2016-05-17 15:31:01 -07:00
Sergey Obukhov
ab6066eafa Merge pull request #87 from mailgun/sergey/1.2.6
bump up version
2016-04-07 17:54:12 -07:00
Sergey Obukhov
42258cdd36 bump up version 2016-04-07 17:51:48 -07:00
Sergey Obukhov
d3de9e6893 Merge pull request #86 from dougkeen/master
Fix #85 (exception when stripping gmail quotes)
2016-04-07 17:47:38 -07:00
Doug Keen
333beb94af Fix #85 (exception when stripping gmail quotes) 2016-04-04 14:22:50 -07:00
Sergey Obukhov
f3c0942c49 Merge pull request #80 from mailgun/sergey/12
fixes mailgun/talon#12
2016-03-04 13:33:46 -08:00
Sergey Obukhov
02adf53ab9 fixes mailgun/talon#12 2016-03-04 13:14:50 -08:00
Sergey Obukhov
3497b5cab4 Merge pull request #79 from mailgun/sergey/version
bump version
2016-02-29 15:13:51 -08:00
Sergey Obukhov
9c17dca17c bump version 2016-02-29 14:50:52 -08:00
Sergey Obukhov
de342d3177 Merge pull request #78 from defkev/master
Added Zimbra HTML quotation extraction
2016-02-29 14:14:09 -08:00
defkev
743b452daf Added Zimbra HTML quotation extraction 2016-02-21 16:56:52 +01:00
Sergey Obukhov
c762f3c337 Merge pull request #77 from mailgun/sergey/fix-gmail-fwd
fixes mailgun/talon#18
2016-02-19 19:08:37 -08:00
Sergey Obukhov
31803d41bc fixes mailgun/talon#18 2016-02-19 19:07:10 -08:00
Sergey Obukhov
2ecd9779fc bump up version 2016-02-19 18:32:07 -08:00
Sergey Obukhov
5a7047233e Merge pull request #76 from mailgun/sergey/fix-date-splitter
fixes mailgun/talon#19
2016-02-19 18:28:23 -08:00
Sergey Obukhov
999e9c3725 fixes mailgun/talon#19 2016-02-19 17:53:52 -08:00
Sergey Obukhov
f6940fe878 bump up version 2015-12-18 19:15:58 -08:00
Sergey Obukhov
ce65ff8fc8 Merge pull request #71 from clara-labs/ms-2010-issue
First pass at handling issue with ms outlook 2010 with unenclosed quo…
2015-12-18 19:14:13 -08:00
Sergey Obukhov
eed6784f25 Merge pull request #70 from mailgun/sergey/gmail
fixes mailgun/talon#38 mailgun/talon#20
2015-12-18 19:00:13 -08:00
Sergey Obukhov
3d9ae356ea add more tests, make standard reply tests more relaxed 2015-12-18 18:56:41 -08:00
Carlos Correa
f688d074b5 First pass at handling issue with ms outlook 2010 with unenclosed quoted text. 2015-12-10 19:16:13 -08:00
Sergey Obukhov
41457d8fbd fixes mailgun/talon#38 mailgun/talon#20 2015-12-05 00:37:02 -08:00
Sergey Obukhov
2c416ecc0e Merge pull request #62 from tgwizard/better-support-for-scandinavian-languages
Add better support for Scandinavian languages
2015-10-14 21:48:10 -07:00
Sergey Obukhov
3ab33c557b Merge pull request #65 from mailgun/sergey/cssselect
add cssselect to dependencies
2015-10-14 20:34:02 -07:00
Sergey Obukhov
8db05f4950 add cssselect to dependencies 2015-10-14 20:31:26 -07:00
Sergey Obukhov
3d5bc82a03 Merge pull request #61 from tgwizard/fix-for-apple-mail
Add fix for Apple Mail email format
2015-10-14 12:38:06 -07:00
Adam Renberg
14e3a0d80b Add better support for Scandinavian languages
This is a port of https://github.com/tictail/claw/pull/6 by @simonflore.
2015-09-21 21:42:01 +02:00
Adam Renberg
fcd9e2716a Add fix for Apple Mail email format
Where they have an initial > on the "date line".
2015-09-21 21:33:57 +02:00
Sergey Obukhov
d62d633215 bump up version 2015-09-21 09:55:51 -07:00
Sergey Obukhov
3b0c9273c1 Merge pull request #60 from mailgun/sergey/26
fixes mailgun/talon#26
2015-09-21 09:54:35 -07:00
Sergey Obukhov
e4c1c11845 remove print 2015-09-21 09:52:47 -07:00
Sergey Obukhov
ae508fe0e5 fixes mailgun/talon#26 2015-09-21 09:51:26 -07:00
Sergey Obukhov
2cb9b5399c bump up version 2015-09-18 05:23:29 -07:00
Sergey Obukhov
134c47f515 Merge pull request #59 from mailgun/sergey/43
fixes mailgun/talon#43
2015-09-18 05:20:51 -07:00
Sergey Obukhov
d328c9d128 fixes mailgun/talon#43 2015-09-18 05:19:59 -07:00
Sergey Obukhov
77b62b0fef Merge pull request #58 from mailgun/sergey/52
fixes mailgun/talon#52
2015-09-18 04:48:50 -07:00
Sergey Obukhov
ad09b18f3f fixes mailgun/talon#52 2015-09-18 04:47:23 -07:00
Sergey Obukhov
b5af9c03a5 bump up version 2015-09-11 10:42:26 -07:00
Sergey Obukhov
176c7e7532 Merge pull request #57 from mailgun/sergey/to_unicode
use precise encoding when converting to unicode
2015-09-11 10:40:52 -07:00
Sergey Obukhov
15976888a0 use precise encoding when converting to unicode 2015-09-11 10:38:28 -07:00
Sergey Obukhov
9bee502903 bump up version 2015-09-11 06:27:12 -07:00
Sergey Obukhov
e3cb8dc3e6 Merge pull request #56 from mailgun/sergey/1000+German+NL
process first 1000 lines for long messages, support for German and Dutch
2015-09-11 06:20:34 -07:00
Sergey Obukhov
385285e5de process first 1000 lines for long messages, support for German and Dutch 2015-09-11 06:17:14 -07:00
Sergey Obukhov
127771dac9 bump up version 2015-09-11 04:51:39 -07:00
Sergey Obukhov
cc98befba5 Merge pull request #50 from Easy-D/preserve-regular-blockquotes
Preserve regular blockquotes
2015-09-11 04:49:36 -07:00
Sergey Obukhov
567549cba4 bump up talon version 2015-09-10 10:47:16 -07:00
Sergey Obukhov
76c4f49be8 Merge pull request #55 from mailgun/sergey/lxml
unpin lxml version
2015-09-10 10:44:59 -07:00
Sergey Obukhov
d9d89dc250 unpin lxml version 2015-09-10 10:44:05 -07:00
Easy-D
390b0a6dc9 preserve regular blockquotes 2015-07-16 21:31:41 +02:00
Easy-D
ed6b861a47 add failing test that shows how regular blockquotes are removed 2015-07-16 21:24:49 +02:00
33 changed files with 3418 additions and 2820 deletions

View File

@@ -1,9 +1,7 @@
-recursive-include tests *
-recursive-include talon *
 recursive-exclude tests *.pyc *~
 recursive-exclude talon *.pyc *~
 include train.data
 include classifier
 include LICENSE
 include MANIFEST.in
 include README.rst

View File

@@ -95,7 +95,7 @@ classifiers. The core of machine learning algorithm lays in
 apply to a message (``featurespace.py``), how data sets are built
 (``dataset.py``), classifiers interface (``classifier.py``).
 
-The data used for training is taken from our personal email
+Currently the data used for training is taken from our personal email
 conversations and from `ENRON`_ dataset. As a result of applying our set
 of features to the dataset we provide files ``classifier`` and
 ``train.data`` that dont have any personal information but could be
@@ -116,8 +116,19 @@ or
     from talon.signature.learning.classifier import train, init
     train(init(), EXTRACTOR_DATA, EXTRACTOR_FILENAME)
 
+Open-source Dataset
+-------------------
+
+Recently we started a `forge`_ project to create an open-source, annotated dataset of raw emails. In the project we
+used a subset of `ENRON`_ data, cleansed of private, health and financial information by `EDRM`_. At the moment over 190
+emails are annotated. Any contribution and collaboration on the project are welcome. Once the dataset is ready we plan to
+start using it for talon.
+
 .. _scikit-learn: http://scikit-learn.org
 .. _ENRON: https://www.cs.cmu.edu/~enron/
+.. _EDRM: http://www.edrm.net/resources/data-sets/edrm-enron-email-data-set
+.. _forge: https://github.com/mailgun/forge
 
 Research
 --------

View File

@@ -1,8 +1,35 @@
+from __future__ import absolute_import
 from setuptools import setup, find_packages
+from setuptools.command.install import install
+
+
+class InstallCommand(install):
+    user_options = install.user_options + [
+        ('no-ml', None, "Don't install without Machine Learning modules."),
+    ]
+
+    boolean_options = install.boolean_options + ['no-ml']
+
+    def initialize_options(self):
+        install.initialize_options(self)
+        self.no_ml = None
+
+    def finalize_options(self):
+        install.finalize_options(self)
+        if self.no_ml:
+            dist = self.distribution
+            dist.packages=find_packages(exclude=[
+                'tests',
+                'tests.*',
+                'talon.signature',
+                'talon.signature.*',
+            ])
+            for not_required in ['numpy', 'scipy', 'scikit-learn==0.16.1']:
+                dist.install_requires.remove(not_required)
+
+
 setup(name='talon',
-      version='1.0.3',
+      version='1.3.0',
       description=("Mailgun library "
                    "to extract message quotations and signatures."),
       long_description=open("README.rst").read(),
@@ -10,16 +37,23 @@ setup(name='talon',
       author_email='admin@mailgunhq.com',
       url='https://github.com/mailgun/talon',
       license='APACHE2',
-      packages=find_packages(exclude=['tests']),
+      cmdclass={
+          'install': InstallCommand,
+      },
+      packages=find_packages(exclude=['tests', 'tests.*']),
       include_package_data=True,
       zip_safe=True,
       install_requires=[
-          "lxml==2.3.3",
+          "lxml>=2.3.3",
          "regex>=1",
-          "html2text",
          "numpy",
          "scipy",
          "scikit-learn==0.16.1",  # pickled versions of classifier, else rebuild
+          'chardet>=1.0.1',
+          'cchardet>=0.3.5',
+          'cssselect',
+          'six>=1.10.0',
+          'html5lib'
       ],
       tests_require=[
           "mock",

View File

@@ -1,7 +1,13 @@
+from __future__ import absolute_import
 from talon.quotations import register_xpath_extensions
-from talon import signature
+try:
+    from talon import signature
+    ML_ENABLED = True
+except ImportError:
+    ML_ENABLED = False
 
 
 def init():
     register_xpath_extensions()
-    signature.initialize()
+    if ML_ENABLED:
+        signature.initialize()
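The try/except above is the optional-dependency guard that makes a `--no-ml` install importable even though `talon.signature` and its scipy stack are absent. A minimal stand-alone sketch of the same pattern; the module name `ml_backend_stub` is hypothetical and deliberately nonexistent, so the fallback branch runs:

```python
# Guarded import: if the optional ML-backed module is missing, remember that
# and skip its initialization instead of failing at import time.
try:
    import ml_backend_stub  # hypothetical stand-in for `from talon import signature`
    ML_ENABLED = True
except ImportError:
    ML_ENABLED = False


def init():
    # Only touch the ML machinery when its dependencies are installed.
    if ML_ENABLED:
        ml_backend_stub.initialize()
```

Calling `init()` is then always safe, whichever way the package was installed.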

View File

@@ -1,3 +1,4 @@
+from __future__ import absolute_import
 import regex as re

View File

@@ -3,8 +3,10 @@ The module's functions operate on message bodies trying to extract original
 messages (without quoted messages) from html
 """
+from __future__ import absolute_import
 import regex as re
 
+from talon.utils import cssselect
+
 CHECKPOINT_PREFIX = '#!%!'
 CHECKPOINT_SUFFIX = '!%!#'
@@ -12,6 +14,7 @@ CHECKPOINT_PATTERN = re.compile(CHECKPOINT_PREFIX + '\d+' + CHECKPOINT_SUFFIX)
 # HTML quote indicators (tag ids)
 QUOTE_IDS = ['OLK_SRC_BODY_SECTION']
+RE_FWD = re.compile("^[-]+[ ]*Forwarded message[ ]*[-]+$", re.I | re.M)
 
 
 def add_checkpoint(html_note, counter):
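The new `RE_FWD` constant recognizes a forwarded-message header so forwards are not cut out as if they were quotations. A quick check with the stdlib `re` module (talon itself compiles the pattern with the third-party `regex` package, but the syntax is compatible):

```python
import re

# Forwarded-message header matcher as added above.
RE_FWD = re.compile("^[-]+[ ]*Forwarded message[ ]*[-]+$", re.I | re.M)

is_fwd = bool(RE_FWD.match('---------- Forwarded message ----------'))
is_reply = bool(RE_FWD.match('On Mon, Oct 17, 2016 Bob wrote:'))
```

Only the dash-wrapped forward header matches; ordinary reply splitters do not.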
@@ -76,8 +79,8 @@ def delete_quotation_tags(html_note, counter, quotation_checkpoints):
 def cut_gmail_quote(html_message):
     ''' Cuts the outermost block element with class gmail_quote. '''
-    gmail_quote = html_message.cssselect('.gmail_quote')
-    if gmail_quote:
+    gmail_quote = cssselect('div.gmail_quote', html_message)
+    if gmail_quote and (gmail_quote[0].text is None or not RE_FWD.match(gmail_quote[0].text)):
         gmail_quote[0].getparent().remove(gmail_quote[0])
         return True
@@ -85,9 +88,12 @@ def cut_gmail_quote(html_message):
 def cut_microsoft_quote(html_message):
     ''' Cuts splitter block and all following blocks. '''
     splitter = html_message.xpath(
-        #outlook 2007, 2010
+        #outlook 2007, 2010 (international)
         "//div[@style='border:none;border-top:solid #B5C4DF 1.0pt;"
         "padding:3.0pt 0cm 0cm 0cm']|"
+        #outlook 2007, 2010 (american)
+        "//div[@style='border:none;border-top:solid #B5C4DF 1.0pt;"
+        "padding:3.0pt 0in 0in 0in']|"
         #windows mail
         "//div[@style='padding-top: 5px; "
         "border-top-color: rgb(229, 229, 229); "
@@ -130,7 +136,7 @@ def cut_microsoft_quote(html_message):
 def cut_by_id(html_message):
     found = False
     for quote_id in QUOTE_IDS:
-        quote = html_message.cssselect('#{}'.format(quote_id))
+        quote = cssselect('#{}'.format(quote_id), html_message)
         if quote:
             found = True
             quote[0].getparent().remove(quote[0])
@@ -138,9 +144,14 @@ def cut_by_id(html_message):
 def cut_blockquote(html_message):
-    ''' Cuts blockquote with wrapping elements. '''
-    quote = html_message.find('.//blockquote')
-    if quote is not None:
+    ''' Cuts the last non-nested blockquote with wrapping elements.'''
+    quote = html_message.xpath(
+        '(.//blockquote)'
+        '[not(@class="gmail_quote") and not(ancestor::blockquote)]'
+        '[last()]')
+
+    if quote:
+        quote = quote[0]
         quote.getparent().remove(quote)
         return True
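The cut_blockquote XPath deserves a closer look: it selects the last blockquote that is neither Gmail's wrapper nor nested inside another blockquote, so only the outermost, most recent quote is cut. A small lxml sketch with made-up sample markup:

```python
from lxml import html

tree = html.fromstring(
    '<div>'
    '<blockquote>outer<blockquote>nested</blockquote></blockquote>'
    '<blockquote class="gmail_quote">gmail wrapper</blockquote>'
    '<blockquote>latest quote</blockquote>'
    '</div>')

# Same expression as in cut_blockquote: last non-nested, non-gmail quote.
quote = tree.xpath(
    '(.//blockquote)'
    '[not(@class="gmail_quote") and not(ancestor::blockquote)]'
    '[last()]')

selected = quote[0].text
quote[0].getparent().remove(quote[0])
remaining = len(tree.xpath('.//blockquote'))
```

The nested blockquote and the gmail_quote wrapper survive; only the trailing top-level quote is removed.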
@@ -154,21 +165,58 @@ def cut_from_block(html_message):
     if block:
         block = block[-1]
+        parent_div = None
         while block.getparent() is not None:
             if block.tag == 'div':
-                block.getparent().remove(block)
+                parent_div = block
+                break
+            block = block.getparent()
+
+        if parent_div is not None:
+            maybe_body = parent_div.getparent()
+            # In cases where removing this enclosing div will remove all
+            # content, we should assume the quote is not enclosed in a tag.
+            parent_div_is_all_content = (
+                maybe_body is not None and maybe_body.tag == 'body' and
+                len(maybe_body.getchildren()) == 1)
+
+            if not parent_div_is_all_content:
+                parent = block.getparent()
+                next_sibling = block.getnext()
+                # remove all tags after found From block
+                # (From block and quoted message are in separate divs)
+                while next_sibling is not None:
+                    parent.remove(block)
+                    block = next_sibling
+                    next_sibling = block.getnext()
+
+                # remove the last sibling (or the
+                # From block if no siblings)
+                if block is not None:
+                    parent.remove(block)
+
                 return True
             else:
-                block = block.getparent()
-    else:
+                return False
+
     # handle the case when From: block goes right after e.g. <hr>
     # and not enclosed in some tag
     block = html_message.xpath(
         ("//*[starts-with(mg:tail(), 'From:')]|"
          "//*[starts-with(mg:tail(), 'Date:')]"))
     if block:
         block = block[0]
-        while(block.getnext() is not None):
-            block.getparent().remove(block.getnext())
-        block.getparent().remove(block)
-        return True
+
+        if RE_FWD.match(block.getparent().text or ''):
+            return False
+
+        while(block.getnext() is not None):
+            block.getparent().remove(block.getnext())
+        block.getparent().remove(block)
+        return True
+
+
+def cut_zimbra_quote(html_message):
+    zDivider = html_message.xpath('//hr[@data-marker="__DIVIDER__"]')
+    if zDivider:
+        zDivider[0].getparent().remove(zDivider[0])
+        return True

View File

@@ -5,15 +5,18 @@ The module's functions operate on message bodies trying to extract
 original messages (without quoted messages)
 """
+from __future__ import absolute_import
 import regex as re
 import logging
 from copy import deepcopy
 
 from lxml import html, etree
-import html2text
 
-from talon.utils import get_delimiter
+from talon.utils import (get_delimiter, html_tree_to_text,
+                         html_document_fromstring)
 from talon import html_quotations
+from six.moves import range
+import six
 
 log = logging.getLogger(__name__)
@@ -22,7 +25,7 @@ log = logging.getLogger(__name__)
 RE_FWD = re.compile("^[-]+[ ]*Forwarded message[ ]*[-]+$", re.I | re.M)
 
 RE_ON_DATE_SMB_WROTE = re.compile(
-    u'(-*[ ]?({0})[ ].*({1})(.*\n){{0,2}}.*({2}):?-*)'.format(
+    u'(-*[>]?[ ]?({0})[ ].*({1})(.*\n){{0,2}}.*({2}):?-*)'.format(
         # Beginning of the line
         u'|'.join((
             # English
@@ -32,7 +35,13 @@ RE_ON_DATE_SMB_WROTE = re.compile(
             # Polish
             'W dniu',
             # Dutch
-            'Op'
+            'Op',
+            # German
+            'Am',
+            # Norwegian
+            u'på',
+            # Swedish, Danish
+            'Den',
         )),
         # Date and sender separator
         u'|'.join((
@@ -50,18 +59,28 @@ RE_ON_DATE_SMB_WROTE = re.compile(
             # Polish
             u'napisał',
             # Dutch
-            'schreef','verzond','geschreven'
+            'schreef','verzond','geschreven',
+            # German
+            'schrieb',
+            # Norwegian, Swedish
+            'skrev',
         ))
     ))
 
 # Special case for languages where text is translated like this: 'on {date} wrote {somebody}:'
 RE_ON_DATE_WROTE_SMB = re.compile(
-    u'(-*[ ]?({0})[ ].*(.*\n){{0,2}}.*({1})[ ].*:)'.format(
+    u'(-*[>]?[ ]?({0})[ ].*(.*\n){{0,2}}.*({1})[ ]*.*:)'.format(
         # Beginning of the line
-        'Op',
+        u'|'.join((
+            'Op',
+            #German
+            'Am'
+        )),
         # Ending of the line
         u'|'.join((
             # Dutch
-            'schreef','verzond','geschreven'
+            'schreef','verzond','geschreven',
+            # German
+            'schrieb'
         ))
     )
 )
@@ -92,7 +111,7 @@ RE_EMPTY_QUOTATION = re.compile(
     (
         # quotation border: splitter line or a number of quotation marker lines
         (?:
-            s
+            (?:se*)+
         |
             (?:me*){2,}
         )
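The change from a bare `s` to `(?:se*)+` lets the empty-quotation matcher span several splitter lines separated by empty lines. A simplified sketch with stdlib `re` over talon's marker strings ('s' splitter, 'e' empty, 'm' '>'-quoted):

```python
import re

# Border alternative from RE_EMPTY_QUOTATION, reduced to the part that
# changed: a run of splitter lines possibly interleaved with empties, or
# two or more quote-marker lines.
new_border = re.compile(r'(?:(?:se*)+|(?:me*){2,})')
old_border = re.compile(r'(?:s|(?:me*){2,})')

# splitter, empty, splitter: matched after the change, not before
multi_line_splitter = bool(new_border.fullmatch('ses'))
old_multi = bool(old_border.fullmatch('ses'))
```

The full pattern also anchors on line starts and trailing markers; this sketch isolates only the border alternative.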
@@ -115,20 +134,27 @@ RE_ORIGINAL_MESSAGE = re.compile(u'[\s]*[-]+[ ]*({})[ ]*[-]+'.format(
 RE_FROM_COLON_OR_DATE_COLON = re.compile(u'(_+\r?\n)?[\s]*(:?[*]?{})[\s]?:[*]? .*'.format(
     u'|'.join((
         # "From" in different languages.
-        'From', 'Van', 'De', 'Von', 'Fra',
+        'From', 'Van', 'De', 'Von', 'Fra', u'Från',
         # "Date" in different languages.
-        'Date', 'Datum', u'Envoyé'
+        'Date', 'Datum', u'Envoyé', 'Skickat', 'Sendt',
     ))), re.I)
 
 SPLITTER_PATTERNS = [
     RE_ORIGINAL_MESSAGE,
-    # <date> <person>
-    re.compile("(\d+/\d+/\d+|\d+\.\d+\.\d+).*@", re.VERBOSE),
     RE_ON_DATE_SMB_WROTE,
     RE_ON_DATE_WROTE_SMB,
     RE_FROM_COLON_OR_DATE_COLON,
+    # 02.04.2012 14:20 пользователь "bob@example.com" <
+    # bob@xxx.mailgun.org> написал:
+    re.compile("(\d+/\d+/\d+|\d+\.\d+\.\d+).*@", re.S),
+    # 2014-10-17 11:28 GMT+03:00 Bob <
+    # bob@example.com>:
+    re.compile("\d{4}-\d{2}-\d{2}\s+\d{2}:\d{2}\s+GMT.*@", re.S),
+    # Thu, 26 Jun 2014 14:00:51 +0400 Bob <bob@example.com>:
     re.compile('\S{3,10}, \d\d? \S{3,10} 20\d\d,? \d\d?:\d\d(:\d\d)?'
-               '( \S+){3,6}@\S+:')
+               '( \S+){3,6}@\S+:'),
+    # Sent from Samsung MobileName <address@example.com> wrote:
+    re.compile('Sent from Samsung .*@.*> wrote')
 ]
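The splitter patterns added above can be exercised directly with stdlib `re` (talon compiles them with the third-party `regex` module; the syntax is the same). Note how `re.S` lets `.` cross the line break inside a wrapped email address:

```python
import re

# The three date-line splitters added to SPLITTER_PATTERNS.
DOTTED_DATE = re.compile(r'(\d+/\d+/\d+|\d+\.\d+\.\d+).*@', re.S)
GMT_DATE = re.compile(r'\d{4}-\d{2}-\d{2}\s+\d{2}:\d{2}\s+GMT.*@', re.S)
SAMSUNG = re.compile(r'Sent from Samsung .*@.*> wrote')

# Address wrapped onto the next line, still matched thanks to re.S.
gmt_hit = bool(GMT_DATE.match('2014-10-17 11:28 GMT+03:00 Bob <\nbob@example.com>:'))
dotted_hit = bool(DOTTED_DATE.match('02.04.2012 14:20 "bob@example.com" wrote:'))
samsung_hit = bool(SAMSUNG.match('Sent from Samsung Mobile. Bob <bob@example.com> wrote:'))
```

Switching the dotted-date pattern from `re.VERBOSE` to `re.S` is what allows it to follow an address across a wrapped line.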
@@ -139,6 +165,9 @@ RE_PARENTHESIS_LINK = re.compile("\(https?://")
 SPLITTER_MAX_LINES = 4
 MAX_LINES_COUNT = 1000
+# an extensive research shows that exceeding this limit
+# leads to excessive processing time
+MAX_HTML_LEN = 2794202
 
 QUOT_PATTERN = re.compile('^>+ ?')
 NO_QUOT_LINE = re.compile('^[^>].*[\S].*')
@@ -169,7 +198,7 @@ def mark_message_lines(lines):
     >>> mark_message_lines(['answer', 'From: foo@bar.com', '', '> question'])
     'tsem'
     """
-    markers = bytearray(len(lines))
+    markers = ['e' for _ in lines]
     i = 0
     while i < len(lines):
         if not lines[i].strip():
@@ -181,10 +210,11 @@ def mark_message_lines(lines):
         else:
             # in case splitter is spread across several lines
             splitter = is_splitter('\n'.join(lines[i:i + SPLITTER_MAX_LINES]))
+
             if splitter:
                 # append as many splitter markers as lines in splitter
                 splitter_lines = splitter.group().splitlines()
-                for j in xrange(len(splitter_lines)):
+                for j in range(len(splitter_lines)):
                     markers[i + j] = 's'
 
                 # skip splitter lines
@@ -194,7 +224,7 @@ def mark_message_lines(lines):
             markers[i] = 't'
         i += 1
 
-    return markers
+    return ''.join(markers)
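The bytearray-to-list switch is the Python 3 fix here: assigning a one-character string into a bytearray raises TypeError on Python 3 (it expects an int), while a list of `'e'` placeholders joined at the end behaves identically on both versions. A sketch reproducing the docstring example:

```python
# Build the marker string the new way for the docstring sample:
# ['answer', 'From: foo@bar.com', '', '> question'] -> 'tsem'
lines = ['answer', 'From: foo@bar.com', '', '> question']
markers = ['e' for _ in lines]   # 'e' (empty line) is the default marker
markers[0] = 't'                 # plain text line
markers[1] = 's'                 # splitter line
markers[3] = 'm'                 # '>'-quoted line
marker_string = ''.join(markers)
```

Returning a joined string also keeps downstream regex checks like `re.search('(me*){3}', markers)` working unchanged.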
 def process_marked_lines(lines, markers, return_flags=[False, -1, -1]):
@@ -208,6 +238,7 @@ def process_marked_lines(lines, markers, return_flags=[False, -1, -1]):
     return_flags = [were_lines_deleted, first_deleted_line,
                     last_deleted_line]
     """
+    markers = ''.join(markers)
     # if there are no splitter there should be no markers
     if 's' not in markers and not re.search('(me*){3}', markers):
         markers = markers.replace('m', 't')
@@ -253,10 +284,15 @@ def preprocess(msg_body, delimiter, content_type='text/plain'):
     Replaces link brackets so that they couldn't be taken for quotation marker.
     Splits line in two if splitter pattern preceded by some text on the same
     line (done only for 'On <date> <person> wrote:' pattern).
+
+    Converts msg_body into a unicode.
     """
     # normalize links i.e. replace '<', '>' wrapping the link with some symbols
     # so that '>' closing the link couldn't be mistakenly taken for quotation
     # marker.
+    if isinstance(msg_body, bytes):
+        msg_body = msg_body.decode('utf8')
+
     def link_wrapper(link):
         newline_index = msg_body[:link.start()].rfind("\n")
         if msg_body[newline_index + 1] == ">":
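The isinstance check added to preprocess is a small normalization shim; extracted on its own it looks like this:

```python
# Normalize the message body to text the way preprocess now does: bytes
# are decoded as UTF-8, text passes through untouched.
def ensure_text(msg_body):
    if isinstance(msg_body, bytes):
        msg_body = msg_body.decode('utf8')
    return msg_body

decoded = ensure_text(b'On Mon Bob wrote:\n> hi')
passthrough = ensure_text(u'already text')
```

With this in place the rest of the function can assume a unicode body regardless of what the caller passed.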
@@ -293,12 +329,8 @@ def extract_from_plain(msg_body):
     delimiter = get_delimiter(msg_body)
     msg_body = preprocess(msg_body, delimiter)
-    lines = msg_body.splitlines()
-
     # don't process too long messages
-    if len(lines) > MAX_LINES_COUNT:
-        return stripped_text
+    lines = msg_body.splitlines()[:MAX_LINES_COUNT]
 
     markers = mark_message_lines(lines)
     lines = process_marked_lines(lines, markers)
@@ -323,44 +355,62 @@ def extract_from_html(msg_body):
     then extracting quotations from text,
     then checking deleted checkpoints,
     then deleting necessary tags.
+
+    Returns a unicode string.
     """
-    if msg_body.strip() == '':
+    if isinstance(msg_body, six.text_type):
+        msg_body = msg_body.encode('utf8')
+    elif not isinstance(msg_body, bytes):
+        msg_body = msg_body.encode('ascii')
+
+    result = _extract_from_html(msg_body)
+    if isinstance(result, bytes):
+        result = result.decode('utf8')
+
+    return result
+
+
+def _extract_from_html(msg_body):
+    """
+    Extract not quoted message from provided html message body
+    using tags and plain text algorithm.
+
+    Cut out the 'blockquote', 'gmail_quote' tags.
+    Cut Microsoft quotations.
+
+    Then use plain text algorithm to cut out splitter or
+    leftover quotation.
+    This works by adding checkpoint text to all html tags,
+    then converting html to text,
+    then extracting quotations from text,
+    then checking deleted checkpoints,
+    then deleting necessary tags.
+    """
+    if len(msg_body) > MAX_HTML_LEN:
         return msg_body
-    html_tree = html.document_fromstring(
-        msg_body,
-        parser=html.HTMLParser(encoding="utf-8")
-    )
+    if msg_body.strip() == b'':
+        return msg_body
+
+    msg_body = msg_body.replace(b'\r\n', b'\n')
+
+    html_tree = html_document_fromstring(msg_body)
+
+    if html_tree is None:
+        return msg_body
 
     cut_quotations = (html_quotations.cut_gmail_quote(html_tree) or
+                      html_quotations.cut_zimbra_quote(html_tree) or
                       html_quotations.cut_blockquote(html_tree) or
                       html_quotations.cut_microsoft_quote(html_tree) or
                       html_quotations.cut_by_id(html_tree) or
                       html_quotations.cut_from_block(html_tree)
                       )
 
     html_tree_copy = deepcopy(html_tree)
 
     number_of_checkpoints = html_quotations.add_checkpoint(html_tree, 0)
     quotation_checkpoints = [False] * number_of_checkpoints
-    msg_with_checkpoints = html.tostring(html_tree)
-
-    h = html2text.HTML2Text()
-    h.body_width = 0  # generate plain text without wrap
-
-    # html2text adds unnecessary star symbols. Remove them.
-    # Mask star symbols
-    msg_with_checkpoints = msg_with_checkpoints.replace('*', '3423oorkg432')
-    plain_text = h.handle(msg_with_checkpoints)
-    # Remove created star symbols
-    plain_text = plain_text.replace('*', '')
-    # Unmask saved star symbols
-    plain_text = plain_text.replace('3423oorkg432', '*')
-
-    delimiter = get_delimiter(plain_text)
-    plain_text = preprocess(plain_text, delimiter, content_type='text/html')
+    plain_text = html_tree_to_text(html_tree)
+    plain_text = preprocess(plain_text, '\n', content_type='text/html')
     lines = plain_text.splitlines()
 
     # Don't process too long messages
# Don't process too long messages # Don't process too long messages
@@ -383,25 +433,30 @@ def extract_from_html(msg_body):
    process_marked_lines(lines, markers, return_flags)
    lines_were_deleted, first_deleted, last_deleted = return_flags
-   if not lines_were_deleted and not cut_quotations:
-       return msg_body
    if lines_were_deleted:
        #collect checkpoints from deleted lines
-       for i in xrange(first_deleted, last_deleted):
+       for i in range(first_deleted, last_deleted):
            for checkpoint in line_checkpoints[i]:
                quotation_checkpoints[checkpoint] = True
+   else:
+       if cut_quotations:
+           return html.tostring(html_tree_copy)
+       else:
+           return msg_body
    # Remove tags with quotation checkpoints
    html_quotations.delete_quotation_tags(
        html_tree_copy, 0, quotation_checkpoints
    )
+   if _readable_text_empty(html_tree_copy):
+       return msg_body
    return html.tostring(html_tree_copy)

+def _readable_text_empty(html_tree):
+    return not bool(html_tree_to_text(html_tree).strip())
def is_splitter(line):
    '''
    Returns Matcher object if provided string is a splitter and
@@ -415,7 +470,7 @@ def is_splitter(line):
def text_content(context):
    '''XPath Extension function to return a node text content.'''
-   return context.context_node.text_content().strip()
+   return context.context_node.xpath("string()").strip()

def tail(context):
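The replacement `xpath("string()")` above concatenates all descendant text of the context node, just as `text_content()` did. A rough stdlib equivalent of that behavior, assuming `xml.etree` in place of lxml:

```python
import xml.etree.ElementTree as ET

def text_content(node):
    # itertext() walks every descendant text node in document order,
    # which is what XPath's string() returns for an element.
    return ''.join(node.itertext()).strip()

node = ET.fromstring('<p> Hello <b>world</b>! </p>')
assert text_content(node) == 'Hello world!'
```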


@@ -20,6 +20,7 @@ trained against, don't forget to regenerate:
* signature/data/classifier
"""
+from __future__ import absolute_import
import os
from . import extraction


@@ -1,3 +1,4 @@
+from __future__ import absolute_import
import logging
import regex as re

@@ -111,7 +112,7 @@ def extract_signature(msg_body):
        return (stripped_body.strip(),
                signature.strip())
-   except Exception, e:
+   except Exception as e:
        log.exception('ERROR extracting signature')
    return (msg_body, None)
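The `except Exception, e:` form above is Python 2-only syntax; `except Exception as e:` works on both 2 and 3, which is the point of this change. The surrounding defensive pattern in miniature (the failing body is a stand-in, not the real extraction logic):

```python
import logging

log = logging.getLogger(__name__)

def extract_signature_safely(msg_body):
    # On any extraction error, log it and return the input unmodified,
    # mirroring talon's (msg_body, None) fallback.
    try:
        raise ValueError("simulated extraction failure")  # stand-in
    except Exception:
        log.exception('ERROR extracting signature')
    return (msg_body, None)

assert extract_signature_safely('hello') == ('hello', None)
```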

Binary file not shown.

File diff suppressed because it is too large.


@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
+from __future__ import absolute_import
import logging
import regex as re


@@ -5,6 +5,7 @@ The classifier could be used to detect if a certain line of the message
body belongs to the signature.
"""
+from __future__ import absolute_import
from numpy import genfromtxt
from sklearn.svm import LinearSVC
from sklearn.externals import joblib


@@ -16,11 +16,13 @@ suffix and the corresponding sender file has the same name except for the
suffix which should be `_sender`.
"""
+from __future__ import absolute_import
import os
import regex as re
from talon.signature.constants import SIGNATURE_MAX_LINES
from talon.signature.learning.featurespace import build_pattern, features
+from six.moves import range

SENDER_SUFFIX = '_sender'

@@ -144,7 +146,7 @@ def build_extraction_dataset(folder, dataset_filename,
        if not sender or not msg:
            continue
        lines = msg.splitlines()
-       for i in xrange(1, min(SIGNATURE_MAX_LINES,
+       for i in range(1, min(SIGNATURE_MAX_LINES,
                               len(lines)) + 1):
            line = lines[-i]
            label = -1


@@ -7,9 +7,12 @@ The body and the message sender string are converted into unicode before
applying features to them.
"""
+from __future__ import absolute_import
from talon.signature.constants import (SIGNATURE_MAX_LINES,
                                       TOO_LONG_SIGNATURE_LINE)
from talon.signature.learning.helpers import *
+from six.moves import zip
+from functools import reduce

def features(sender=''):


@@ -6,6 +6,7 @@
"""
+from __future__ import absolute_import
import unicodedata
import regex as re

@@ -16,7 +17,7 @@ from talon.signature.constants import SIGNATURE_MAX_LINES
rc = re.compile

-RE_EMAIL = rc('@')
+RE_EMAIL = rc('\S@\S')
RE_RELAX_PHONE = rc('(\(? ?[\d]{2,3} ?\)?.{,3}?){2,}')
RE_URL = rc(r'''https?://|www\.[\S]+\.[\S]''')

@@ -120,7 +121,7 @@ def contains_sender_names(sender):
    names = names or sender
    if names != '':
        return binary_regex_search(re.compile(names))
-   return lambda s: False
+   return lambda s: 0

def extract_names(sender):

@@ -134,7 +135,7 @@ def extract_names(sender):
    >>> extract_names('')
    []
    """
-   sender = to_unicode(sender)
+   sender = to_unicode(sender, precise=True)
    # Remove non-alphabetical characters
    sender = "".join([char if char.isalpha() else ' ' for char in sender])
    # Remove too short words and words from "black" list i.e.

@@ -161,7 +162,7 @@ def categories_percent(s, categories):
    50.0
    '''
    count = 0
-   s = to_unicode(s)
+   s = to_unicode(s, precise=True)
    for c in s:
        if unicodedata.category(c) in categories:
            count += 1

@@ -181,15 +182,16 @@ def punctuation_percent(s):
def capitalized_words_percent(s):
    '''Returns capitalized words percent.'''
-   s = to_unicode(s)
+   s = to_unicode(s, precise=True)
    words = re.split('\s', s)
    words = [w for w in words if w.strip()]
+   words = [w for w in words if len(w) > 2]
    capitalized_words_counter = 0
    valid_words_counter = 0
    for word in words:
        if not INVALID_WORD_START.match(word):
            valid_words_counter += 1
-           if word[0].isupper():
+           if word[0].isupper() and not word[1].isupper():
                capitalized_words_counter += 1
    if valid_words_counter > 0 and len(words) > 1:
        return 100 * float(capitalized_words_counter) / valid_words_counter
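The two behavioral changes above (dropping tokens of length ≤ 2, and no longer counting ALL-CAPS words such as "REMARKABLE" as capitalized) can be sketched in isolation. `INVALID_WORD_START` here is an assumed stand-in for talon's actual pattern, treating words that open with `(`, `+`, or a digit as phone-number fragments:

```python
import re

# Assumed stand-in for talon's INVALID_WORD_START pattern.
INVALID_WORD_START = re.compile(r'\(|\+|\d')

def capitalized_words_percent(s):
    words = [w for w in re.split(r'\s', s) if w.strip()]
    words = [w for w in words if len(w) > 2]  # new: drop short tokens
    capitalized = valid = 0
    for word in words:
        if not INVALID_WORD_START.match(word):
            valid += 1
            # new: ALL-CAPS words no longer count as capitalized
            if word[0].isupper() and not word[1].isupper():
                capitalized += 1
    if valid > 0 and len(words) > 1:
        return 100 * float(capitalized) / valid
    return 0.0
```

With these rules, `'Password: REMARKABLE'` scores 50.0 (only `Password:` counts), matching the new test expectation further down.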


@@ -1,12 +1,19 @@
# coding:utf-8
+from __future__ import absolute_import
import logging
from random import shuffle

+import chardet
+import cchardet
+import regex as re
+from lxml.html import html5parser
+from lxml.cssselect import CSSSelector
+import html5lib

from talon.constants import RE_DELIMITER
+import six

+log = logging.getLogger(__name__)

def safe_format(format_string, *args, **kwargs):

@@ -25,7 +32,7 @@ def safe_format(format_string, *args, **kwargs):
    except (UnicodeEncodeError, UnicodeDecodeError):
        format_string = to_utf8(format_string)
        args = [to_utf8(p) for p in args]
-       kwargs = {k: to_utf8(v) for k, v in kwargs.iteritems()}
+       kwargs = {k: to_utf8(v) for k, v in six.iteritems(kwargs)}
        return format_string.format(*args, **kwargs)
    # ignore other errors
@@ -42,19 +49,51 @@ def to_unicode(str_or_unicode, precise=False):
    u'привет'

    If `precise` flag is True, tries to guess the correct encoding first.
    """
-   encoding = detect_encoding(str_or_unicode) if precise else 'utf-8'
-   if isinstance(str_or_unicode, str):
-       return unicode(str_or_unicode, encoding, 'replace')
+   if not isinstance(str_or_unicode, six.text_type):
+       encoding = quick_detect_encoding(str_or_unicode) if precise else 'utf-8'
+       return six.text_type(str_or_unicode, encoding, 'replace')
    return str_or_unicode
def detect_encoding(string):
    """
    Tries to detect the encoding of the passed string.

    Defaults to UTF-8.
    """
    assert isinstance(string, bytes)
    try:
        detected = chardet.detect(string)
        if detected:
            return detected.get('encoding') or 'utf-8'
    except Exception:
        pass
    return 'utf-8'


def quick_detect_encoding(string):
    """
    Tries to detect the encoding of the passed string.

    Uses cchardet. Falls back to detect_encoding.
    """
    assert isinstance(string, bytes)
    try:
        detected = cchardet.detect(string)
        if detected:
            return detected.get('encoding') or detect_encoding(string)
    except Exception:
        pass
    return detect_encoding(string)
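`quick_detect_encoding` tries the fast cchardet pass first and falls back to the slower but more widely tested chardet; both are third-party packages. The same try-cheap-then-fall-back shape can be illustrated with stdlib codecs only (a simplified analog, not talon's implementation):

```python
def decode_best_effort(data: bytes) -> str:
    # Try the cheap guess first, then a fallback, then never fail --
    # the same shape as quick_detect_encoding -> detect_encoding -> 'utf-8'.
    for encoding in ('utf-8', 'cp1252'):
        try:
            return data.decode(encoding)
        except UnicodeDecodeError:
            continue
    return data.decode('utf-8', 'replace')

assert decode_best_effort('привет'.encode('utf-8')) == 'привет'
```

The final `'replace'` step guarantees a string always comes back, matching the library's "never crash on bad input" posture.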
def to_utf8(str_or_unicode):
    """
    Safely returns a UTF-8 version of a given string

    >>> utils.to_utf8(u'hi')
    'hi'
    """
-   if isinstance(str_or_unicode, unicode):
+   if not isinstance(str_or_unicode, six.text_type):
        return str_or_unicode.encode("utf-8", "ignore")
    return str(str_or_unicode)
@@ -74,3 +113,129 @@ def get_delimiter(msg_body):
        delimiter = '\n'
    return delimiter
def html_tree_to_text(tree):
    for style in CSSSelector('style')(tree):
        style.getparent().remove(style)

    for c in tree.xpath('//comment()'):
        parent = c.getparent()
        # a comment with no parent does not impact the produced text
        if parent is None:
            continue
        parent.remove(c)

    text = ""
    for el in tree.iter():
        el_text = (el.text or '') + (el.tail or '')
        if len(el_text) > 1:
            if el.tag in _BLOCKTAGS:
                text += "\n"
            if el.tag == 'li':
                text += " * "
            text += el_text.strip() + " "

            # add the href to the output
            href = el.attrib.get('href')
            if href:
                text += "(%s) " % href

        if el.tag in _HARDBREAKS and text and not text.endswith("\n"):
            text += "\n"

    retval = _rm_excessive_newlines(text)
    return _encode_utf8(retval)


def html_to_text(string):
    """
    Dead-simple HTML-to-text converter:
    >>> html_to_text("one<br>two<br>three")
    "one\ntwo\nthree"

    NOTES:
    1. the string is expected to contain UTF-8 encoded HTML!
    2. returns a UTF-8 encoded str (not unicode)
    3. if the html can't be parsed, returns None
    """
    if isinstance(string, six.text_type):
        string = string.encode('utf8')

    s = _prepend_utf8_declaration(string)
    s = s.replace(b"\n", b"")

    tree = html_fromstring(s)
    if tree is None:
        return None

    return html_tree_to_text(tree)


def html_fromstring(s):
    """Parse an html tree from a string. Return None if the string can't be parsed.
    """
    try:
        return html5parser.fromstring(s, parser=_HTML5LIB_PARSER)
    except Exception:
        pass


def html_document_fromstring(s):
    """Parse an html tree from a string. Return None if the string can't be parsed.
    """
    try:
        return html5parser.document_fromstring(s, parser=_HTML5LIB_PARSER)
    except Exception:
        pass


def cssselect(expr, tree):
    return CSSSelector(expr)(tree)


def _contains_charset_spec(s):
    """Return True if the first 4KB contain a charset spec
    """
    return s.lower().find(b'html; charset=', 0, 4096) != -1


def _prepend_utf8_declaration(s):
    """Prepend a 'utf-8' encoding declaration if the first 4KB don't have any
    """
    return s if _contains_charset_spec(s) else _UTF8_DECLARATION + s


def _rm_excessive_newlines(s):
    """Remove the excessive newlines that often result from tons of divs
    """
    return _RE_EXCESSIVE_NEWLINES.sub("\n\n", s).strip()


def _encode_utf8(s):
    """Encode in 'utf-8' if unicode
    """
    return s.encode('utf-8') if isinstance(s, six.text_type) else s


_UTF8_DECLARATION = (b'<meta http-equiv="Content-Type" content="text/html;'
                     b'charset=utf-8">')

_BLOCKTAGS = ['div', 'p', 'ul', 'li', 'h1', 'h2', 'h3']
_HARDBREAKS = ['br', 'hr', 'tr']

_RE_EXCESSIVE_NEWLINES = re.compile("\n{2,10}")

# html5lib is a pure-python library that conforms to the WHATWG HTML spec
# and is not vulnerable to certain attacks common for XML libraries
_HTML5LIB_PARSER = html5lib.HTMLParser(
    # build an lxml tree
    html5lib.treebuilders.getTreeBuilder("lxml"),
    # remove the namespace value from inside the lxml.html.html5parser
    # element tag, otherwise it yields something like
    # "{http://www.w3.org/1999/xhtml}div" instead of "div",
    # throwing the algorithm off
    namespaceHTMLElements=False
)
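`html_tree_to_text` above walks an lxml tree, inserting a newline before block tags, a `* ` bullet for list items, and a newline after hard breaks. The same rules can be sketched with only the stdlib `html.parser` (a simplified analog under those assumptions, not the real implementation):

```python
from html.parser import HTMLParser

_BLOCKTAGS = {'div', 'p', 'ul', 'li', 'h1', 'h2', 'h3'}
_HARDBREAKS = {'br', 'hr', 'tr'}

class _TextExtractor(HTMLParser):
    """Accumulates text, breaking lines at block tags and hard breaks."""
    def __init__(self):
        super().__init__()
        self.text = ''

    def handle_starttag(self, tag, attrs):
        if tag in _BLOCKTAGS and self.text and not self.text.endswith('\n'):
            self.text += '\n'
        if tag == 'li':
            self.text += ' * '
        if tag in _HARDBREAKS and self.text and not self.text.endswith('\n'):
            self.text += '\n'

    def handle_data(self, data):
        if data.strip():
            self.text += data.strip() + ' '

def html_to_text(html):
    parser = _TextExtractor()
    parser.feed(html)
    return parser.text.strip()
```

For example, `html_to_text('one<br>two<br>three')` yields three lines, mirroring the docstring of the real `html_to_text` above.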


@@ -1,3 +1,4 @@
+from __future__ import absolute_import
from nose.tools import *
from mock import *


@@ -1,3 +1,4 @@
+<?xml version="1.0" encoding="UTF-8"?>
<html>
<head>
<style><!--


@@ -0,0 +1,87 @@
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=iso-2022-jp">
<meta name="Generator" content="Microsoft Word 14 (filtered medium)">
<style><!--
/* Font Definitions */
@font-face
{font-family:Calibri;
panose-1:2 15 5 2 2 2 4 3 2 4;}
@font-face
{font-family:Tahoma;
panose-1:2 11 6 4 3 5 4 4 2 4;}
/* Style Definitions */
p.MsoNormal, li.MsoNormal, div.MsoNormal
{margin:0in;
margin-bottom:.0001pt;
font-size:12.0pt;
font-family:"Times New Roman","serif";}
h3
{mso-style-priority:9;
mso-style-link:"Heading 3 Char";
mso-margin-top-alt:auto;
margin-right:0in;
mso-margin-bottom-alt:auto;
margin-left:0in;
font-size:13.5pt;
font-family:"Times New Roman","serif";
font-weight:bold;}
a:link, span.MsoHyperlink
{mso-style-priority:99;
color:blue;
text-decoration:underline;}
a:visited, span.MsoHyperlinkFollowed
{mso-style-priority:99;
color:purple;
text-decoration:underline;}
p
{mso-style-priority:99;
mso-margin-top-alt:auto;
margin-right:0in;
mso-margin-bottom-alt:auto;
margin-left:0in;
font-size:12.0pt;
font-family:"Times New Roman","serif";}
span.Heading3Char
{mso-style-name:"Heading 3 Char";
mso-style-priority:9;
mso-style-link:"Heading 3";
font-family:"Cambria","serif";
color:#4F81BD;
font-weight:bold;}
span.EmailStyle19
{mso-style-type:personal-reply;
font-family:"Calibri","sans-serif";
color:#1F497D;}
.MsoChpDefault
{mso-style-type:export-only;
font-family:"Calibri","sans-serif";}
@page WordSection1
{size:8.5in 11.0in;
margin:1.0in 1.0in 1.0in 1.0in;}
div.WordSection1
{page:WordSection1;}
--></style><!--[if gte mso 9]><xml>
<o:shapedefaults v:ext="edit" spidmax="1026" />
</xml><![endif]--><!--[if gte mso 9]><xml>
<o:shapelayout v:ext="edit">
<o:idmap v:ext="edit" data="1" />
</o:shapelayout></xml><![endif]-->
</head>
<body lang="EN-US" link="blue" vlink="purple">
<div class="WordSection1">
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:&quot;Calibri&quot;,&quot;sans-serif&quot;;color:#1F497D">Hi. I am fine.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:&quot;Calibri&quot;,&quot;sans-serif&quot;;color:#1F497D">Thanks,<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:&quot;Calibri&quot;,&quot;sans-serif&quot;;color:#1F497D">Alex<o:p></o:p></span></p>
<p class="MsoNormal"><b><span style="font-size:10.0pt;font-family:&quot;Tahoma&quot;,&quot;sans-serif&quot;">From:</span></b><span style="font-size:10.0pt;font-family:&quot;Tahoma&quot;,&quot;sans-serif&quot;"> Foo [mailto:foo@bar.com]
<b>On Behalf Of </b>baz@bar.com<br>
<b>Sent:</b> Monday, January 01, 2000 12:00 AM<br>
<b>To:</b> john@bar.com<br>
<b>Cc:</b> jane@bar.io<br>
<b>Subject:</b> Conversation<o:p></o:p></span></p>
<p class="MsoNormal"><o:p>&nbsp;</o:p></p>
<p>Hello! How are you?<o:p></o:p></p>
<p class="MsoNormal"><o:p>&nbsp;</o:p></p>
</div>
</body>
</html>


@@ -0,0 +1,19 @@
Content-Type: text/plain;
charset=us-ascii
Mime-Version: 1.0 (Mac OS X Mail 8.2 \(2104\))
Subject: Re: Hello there
X-Universally-Unique-Identifier: 85B1075D-5841-46A9-8565-FCB287A93AC4
From: Adam Renberg <adam@tictail.com>
In-Reply-To: <CABzQGhkMXDxUt_tSVQcg=43aniUhtsVfCZVzu-PG0kwS_uzqMw@mail.gmail.com>
Date: Sat, 22 Aug 2015 19:22:20 +0200
Content-Transfer-Encoding: 7bit
X-Smtp-Server: smtp.gmail.com:adam@tictail.com
Message-Id: <68001B29-8EA4-444C-A894-0537D2CA5208@tictail.com>
References: <CABzQGhkMXDxUt_tSVQcg=43aniUhtsVfCZVzu-PG0kwS_uzqMw@mail.gmail.com>
To: Adam Renberg <tgwizard@gmail.com>
Hello
> On 22 Aug 2015, at 19:21, Adam Renberg <tgwizard@gmail.com> wrote:
>
> Hi there!


@@ -1,13 +1,12 @@
# -*- coding: utf-8 -*-
+from __future__ import absolute_import
from . import *
from . fixtures import *

import regex as re

-from talon import quotations
-import html2text
+from talon import quotations, utils as u

RE_WHITESPACE = re.compile("\s")

@@ -28,7 +27,7 @@ def test_quotation_splitter_inside_blockquote():
    </blockquote>"""

-   eq_("<html><body><p>Reply</p></body></html>",
+   eq_("<html><head></head><body>Reply</body></html>",
        RE_WHITESPACE.sub('', quotations.extract_from_html(msg_body)))
@@ -45,7 +44,25 @@ def test_quotation_splitter_outside_blockquote():
    </div>
    </blockquote>
    """
-   eq_("<html><body><p>Reply</p><div></div></body></html>",
+   eq_("<html><head></head><body>Reply</body></html>",
        RE_WHITESPACE.sub('', quotations.extract_from_html(msg_body)))

+def test_regular_blockquote():
+    msg_body = """Reply
+<blockquote>Regular</blockquote>
+<div>
+    On 11-Apr-2011, at 6:54 PM, Bob &lt;bob@example.com&gt; wrote:
+</div>
+<blockquote>
+    <div>
+        <blockquote>Nested</blockquote>
+    </div>
+</blockquote>
+"""
+    eq_("<html><head></head><body>Reply<blockquote>Regular</blockquote></body></html>",
+        RE_WHITESPACE.sub('', quotations.extract_from_html(msg_body)))
@@ -68,6 +85,7 @@ Reply
    reply = """
    <html>
+   <head></head>
    <body>
    Reply

@@ -111,7 +129,30 @@ def test_gmail_quote():
    </div>
    </div>
    </div>"""
-   eq_("<html><body><p>Reply</p></body></html>",
+   eq_("<html><head></head><body>Reply</body></html>",
        RE_WHITESPACE.sub('', quotations.extract_from_html(msg_body)))
+def test_gmail_quote_compact():
+    msg_body = 'Reply' \
+               '<div class="gmail_quote">' \
+               '<div class="gmail_quote">On 11-Apr-2011, at 6:54 PM, Bob &lt;bob@example.com&gt; wrote:' \
+               '<div>Test</div>' \
+               '</div>' \
+               '</div>'
+    eq_("<html><head></head><body>Reply</body></html>",
+        RE_WHITESPACE.sub('', quotations.extract_from_html(msg_body)))

+def test_gmail_quote_blockquote():
+    msg_body = """Message
+<blockquote class="gmail_quote">
+    <div class="gmail_default">
+        My name is William Shakespeare.
+        <br/>
+    </div>
+</blockquote>"""
+    eq_(RE_WHITESPACE.sub('', msg_body),
+        RE_WHITESPACE.sub('', quotations.extract_from_html(msg_body)))
@@ -122,11 +163,11 @@ def test_unicode_in_reply():
    <br>
    </div>

-   <blockquote class="gmail_quote">
+   <blockquote>
    Quote
    </blockquote>""".encode("utf-8")

-   eq_("<html><body><p>Reply&#160;&#160;Text<br></p><div><br></div>"
+   eq_("<html><head></head><body>Reply&#160;&#160;Text<br><div><br></div>"
        "</body></html>",
        RE_WHITESPACE.sub('', quotations.extract_from_html(msg_body)))
@@ -152,6 +193,7 @@ def test_blockquote_disclaimer():
    stripped_html = """
    <html>
+   <head></head>
    <body>
    <div>
    <div>
@@ -183,7 +225,7 @@ def test_date_block():
    </div>
    </div>
    """
-   eq_('<html><body><div>message<br></div></body></html>',
+   eq_('<html><head></head><body><div>message<br></div></body></html>',
        RE_WHITESPACE.sub('', quotations.extract_from_html(msg_body)))

@@ -200,7 +242,7 @@ Subject: You Have New Mail From Mary!<br><br>
    text
    </div></div>
    """
-   eq_('<html><body><div>message<br></div></body></html>',
+   eq_('<html><head></head><body><div>message<br></div></body></html>',
        RE_WHITESPACE.sub('', quotations.extract_from_html(msg_body)))

@@ -218,7 +260,7 @@ def test_reply_shares_div_with_from_block():
    </div>
    </body>'''
-   eq_('<html><body><div>Blah<br><br></div></body></html>',
+   eq_('<html><head></head><body><div>Blah<br><br></div></body></html>',
        RE_WHITESPACE.sub('', quotations.extract_from_html(msg_body)))

@@ -229,37 +271,47 @@ def test_reply_quotations_share_block():
def test_OLK_SRC_BODY_SECTION_stripped():
-   eq_('<html><body><div>Reply</div></body></html>',
+   eq_('<html><head></head><body><div>Reply</div></body></html>',
        RE_WHITESPACE.sub(
            '', quotations.extract_from_html(OLK_SRC_BODY_SECTION)))

def test_reply_separated_by_hr():
-   eq_('<html><body><div>Hi<div>there</div></div></body></html>',
+   eq_('<html><head></head><body><div>Hi<div>there</div></div></body></html>',
        RE_WHITESPACE.sub(
            '', quotations.extract_from_html(REPLY_SEPARATED_BY_HR)))
-RE_REPLY = re.compile(r"^Hi\. I am fine\.\s*\n\s*Thanks,\s*\n\s*Alex\s*$")
+def test_from_block_and_quotations_in_separate_divs():
+    msg_body = '''
+Reply
+<div>
+  <hr/>
+  <div>
+    <font>
+      <b>From: bob@example.com</b>
+      <b>Date: Thu, 24 Mar 2016 08:07:12 -0700</b>
+    </font>
+  </div>
+  <div>
+    Quoted message
+  </div>
+</div>
+'''
+    eq_('<html><head></head><body>Reply<div><hr></div></body></html>',
+        RE_WHITESPACE.sub('', quotations.extract_from_html(msg_body)))

def extract_reply_and_check(filename):
    f = open(filename)
-   msg_body = f.read().decode("utf-8")
+   msg_body = f.read()
    reply = quotations.extract_from_html(msg_body)
+   plain_reply = u.html_to_text(reply)
+   plain_reply = plain_reply.decode('utf8')
-   h = html2text.HTML2Text()
-   h.body_width = 0
-   plain_reply = h.handle(reply)
-   #remove &nbsp; spaces
-   plain_reply = plain_reply.replace(u'\xa0', u' ')
-   if RE_REPLY.match(plain_reply):
-       eq_(1, 1)
-   else:
-       eq_("Hi. I am fine.\n\nThanks,\nAlex", plain_reply)
+   eq_(RE_WHITESPACE.sub('', "Hi. I am fine.\n\nThanks,\nAlex"),
+       RE_WHITESPACE.sub('', plain_reply))
def test_gmail_reply():

@@ -282,6 +334,10 @@ def test_ms_outlook_2007_reply():
    extract_reply_and_check("tests/fixtures/html_replies/ms_outlook_2007.html")

+def test_ms_outlook_2010_reply():
+    extract_reply_and_check("tests/fixtures/html_replies/ms_outlook_2010.html")

def test_thunderbird_reply():
    extract_reply_and_check("tests/fixtures/html_replies/thunderbird.html")

@@ -292,3 +348,74 @@ def test_windows_mail_reply():

def test_yandex_ru_reply():
    extract_reply_and_check("tests/fixtures/html_replies/yandex_ru.html")
def test_CRLF():
    """CR is not converted to '&#13;'
    """
    symbol = '&#13;'
    extracted = quotations.extract_from_html('<html>\r\n</html>')
    assert_false(symbol in extracted)
    eq_('<html></html>', RE_WHITESPACE.sub('', extracted))

    msg_body = """My
reply
<blockquote>
<div>
    On 11-Apr-2011, at 6:54 PM, Bob &lt;bob@example.com&gt; wrote:
</div>
<div>
    Test
</div>
</blockquote>"""
    msg_body = msg_body.replace('\n', '\r\n')
    extracted = quotations.extract_from_html(msg_body)
    assert_false(symbol in extracted)
    # Keep new lines otherwise "My reply" becomes one word - "Myreply"
    eq_("<html><head></head><body>My\nreply\n</body></html>", extracted)


def test_gmail_forwarded_msg():
    msg_body = """<div dir="ltr"><br><div class="gmail_quote">---------- Forwarded message ----------<br>From: <b class="gmail_sendername">Bob</b> <span dir="ltr">&lt;<a href="mailto:bob@example.com">bob@example.com</a>&gt;</span><br>Date: Fri, Feb 11, 2010 at 5:59 PM<br>Subject: Bob WFH today<br>To: Mary &lt;<a href="mailto:mary@example.com">mary@example.com</a>&gt;<br><br><br><div dir="ltr">eom</div>
</div><br></div>"""
    extracted = quotations.extract_from_html(msg_body)
    eq_(RE_WHITESPACE.sub('', msg_body), RE_WHITESPACE.sub('', extracted))


@patch.object(quotations, 'MAX_HTML_LEN', 1)
def test_too_large_html():
    msg_body = 'Reply' \
               '<div class="gmail_quote">' \
               '<div class="gmail_quote">On 11-Apr-2011, at 6:54 PM, Bob &lt;bob@example.com&gt; wrote:' \
               '<div>Test</div>' \
               '</div>' \
               '</div>'
    eq_(RE_WHITESPACE.sub('', msg_body),
        RE_WHITESPACE.sub('', quotations.extract_from_html(msg_body)))


def test_readable_html_empty():
    msg_body = """
<blockquote>
  Reply
  <div>
    On 11-Apr-2011, at 6:54 PM, Bob &lt;bob@example.com&gt; wrote:
  </div>
  <div>
    Test
  </div>
</blockquote>"""
    eq_(RE_WHITESPACE.sub('', msg_body),
        RE_WHITESPACE.sub('', quotations.extract_from_html(msg_body)))


@patch.object(quotations, 'html_document_fromstring', Mock(return_value=None))
def test_bad_html():
    bad_html = "<html></html>"
    eq_(bad_html, quotations.extract_from_html(bad_html))
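`test_too_large_html` relies on `patch.object` to shrink `MAX_HTML_LEN` for the duration of one test so the oversized-input guard can be exercised cheaply. The pattern in miniature with stdlib `unittest.mock`; the module namespace and extractor are toy stand-ins, and the threshold value is arbitrary:

```python
import types
from unittest.mock import patch

# Toy stand-ins: a module-like namespace and a size-guarded extractor.
quotations = types.SimpleNamespace(MAX_HTML_LEN=100000)  # arbitrary threshold

def extract_from_html(msg_body):
    # Oversized input is returned untouched instead of being parsed,
    # the same guard the real extract_from_html starts with.
    if len(msg_body) > quotations.MAX_HTML_LEN:
        return msg_body
    return msg_body.upper()  # stand-in for the real parsing pipeline

with patch.object(quotations, 'MAX_HTML_LEN', 1):
    assert extract_from_html('<html></html>') == '<html></html>'

# Once the patch exits, the attribute is restored and the normal path runs.
assert extract_from_html('ok') == 'OK'
```

Because `patch.object` restores the attribute on exit, no test leaks its shrunken threshold into the rest of the suite.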


@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
+from __future__ import absolute_import
from . import *
from . fixtures import *


@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
+from __future__ import absolute_import
from .. import *
from talon.signature import bruteforce


@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
+from __future__ import absolute_import
from .. import *

import os

@@ -8,6 +9,7 @@ from talon.signature.learning import dataset
from talon import signature
from talon.signature import extraction as e
from talon.signature import bruteforce
+from six.moves import range

def test_message_shorter_SIGNATURE_MAX_LINES():

@@ -75,6 +77,31 @@ def test_basic():
        signature.extract(msg_body, 'Sergey'))
+def test_capitalized():
+    msg_body = """Hi Mary,
+Do you still need a DJ for your wedding? I've included a video demo of one of our DJs available for your wedding date.
+DJ Doe
+http://example.com
+Password: SUPERPASSWORD
+Would you like to check out more?
+At your service,
+John Smith
+Doe Inc
+555-531-7967"""
+    sig = """John Smith
+Doe Inc
+555-531-7967"""
+    eq_(sig, signature.extract(msg_body, 'Doe')[1])
def test_over_2_text_lines_after_signature():
    body = """Blah

@@ -127,20 +154,20 @@ def test_mark_lines():
def test_process_marked_lines():
    # no signature found
-   eq_((range(5), None), e._process_marked_lines(range(5), 'telt'))
+   eq_((list(range(5)), None), e._process_marked_lines(list(range(5)), 'telt'))

    # signature in the middle of the text
-   eq_((range(9), None), e._process_marked_lines(range(9), 'tesestelt'))
+   eq_((list(range(9)), None), e._process_marked_lines(list(range(9)), 'tesestelt'))

    # long line splits signature
-   eq_((range(7), [7, 8]),
-       e._process_marked_lines(range(9), 'tsslsless'))
+   eq_((list(range(7)), [7, 8]),
+       e._process_marked_lines(list(range(9)), 'tsslsless'))

-   eq_((range(20), [20]),
-       e._process_marked_lines(range(21), 'ttttttstttesllelelets'))
+   eq_((list(range(20)), [20]),
+       e._process_marked_lines(list(range(21)), 'ttttttstttesllelelets'))

    # some signature lines could be identified as text
-   eq_(([0], range(1, 9)), e._process_marked_lines(range(9), 'tsetetest'))
+   eq_(([0], list(range(1, 9))), e._process_marked_lines(list(range(9)), 'tsetetest'))

-   eq_(([], range(5)),
-       e._process_marked_lines(range(5), "ststt"))
+   eq_(([], list(range(5)),)
+       e._process_marked_lines(list(range(5)), "ststt"))
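The `range(...)` → `list(range(...))` churn above exists because Python 3's `range` is a lazy sequence object, not a list, so equality comparisons against lists (which these assertions rely on) fail until the range is materialized:

```python
r = range(5)

# In Python 2, range(5) built a real list; in Python 3 it is a lazy
# object, so equality against a list fails until it is materialized.
assert r != [0, 1, 2, 3, 4]
assert list(r) == [0, 1, 2, 3, 4]

# Iteration and membership still behave the same either way.
assert 3 in r and sum(r) == 10
```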


@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
+from __future__ import absolute_import
from ... import *

import os


@@ -1,12 +1,15 @@
# -*- coding: utf-8 -*-
+from __future__ import absolute_import
from ... import *

from talon.signature.learning import featurespace as fs

def test_apply_features():
-   s = '''John Doe
+   s = '''This is John Doe
+Tuesday @3pm suits. I'll chat to you then.
VP Research and Development, Xxxx Xxxx Xxxxx

@@ -19,11 +22,12 @@ john@example.com'''
    # note that we don't consider the first line because signatures don't
    # usually take all the text, empty lines are not considered
    eq_(result, [[1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1],
+                [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
                 [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
                 [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0],
                 [0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]])
-   with patch.object(fs, 'SIGNATURE_MAX_LINES', 4):
+   with patch.object(fs, 'SIGNATURE_MAX_LINES', 5):
        features = fs.features(sender)
        new_result = fs.apply_features(s, features)
    # result remains the same because we don't consider empty lines


@@ -1,11 +1,13 @@
# -*- coding: utf-8 -*- # -*- coding: utf-8 -*-
from __future__ import absolute_import
from ... import * from ... import *
import regex as re import regex as re
from talon.signature.learning import helpers as h from talon.signature.learning import helpers as h
from talon.signature.learning.helpers import * from talon.signature.learning.helpers import *
from six.moves import range
# First testing regex constants. # First testing regex constants.
VALID = ''' VALID = '''
@@ -154,7 +156,7 @@ def test_extract_names():
# check that extracted names could be compiled # check that extracted names could be compiled
try: try:
re.compile("|".join(extracted_names)) re.compile("|".join(extracted_names))
except Exception, e: except Exception as e:
ok_(False, ("Failed to compile extracted names {}" ok_(False, ("Failed to compile extracted names {}"
"\n\nReason: {}").format(extracted_names, e)) "\n\nReason: {}").format(extracted_names, e))
if expected_names: if expected_names:
@@ -190,10 +192,11 @@ def test_punctuation_percent(categories_percent):
def test_capitalized_words_percent(): def test_capitalized_words_percent():
eq_(0.0, h.capitalized_words_percent('')) eq_(0.0, h.capitalized_words_percent(''))
eq_(100.0, h.capitalized_words_percent('Example Corp')) eq_(100.0, h.capitalized_words_percent('Example Corp'))
eq_(50.0, h.capitalized_words_percent('Qqq qqq QQQ 123 sss')) eq_(50.0, h.capitalized_words_percent('Qqq qqq Aqs 123 sss'))
eq_(100.0, h.capitalized_words_percent('Cell 713-444-7368')) eq_(100.0, h.capitalized_words_percent('Cell 713-444-7368'))
eq_(100.0, h.capitalized_words_percent('8th Floor')) eq_(100.0, h.capitalized_words_percent('8th Floor'))
eq_(0.0, h.capitalized_words_percent('(212) 230-9276')) eq_(0.0, h.capitalized_words_percent('(212) 230-9276'))
eq_(50.0, h.capitalized_words_percent('Password: REMARKABLE'))
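The assertions above pin down the behavior of `capitalized_words_percent` fairly tightly: words starting with a digit, `(`, or `+` are excluded from the ratio, and all-caps words like `REMARKABLE` don't count as capitalized. A minimal sketch consistent with these test values (not talon's actual implementation; the `INVALID_WORD_START` pattern and the second-letter check are assumptions inferred from the assertions):

```python
import re

# Assumed: words starting with '(', '+', or a digit are excluded from the ratio
INVALID_WORD_START = re.compile(r'[(+\d]')

def capitalized_words_percent(s):
    """Percent of valid words that look capitalized: first letter upper,
    second letter (if any) not upper, so 'REMARKABLE' does not count."""
    words = s.split()
    valid = 0
    capitalized = 0
    for word in words:
        if INVALID_WORD_START.match(word):
            continue
        valid += 1
        if word[0].isupper() and not (len(word) > 1 and word[1].isupper()):
            capitalized += 1
    if valid:
        return 100.0 * capitalized / valid
    return 0.0
```

With this sketch, `'Qqq qqq Aqs 123 sss'` yields 50.0 (2 of 4 valid words capitalized, `123` skipped), matching the updated assertion.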
def test_has_signature(): def test_has_signature():
@@ -204,7 +207,7 @@ def test_has_signature():
'sender@example.com')) 'sender@example.com'))
assert_false(h.has_signature('http://www.example.com/555-555-5555', assert_false(h.has_signature('http://www.example.com/555-555-5555',
'sender@example.com')) 'sender@example.com'))
long_line = ''.join(['q' for e in xrange(28)]) long_line = ''.join(['q' for e in range(28)])
assert_false(h.has_signature(long_line + ' sender', 'sender@example.com')) assert_false(h.has_signature(long_line + ' sender', 'sender@example.com'))
# wont crash on an empty string # wont crash on an empty string
assert_false(h.has_signature('', '')) assert_false(h.has_signature('', ''))


@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*- # -*- coding: utf-8 -*-
from __future__ import absolute_import
from . import * from . import *
from . fixtures import * from . fixtures import *
@@ -7,16 +8,19 @@ import os
import email.iterators import email.iterators
from talon import quotations from talon import quotations
import six
from six.moves import range
from six import StringIO
@patch.object(quotations, 'MAX_LINES_COUNT', 1) @patch.object(quotations, 'MAX_LINES_COUNT', 1)
def test_too_many_lines(): def test_too_many_lines():
msg_body = """Test reply msg_body = """Test reply
Hi
-----Original Message----- -----Original Message-----
Test""" Test"""
eq_(msg_body, quotations.extract_from_plain(msg_body)) eq_("Test reply", quotations.extract_from_plain(msg_body))
def test_pattern_on_date_somebody_wrote(): def test_pattern_on_date_somebody_wrote():
@@ -32,6 +36,19 @@ On 11-Apr-2011, at 6:54 PM, Roman Tkachenko <romant@example.com> wrote:
eq_("Test reply", quotations.extract_from_plain(msg_body)) eq_("Test reply", quotations.extract_from_plain(msg_body))
def test_pattern_sent_from_samsung_smb_wrote():
msg_body = """Test reply
Sent from Samsung MobileName <address@example.com> wrote:
>
> Test
>
> Roman"""
eq_("Test reply", quotations.extract_from_plain(msg_body))
def test_pattern_on_date_wrote_somebody(): def test_pattern_on_date_wrote_somebody():
eq_('Lorem', quotations.extract_from_plain( eq_('Lorem', quotations.extract_from_plain(
"""Lorem """Lorem
@@ -54,6 +71,18 @@ On 04/19/2011 07:10 AM, Roman Tkachenko wrote:
eq_("Test reply", quotations.extract_from_plain(msg_body)) eq_("Test reply", quotations.extract_from_plain(msg_body))
def test_date_time_email_splitter():
msg_body = """Test reply
2014-10-17 11:28 GMT+03:00 Postmaster <
postmaster@sandboxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.mailgun.org>:
> First from site
>
"""
eq_("Test reply", quotations.extract_from_plain(msg_body))
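The splitter tests above all look for a marker line that introduces the quoted block. A heavily simplified sketch of the idea (the real talon patterns are far more elaborate and localized; `SPLITTER` here is an illustrative stand-in, not a pattern from the library):

```python
import re

# Toy marker: "On <anything up to ~80 chars> wrote:" on its own line, or the
# classic "-----Original Message-----" delimiter.
SPLITTER = re.compile(r'^\s*(On\s.{0,80}wrote:|-----Original Message-----)\s*$',
                      re.M)

def extract_reply(msg_body):
    """Return everything before the first quotation marker, or the whole body
    when no marker is found."""
    match = SPLITTER.search(msg_body)
    return msg_body[:match.start()].strip() if match else msg_body.strip()
```

This toy version only handles single-line markers; the multi-line and date-plus-email splitters exercised above need considerably more machinery.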
def test_pattern_on_date_somebody_wrote_allows_space_in_front(): def test_pattern_on_date_somebody_wrote_allows_space_in_front():
msg_body = """Thanks Thanmai msg_body = """Thanks Thanmai
On Mar 8, 2012 9:59 AM, "Example.com" < On Mar 8, 2012 9:59 AM, "Example.com" <
@@ -113,7 +142,7 @@ def _check_pattern_original_message(original_message_indicator):
-----{}----- -----{}-----
Test""" Test"""
eq_('Test reply', quotations.extract_from_plain(msg_body.format(unicode(original_message_indicator)))) eq_('Test reply', quotations.extract_from_plain(msg_body.format(six.text_type(original_message_indicator))))
def test_english_original_message(): def test_english_original_message():
_check_pattern_original_message('Original Message') _check_pattern_original_message('Original Message')
@@ -311,6 +340,33 @@ Emne: The manager has commented on your Loop
Blah-blah-blah Blah-blah-blah
""")) """))
def test_swedish_from_block():
eq_('Allo! Follow up MIME!', quotations.extract_from_plain(
u"""Allo! Follow up MIME!
Från: Anno Sportel [mailto:anno.spoel@hsbcssad.com]
Skickat: den 26 augusti 2015 14:45
Till: Isacson Leiff
Ämne: RE: Week 36
Blah-blah-blah
"""))
def test_swedish_from_line():
eq_('Lorem', quotations.extract_from_plain(
"""Lorem
Den 14 september, 2015 02:23:18, Valentino Rudy (valentino@rudy.be) skrev:
Veniam laborum mlkshk kale chips authentic. Normcore mumblecore laboris, fanny pack readymade eu blog chia pop-up freegan enim master cleanse.
"""))
def test_norwegian_from_line():
eq_('Lorem', quotations.extract_from_plain(
u"""Lorem
På 14 september 2015 på 02:23:18, Valentino Rudy (valentino@rudy.be) skrev:
Veniam laborum mlkshk kale chips authentic. Normcore mumblecore laboris, fanny pack readymade eu blog chia pop-up freegan enim master cleanse.
"""))
def test_dutch_from_block(): def test_dutch_from_block():
eq_('Gluten-free culpa lo-fi et nesciunt nostrud.', quotations.extract_from_plain( eq_('Gluten-free culpa lo-fi et nesciunt nostrud.', quotations.extract_from_plain(
"""Gluten-free culpa lo-fi et nesciunt nostrud. """Gluten-free culpa lo-fi et nesciunt nostrud.
@@ -610,6 +666,15 @@ def test_preprocess_postprocess_2_links():
eq_(msg_body, quotations.extract_from_plain(msg_body)) eq_(msg_body, quotations.extract_from_plain(msg_body))
def body_iterator(msg, decode=False):
for subpart in msg.walk():
payload = subpart.get_payload(decode=decode)
if isinstance(payload, six.text_type):
yield payload
else:
yield payload.decode('utf8')
def test_standard_replies(): def test_standard_replies():
for filename in os.listdir(STANDARD_REPLIES): for filename in os.listdir(STANDARD_REPLIES):
filename = os.path.join(STANDARD_REPLIES, filename) filename = os.path.join(STANDARD_REPLIES, filename)
@@ -617,8 +682,8 @@ def test_standard_replies():
continue continue
with open(filename) as f: with open(filename) as f:
message = email.message_from_file(f) message = email.message_from_file(f)
body = email.iterators.typed_subpart_iterator(message, subtype='plain').next() body = next(email.iterators.typed_subpart_iterator(message, subtype='plain'))
text = ''.join(email.iterators.body_line_iterator(body, True)) text = ''.join(body_iterator(body, True))
stripped_text = quotations.extract_from_plain(text) stripped_text = quotations.extract_from_plain(text)
reply_text_fn = filename[:-4] + '_reply_text' reply_text_fn = filename[:-4] + '_reply_text'


@@ -1,9 +1,133 @@
# coding:utf-8
from __future__ import absolute_import
from . import * from . import *
from talon import utils from talon import utils as u
import cchardet
import six
from lxml import html
def test_get_delimiter(): def test_get_delimiter():
eq_('\r\n', utils.get_delimiter('abc\r\n123')) eq_('\r\n', u.get_delimiter('abc\r\n123'))
eq_('\n', utils.get_delimiter('abc\n123')) eq_('\n', u.get_delimiter('abc\n123'))
eq_('\n', utils.get_delimiter('abc')) eq_('\n', u.get_delimiter('abc'))
def test_unicode():
eq_ (u'hi', u.to_unicode('hi'))
eq_ (type(u.to_unicode('hi')), six.text_type )
eq_ (type(u.to_unicode(u'hi')), six.text_type )
eq_ (type(u.to_unicode('привет')), six.text_type )
eq_ (type(u.to_unicode(u'привет')), six.text_type )
eq_ (u"привет", u.to_unicode('привет'))
eq_ (u"привет", u.to_unicode(u'привет'))
# some latin1 stuff
eq_ (u"Versión", u.to_unicode(u'Versi\xf3n'.encode('iso-8859-2'), precise=True))
def test_detect_encoding():
eq_ ('ascii', u.detect_encoding(b'qwe').lower())
eq_ ('iso-8859-2', u.detect_encoding(u'Versi\xf3n'.encode('iso-8859-2')).lower())
eq_ ('utf-8', u.detect_encoding(u'привет'.encode('utf8')).lower())
# fallback to utf-8
with patch.object(u.chardet, 'detect') as detect:
detect.side_effect = Exception
eq_ ('utf-8', u.detect_encoding('qwe'.encode('utf8')).lower())
def test_quick_detect_encoding():
eq_ ('ascii', u.quick_detect_encoding(b'qwe').lower())
eq_ ('windows-1252', u.quick_detect_encoding(u'Versi\xf3n'.encode('windows-1252')).lower())
eq_ ('utf-8', u.quick_detect_encoding(u'привет'.encode('utf8')).lower())
@patch.object(cchardet, 'detect')
@patch.object(u, 'detect_encoding')
def test_quick_detect_encoding_edge_cases(detect_encoding, cchardet_detect):
cchardet_detect.return_value = {'encoding': 'ascii'}
eq_('ascii', u.quick_detect_encoding(b"qwe"))
cchardet_detect.assert_called_once_with(b"qwe")
# fallback to detect_encoding
cchardet_detect.return_value = {}
detect_encoding.return_value = 'utf-8'
eq_('utf-8', u.quick_detect_encoding(b"qwe"))
# exception
detect_encoding.reset_mock()
cchardet_detect.side_effect = Exception()
detect_encoding.return_value = 'utf-8'
eq_('utf-8', u.quick_detect_encoding(b"qwe"))
ok_(detect_encoding.called)
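The edge-case test above encodes a three-step fallback chain: fast cchardet detection first, then the slower chardet-based `detect_encoding`, then utf-8 as a last resort. A dependency-free sketch of that chain with the detectors injected (the real code calls `cchardet.detect` directly; the parameter names here are assumptions):

```python
def quick_detect_encoding(content, fast_detect, slow_detect, default='utf-8'):
    """Try the fast detector; on an empty result or an exception, fall back
    to the slow detector; if that fails too, return the default encoding."""
    try:
        # cchardet-style detectors return a dict like {'encoding': 'ascii'}
        result = fast_detect(content) or {}
        encoding = result.get('encoding')
        if encoding:
            return encoding
    except Exception:
        pass
    try:
        return slow_detect(content) or default
    except Exception:
        return default
```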
def test_html_to_text():
html = """<body>
<p>Hello world!</p>
<br>
<ul>
<li>One!</li>
<li>Two</li>
</ul>
<p>
Haha
</p>
</body>"""
text = u.html_to_text(html)
eq_(b"Hello world! \n\n * One! \n * Two \nHaha", text)
eq_(u"привет!", u.html_to_text("<b>привет!</b>").decode('utf8'))
html = '<body><br/><br/>Hi</body>'
eq_ (b'Hi', u.html_to_text(html))
html = """Hi
<style type="text/css">
div, p, li {
font: 13px 'Lucida Grande', Arial, sans-serif;
}
</style>
<style type="text/css">
h1 {
font: 13px 'Lucida Grande', Arial, sans-serif;
}
</style>"""
eq_ (b'Hi', u.html_to_text(html))
html = """<div>
<!-- COMMENT 1 -->
<span>TEXT 1</span>
<p>TEXT 2 <!-- COMMENT 2 --></p>
</div>"""
eq_(b'TEXT 1 \nTEXT 2', u.html_to_text(html))
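The expectations above (strip `<style>` blocks and comments, keep only visible text) can be approximated with a small stdlib parser. This is a rough sketch, not talon's lxml-based `html_to_text`: it joins text with single spaces rather than reproducing the exact `\n`/`*` list formatting the first assertion shows, and it returns `str` rather than bytes:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible text, skipping <style>/<script> content and comments
    (comments never reach handle_data, so they are dropped for free)."""
    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip = 0
    def handle_starttag(self, tag, attrs):
        if tag in ('style', 'script'):
            self._skip += 1
    def handle_endtag(self, tag):
        if tag in ('style', 'script') and self._skip:
            self._skip -= 1
    def handle_data(self, data):
        if not self._skip and data.strip():
            self.parts.append(data.strip())

def html_to_text(raw_html):
    """Best-effort text extraction from an html fragment."""
    parser = TextExtractor()
    parser.feed(raw_html)
    return ' '.join(parser.parts)
```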
def test_comment_no_parent():
s = "<!-- COMMENT 1 --> no comment"
d = u.html_document_fromstring(s)
eq_("no comment", u.html_tree_to_text(d))
@patch.object(u.html5parser, 'fromstring', Mock(side_effect=Exception()))
def test_html_fromstring_exception():
eq_(None, u.html_fromstring("<html></html>"))
@patch.object(u.html5parser, 'document_fromstring')
def test_html_document_fromstring_exception(document_fromstring):
document_fromstring.side_effect = Exception()
eq_(None, u.html_document_fromstring("<html></html>"))
@patch.object(u, 'html_fromstring', Mock(return_value=None))
def test_bad_html_to_text():
bad_html = "one<br>two<br>three"
eq_(None, u.html_to_text(bad_html))
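The three tests above all exercise the same defensive pattern from this PR: if the html5lib parser raises, return `None` and let callers fall back to the untouched html. The shape of that wrapper, with the parser injected for illustration (`parse` stands in for `html5parser.fromstring`):

```python
def html_fromstring(raw_html, parse):
    """Return the parsed tree, or None when the parser raises, so that
    callers can fall back to the raw, unparsed html."""
    try:
        return parse(raw_html)
    except Exception:
        return None
```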


@@ -1,3 +1,4 @@
from __future__ import absolute_import
from talon.signature import EXTRACTOR_FILENAME, EXTRACTOR_DATA from talon.signature import EXTRACTOR_FILENAME, EXTRACTOR_DATA
from talon.signature.learning.classifier import train, init from talon.signature.learning.classifier import train, init