=Paper=
{{Paper
|id=Vol-2888/paper4
|storemode=property
|title=Automatic Semantic Annotation for the Easification of Action Rule Legislative Sentences for Specialist Readers
|pdfUrl=https://ceur-ws.org/Vol-2888/paper4.pdf
|volume=Vol-2888
|authors=Sherry Maynard
|dblpUrl=https://dblp.org/rec/conf/icail/Maynard21
}}
==Automatic Semantic Annotation for the Easification of Action Rule Legislative Sentences for Specialist Readers==
Sherry Maynard
The University of the West Indies, Cave Hill Campus, Cave Hill, St. Michael, Barbados
Abstract
This research has applied automatic semantic annotation to a text easification solution that aids
non-legal experts in reading legislation as part of their work. It annotates the modality, actor,
action, case and condition concepts within action rule legislative sentences. The research first
analyzes the lexical and syntactic compositions of a corpus of legislation commonly read by a
group of compliance professionals and then extracts data sets of action rule legislative sentences
for annotation. The annotation is rule-based, fully automated and utilizes Tregex patterns and
Tsurgeon operations. The resultant easified legislative sentences were confirmed by legal
experts as having preserved the semantic integrity of the original sentences. In addition, the
professionals who participated in the research reported lower intrinsic and extraneous cognitive
loads when they read the easified version of a legislative sentence than when they read the
original version of the same sentence.
Keywords
Easification, semantic annotation, specialist readers, cognitive load, intrinsic load, extraneous load
1. Introduction

This research fully automates the semantic annotation of five concepts found in action-rule legislative sentences. These concepts include modality, actor, action, case and condition. The semantic annotation is part of a larger goal of easifying the legislative sentences to aid the comprehension of specialist readers, i.e. non-legal experts reading legislation as part of their work. Specialist readers may include professionals in areas such as compliance, audit, finance, risk, information security, human resources and health and safety.

It has long been acknowledged that legal language is complex both in its construction and in the expression of its ideas. Syntactic contributors to this complexity include the density of prepositional phrases, the high degree of subordination, syntactic discontinuity and lengthy sentences [1-5]. In addition, the language is characterized by technical vocabulary, wordiness, repetition, nominalization and the excessive use of binomial and multinomial expressions [2, 4, 6, 7].

Even legal experts resort to reading the explanatory notes that accompany a bill rather than the legislative text itself [8, 9]. Similarly, some legislators and government officials have confessed that they do not understand much of the bills they vote on [10]. Nonetheless, organisations aiming to reduce cost and looking for skills beyond legal expertise are seeking persons with investigative, audit and critical thinking skills to have primary responsibility for the legal compliance function within their organizations [11-13]. Hence, persons with training in organizational behavior, finance, accounting and information systems are being regarded as ideal candidates for this critical responsibility [14]. The legal compliance function is an important part of modern businesses
Proceedings of the Fifth Workshop on Automated Semantic
Analysis of Information in Legal Text (ASAIL 2021), June 25,
2021, São Paulo, Brazil.
EMAIL: sherry.maynard@cavehill.uwi.edu
© 2021 Copyright for this paper by its author. Use permitted under Creative
Commons License Attribution 4.0 International (CC BY 4.0).
CEUR Workshop Proceedings (CEUR-WS.org), ISSN 1613-0073
as they navigate aggressive regulatory environments, unconstrained by geographical boundaries [15], and while the cost of legal compliance is high, the cost of non-compliance is approximately three times higher [16].

2. The Corpus Analysis

The Barbados legislation that formed the corpus analyzed in this research is that commonly read by forty-five members of a compliance professional association in Barbados. Seventy-four percent of these participants have no legal training and eighty-four percent experience challenges reading legislation. The challenges reported mirrored those associated with the syntactic and lexical features of legal language as outlined in the introduction. The Flesch reading ease scores of these commonly read Barbados legislations range from 28.1 to 36.6, i.e. they are difficult to very difficult to read [17]. The upcoming sections detail the syntactic and lexical features of the corpus.

2.1. Syntactic & Lexical Features

The corpus analyzed is composed of the following Barbados legislation:

• Exempt Insurance Act, 1983
• Companies Act, 1985
• Proceeds of Crime, 1990
• International Business Companies, 1992
• Financial Institutions Act, 1997
• International Financial Services Act, 2002
• Anti-Terrorism Act, 2002
• Money Laundering and Financing of Terrorism (Prevention and Control) Act, 2010
• Financial Services Commission, 2010

Overall, the corpus contains 192155 tokens and 3306 sentences. This size is sufficiently large because the conservative nature of legal discourse does not necessitate a large corpus to determine its linguistic features. Bhatia (1983) identified linguistic patterns in legislative text based on a single British Parliament act; these findings were later confirmed when similar experiments were repeated on larger corpuses of European, Hong Kong and Chinese legislative texts [18, 19].

The average sentence length of the legislation in the corpus ranges from 39 to 66 words, with the overall average of the corpus being 53 words. This average sentence length significantly exceeds the recommendation of Curtotti et al. (2015) of keeping legislative sentence lengths below 30 words [20]. Furthermore, it is more than double the average sentence length for English academic articles (26 words) [21] and the recommended length for general text of 15–20 words [22]. Sentence length in legislative writing could be considered a secondary matter when compared to the benefit gained from having as many related ideas together in a single sentence to mitigate against taking the law out of context [23-25].

The corpus has on average three coordinating conjunctions per sentence. In calculating the usage of the coordinating conjunctions, detection rules were created to identify when ‘and’ / ‘or’ were used in binomial or multinomial expressions; these usages were deducted from the total conjunctions prior to calculating the ratio of coordinating conjunctions per sentence. Therefore, the average represents phrasal or clausal conjoining. In the corpus, ‘or’, ‘and’ and ‘for’ are the primary conjunctions used, at 46.58%, 27.16% and 20.62% respectively. On the contrary, the coordinating conjunction ‘but’, which marks contrast, had only a 2.16% presence in the corpus. Similarly, ‘nor’ and ‘so’ had only 3.10% and 0.38% usage respectively; ‘yet’ had no occurrences within the corpus.

In addition, the corpus had on average two subordinating conjunctions per sentence. Relative clauses are heavily used in the corpus, with relative pronouns making up 53.69% of the total subordinating conjunctions identified. As with coordinating conjunctions, contrast-type subordinating conjunctions (e.g. while, whereas) are seldom used within the corpus; they make up 0.06% of the total subordinating conjunctions. In addition, there is one occurrence of the similarity-type conjunctions, i.e. the term ‘likewise’.

Curtotti et al. (2015) suggested, for improved readability of legislative text, avoiding the use of more than two conjunctions per sentence [20]. Multiple conjunctions create complex sentence structures and syntactic discontinuities that can make sentences difficult to read and understand. However, for every negative impact a given linguistic feature has on the readability of the legislative text there are corresponding benefits for the legal domain. For example, while the intensive use of conjunctions can result in
cognitive overload for some readers, their usage serves the legal goals of precision and all-inclusiveness [18, 26, 27]. Achieving these goals could mean compacting all relevant information into a single, long, complex sentence that aids in minimizing the possibility of loopholes and evasions in the law [18, 28, 29].

A sample of 208 sentences (45–115 words) was extracted from the corpus and their dependency distance metric calculated. This metric can be used as an indicator of comprehension difficulty and has implications for the utilization of readers’ working memory capacities. A recommended threshold is less than 3 words [30]. The average dependency distance metric of the sample sentences is 4 words; the lowest being 2 words and the highest 9 words. Therefore, on average four words separate two elements that share a syntactic relationship and would typically reside alongside each other in the sentence structure.

Finally, the use of Latin and Old English terms in the corpus was assessed. The most commonly used archaic terms are “thereof”, “forthwith”, “thereby” and “thereafter”, with 98, 61, 26 and 22 occurrences respectively. The most commonly used Latin term was “mutatis mutandis”, which is used 12 times. However, overall the use of Old English and Latin words in the corpus is minuscule: 243 Old English words and 30 Latin words. In a corpus of 192155 words, these usages amount to a near-zero term-to-sentence ratio. These lexical occurrences support the findings of a study by Dell’Orletta (2012), which showed no significant differences between the lexicon of a set of EU legislation and that of stories from the Wall Street Journal. On the contrary, there was a noticeable difference in the underlying syntactic structure of the writings in the two domains [31].

3. The Semantic Annotation of Legal Concepts

The concepts annotated for the easification of action rule legislative sentences are defined in table 1 below. The concepts were adopted from Coode’s (1845) specification of the essential and optional elements of action rule legislative sentences [32].

Table 1: Concept Definitions

CONCEPT    DEFINITION
Modality   The auxiliary representing the action’s modality
Actor      The person or class of persons performing or prohibited from performing a legal action
Action     The rights, privileges, powers, obligations or liabilities
Case       The circumstances / occasions in which the legal action applies
Condition  The prerequisites that must occur before the legal action becomes operable

The semantic annotations are rule based and utilize Tregex patterns and Tsurgeon operations [33]. They are fully automated and require no human intervention in pre-processing the sentences. The Stanford CoreNLP [34] pipeline was used to perform the typical NLP pre-processing tasks of tokenization, sentence segmentation, part of speech tagging and constituency parsing. The output of the parsed tree is the primary basis for the annotation rules. Nine Tregex pattern – Tsurgeon operation pairs were created to detect the five semantic concepts defined in table 1 above. The upcoming sections provide an overview of the Tregex rules specified in table 2 below.

Table 2: Rule Specification

3.1. The Modality Concept

The first rule searches for modal auxiliaries within the sentence, primarily those at higher levels within the tree structure. The rule however is deliberately wide reaching to ensure that it captures the correct modal auxiliary needed for
the identification of the ‘Actor’ and ‘Action’ concepts in subsequent rules. Generally, the targeted modal auxiliary is sandwiched between the ‘Actor’ and ‘Action’ sub-trees. The annotation rule identifies a modal auxiliary which is dominated by a verb phrase (VP). The verb phrase (VP) is in turn immediately dominated by either a declarative clause or a subordinate clause that is immediately dominated by the root of the parsed tree.

3.2. The Actor Concept

The actor rule detects the noun phrase that acts as the subject in the English language sentence structure. Therefore, it is a node that must be immediately dominated by nodes that are at high levels within the parse tree, i.e. clauses immediately dominated by the root node. The actor noun phrase (NP) is the left sister of the verb phrase (VP) that dominates the modal auxiliary detected in the modality rule. In addition, the rule accommodates instances where the connection between the NP and the VP is interrupted by an adverbial phrase, and makes provisions for complex sentences joined by coordinating conjunctions, in which case the conjunction node acts as the head of the embedded sentence.

3.3. The Action Concept

The legal action within the legislative sentence is a verb phrase (VP) that is the right sister of the sub-tree that represents the ‘Actor’ concept and which precedes the ‘Modality’ concept. The ‘Action’ verb phrase represents the predicate of the sentence and is therefore immediately dominated by high-level nodes in the sentence tree that have direct connections to the root node. The annotation rules covered to this point are the core or mandatory concepts in the action-rule legislative sentences.

3.4. The Case & Condition Concepts

The case rule captures the Wh-clauses in the initial sentence position, which typically represent the case concept. These clauses are subordinate clauses that immediately dominate a Wh-adverbial phrase, which in turn dominates a Wh-adverb that begins with an upper case ‘W’ followed by a lower case ‘h’ and ‘e’ and then by any other characters. This regular expression detects clauses beginning with terms such as ‘Where’, ‘When’, ‘Whence’ and extensions such as ‘Whenever’.

The condition rule identifies adverbial and prepositional phrases that are immediately dominated by a declarative clause and immediately dominate an adverb or a preposition respectively. In most instances, the case and condition clauses end with a comma. An additional rule searches for this comma and relocates it inside the case and condition sub-trees. The goal is to ensure that during the easification process an orphan comma is not left behind.

4. Related Works

Boella et al. (2013) implemented a legal concept detection mechanism using a Support Vector Machine binary classifier. They utilized syntactic dependencies to build triplets to train three classifiers to categorize the concepts of active roles, passive roles and objects [35]. They used the Italian TULE parser to create the dependency information for the legislative text [36]. The results of their approach showed high precision and recall for the detection of the active role (precision 97.2% and recall 92.6%), moderate performance for the passive role (precision 100% and recall 26.8%), and low performance for the object role (precision 59.3% and recall 31.9%). These results were negatively affected by the accuracy of the POS tagger and the syntactic parser. For instance, when the POS tagger did not recognize a noun, it missed an eligible word for a semantic label, and the dependency parser could incorrectly label the semantic relations associated with that term [35]. One of the reasons given for the use of the machine learning classifier was to overcome the need for the sequential execution typically associated with pattern-matching rules.

Sleimi et al. (2018) utilized the traditional ordered set of pattern matching rules to detect a collection of legal concepts and attained high performance across the varying concepts [37]. The purpose of the annotation in this work is to support legal requirements engineering. Sleimi et al. (2018) used Tregex patterns to extract ten main phrase level concepts from constituency and dependency parsed trees. They established a set
of markers for each concept type based on dictionaries and ontologies. These markers formed part of the pattern matching rules. For example, one of the patterns for the “Actor” rule (subject dependency and NP < actor marker) was represented as a noun phrase in the subject dependency position and one that immediately dominates a term from the list of actor markers. The accuracy of the Sleimi et al. (2018) rule detections had overall precision and recall measures of 87.4% and 85.5% respectively, using 200 statements from Luxembourg traffic laws [37].

The level of accuracy attained in the work of Sleimi et al. (2018) may be due in part to the use of predefined terms within the relevant concept repositories. While this approach simplifies the rule construction, it requires human pre-processing to identify the terms that represent the markers for each concept. This technique was utilized in other tools such as GaiusT [38] and NomosT [39]. It however has some drawbacks; for instance, where the repositories are inadequately defined, the performance of the detection rules will be negatively affected. In addition, new markers will need to be added to extend the detection capabilities of the annotation rules beyond the initial legislative domain. It is important to note that the work of Sleimi et al. (2018) also suffered challenges associated with the performance of the parser, as with the work of Boella et al. (2013). Much of the Sleimi et al. (2018) detection errors occurred from the constituency parser’s inaccurate attachments of subordination, coordination and prepositional phrases, causing the dependency parser to infer incorrect dependency relationships amongst the nodes [37].

5. Research Experiment

The semantic annotations were done at a sentence level using three data sets containing action rule sentences that met the following criteria:

• contiguous and complete;
• a single legal action;
• simple, complex & compound structures;
• a single or compound subject;
• at least one modal auxiliary in the upper level of the sentence tree;
• 40 or more words;
• dependency distance metric of 3 or more.

Contiguous and complete sentences are those with a non-bulleted format that end with a full stop and not a semicolon. The selective nature of the sentences in the experiment was driven primarily by the easification methodology utilized in the next stage of the experiment and the limitations of using a constituency parser not trained on legislative text.

A hundred development sentences (Dev-Set) were extracted from a set of Barbados intellectual property legislation and annotated by the author. These were used to iteratively test the annotation rules during construction. These legislations included:

• Trademark Act, 1985
• Patent Act, 2001
• Industrial Designs, 1981
• Copyright Act, 1998
• Telecommunications Act, 2001

An assessment of the syntactic composition of the intellectual property legislations was done and compared against those read by the research participants to ensure a degree of compatibility. The use of development sentences from a comparable but different legislative domain from those read by the participants was to ensure that the algorithm only processes sentences from the participants’ domain after the rule development was frozen. Two test sets were extracted for the purpose of testing the performance of the annotation rules.

The first test set (Test-Set A) contained one hundred and twenty-one sentences extracted from the legislation read by the participants. These legislations were primarily from the financial services sector. The average sentence length for Test-Set A was 63 words and the average dependency distance metric was four. The author annotated Test-Set A to provide a gold standard to assess the performance of the annotation rules.

The second test set (Test-Set B) consisted of sixty-three sentences extracted from the Barbados Road Traffic Act 1981. The average sentence length for Test-Set B was 60 words and the average dependency distance metric was four. Two legal experts independently annotated these sentences. The author was guided by the annotation procedures recommended by Hovy and Lavid (2010) [40]; these included:
• The provision of guidelines that define the concepts and the method of highlighting each concept within the data set;
• Giving the annotators practice sentences to ensure the annotation process is understood and the instructions are clear;
• Using annotators with reasonably similar levels of education;
• A minimum use of two annotators and having them act independently;
• In the absence of a third adjudicator annotator, any sentences where the annotations differ should be discarded.

The annotators were two lawyers with equivalent educational training. They used the text highlight feature in Microsoft Office Word to highlight each concept using a specified color scheme. As a way of improving the speed and reliability of the annotations, the legal experts were instructed to annotate one concept at a time across all the sentences; for example, the first round of annotations highlights the actor concepts only, the second round the actions, etc. [40]. Since two annotators were used in the experiment, the thirteen sentences where their annotations differed were deleted from the test set. Hence 50 sentences remained in Test-Set B, which represents a 79% agreement between the annotators. In addition, to maximize the limited time of the legal experts, a trade-off was made where the experts annotated all of the mandatory concepts and the case concept; the optional condition concept was not annotated. The legal experts did not engage the author during the annotation process.

5.1. Results of the Annotations

The precision, recall and F measures were computed for the development and the two test sets. Both lenient and strict computations were performed; the lenient computation assigned 0.5 points to partial annotations, while the strict computations assigned no points to partial detections, hence treating them as missed annotations. The measures were done using GATE Developer 8.0 [41]. Based on the application of the semantic annotation to the easification of sentences within the business context, the partial detections are unacceptable; therefore only the strict computations were used. Table 3 below shows the results of the annotation rules using the Dev-Set.

Table 3: Annotation Results for Dev-Set

CONCEPT    Truth   Extracted   Perfect Match   Precision %   Recall %   F Measure %
Modality   118     141         118             83.7          100        91.1
Actor      116     103         94              97.9          82.5       89.5
Action     117     98          92              98.9          79.3       88.0
Case       34      33          27              100           79.4       88.5
Condition  20      17          17              100           85.0       91.9
Overall    405     392         348             93            86.6       89.7

The rules detected 392 annotations from the development set. Of these, 348 or 86% were perfect matches, and 57 were missed or partially detected annotations (14%). Annotations were missed either because of the wrong text or no text being detected for a given concept.

Once the rule construction was frozen, the performance of the semantic annotation rules was tested using Test-Set A and Test-Set B. The algorithm had not seen any of the sentences in these test sets prior to the computation of the results shown in tables 4 and 5 below.

Table 4: Annotation Results for Test-Set A

CONCEPT    Truth   Extracted   Perfect Match   Precision %   Recall %   F Measure %
Modality   142     159         142             89.3          100        94.4
Actor      141     134         129             97.0          91.5       94.2
Action     142     131         124             100           87.3       93.2
Case       47      47          41              100           87.2       93.2
Condition  34      30          28              100           82.4       90.2
Overall    506     501         464             95.7          91.7       93.6

Table 4 shows the detection results for Test-Set A; of the 501 annotations detected, 464 were perfect matches, i.e. 92%; 42 were missed or partially detected (8%). As expected, based on the strategy discussed earlier, the results for the modality concept showed a 100% recall. The recall for the condition concept was the lowest at 82.4%. Alternately, there were 100% precision results for the action, case and condition concepts. The F measures for all the concepts were above ninety,
with the overall precision, recall and F measures being 95.7, 91.7 and 93.6 percent respectively. These overall percentages are not averages of the individual concept measures, but rather computations based on the detection totals across the concepts.

The results presented so far have been compared against truths annotated by the author. The results for Test-Set B are compared against truths annotated by the two legal experts participating in the research; these are shown in table 5 below.

Table 5: Annotation Results for Test-Set B

CONCEPT    Truth   Extracted   Perfect Match   Precision %   Recall %   F Measure %
Modality   50      60          50              83.3          100        90.9
Actor      51      44          44              100           86.3       92.6
Action     51      41          41              100           80.4       89.1
Case       21      19          19              100           90.5       95.0
Overall    173     164         154             93.9          89.0       91.4

Of the 173 annotations detected, 154 were perfect matches, i.e. 89%; 19 were missed or partially detected (11%). The performance results on Test-Set B are comparable with those on Test-Set A. The overall precision was 93.9%; there was a 100% recall measure for the modality concept, and the ‘case’ concept had a recall of 90.5%. The overall F-measure was 91.4%.

6. Discussion

Generally, the detection results of the semantic annotations were good, with values of 83–100% for precision, 80–100% for recall and 89–94% for the F measure. To ensure the annotations were fully automatic, and hence to eliminate human pre-processing, the implementation deviated from the use of the concept markers utilized in tools such as GaiusT [38], NomosT [39] and the tool by Sleimi et al. (2018) [37]. This made the detection rules more complicated but allows for scalability and applicability across multiple legislations in varying domains. As illustrated in the data sets, the annotation rules’ detection capabilities spanned the intellectual property, financial services and road traffic legislations.

The detection rules for the three mandatory components of the action rule legislative sentences have a high degree of dependence. Hence the risk of an initial failure in detecting the modality concept can be transferred into failed actor and action detections. To mitigate this drawback, the modal detection rule was designed to be all-inclusive in nature, and in all the test sets it had 100% recall results.

The automated detection rules used in this research suffered from similar parser-related difficulties experienced in other works [35, 37, 42, 43]. In the case of the Stanford constituency parser, while the support website recommended the most up-to-date version of the parser for the best performance, that recommendation did not hold true for the legislative text used in this study. The researcher found that the older probabilistic context free grammar parser generated fewer parsing errors than the newer shift-reduce parser.

The increase in the parsing errors was directly linked to the increase in the complexity of the sentence structures. Repeated errors occurred when the subject of the sentence had one or more embedded qualifiers, when prepositional phrases broke the continuity between the modal auxiliary and the main verb, and where compound sentences contained ‘or’ conjunctions. In addition, some sentences were tagged as fragments if the typical English sentence structure (subject-verb-object) was not detected. Another interesting parsing error occurred when the term ‘issue’ used in the context “shall issue to the applicant” was tagged as a noun instead of a verb. This mistagging of the word ‘issue’ reflected the part-of-speech tagger’s interpretation of ‘issue’ as a topic or problem, instead of the act of distributing something. This error is likely rooted in the differences in genre between the material used in training the part-of-speech tagger and legislative text.

While the current work showed the applicability of the annotation rules across legislation in different domains, an expanded scope of the action rule sentences would further test the generalizability of the annotation rules. Therefore, future work includes utilizing larger, more diverse datasets to test the annotation rules. However, this will also necessitate the employment of techniques to overcome the limitations of the part of speech and constituency parsers.
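The core constituency-tree logic described in sections 3.1–3.3 (a modal auxiliary inside a VP directly under a top-level clause, the VP’s left-sister NP as the actor, and the remainder of the VP as the action) can be illustrated with a minimal, self-contained sketch. This is not the author’s Tregex/Tsurgeon implementation: the `Node`, `parse` and `annotate` names and the hand-built bracketed tree are illustrative assumptions, and the logic is deliberately reduced to a single declarative-clause case.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Node:
    """A constituency-tree node; leaf nodes carry a word as their label."""
    label: str
    children: list = field(default_factory=list)

    def leaves(self):
        if not self.children:
            return [self.label]
        return [w for c in self.children for w in c.leaves()]

def parse(s: str) -> Node:
    """Parse a Penn-style bracketed tree such as '(S (NP ...) (VP ...))'."""
    tokens = s.replace('(', ' ( ').replace(')', ' ) ').split()

    def read(i):
        assert tokens[i] == '('
        node = Node(tokens[i + 1])
        i += 2
        while tokens[i] != ')':
            if tokens[i] == '(':
                child, i = read(i)
            else:
                child, i = Node(tokens[i]), i + 1
            node.children.append(child)
        return node, i + 1

    tree, _ = read(0)
    return tree

def annotate(root: Node) -> Optional[dict]:
    """Label Actor / Modality / Action in a clause directly under the root:
    the modal auxiliary (MD) inside a VP marks the modality, the VP's
    left-sister NP is the actor, and the rest of the VP is the action."""
    for clause in root.children:
        if clause.label != 'S':
            continue
        for idx, vp in enumerate(clause.children):
            if vp.label != 'VP':
                continue
            modal = next((c for c in vp.children if c.label == 'MD'), None)
            if modal is None:
                continue
            actors = [c for c in clause.children[:idx] if c.label == 'NP']
            if not actors:
                continue
            return {
                'Actor': ' '.join(actors[-1].leaves()),
                'Modality': ' '.join(modal.leaves()),
                'Action': ' '.join(w for c in vp.children
                                   if c.label != 'MD' for w in c.leaves()),
            }
    return None

tree = parse("(ROOT (S (NP (DT The) (NN Commission)) "
             "(VP (MD may) (VP (VB suspend) (NP (DT the) (NN registration))))))")
print(annotate(tree))
# -> {'Actor': 'The Commission', 'Modality': 'may', 'Action': 'suspend the registration'}
```

In the research itself this matching is expressed as Tregex patterns over Stanford CoreNLP constituency parses, which additionally handle intervening adverbial phrases, coordination and subordinate clauses; the sketch shows only the structural idea.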
7. The Semantic Annotation Applied to Easification

The semantic annotation of the legal concepts was a necessary step in the easification process. The diagram in figure 1 below shows how the semantic annotation fitted into the overall algorithm design. It added computer readable intelligence to the legislative sentence to facilitate the automation of the clarifying cognitive structuring easification device.

Figure 1: Semantic Annotation applied to Easification

The easification of legislative sentences is a viable alternative to text simplification and is suitable for specialist readers. Unlike text simplification, it focuses less on modifying the text and more on aiding the mental processes of the readers to facilitate the intake of the idea. Consequently, easification evades a major risk of text simplification, that of inadvertently altering the meaning of the legislative text. This shift in emphasis from the text to the reader increased the likelihood of easification preserving the semantic integrity of the legislative text.

The easification device, clarifying cognitive structuring, makes the components, the structure and the relationships of the action rule legislative sentences more apparent to specialist readers. It draws on cognitive load theory (CLT) [44], which offers insights into the consumption of working memory resources during task performance and learning. CLT is built on the following basic ideas about the human cognitive architecture (HCA) [45, 46]:

• HCA has a very limited working memory storage mechanism and a very large long-term memory storage facility;
• The demands on working memory occur from conscious cognitive activities;
• Schematic structures are utilized to store information in long-term memory.

Cognitive load is the demand placed on the storage and processing resources of working memory. When the mental demands of the activities in working memory, at a given instance, exceed an individual’s cognitive capacity, the individual experiences cognitive overload [45, 47]. Miller (1956) estimated that working memory stores approximately 7 (±2) active information chunks, which decay within 15–30 seconds if not actively rehearsed [48]. Other researchers suggested a more precise capacity might be 3–5 chunks during information processing [49].

These working memory constraints have implications for sentence processing and comprehension. The capacity theory asserts that sentence parsing and memory processes compete for the same pool of resources. Therefore, if sentence processing demands a substantial amount of resources, the resources dedicated to storage would be reassigned to meet the processing demand; the resultant reduction in storage capacity can lead to forgetting part of the sentence, i.e. forgetting by displacement [50]. The longer and more syntactically complex the sentence, the more likely readers will lose track of the structural development of the idea [18]. This can occur when some of the components succumb to working memory decay before integration into the structure being built [51]. Typically, readers are unaware of the intricate resource allocations in working memory until they reach near full capacity and the resultant trade-offs in working memory distribution start to occur [52].

For the purpose of this research two types of cognitive load were measured: intrinsic load and extraneous load. The intrinsic load (IL) is the innate complexity of the information or task. This complexity is determined by element interactivity, which is the degree of interconnectivity amongst elements that necessitates them being processed simultaneously. Intrinsic load is essential for comprehension [47, 53-57]. The extraneous load (EL) is induced by the way information is presented and organized. It is considered the ‘bad’ load because it results in cognitive processing that is unrelated to learning and could impede learning. EL occurs when there
is high element interactivity and suboptimal construct that makes the cause and effect
communication. The aim is to minimized relationship more obvious.
extraneous load [58, 59].
7.1. Results of the Application to Easification

The easification algorithm performs the following functions utilizing the semantic annotations along with additional annotations: it searches for and extracts the semantically annotated elements; annotates additional lower-stratum elements; extracts the main legislative idea; inserts logic indicators; and generates output formats for the readers.

Take for example section 48 (2) of the Barbados Securities Act 2002 as shown below:

“Where a broker is charged with an offence involving fraud or dishonesty or where it is alleged that he has defaulted in the payment of moneys due to a self-regulatory organisation or to any other market actor, the Commission may, if it considers that it is in the public interest to do so, suspend the registration of the broker pending the final determination of the charge or allegation.” [60]

This legislative sentence has 68 words and a dependency distance metric of 4.75. The easification algorithm generates the two outputs in figures 2 and 3 from the input sentence above.
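The annotation behind these steps is implemented with Tregex patterns and Tsurgeon operations over parse trees in the actual system; the sketch below is only a loose Python analogue that uses flat regular expressions in place of tree patterns to tag modality, actor, case and condition spans. The rule set and the test sentence are illustrative, not the system's actual patterns.

```python
import re

# Hypothetical, flattened stand-ins for the system's Tregex patterns:
# each rule tags the span it matches with a semantic concept label.
RULES = [
    ("modality",  r"\b(may|shall|must)\b"),
    ("actor",     r"\bthe Commission\b"),
    ("case",      r"\bWhere\b[^,]+,"),
    ("condition", r"\bif it considers[^,]+,"),
]

def annotate(sentence: str) -> list[tuple[str, str]]:
    """Return (concept, matched text) pairs for every rule that fires."""
    return [(concept, m.group(0))
            for concept, pattern in RULES
            for m in re.finditer(pattern, sentence)]

s = ("Where a broker is charged with an offence, the Commission may, "
     "if it considers that it is in the public interest to do so, "
     "suspend the registration of the broker.")
for concept, span in annotate(s):
    print(f"{concept}: {span}")
```

In the actual system the patterns quantify over constituency-tree nodes, so they generalize across sentences in a way these string-level rules cannot.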
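The metric just quoted is presumably Liu's mean dependency distance [30]: the average absolute difference between the linear position of each dependent and that of its head. A minimal sketch under that assumption, with hand-assigned head indices for a toy sentence, plus a check of the roughly 74% word-count reduction reported for the 18-word main idea:

```python
def mean_dependency_distance(heads: list[int]) -> float:
    """heads[i] is the 1-based position of word i+1's head; 0 marks the
    root, which has no governor and is skipped."""
    dists = [abs(h - (i + 1)) for i, h in enumerate(heads) if h != 0]
    return sum(dists) / len(dists)

# "the broker pays the fee" with hand-assigned heads:
# the->broker, broker->pays, pays=root, the->fee, fee->pays
print(mean_dependency_distance([2, 3, 0, 5, 3]))   # → 1.25

# Word-count reduction of the 18-word main idea vs the 68-word original:
print(round((68 - 18) / 68 * 100))                 # → 74
```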
Figure 2: The Main Idea of Securities Act 2002 318A, s48 (2)

The main legislative idea shown in figure 2 consists of 18 words, approximately 74% fewer than the 68 words of the full sentence. In addition, the complexity of the sentence has been reduced in this transient phase of the sentence processing. The aim is to give the reader the opportunity to create a mental frame of the legislative idea prior to processing the details. The output in figure 3 below adds the details, with informative component labels and the If-Then construct that makes the cause and effect relationship more obvious.

Figure 3: The Easified version of the Securities Act 2002 318A, s48 (2)

The output illustrated in figure 3 utilizes the following If-Then format proposed by Langton (2005) as an extension to the initial easification device [61]:

IF case(s)
  IF condition(s), sub-condition(s)
THEN legal actor(s) modal
  legal action(s)

Four lawyers were asked to evaluate the similarity in the semantics of four pairs of action rule legislative sentences: the original, unmodified version and the corresponding easified version. There was overarching agreement amongst the lawyers that the meanings of the original legislative sentences were retained in the easified versions.

An additional experiment was conducted to identify the impact of the easified legislative sentence on the cognitive load of the sixty-three professionals who participated in this part of the research. A modified version of the Leppink, Paas et al. (2013) cognitive load measurement instrument was used to capture the perceived intrinsic and extraneous loads of the participants [62]. Confirmatory Factor Analysis was performed on the modified measurement instrument; it was found to be valid and reliable, and the data collected showed good model fit. In the experiment, the control group was given the original version of the legislative sentence and the experimental group was given the easified version of the same legislative sentence. An independent-samples t-test showed that the lower means for the intrinsic and extraneous loads of the experimental group, when compared to the control group, were statistically significant.
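Re-assembling the annotated components into Langton's extended If-Then layout is mechanical once the spans are labelled. A minimal sketch, with a hand-filled component dictionary standing in for the annotation stage's output:

```python
def render_if_then(parts: dict[str, list[str]]) -> str:
    """Emit the extended If-Then layout from labelled components."""
    lines = [f"IF {case}" for case in parts.get("case", [])]
    lines += [f"  IF {cond}" for cond in parts.get("condition", [])]
    lines.append(f"THEN {parts['actor'][0]} {parts['modality'][0]}")
    lines += [f"  {action}" for action in parts.get("action", [])]
    return "\n".join(lines)

parts = {
    "case": ["a broker is charged with an offence involving fraud"],
    "condition": ["the Commission considers it in the public interest"],
    "actor": ["the Commission"],
    "modality": ["may"],
    "action": ["suspend the registration of the broker"],
}
print(render_if_then(parts))
```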
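The independent-samples comparison just described can be sketched as follows. The paper reports only the group means, so the rating vectors below are invented toy data, and the statistic computed is Welch's t (the unequal-variances variant; the paper does not state which form was used):

```python
from statistics import mean, variance

def welch_t(a: list[float], b: list[float]) -> float:
    """Welch's t statistic for two independent samples."""
    se2 = variance(a) / len(a) + variance(b) / len(b)
    return (mean(a) - mean(b)) / se2 ** 0.5

# Invented load ratings, not the study's data:
experimental = [3, 4, 3, 2, 4, 3, 4, 3]
control      = [5, 4, 5, 6, 4, 5, 5, 4]
print(welch_t(experimental, control))  # negative: experimental mean is lower
```

A t statistic this far from zero would, at these sample sizes, correspond to a small p value — the same direction as the result reported here.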
Presenting the research participants with the main idea first temporarily reduced the element interactivity of the legislative sentence. In addition, the use of progressive revelation allowed the participants to add the details incrementally, at their own pace; this further assisted them in managing their intrinsic load. The mean intrinsic load of the experimental group was 3.33 and that of the control group was 4.57, with a statistically significant p value of .01038 at the 95% confidence level. Similarly, the mean extraneous load of the experimental group was 4.16 and that of the control group was 5.43; the difference was statistically significant with a p value of .021 at the 95% confidence level.

8. Conclusion

This research assessed the lexical and syntactic composition of a corpus of Barbados legislation read by compliance professionals working in Barbados. It bridged a gap by developing a solution for specialist readers working in the business context, where preserving the semantic integrity of the legislative text is critical to legal compliance. An algorithm was successfully developed to easify action rule legislative sentences. This included creating several semantic annotation rules to detect key legal concepts without requiring any human pre-processing of the text. The algorithm produced an easified legislative sentence with multiple perspectives of the legislative idea. The easification of the action rule legislative sentences proved effective in lowering the intrinsic and extraneous loads of the specialist readers in the research sample, without compromising the semantic integrity of the legislative sentence. Future work will seek to expand the sample size of the participants and to explore the impact of informed ratings in the cognitive load tests.

9. References

[1] E. Mattiello, Nominalization in English and Italian Normative Legal Texts. ESP Across Cultures, 2010. 7 129-146.
[2] P.M. Tiersma, Language of Legal Texts, in Encyclopedia of Language & Linguistics, B. Keith, Editor. 2006, Elsevier: Oxford. pp. 549-556.
[3] R.P. Charrow, V.R. Charrow, Making Legal Language Understandable: A Psycholinguistic Study of Jury Instructions. Columbia Law Review, 1979. 79(7) 1306-1374.
[4] P.R. Macleod, Latin in Legal Writing: An Inquiry into the Use of Latin in the Modern Legal World. 39 B.C.L. Rev. 235, 1998.
[5] J. Crandall, V.R. Charrow, Linguistic Aspects of Legal Language. 1990.
[6] R. Hyland, A Defense of Legal Writing. University of Pennsylvania Law Review, 1986. 134(3) 599-626.
[7] D. Mellinkoff, The Language of the Law. 1963, Eugene, OR: Resource Publications. 526.
[8] J. Sheridan, Legislation.gov.uk and Good Law. Civil Service Quarterly, 2014.
[9] R. Heaton, When Laws Become Too Complex - A review into the causes of complex legislation. 2013.
[10] B.C. Jones, Don't Be Silly: Lawmakers 'Rarely' Read Legislation and Oftentimes Don't Understand It . . . But That's Okay. Penn State Law Review, Penn Statim, 2013. 118(7) 7-21.
[11] Deloitte, The changing role of compliance officers. 2014.
[12] Ernst & Young, Compliance seeks a path to regulatory readiness, in Insurance CCO survey. 2014, Ernst & Young Global: London.
[13] J.A. Tabuena, The Chief Compliance Officer vs the General Counsel: Friend or foe?, in Compliance & Ethics Magazine. 2006, Society of Corporate Compliance and Ethics: Minneapolis, MN. pp. 4-7 & 10-15.
[14] A. Gross-Schaefer, C.A. Cueto, Ethics & Compliance: The Game-Changer in the Business World. International Journal of Business and Social Science, 2017. 8(2) 57-65.
[15] Sovos, The State of Regulatory Compliance. 2017.
[16] Ponemon Institute, The True Cost of Compliance with Data Protection Regulations. 2017.
[17] R. Flesch and A.J. Gould, The art of readable writing. 1949.
[18] V.K. Bhatia, Simplification v. Easification—The Case of Legal Texts. Applied Linguistics, 1983. 4(1) 42-54.
[19] V.K. Bhatia, N.M. Langton, J. Lung, Legal Discourse: Opportunities and threats for corpus linguistics, in Discourses in the Professions: Perspectives from Corpus Linguistics, U. Connor and T.A. Upton, Editors. 2004, John Benjamins: Netherlands. pp. 203-232.
[20] M. Curtotti, E. McCreath, T. Bruce, S. Frug, W. Waibel, C. Nicolas, Machine learning for readability of legislative sentences, in Proceedings of the 15th International Conference on Artificial Intelligence and Law. 2015, Association for Computing Machinery: San Diego, California. pp. 53-62.
[21] G. Zhang, H. Liu, A Quantitative Analysis of English Variants Based on Dependency Treebanks. Glottometrics, 2019. 44 16-33.
[22] M. Cutts, Oxford Guide to Plain English. 2013, URL: http://site.ebrary.com/id/10775452.
[23] C. Williams, Legal English and Plain Language: an introduction. ESP Across Cultures, 2004. 1 111-124.
[24] E. Tanner, Sanctity of the Single Legal Rule/Single Sentence Structure. Monash U. L. Rev., 2000. 26.
[25] C. Renton, Renton Committee Report on legislation. 1975.
[26] V.K. Bhatia, An Investigation into formal and functional characteristics of qualifications in legislative writing and its application to English for Academic legal purposes. 1982, Univ. of Aston in Birmingham.
[27] V. Bhatia, J. Engberg, M. Gotti, D. Heller, Introduction, in Vagueness in Normative Texts - Linguistic Insights Studies in Language and Communication, V. Bhatia, et al., Editors. 2005, Peter Lang International Academic Publishers. pp. 9-21.
[28] H.E.S. Mattila, Comparative Legal Linguistics. 2006: Ashgate Pub Co. 347.
[29] C. Frade, Legal Multinomials: Recovering Possible Meanings from Vague Tags, in Vagueness in Normative Texts - Linguistic Insights Studies in Language and Communication, V. Bhatia, et al., Editors. 2005, Peter Lang International Academic Publishers. pp. 133-153.
[30] H. Liu, Dependency distance as a metric of language comprehension difficulty. Journal of Cognitive Science, 2008. 9(2) 159-191.
[31] F. Dell'Orletta, S. Marchi, S. Montemagni, B. Plank, G. Venturi, The SPLeT-2012 Shared Task on Dependency Parsing of Legal Texts, in 4th Workshop on Semantic Processing of Legal Texts. 2012: Istanbul, Turkey, May. pp. 42-51.
[32] G. Coode, On Legislative Expression or the Language of the Written Law. 1845, William Benning and Co.: London.
[33] R. Levy, G. Andrew, Tregex and Tsurgeon: Tools for querying and manipulating tree data structures. Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC'06). Genoa, Italy: European Language Resources Association (ELRA). 2006.
[34] Stanford CoreNLP, Using the Stanford CoreNLP API. 2014, URL: http://stanfordnlp.github.io/CoreNLP/api.html.
[35] G. Boella, L. Di Caro, L. Robaldo, Semantic Relation Extraction from Legislative Text Using Generalized Syntactic Dependencies and Support Vector Machines. Berlin, Heidelberg: Springer Berlin Heidelberg. 2013, pp. 218-225.
[36] L. Lesmo, The Turin University Parser at Evalita 2009. Proceedings of the 11th Conference of the Italian Association for Artificial Intelligence. Italy, 2009.
[37] A. Sleimi, N. Sannier, M. Sabetzadeh, L. Briand, J. Dann, Automated Extraction of Semantic Legal Metadata using Natural Language Processing. Proceedings of the IEEE International Requirements Engineering Conference. Alberta, Canada, 2018, pp. 124-135.
[38] N. Kiyavitskaya, N. Zeni, T.D. Breaux, A.I. Antón, J.R. Cordy, L. Mich, J. Mylopoulos, Automating the Extraction of Rights and Obligations for Regulatory Compliance. Berlin, Heidelberg: Springer Berlin Heidelberg. 2008, pp. 154-168.
[39] N. Zeni, E.A. Seid, P. Engiel, J. Mylopoulos, Building Large Models of Law with NómosT. Cham: Springer International Publishing. 2016, pp. 233-247.
[40] E. Hovy, J. Lavid, Towards a 'science' of corpus annotation: A new methodological challenge for corpus linguistics. International Journal of Translation, 2010. 22(1) 13-36.
[41] The University of Sheffield, GATE Developer. 1995, The Univ. of Sheffield: Sheffield, UK.
[42] F. Dell'Orletta, S. Marchi, S. Montemagni, G. Venturi, T. Agnoloni, E. Francesconi, Domain adaptation for dependency parsing at Evalita 2011. International Workshop on Evaluation of Natural Language and Speech Tools for Italian, pp. 58-69. Springer, Berlin, Heidelberg, 2012.
[43] G. Boella, L. Humphreys, M. Martin, P. Rossi, L. van der Torre, Eunomos, a Legal Document and Knowledge Management
System to Build Legal Services. Berlin, Heidelberg: Springer Berlin Heidelberg. 2012, pp. 131-146.
[44] J. Sweller, P. Ayres, S. Kalyuga, Cognitive Load Theory. Explorations in the Learning Sciences, Instructional Systems and Performance Technologies, ed. M. Spector and S. Lajoie. 2011, New York: Springer.
[45] J. Sweller, Human Cognitive Architecture. Instructional Science, 2004. 32(1) 9-31.
[46] J. Sweller, S. Sweller, Natural Information Processing Systems. Evolutionary Psychology, 2006. 4(1) 147470490600400135.
[47] J. Sweller, Cognitive Load Theory, Learning Difficulty, and Instructional Design. Learning and Instruction, 1994. 4 295-312.
[48] G.A. Miller, The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information. Psychological Review, 1956. 63 81-97.
[49] N. Cowan, The magical number 4 in short-term memory: A reconsideration of mental storage capacity. Behavioral and Brain Sciences, 2000(24) 87-185.
[50] M.A. Just, P.A. Carpenter, A capacity theory of comprehension: individual differences in working memory. Psychological Review, 1992. 99(1) 122-149.
[51] E. Gibson, The dependency locality theory: A distance-based theory of linguistic complexity, in Image, Language, Brain: Papers from the First Mind Articulation Project Symposium, Y.M. Marantz and W. O'Neil, Editors. 2000, The MIT Press: Cambridge, MA. pp. 95-126.
[52] P.A. Carpenter, M.A. Just, The Role of Working Memory in Language Comprehension, in Complex Information Processing: The Impact of Herbert A. Simon, D. Klahr and K. Kotovsky, Editors. 1989, Lawrence Erlbaum Associates: Hillsdale, NJ. pp. 31-68.
[53] K.E. DeLeeuw, R.E. Mayer, A Comparison of Three Measures of Cognitive Load: Evidence for Separable Measures of Intrinsic, Extraneous, and Germane Load. Journal of Educational Psychology, 2008. 100(1) 223-234.
[54] W. Schnotz, C. Kürschner, A Reconsideration of Cognitive Load Theory. Educational Psychology Review, 2007. 19(4) 469-508.
[55] J. Sweller, Cognitive load during problem solving: Effects on learning. Cognitive Science, 1988. 12(2) 257-285.
[56] J. Sweller, P. Chandler, Why Some Material Is Difficult to Learn. Cognition and Instruction, 1994. 12(3) 185-233.
[57] J. Sweller, J.J.G. van Merrienboer, F.G.W.C. Paas, Cognitive Architecture and Instructional Design. Educational Psychology Review, 1998. 10(3) 251-296.
[58] S. Kalyuga, Informing: A Cognitive Load Perspective. Informing Science: the International Journal of an Emerging Transdiscipline, 2011. 14 33-45.
[59] J. Sweller, Element Interactivity and Intrinsic, Extraneous, and Germane Cognitive Load. Educational Psychology Review, 2010. 22(2) 123-138.
[60] The Securities Act 2002, 318A, s 48(2). 2002, The Government of Barbados.
[61] N.M. Langton, Cleaning up the act: using plain English in legislation. Clarity - Journal of the International Association Promoting Plain Legal Language, 2005(54) 28-33.
[62] J. Leppink, F. Paas, C.P. Van der Vleuten, T. Van Gog, J.J. Van Merriënboer, Development of an instrument for measuring different types of cognitive load. Behavior Research Methods, 2013. 45(4) 1058-1072.