<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Designing Data Classification and Secure Store Policy According to SOC 2 Type II</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Oleh</forename><surname>Deineka</surname></persName>
							<email>oleh.r.deineka@lpnu.ua</email>
							<affiliation key="aff0">
								<orgName type="institution">Lviv Polytechnic National University</orgName>
								<address>
									<addrLine>12 Stepana Bandery str</addrLine>
									<postCode>79000</postCode>
									<settlement>Lviv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Oleh</forename><surname>Harasymchuk</surname></persName>
							<email>garasymchuk@ukr.net</email>
							<affiliation key="aff0">
								<orgName type="institution">Lviv Polytechnic National University</orgName>
								<address>
									<addrLine>12 Stepana Bandery str</addrLine>
									<postCode>79000</postCode>
									<settlement>Lviv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Andrii</forename><surname>Partyka</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">Lviv Polytechnic National University</orgName>
								<address>
									<addrLine>12 Stepana Bandery str</addrLine>
									<postCode>79000</postCode>
									<settlement>Lviv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Anatoliy</forename><surname>Obshta</surname></persName>
							<email>anatolii.f.obshta@lpnu.ua</email>
							<affiliation key="aff0">
								<orgName type="institution">Lviv Polytechnic National University</orgName>
								<address>
									<addrLine>12 Stepana Bandery str</addrLine>
									<postCode>79000</postCode>
									<settlement>Lviv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Nataliia</forename><surname>Korshun</surname></persName>
							<email>n.korshun@kubg.edu.ua</email>
							<affiliation key="aff1">
								<orgName type="institution">Borys Grinchenko Kyiv Metropolitan University</orgName>
								<address>
									<addrLine>18/2, Bulvarno-Kudriavska str</addrLine>
									<postCode>04053</postCode>
									<settlement>Kyiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Designing Data Classification and Secure Store Policy According to SOC 2 Type II</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">8E5DB1F307B1FAE0F5D6DCA1FC3F255E</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T17:29+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>SOC 2 Type II</term>
					<term>data classification</term>
					<term>data security</term>
					<term>access management</term>
					<term>storage</term>
					<term>0009-0005-9156-3339 (O. Deineka)</term>
					<term>0000-0002-8742-8872 (O. Harasymchuk)</term>
					<term>0000-0003-3037-8373 (A. Partyka)</term>
					<term>0000-0001-5151-312X (A. Obshta)</term>
					<term>0000-0003-2908-970X (N. Korshun)</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>This paper discusses the design of a data classification policy for SOC 2 Type II compliance. SOC 2 Type II is a significant certification that attests to a service organization's ability to meet the Trust Services Criteria, which encompass security, availability, processing integrity, confidentiality, and privacy. Data classification is a critical first step in establishing a robust data security strategy, as it helps organizations understand what data they have and assigns a level of sensitivity to that data, which informs the security controls that should be applied. The main objectives of data classification are to organize and manage data in a way that enhances its protection and aligns with the overall data security strategy of an organization. Data security plays a pivotal role in the data classification process, as it directly influences how classified data is protected and managed. Designing a data classification policy for SOC 2 Type II compliance involves several challenges and considerations that organizations must navigate to effectively protect sensitive information and maintain the integrity of their service delivery. These challenges and considerations include understanding the scope of data, aligning with the Trust Services Criteria, balancing security with usability, training, and awareness, regular updates, and reviews, defining classification levels, ensuring consistency, automating classification, integration with other policies and controls, dealing with third-party vendors, monitoring and enforcement, and legal and regulatory compliance.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>The modern world is characterized by rapid growth of information assets, a rather high percentage of which is critical information. Large volumes of such information primarily require classification by various parameters and features, reliable storage and transmission, and protection from unauthorized access. Recently, the number of possible attacks on information resources has been constantly increasing <ref type="bibr" target="#b0">[1]</ref><ref type="bibr" target="#b1">[2]</ref><ref type="bibr" target="#b2">[3]</ref>. Cybersecurity specialists are constantly developing new standards, approaches, and methods to counteract such malicious acts, as well as the supporting infrastructure <ref type="bibr" target="#b3">[4]</ref><ref type="bibr" target="#b4">[5]</ref><ref type="bibr" target="#b5">[6]</ref><ref type="bibr" target="#b6">[7]</ref><ref type="bibr" target="#b7">[8]</ref><ref type="bibr" target="#b8">[9]</ref>. An important direction is the development of standards for safe data storage <ref type="bibr">[10,</ref><ref type="bibr">11]</ref>. Security standards allow a better understanding of how exactly an institution controls access to data and ensures its security and confidentiality <ref type="bibr" target="#b9">[12]</ref>.</p><p>The standards and requirements for data storage can vary depending on the country, the organization's industry, the sensitivity level of the information, and other factors. A specific organization may face additional standards and requirements dictated by its needs and legal obligations. 
Most organizations and institutions form their security policy based on international standards; certification against these standards is mostly carried out with the participation of external auditing companies that attest compliance <ref type="bibr" target="#b10">[13,</ref><ref type="bibr" target="#b11">14]</ref>.</p><p>However, professionals who deal with secure storage of large volumes of data still encounter many problems. For instance, they have to grapple with issues of data integrity, confidentiality, and accessibility. Ensuring that information remains unaltered from creation through storage and retrieval can be a daunting task. Moreover, professionals have to guarantee confidentiality, so that only authorized individuals can access the data. They also need to ensure that the data is readily accessible when needed, which can be challenging in an era of rapidly increasing data volumes.</p><p>While there are a variety of effective approaches, methods, and ways to organize big data storage, certain problems remain. A significant drawback is the difficulty of searching for the necessary information in unstructured data.</p><p>ISO 27001 is a standard designed to ensure proper management of a company's digital assets, including financial information, intellectual property, employee data, and trusted third-party information.</p><p>In turn, SOC 2 certification is more widely recognized in North America and is usually preferred by American and Canadian companies.</p><p>Another important point: SOC is divided into SOC 1, SOC 2, and SOC 3. The first is exclusively about financial control, and the third is mostly used for marketing purposes, so SaaS providers can focus solely on SOC 2.</p><p>The System and Organization Controls 2 standard was developed by the American Institute of Certified Public Accountants and is built around the Trust Services Criteria. 
SOC 2 provides an independent assessment of risk management control procedures in IT companies that provide services to users.</p><p>The standard pays special attention to data privacy and confidentiality, which is why giants such as Google and Amazon turn to it: for them, a high level of security and transparent data-handling processes are especially important. External auditors are invited for certification. Their task is to study the implemented practices and check how the company follows its procedures and registers changes in processes.</p><p>SOC 2 Type II is a significant certification within the landscape of data security and compliance. It serves as an attestation by an independent auditor that a service organization has not only designed its systems to meet the Trust Services Criteria but also operates them effectively over time. The Trust Services Criteria encompass several critical areas: security, availability, processing integrity, confidentiality, and privacy.</p><p>The importance of SOC 2 Type II lies in its ability to build trust with clients and stakeholders. By demonstrating a commitment to stringent data management practices, companies can assure clients that their sensitive data is handled responsibly. This is especially crucial in sectors where data privacy and security are paramount, such as financial services, healthcare, and cloud computing.</p><p>Moreover, the SOC 2 Type II audit process helps organizations identify and mitigate potential security risks, ensuring that they maintain a strong security posture. This proactive approach to risk management is critical in an era where cyber threats are constantly evolving, and data breaches can have catastrophic consequences. 
Therefore, there is a constant search for new approaches and methods to ensure reliable data storage and authentication of the users and devices where this data is stored <ref type="bibr" target="#b12">[15]</ref><ref type="bibr" target="#b13">[16]</ref><ref type="bibr" target="#b14">[17]</ref>.</p><p>In an increasingly regulated environment, SOC 2 Type II compliance can also support adherence to legal and regulatory requirements. This can help organizations avoid costly penalties and legal issues associated with non-compliance.</p><p>From a business perspective, SOC 2 Type II compliance can serve as a competitive differentiator. It signals to the market that an organization is a reliable and secure partner, which can be instrumental in winning new business and retaining existing customers <ref type="bibr" target="#b15">[18]</ref>.</p><p>The result of implementing SOC 2 is a report based on the AICPA Attestation Standards, section 101, Attest Engagement.</p><p>Types of SOC 2 reports: a Type I report describes the design of control procedures and the result of an assessment of the internal control system as of the date of the audit. This type of report is a starting point for building toward SOC 2 Type II compliance.</p><p>A Type II report demonstrates compliance with requirements over a defined period. The organization must demonstrate adherence to control measures and policies throughout this period, which usually requires a degree of automation and long-term commitment.</p><p>Goal of the work: development of a solution for optimizing the classification of organizational data and its secure storage in accordance with the SOC 2 Type II standard.</p><p>Task: analyze the main requirements for data classification and storage organization, identify shortcomings, and find a solution that is optimal in terms of speed and economic efficiency while ensuring compliance with the SOC 2 Type II standard.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Overview of Data Classification and its Role in Data Security</head><p>Data classification is the process of organizing data into categories that make it easier to manage and protect based on its level of sensitivity and the impact on the organization should that data be disclosed, altered, or destroyed without authorization. It is a critical first step in establishing a robust data security strategy because it helps organizations understand what data they have and assigns a level of sensitivity to that data, which informs the security controls that should be applied. Recognizing this matters because big data plays a key role in data analytics: analytics is what allows us to correctly understand and interpret the data so that it can be used for making sound, justified decisions, predicting trends, and so on. It is important to understand that big data repositories are not just a "large database". The main difference is that databases typically store structured data and have a fixed schema, while big data repositories can also store unstructured data and process large volumes of information. The main objectives of data classification are to organize and manage data in a way that enhances its protection and aligns with the overall data security strategy of an organization. The process involves assigning categories to data based on its level of sensitivity and the potential impact on the organization if that data were to be improperly accessed, modified, or destroyed. The objectives are as follows: Identify Sensitive Data: Data classification enables organizations to determine which data is sensitive and requires more robust protection measures. 
This includes data such as Personal Identifiable Information (PII), financial details, health records, and intellectual property.</p><p>Facilitate Risk Management: By classifying data, organizations can better understand the risks associated with each type of data. Higher classification levels typically indicate a higher need for protection due to increased risk.</p><p>Enhance Regulatory Compliance: Many industries are governed by regulations that mandate the protection of certain types of data or dictate specific rules for their storage and access.</p><p>Data classification is critical for compliance with regulations such as the General Data Protection Regulation (GDPR), the Health Insurance Portability and Accountability Act (HIPAA), and others.</p><p>Enable Focused Security Measures: Through data classification, organizations can apply appropriate security controls where they are most needed. This targeted approach ensures that the most sensitive data receives the highest level of security, optimizing the use of security resources.</p><p>Support Access Controls: Proper data classification assists in the implementation of effective access controls. 
It ensures that access to sensitive data is restricted to authorized individuals based on their roles and the need-to-know principle.</p><p>Inform Data Lifecycle Management: Classification helps determine how data should be handled throughout its lifecycle, including retention, storage, archiving, access management, and secure destruction policies.</p><p>Prioritize Security Efforts: In the event of a security incident, understanding the classification of affected data can help prioritize response and recovery efforts, thereby minimizing the potential impact on the organization.</p><p>Raise Awareness and Accountability: It promotes awareness among employees about the types of data they handle and their responsibilities in safeguarding it, thereby fostering a culture of security and accountability within the organization <ref type="bibr" target="#b16">[19,</ref><ref type="bibr" target="#b17">20]</ref>.</p><p>Data security plays a pivotal role in the data classification process, as it directly influences how classified data is protected and managed. The role of data security in data classification can be described through several key functions:</p><p>Defining Protection Measures: Data security is the driving force behind the selection of protection measures for each classification level. Once data is categorized, data security principles guide the application of appropriate security controls, such as encryption, access controls, and monitoring systems.</p><p>Risk Mitigation: Data security practices are essential in mitigating the risks associated with the handling of data. By understanding the classification of data, organizations can implement security measures that are commensurate with the level of risk, ensuring that sensitive data is afforded stronger safeguards.</p><p>Regulatory Compliance: Many data security frameworks and regulations require the classification of data as part of their compliance standards. 
Data security ensures that classified data is handled in a manner that complies with legal and industry-specific requirements, thus avoiding potential fines and legal action.</p><p>Access Control: Data security policies determine who has access to various classes of data, based on their need to know and authorization levels. By enforcing strict access control measures, data security helps prevent unauthorized access to sensitive information.</p><p>Data Lifecycle Management: The role of data security extends throughout the entire lifecycle of the data, from creation to disposal. Security measures are applied differently at each stage of the lifecycle, depending on the classification of the data.</p><p>Incident Response and Recovery: In case of a data breach or other security incident, the classification of the compromised data guides the incident response and recovery efforts. Data security teams can prioritize their actions based on the sensitivity of the data involved, ensuring that the most critical data is addressed first.</p><p>Awareness and Training: Data security involves educating and training employees on the importance of data classification and the correct handling of data according to its classification. This increases awareness and reduces the likelihood of accidental data breaches or leaks, enhancing the reliability of data storage and access control.</p><p>Auditing and Compliance Monitoring: Data security involves regular auditing and monitoring to ensure that classified data is managed in accordance with established security policies and procedures. This helps to identify and rectify any deviations or weaknesses in the protection of classified data <ref type="bibr" target="#b18">[21,</ref><ref type="bibr" target="#b19">22]</ref>.</p></div>
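The mapping from classification level to protection measures described above can be sketched as a small policy table. The levels and control fields below are illustrative assumptions for the sketch, not values prescribed by SOC 2:

```python
from enum import IntEnum

class Sensitivity(IntEnum):
    """Illustrative classification levels, ordered from least to most sensitive."""
    PUBLIC = 0
    INTERNAL = 1
    CONFIDENTIAL = 2
    HIGHLY_CONFIDENTIAL = 3

# Hypothetical control matrix: higher levels demand stricter handling.
CONTROLS = {
    Sensitivity.PUBLIC:              {"encrypt_at_rest": False, "access": "anyone"},
    Sensitivity.INTERNAL:            {"encrypt_at_rest": True,  "access": "employees"},
    Sensitivity.CONFIDENTIAL:        {"encrypt_at_rest": True,  "access": "need-to-know"},
    Sensitivity.HIGHLY_CONFIDENTIAL: {"encrypt_at_rest": True,  "access": "named individuals"},
}

def required_controls(level: Sensitivity) -> dict:
    """Return the minimum controls that must be applied to data at this level."""
    return CONTROLS[level]
```

Using an ordered enumeration makes "commensurate with the level of risk" checkable in code: any comparison such as `Sensitivity.CONFIDENTIAL > Sensitivity.INTERNAL` reflects the increasing protection need.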
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Challenges and Considerations in Designing a Data Classification Policy for SOC 2 Type II</head><p>Designing a Data Classification policy for SOC 2 Type II compliance involves several challenges and considerations that organizations must navigate to effectively protect sensitive information and maintain the integrity of their service delivery. Here are some of the key challenges and considerations: Understanding the Scope of Data: Organizations must first identify and understand the types of data they handle, which can be a complex task, especially for large or data-intensive businesses. This involves mapping out where data resides, how it flows through the organization, and what data is critical for the operation or sensitive by nature.</p><p>Aligning with Trust Services Criteria: SOC 2 Type II revolves around the Trust Services Criteria set by the AICPA, which include security, availability, processing integrity, confidentiality, and privacy. A data classification policy must ensure that controls are in place to address these criteria appropriately for different categories of data.</p><p>Balancing Security with Usability: Implementing too stringent controls can hinder business operations, while too lenient controls can expose the organization to risk. Organizations must find the right balance to ensure data is both secure and accessible to authorized users as needed and with appropriate rights and privileges.</p><p>Training and Awareness: Employees must be aware of the data classification policy and understand their roles in maintaining compliance. Training programs are essential to ensure that all personnel can correctly handle data according to its classification.</p><p>Regular Updates and Reviews: Data classification policies must be dynamic, reflecting changes in the business environment, emerging threats, new data types, and regulatory requirements. 
Regular reviews and updates to the policy are necessary to maintain SOC 2 Type II compliance.</p><p>Defining Classification Levels: Organizations need to define clear and practical classification levels that reflect the sensitivity and value of the data. These levels will determine the corresponding controls and handling procedures.</p><p>Ensuring Consistency: Consistency in how data is classified across different departments and systems is crucial. Inconsistencies can lead to gaps in protection and potential compliance issues, which can result in possible data loss or unauthorized access.</p><p>Automating Classification: Manual data classification can be error-prone, inefficient, and time-consuming. Implementing automated classification solutions can help, but it is essential to choose tools that align well with the organization's specific needs and compliance requirements.</p><p>Integration with Other Policies and Controls: The data classification policy must integrate seamlessly with other organizational policies, such as access control, incident response, and data retention policies, and not slow down their operation.</p><p>Dealing with Third-Party Vendors: If third-party vendors manage or have access to the organization's data, they must also adhere to the data classification policy. This requires careful vendor management and sometimes additional contractual agreements or audits regarding their rights and privileges.</p><p>Monitoring and Enforcement: Ongoing monitoring is needed to ensure that the data classification policy is being followed and that controls are effective. 
This includes regular audits and reviews, which are part of SOC 2 Type II requirements.</p><p>Legal and Regulatory Compliance: Organizations must consider various legal and regulatory frameworks that apply to their data and ensure that the classification policy helps them meet these obligations and does not contradict current legislation.</p><p>Addressing these challenges and considerations requires a strategic approach and ongoing commitment to maintaining a robust data classification policy. Organizations may seek guidance from compliance experts, legal counsel, and SOC 2 audit professionals to design and implement a policy that not only meets SOC 2 Type II requirements but also supports the organization's overall data governance strategy <ref type="bibr" target="#b20">[23,</ref><ref type="bibr" target="#b21">24]</ref>.</p></div>
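The "Automating Classification" point above can be illustrated with a minimal rule-based scanner. The patterns and labels below are assumptions made for the sketch; a production tool would use a far richer detection engine than two regular expressions:

```python
import re

# Hypothetical PII patterns; real scanners cover many more identifier types.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def auto_classify(text: str) -> str:
    """Assign a coarse label: 'confidential' if any PII pattern matches, else 'internal'."""
    if any(p.search(text) for p in PII_PATTERNS.values()):
        return "confidential"
    return "internal"
```

Even this toy version shows why automated classification reduces the manual burden: the same deterministic rule is applied to every document, which also helps with the consistency challenge discussed above.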
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Data Classification Policy Design</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1.">Requirements</head><p>While SOC 2 Type II itself does not prescribe specific data classification policies, it does require organizations to effectively manage and protect the confidentiality, privacy, and security of information in accordance with the Trust Services Criteria (TSC). A Data Classification Policy is a critical component of meeting these criteria, particularly the Security criterion, which is common to all SOC 2 audits.</p><p>A SOC 2 audit measures the effectiveness of your processes and systems based on the Trust Services Criteria and checks compliance with information security standards and rules, including Common Criteria standards. Here are some general requirements that a Data Classification Policy should address to support SOC 2 Type II compliance:</p><p>Identification of Data Types: The policy should define the types of data handled by the organization, including sensitive data subject to SOC 2 considerations, such as PII, business confidential data, and intellectual property.</p><p>Classification Levels: The policy must establish clear classification levels that reflect the sensitivity of the data. 
Common levels include public, internal use only, confidential, and highly confidential.</p><p>Ownership and Responsibilities: The policy should define roles and responsibilities for data classification, including data owners, custodians, and users, and outline their responsibilities in maintaining data classification.</p><p>Handling Requirements: For each classification level, the policy should specify handling requirements, including storage, transmission, access controls, encryption standards, and end-of-life procedures.</p><p>Labeling and Marking: The policy should provide guidelines on how data should be labeled or marked according to its classification to ensure that it is easily identifiable and handled appropriately.</p><p>Access Controls: The policy must address access controls, ensuring that access to data is based on the principle of least privilege and that only authorized individuals can access sensitive data.</p><p>Retention and Disposal: The policy should outline data retention periods and secure disposal methods for each classification level, ensuring data is not kept longer than necessary and is disposed of securely.</p><p>Training and Awareness: The policy should mandate regular training and awareness programs for employees to understand the importance of data classification and their role in it.</p><p>Auditing and Monitoring: The policy should include provisions for regular auditing and monitoring to ensure that classification controls are effective and being followed.</p><p>Incident Response: The policy should be linked to an incident response plan that addresses potential data breaches or loss, with procedures tailored to the classification level of the data involved.</p><p>Review and Update: The policy should specify intervals for reviewing and updating data classification procedures to ensure they remain relevant and effective as the organization evolves, data volumes increase, and new threats emerge.</p><p>Third-Party Vendors: If data is 
shared with or handled by third-party vendors, the policy must extend to these vendors, often requiring them to adhere to similar or compatible classification and handling standards.</p><p>To ensure alignment with SOC 2 Type II requirements, developing a Data Classification Policy usually demands a comprehensive understanding of the AICPA's TSC and the unique data protection requirements of the organization. Engaging with seasoned compliance experts or auditors who can give tailored advice and oversee compliance with the standard's stipulations is highly recommended. Consulting the AICPA's guidance and frameworks such as ISO 27001 can offer invaluable input for creating and maintaining a strong data classification policy. It is crucial to identify and categorize data based on its sensitivity, importance, and regulatory mandates. Moreover, regular reviews and updates of the policy should be conducted to ensure its efficiency and continued compliance with SOC 2 Type II requirements <ref type="bibr" target="#b22">[25]</ref><ref type="bibr" target="#b23">[26]</ref><ref type="bibr" target="#b24">[27]</ref><ref type="bibr" target="#b25">[28]</ref><ref type="bibr" target="#b26">[29]</ref>.</p></div>
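Several of the requirements above (classification levels, least-privilege access, retention and disposal) lend themselves to a policy-as-code representation. The levels, role names, and retention periods in this sketch are illustrative assumptions, not values mandated by the standard:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class HandlingRule:
    label: str                 # marking applied to the data
    encrypt: bool              # encryption required at rest and in transit
    retention_days: int        # maximum retention before secure disposal
    allowed_roles: frozenset   # least-privilege access list

# Illustrative policy table; real values come from the organization's policy.
POLICY = {
    "public": HandlingRule(
        "PUBLIC", False, 3650,
        frozenset({"employee", "data_steward", "admin", "auditor"})),
    "confidential": HandlingRule(
        "CONFIDENTIAL", True, 1095,
        frozenset({"data_steward", "admin"})),
}

def may_access(role: str, level: str) -> bool:
    """Grant access only if the role appears in the level's allow-list."""
    return role in POLICY[level].allowed_roles
```

Encoding the handling requirements this way makes the policy auditable and enforceable by systems, which supports the "Auditing and Monitoring" requirement above.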
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.">Representation</head><p>A use-case diagram provides a high-level overview of the interaction between a system and its users, outlining the different functions (use cases) the system is expected to perform and the roles that interact with these functions.</p><p>Considering the aforementioned requirements, we have developed the following structure (Fig. <ref type="figure" target="#fig_0">1</ref>), which fully supports data classification, ensures secure data storage, and authorizes access to the data in accordance with SOC 2 Type II. The diagram illustrates the various actions, processes, and roles that are necessary to fulfill these requirements. </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.3.">Roles</head><p>We suggest defining the following roles:</p><p>Employee: An employee is responsible for adhering to the data classification policy, correctly handling data according to its classification, and reporting any incidents or violations. As the primary users of data within an organization, employees are responsible for correctly handling data according to its classification level. This means that employees must understand the different classification levels and the corresponding handling requirements, such as storage, transmission, access controls, and end-of-life procedures. In addition to correctly handling data, employees are also responsible for adhering to the data classification policy and reporting any incidents or violations. This includes reporting any suspected data breaches, loss, or unauthorized access to data. By promptly reporting incidents, employees can help the organization to quickly respond and mitigate any potential damage or block unauthorized access to data. To fulfill these responsibilities, employees must receive regular training and awareness programs to understand the importance of data classification, the rules and methods of such classification, and their role in it. This training should cover the data classification policy, the different classification levels, and the handling requirements for each level.</p><p>Data Steward: A data steward is responsible for the management and governance of data within the organization. They ensure that data is classified correctly, that the classification policy is being followed, and that data is being used in compliance with legal and regulatory requirements. Data stewards play a crucial role in maintaining the integrity of the data classification policy and ensuring that it is effectively implemented throughout the organization. 
They work closely with data owners, custodians, and users to ensure that data is correctly classified and that the appropriate controls and handling procedures are in place. Data stewards also monitor compliance with the data classification policy and report any incidents or violations to the appropriate authorities.</p><p>Auditor: An auditor plays a crucial role in assessing an organization's compliance with SOC 2 requirements, including the data classification policy. They are responsible for independently reviewing the policy, processes, and controls to ensure that they meet the Trust Services Criteria. The Trust Services Criteria encompass several critical areas: security, availability, processing integrity, confidentiality, and privacy. The auditor's role is to provide an objective evaluation of the organization's compliance with these criteria and to identify any areas where improvements may be needed. This helps the organization to maintain a strong security posture, and reputation among its clients and partners, and to demonstrate its commitment to protecting sensitive data <ref type="bibr" target="#b16">[19]</ref>.</p><p>Admin: An admin plays a crucial role in maintaining the smooth operation of an organization's IT systems. They are responsible for deploying new app versions, monitoring system performance, and patching all operational staff. Here are some of the key responsibilities of an admin in this context:</p><p>Deploying new app versions: Admins are responsible for rolling out new versions of applications to ensure that users have access to the latest features and security updates. This involves testing the new version, preparing the deployment plan, and coordinating with other teams to ensure a smooth rollout. Monitoring: Admins are responsible for monitoring the performance and availability of IT systems. This involves tracking key metrics, identifying and resolving issues, and ensuring that systems are operating at optimal levels. 
Patching all operational staff: Admins are responsible for ensuring that all operational staff have the latest security patches and updates installed on their systems. This involves identifying and deploying patches, testing their effectiveness, and ensuring that all systems are up-to-date and secure.</p><p>SSO system: A Single Sign-On (SSO) system is responsible for managing user authentication and access control, the rights, and privileges of authorized individuals. It ensures that users are correctly authenticated and that they have access only to the data that they are authorized to access based on their roles and the data classification <ref type="bibr" target="#b27">[30]</ref>.</p><p>Processing System: A processing system plays a crucial role in managing data in line with the data classification policy. It ensures that data is processed, stored, and transmitted securely, adhering to the policies set forth by the organization. The processing system also involves data indexing, which is a method of organizing data to optimize its retrieval. This function is crucial as it makes the data search process more efficient, enabling users to locate and retrieve the necessary data quickly. Machine learning classification is an integral part of a processing system <ref type="bibr" target="#b28">[31]</ref><ref type="bibr" target="#b29">[32]</ref><ref type="bibr" target="#b30">[33]</ref><ref type="bibr" target="#b31">[34]</ref>.</p><p>ITSM: IT Service Management (ITSM) is responsible for the delivery of IT services that support the data classification policy. This includes the provision of systems, tools, and processes that enable the organization to effectively classify, manage, and protect its data. ITSM also plays a key role in access management, request fulfillment, and incident management <ref type="bibr" target="#b29">[32]</ref>.</p></div>
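The SSO behavior described above, granting access based on a user's role and the data's classification, can be sketched as a minimal authorization check. The role names and the ordering of classification levels below are illustrative assumptions, not taken from the policy itself:

```python
# Minimal sketch of role-based access control keyed to data
# classification levels, as performed by the SSO system.
# Role names and level ordering are illustrative assumptions.

CLASSIFICATION_LEVELS = ["public", "internal", "confidential", "restricted"]

# Highest classification level each role may read (hypothetical mapping).
ROLE_CLEARANCE = {
    "employee": "internal",
    "data_steward": "restricted",
    "auditor": "confidential",
    "admin": "confidential",
}

def can_access(role: str, data_level: str) -> bool:
    """Return True if the role's clearance covers the data's level."""
    clearance = ROLE_CLEARANCE.get(role)
    if clearance is None:
        return False  # unknown roles are denied by default
    return (CLASSIFICATION_LEVELS.index(data_level)
            <= CLASSIFICATION_LEVELS.index(clearance))
```

Denying unknown roles by default mirrors the policy's intent that users receive access only to data they are explicitly authorized for.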
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.4.">Actions and Processes for Reviewing Customer Data</head><p>New customer information or category appears: When new customer information or a new category appears, it should be added to the Customer Data Catalog. Adding new customer information could involve collecting additional details along with any relevant context; adding a new category could involve creating a new grouping or segmenting the existing customer data into different categories. This is usually done to improve the effectiveness of the Customer Data Catalog and enable the organization to make more informed business decisions. However, it is important to ensure that the new information or category is collected and stored in compliance with data protection regulations and customer privacy laws.</p><p>The sensitivity level has changed: The sensitivity level of a data category reflects how valuable or confidential the information is and how much damage or harm could result if it were disclosed to or accessed by unauthorized individuals or entities. When the sensitivity level of a data category changes, its level of importance or confidentiality has increased or decreased. If a previously non-sensitive data category has become sensitive due to changes in regulations, business practices, or legal requirements, its sensitivity level has increased. Conversely, if a sensitive data category has become less important or valuable due to such changes, its sensitivity level has decreased. It is important to review and assess the sensitivity level of each data category to ensure that it is adequately protected and to make any necessary adjustments to security measures and access controls.</p><p>The description of an existing category has changed: If the data collected for a specific category is changing or expanding, the description of that category may need to be edited to reflect the new data being collected. Editing the description of an existing customer data category is usually done to ensure that the information being collected is accurately and completely described <ref type="bibr" target="#b19">[22]</ref>.</p></div>
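The three review actions above can be sketched as operations on a simple in-memory catalog. The entry fields and sensitivity labels are assumptions chosen for illustration; a real catalog would live in a governed data store:

```python
# Illustrative sketch of the Customer Data Catalog review actions:
# add a category, change its sensitivity, edit its description.
# Field names and sensitivity labels are assumptions.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class CatalogEntry:
    name: str
    description: str
    sensitivity: str          # e.g. "low", "medium", "high"
    last_reviewed: date = field(default_factory=date.today)

catalog: dict = {}

def add_category(name: str, description: str, sensitivity: str) -> None:
    """New customer information or category appears."""
    catalog[name] = CatalogEntry(name, description, sensitivity)

def change_sensitivity(name: str, new_level: str) -> None:
    """The sensitivity level has changed; record the review date."""
    entry = catalog[name]
    entry.sensitivity = new_level
    entry.last_reviewed = date.today()

def edit_description(name: str, new_description: str) -> None:
    """The description of an existing category has changed."""
    catalog[name].description = new_description
```

Recording `last_reviewed` on every sensitivity change gives auditors the evidence of periodic review that SOC 2 assessments look for.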
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.5.">Data Flow Design</head><p>The data flow design comprises the following steps:</p><p>Step 1: Understanding the Types of Data Your Company Owns</p><p>The first step in creating a Data Flow Diagram is to understand the types of data your company owns. Data can be broadly classified into three categories: structured, semi-structured, and unstructured.</p><p>Structured data is organized in a predefined manner, such as data stored in a relational database. It is easy to search, analyze, and manipulate because it follows a consistent format. Semi-structured data has some level of organization but does not follow a strict format; examples include XML and JSON files, which hold data hierarchically but have no fixed schema.</p><p>Unstructured data has no inherent structure or organization; examples include text documents, images, and videos. Unstructured data can be difficult to search, analyze, and manipulate because it does not follow a consistent format <ref type="bibr" target="#b32">[35,</ref><ref type="bibr" target="#b33">36]</ref>.</p><p>Step 2: Understanding the Metadata Associated with Your Data</p><p>Once you have identified the types of data your company owns, the next step is to understand the metadata associated with that data. Metadata is data that provides information about other data; for example, the metadata for a text document might include the author, date of creation, and file size. Understanding this metadata helps you better organize, manage, and analyze your data <ref type="bibr" target="#b19">[22]</ref>.</p><p>Step 3: Using Integration Tools to Manage and Store Your Data</p><p>After identifying your data and its metadata, the next step is to use integration tools to manage and store your data. Integration tools extract data from various sources, transform it into a common format, and load it into a data store. This process, known as Extract, Transform, Load (ETL), consolidates your data into a single location, making it easier to manage and analyze <ref type="bibr" target="#b28">[31]</ref><ref type="bibr" target="#b29">[32]</ref><ref type="bibr" target="#b30">[33]</ref><ref type="bibr" target="#b31">[34]</ref>.</p><p>Step 4: Creating a Data Model</p><p>Once your data has been extracted, transformed, and loaded into a data store, the next step is to create a data model: a visual representation of the relationships between different data elements. It provides a framework for organizing and structuring your data and can help you identify patterns and trends within your data <ref type="bibr" target="#b34">[37]</ref>.</p><p>Step 5: Classifying and Linking Your Data to Metadata</p><p>After creating a data model, the next step is to classify your data and link it to its associated metadata. This involves assigning a level of sensitivity to your data based on its importance and the potential impact if it were lost or stolen. Once classified, the data can be linked to its metadata, providing additional context and information about it <ref type="bibr" target="#b35">[38]</ref>.</p><p>Step 6: Visualizing and Managing Your Data</p><p>The final step is to create an application that allows you to visualize and manage your data. This application should provide a user-friendly interface for accessing, analyzing, and manipulating your data. It should also include logic for managing access, requests, and incidents, and should be integrated with your ITSM system to ensure that data is handled according to your company's policies and procedures <ref type="bibr" target="#b36">[39,</ref><ref type="bibr" target="#b37">40]</ref>.</p><p>This solution offers a host of advantages over traditional product-based offerings. One key benefit is the flexibility to choose the hosting environment that best fits your needs, be it on-premise or cloud-based, aligning the solution with your operational requirements and infrastructure capabilities.</p><p>Furthermore, you are free to select the technology stack that best suits your project: you are not limited to a predetermined set of technologies, but can tailor the solution to use the most relevant and efficient tools for your specific needs.</p><p>In terms of team composition, you can assemble a team uniquely suited to the project at hand, ensuring that the right expertise and skills are applied to deliver the best possible outcomes.</p><p>Budgeting is also flexible. Unlike vendor-specific solutions with fixed licensing costs, the budget for this solution can be adjusted to your financial capacity and project requirements, which can yield significant cost savings without compromising quality or performance.</p><p>Lastly, this solution offers robust change and feature management. It can adapt to evolving business needs, incorporating new features and making necessary changes in a timely and efficient manner, so it remains relevant and continues to deliver value over time.</p></div>
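Steps 3 through 5 above can be sketched as a compact ETL pipeline that attaches a classification label during the transform stage. The keyword-based classifier is a deliberately simple stand-in for the machine-learning classification the processing system would use in practice, and the field names and keywords are assumptions:

```python
# Compact sketch of Steps 3-5: extract records from a source,
# transform them by linking metadata (including a sensitivity label),
# and load them into a store grouped by classification.
# The keyword rule is a simple stand-in for ML classification.

SENSITIVE_KEYWORDS = {"password", "ssn", "card"}  # illustrative

def extract(source):
    """Extract: read raw records (here, an in-memory list)."""
    return list(source)

def transform(records):
    """Transform: attach a classification label to each record."""
    out = []
    for rec in records:
        text = " ".join(str(v) for v in rec.values()).lower()
        level = ("confidential"
                 if any(k in text for k in SENSITIVE_KEYWORDS)
                 else "internal")
        out.append({**rec, "classification": level})
    return out

def load(records, store):
    """Load: group classified records in the data store."""
    for rec in records:
        store.setdefault(rec["classification"], []).append(rec)
    return store

store = load(transform(extract([
    {"id": 1, "note": "reset password link"},
    {"id": 2, "note": "meeting agenda"},
])), {})
```

Grouping records by classification at load time makes it straightforward for the downstream application to apply per-level storage and access controls.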
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.6.">Value of SOC 2 Type II Compliance</head><p>The SOC 2 Type II report has become a standard requirement for businesses looking to assure clients, partners, and stakeholders about the security of their data and systems. This report, issued by an independent auditor, offers an in-depth review and attestation of the effectiveness of a company's information security controls over a period of time.</p><p>The main reason the SOC 2 Type II report is valuable is that it provides clear evidence that the company has robust and effective controls in place to protect customer data. In today's digital age, data security is a top priority for businesses and customers alike; a data breach not only leads to financial loss but also damages a company's reputation.</p><p>The report helps build trust with customers by demonstrating that a company has taken the necessary measures to protect its data. It is a clear signal to clients that their data is safe, secure, and handled in a manner that meets or exceeds industry standards.</p><p>SOC 2 Type II compliance can also provide a competitive edge: companies that have achieved it can differentiate themselves from competitors that have not, which can be a decisive factor for potential customers choosing between service providers.</p><p>Furthermore, the report can help companies avoid penalties related to noncompliance. Various laws and regulations require businesses to take certain steps to protect customer data; by achieving SOC 2 Type II compliance, companies can demonstrate that they meet these requirements, avoiding potential fines and legal complications.</p><p>The report can also help companies identify and address vulnerabilities in their information security controls. The process of achieving compliance requires a comprehensive review of a company's information security policies and procedures, which can reveal weaknesses or gaps that need to be addressed, thereby strengthening the company's overall security posture.</p><p>Lastly, the report can help improve a company's internal processes. Achieving compliance requires a company to document and formalize its information security policies and procedures, which can lead to more efficient and effective processes, as well as a greater understanding of the company's information security risks and controls among employees <ref type="bibr" target="#b38">[41,</ref><ref type="bibr" target="#b39">42]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Conclusions</head><p>Designing a data classification policy for SOC 2 Type II compliance is a complex but crucial task for organizations. SOC 2 Type II is a significant certification that attests to a service organization's ability to meet the Trust Services Criteria, which encompass security, availability, processing integrity, confidentiality, and privacy. Data classification is a critical first step in establishing a robust data security strategy: it helps organizations understand what data they have and assigns a level of sensitivity to that data, which informs the security controls that should be applied. The main objectives of data classification are to organize and manage data in a way that enhances its protection and aligns with the organization's overall data security strategy. Designing such a policy involves several challenges and considerations that organizations must navigate to protect sensitive information effectively and maintain the integrity of their service delivery: understanding the scope of data, aligning with the Trust Services Criteria, balancing security with usability, training and awareness, regular updates and reviews, defining classification levels, ensuring consistency, automating classification, integrating with other policies and controls, dealing with third-party vendors, monitoring and enforcement, and legal and regulatory compliance. Addressing these challenges requires a strategic approach and an ongoing commitment to maintaining a robust data classification policy. Organizations may seek guidance from compliance experts, legal counsel, and SOC 2 audit professionals to design and implement a policy that not only meets SOC 2 Type II requirements but also supports the organization's overall data governance strategy. 
The proposed solution aims to demonstrate the simplicity of the process that can be developed using the technologies and resources that are acceptable to the company within an affordable budget.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Use case diagram</figDesc><graphic coords="7,72.00,72.00,326.20,295.50" type="bitmap" /></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Big Data Security and Privacy: A review</title>
		<author>
			<persName><forename type="first">B</forename><surname>Matturdi</surname></persName>
		</author>
		<idno type="DOI">10.1109/CC.2014.7085614</idno>
	</analytic>
	<monogr>
		<title level="j">China Communications</title>
		<imprint>
			<biblScope unit="volume">11</biblScope>
			<biblScope unit="issue">14</biblScope>
			<biblScope unit="page" from="135" to="145" />
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Analysis of the Attack Vectors Used by Threat Actors During the Pandemic</title>
		<author>
			<persName><forename type="first">V</forename><surname>Susukailo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Opirskyy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Vasylyshyn</surname></persName>
		</author>
	</analytic>
	<monogr>
	<title level="m">IEEE 15th International Scientific and Technical Conference on Computer Sciences and Information Technologies</title>
				<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="261" to="264" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Security Threats for Big Data: An Empirical Study</title>
		<author>
			<persName><forename type="first">M</forename><surname>Islam</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Int. J. Inf. Commun. Technol. Human. Dev</title>
		<imprint>
			<biblScope unit="volume">10</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="1" to="18" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">DNACDS: Cloud IoE Big Data Security and Accessing Scheme Based on DNA Cryptography</title>
		<author>
			<persName><forename type="first">A</forename><surname>Singh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Kumar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Namasudra</surname></persName>
		</author>
		<idno type="DOI">10.1007/s11704-022-2193-3</idno>
	</analytic>
	<monogr>
		<title level="j">Frontiers Comput. Sci</title>
		<imprint>
			<biblScope unit="volume">18</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page">181801</biblScope>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Generator of Pseudorandom Bit Sequence with Increased Cryptographic Security</title>
		<author>
			<persName><forename type="first">O</forename><surname>Harasymchuk</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Metallurgical and Mining Industry Sci. Tech. J</title>
		<imprint>
			<biblScope unit="volume">5</biblScope>
			<biblScope unit="page" from="25" to="29" />
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Management of Information Protection Based on the Integrated Implementation of Decision Support Systems</title>
		<author>
			<persName><forename type="first">V</forename><surname>Lakhno</surname></persName>
		</author>
		<idno type="DOI">10.15587/1729-4061.2017.111081</idno>
	</analytic>
	<monogr>
		<title level="j">Eastern-European J. Enterprise Technol. Inf. and Controlling Syst</title>
		<imprint>
			<biblScope unit="volume">5</biblScope>
			<biblScope unit="issue">9</biblScope>
			<biblScope unit="page" from="36" to="41" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Formation of Requirements for the Electronic Record-Book in Guaranteed Information Systems of Distance Learning</title>
		<author>
			<persName><forename type="first">H</forename><surname>Hulak</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Cybersecurity Providing in Information and Telecommunication Systems</title>
		<imprint>
			<biblScope unit="volume">2923</biblScope>
			<biblScope unit="page" from="137" to="142" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Development of Additive Fibonacci Generators with Improved Characteristics for Cybersecurity Needs</title>
		<author>
			<persName><forename type="first">V</forename><surname>Maksymovych</surname></persName>
		</author>
		<idno type="DOI">10.3390/app12031519</idno>
	</analytic>
	<monogr>
		<title level="j">Appl. Sci</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page">1519</biblScope>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Combined Pseudo-Random Sequence Generator for Cybersecurity</title>
		<author>
			<persName><forename type="first">V</forename><surname>Maksymovych</surname></persName>
		</author>
		<idno type="DOI">10.3390/s22249700</idno>
	</analytic>
	<monogr>
		<title level="j">Sensors</title>
		<imprint>
			<biblScope unit="volume">22</biblScope>
			<biblScope unit="issue">24</biblScope>
			<biblScope unit="page">9700</biblScope>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Invasion Detection Model using Two-Stage Criterion of Detection of Network Anomalies</title>
		<author>
			<persName><forename type="first">V</forename><surname>Buriachok</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Workshop on Cybersecurity Providing in Information and Telecommunication Systems</title>
		<imprint>
			<biblScope unit="volume">2746</biblScope>
			<biblScope unit="page" from="23" to="32" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Evaluation Method of the Physical Compatibility of Equipment in a Hybrid Information Transmission Network</title>
		<author>
			<persName><forename type="first">P</forename><surname>Anakhov</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">J. Theor. Appl. Inf. Technol</title>
		<imprint>
			<biblScope unit="volume">100</biblScope>
			<biblScope unit="issue">22</biblScope>
			<biblScope unit="page" from="6635" to="6644" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Improving the Security Policy of the Distance Learning System based on the Zero Trust Concept</title>
		<author>
			<persName><forename type="first">P</forename><surname>Skladannyi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Cybersecurity Providing in Information and Telecommunication Systems</title>
		<imprint>
			<biblScope unit="volume">3421</biblScope>
			<biblScope unit="page" from="97" to="106" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Simulation of Authentication in Information-Processing Electronic Devices Based on Poisson Pulse Sequence Generators</title>
		<author>
			<persName><forename type="first">V</forename><surname>Maksymovych</surname></persName>
		</author>
		<idno type="DOI">10.3390/electronics11132039</idno>
	</analytic>
	<monogr>
		<title level="j">Electronics</title>
		<imprint>
			<biblScope unit="volume">11</biblScope>
			<biblScope unit="issue">13</biblScope>
			<biblScope unit="page">2039</biblScope>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">An Improved Data Backup Scheme Based on Multi-Factor Authentication</title>
		<author>
			<persName><forename type="first">J</forename><surname>Yi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Wen</surname></persName>
		</author>
		<idno type="DOI">10.1109/BigDataSecurity-HPSC-IDS58521.2023.00041</idno>
	</analytic>
	<monogr>
		<title level="m">IEEE 9th Intl Conference on Big Data Security on Cloud (BigDataSecurity), IEEE Intl Conference on High Performance and Smart Computing (HPSC), IEEE Intl Conference on Intelligent Data and Security (IDS)</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Designing Secured Services for Authentication, Authorization, Accounting of Users</title>
		<author>
			<persName><forename type="first">D</forename><surname>Shevchuk</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Cybersecurity Providing in Information and Telecommunication Systems II</title>
		<imprint>
			<biblScope unit="volume">3550</biblScope>
			<biblScope unit="page" from="217" to="225" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<monogr>
		<author>
			<persName><forename type="first">A</forename><surname>Calder</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Watkins</surname></persName>
		</author>
		<title level="m">IT Governance: An International Guide to Data Security and ISO27001/ISO27002</title>
				<imprint>
			<publisher>Kogan Page</publisher>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<monogr>
		<ptr target="https://www.arma.org/" />
		<title level="m">Information Classification: Getting It Right</title>
				<imprint/>
	</monogr>
	<note>ARMA International</note>
</biblStruct>

<biblStruct xml:id="b17">
	<monogr>
		<title level="m" type="main">Securing the Cloud: Cloud Computer Security Techniques and Tactics</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">R</forename><surname>Vic</surname></persName>
		</author>
		<author>
			<persName><surname>Winkler</surname></persName>
		</author>
		<idno type="DOI">10.1016/C2009-0-30544-9</idno>
		<imprint>
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<monogr>
		<title level="m" type="main">Information Security Management Principles</title>
		<author>
			<persName><forename type="first">D</forename><surname>Alexander</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2013">2013</date>
		</imprint>
		<respStmt>
			<orgName>BCS, The Chartered Institute for IT</orgName>
		</respStmt>
	</monogr>
	<note>Updated edition</note>
</biblStruct>

<biblStruct xml:id="b19">
	<monogr>
		<author>
			<persName><forename type="first">M</forename><surname>Rhodes-Ousley</surname></persName>
		</author>
		<title level="m">Information Security: The Complete Reference</title>
				<imprint>
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
	<note>Second Edition</note>
</biblStruct>

<biblStruct xml:id="b20">
	<monogr>
		<author>
			<persName><forename type="first">M</forename><surname>Harkins</surname></persName>
		</author>
		<title level="m">Managing Risk and Information Security: Protect to Enable</title>
				<imprint>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<monogr>
		<author>
			<persName><forename type="first">T</forename><surname>Peltier</surname></persName>
		</author>
		<title level="m">Information Security Policies, Procedures, and Standards: Guidelines for Effective Information Security Management</title>
				<imprint>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<monogr>
		<ptr target="https://us.aicpa.org/interestareas/frc/assuranceadvisoryservices/soc-for-service-organizations" />
		<title level="m">SOC 2®-SOC for Service Organizations: Trust Services Criteria</title>
				<imprint>
			<publisher>AICPA</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">IS Audit Basics: The Domains of Data and Information Audits</title>
		<author>
			<persName><forename type="first">E</forename><surname>Gelbstein</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ISACA J</title>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">Practical Data Security and Privacy for GDPR and CCPA</title>
		<author>
			<persName><forename type="first">U</forename><surname>Mattsson</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ISACA J</title>
		<imprint>
			<biblScope unit="volume">3</biblScope>
			<biblScope unit="issue">3</biblScope>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">Boosting Cyber Security With Data Governance and Enterprise Data Management</title>
		<author>
			<persName><forename type="first">G</forename><surname>Pearce</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ISACA J</title>
		<imprint>
			<biblScope unit="volume">3</biblScope>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<monogr>
		<title level="m" type="main">IT Service Management: A Guide for ITIL Foundation Exam Candidates</title>
		<author>
			<persName><forename type="first">D</forename><surname>Cannon</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2012">2012</date>
			<publisher>BCS</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<monogr>
		<author>
			<persName><forename type="first">A</forename><surname>Harper</surname></persName>
		</author>
		<title level="m">Gray Hat Hacking: The Ethical Hacker&apos;s Handbook</title>
				<imprint>
			<publisher>McGraw Hill</publisher>
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<monogr>
		<author>
			<persName><forename type="first">C</forename><surname>Cote</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Lah</surname></persName>
		</author>
		<title level="m">Professional Microsoft SQL Server 2014 Integration Services (SSIS)</title>
		<imprint>
			<publisher>Wrox</publisher>
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b29">
	<monogr>
		<author>
			<persName><forename type="first">S</forename><surname>Chauhan</surname></persName>
		</author>
		<title level="m">Mastering Apache Airflow</title>
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b30">
	<monogr>
		<author>
			<persName><forename type="first">A</forename><surname>Gaikwad</surname></persName>
		</author>
		<title level="m">Learning AWS Glue</title>
		<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b31">
	<monogr>
		<author>
			<persName><forename type="first">D</forename><surname>Anoshin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Avdeev</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Van Vliet</surname></persName>
		</author>
		<title level="m">Azure Data Factory Cookbook</title>
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b32">
	<monogr>
		<author>
			<persName><forename type="first">N</forename><surname>Karumanchi</surname></persName>
		</author>
		<title level="m">Data Structures and Algorithms Made Easy: Data Structures and Algorithmic Puzzles</title>
		<imprint>
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b33">
	<monogr>
		<author>
			<persName><forename type="first">R</forename><surname>Watson</surname></persName>
		</author>
		<title level="m">Data Management: Databases and Organizations</title>
		<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b34">
	<monogr>
		<author>
			<persName><forename type="first">S</forename><surname>Hoberman</surname></persName>
		</author>
		<title level="m">Data Modeling Made Simple: A Practical Guide for Business and IT Professionals</title>
		<imprint>
			<date type="published" when="2005">2005</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b35">
	<monogr>
		<author>
			<persName><forename type="first">C</forename><surname>Aggarwal</surname></persName>
		</author>
		<title level="m">Data Classification: Algorithms and Applications</title>
		<imprint>
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b36">
	<monogr>
		<author>
			<persName><forename type="first">Y</forename><surname>Duhamel</surname></persName>
		</author>
		<title level="m">Microsoft Power Platform Enterprise Architecture</title>
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b37">
	<monogr>
		<author>
			<persName><forename type="first">R</forename><surname>Collie</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Singh</surname></persName>
		</author>
		<title level="m">Power BI: Moving Beyond Power Pivot and Excel</title>
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b38">
	<monogr>
		<ptr target="https://www.aicpa.org/interestareas/frc/assuranceadvisoryservices/aicpasoc2report.html" />
		<title level="m">Understanding SOC 2 Reports</title>
		<imprint>
			<publisher>AICPA</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b39">
	<monogr>
		<ptr target="https://www.alertlogic.com/blog/why-soc-2-type-ii-certification-matters/" />
		<title level="m">Why SOC 2 Type II Certification Matters</title>
				<imprint/>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
