Guidance to First Responders on COVID-19


The Office for Civil Rights (OCR), which is the HIPAA enforcement arm of the U.S. Department of Health and Human Services (HHS), issued guidance today on how entities subject to HIPAA (covered entities) may disclose protected health information (PHI) about an individual who has been exposed to COVID-19 to law enforcement, paramedics, other first responders, and public health authorities in compliance with the HIPAA Privacy Rule.

In its guidance, OCR explains the circumstances under which a covered entity may disclose PHI, such as the name or other identifying information about individuals, without their HIPAA authorization, and provides examples including:

· When needed to provide treatment;

· When required by law;

· When first responders may be at risk for an infection; and

· When disclosure is necessary to prevent or lessen a serious and imminent threat.

Today, OCR clarified the regulatory permissions that a covered entity may use to disclose PHI to first responders and others so that those responders can take necessary precautions or use personal protective equipment. OCR is also careful to remind all covered entities to take reasonable steps to limit the PHI used or disclosed to the “minimum necessary” to accomplish the purpose of the disclosure, which is frankly a good recommendation for all PHI-related disclosures, pandemic or not. Even though these are extraordinary times, we must be sure to protect one another’s privacy while also striving to protect the health of our first responders during this crisis. OCR is careful to strike that balance in today’s guidance. 

Clients and friends can find the guidance here.

Stay safe and healthy!


If you need further information, contact us here.

Is Ohio Getting Its Cybersecurity Act Together?


When state senators Bob Hackett and Kevin Bacon introduced Senate Bill 220, I for one felt a sense of relief that, at last, Ohio would finally take much-needed action on the issue of cybersecurity. The bill is far from perfect, but it is finally a START of what will hopefully result in meaningful comprehensive cybersecurity legislation.

What does the bill accomplish? It incentivizes Ohio companies to adopt a risk-based framework by providing a “safe harbor,” in the form of an “affirmative defense,” to tort claims arising out of data breaches caused by third-party malefactors.  The bill indicates that all covered entities (any Ohio business that “…accesses, maintains, communicates, or handles personal information,” or, essentially, all Ohio companies) may seek a safe harbor under the law provided the company has a “written cybersecurity program that contains administrative, technical, and physical safeguards for the protection of personal information” that complies with the NIST Cybersecurity Framework or another industry cybersecurity framework (such as the Center for Internet Security Critical Security Controls or ISO 27000).

For health care entities complying with the Health Insurance Portability and Accountability Act (HIPAA), banks and other financial institutions complying with the Gramm-Leach-Bliley Act (GLBA), and government contractors complying with the Federal Information Security Modernization Act (FISMA), the bill allows a safe harbor for entities that have developed their own frameworks to comply with industry regulations.

The bill requires that covered entities seeking safe harbor have written cybersecurity programs designed to do the following:

(1) Protect the security and confidentiality of personal information;

(2) Protect against any anticipated threats or hazards to the security or integrity of personal information;

(3) Protect against unauthorized access to and acquisition of personal information that is likely to result in a material risk of identity theft or other fraud to the individual to whom the information relates.

The bill recognizes that not all entities face the same security challenges, and it allows a covered entity’s cybersecurity program to take the following into account:

(1) The size and complexity of the covered entity;

(2) The nature and scope of the activities of the covered entity;

(3) The sensitivity of the personal information to be protected;

(4) The cost and availability of tools to improve information security and reduce vulnerabilities;

(5) The resources available to the covered entity.

Now for the rub.

For a covered entity to successfully assert the affirmative defense afforded by the bill, it must demonstrate “substantial compliance” with its chosen risk-based framework or HIPAA, GLBA, or whatever regulatory rubric applies to the covered entity.  To a lawyer, the term “substantial compliance” automatically means “litigable issue.”  What does “substantial” mean?  It is wholly subjective, and it may take years for Ohio courts to develop a case-law definition, if they ever do.  From a cybersecurity standpoint, we do not have years to shore up Ohio’s networks.

I guess what I’m really driving at is that Ohio needs a law with more teeth in it. How about a law that simply mandates that you have a written cybersecurity program and follow a risk-based framework if you maintain sensitive personal information as part of your business?  Operators in health care, banking, and any publicly traded company understand such a mandate. Entities that do not obey the law would be held accountable on the basis of negligence per se in the event they sustain a breach without a risk-based framework in place. Litigation will result either way.  A clear mandate would bring more clarity to questions of liability, and presumably more businesses would adopt a risk-based framework in the face of a mandate.

In the end, isn’t it more about security than liability?

Ickes Holt Featured Speakers at All Ohio Counselors Conference


Ickes Holt had the pleasure to present a seminar at the All Ohio Counselors Conference (“AOCC”) in Columbus. From their website:

Supported by the Ohio Counseling Association (OCA) and the Ohio School Counselors Association (OSCA), the All Ohio Counselors Conference is the leading professional development conference in the state of Ohio for licensed counselors, counseling students, supervisors, and counselor educators who work in a clinical/community, school, college, addiction, private practice, or other related setting.

Ickes Holt presented a 90-minute seminar on Information Security and Privacy for Mental Health Professionals. The seminar focused on educating mental health professionals about the Privacy and Security Rules of HIPAA and associated regulations, as well as external and internal threats, ethical handling of subpoenas, and how to accept and bear the legal and ethical obligations imposed by HIPAA. The seminar also covered implementing a written information security program and best practices regarding psychotherapy notes and addressable items, such as encryption.

Ickes Holt believes that mental health professionals remain an under-educated and under-served sector in information security and privacy law. A unique corner of healthcare, mental health professionals maintain some of the most important and sensitive information possible about their patients. Further, the patient-counselor relationship itself is grounded in confidentiality and trust. It is imperative that patients trust their counselor so they can receive the help they need. Finally, mental health professionals keenly understand the sensitive nature of the patient-counselor relationship and greatly value confidentiality.

For these reasons, Ickes Holt is committed to helping mental health professionals maintain and safeguard patient privacy. If your mental health care practice has questions about HIPAA, information security, patient privacy, or issues regarding lawful disclosures of patient information, please feel free to contact us today. We are happy to help.

Regarding Privacy, Ohio Sets a High Bar for Medical Marijuana


Over the last few years, agencies such as the Federal Trade Commission have fostered a movement to encourage industry to implement the concept of privacy-by-design.  The idea behind privacy-by-design is that when developing new software, hardware, medical devices, or other such products that extract personal information, such as personally identifiable information (PII), health care information, geo-tracking data, etc., the manufacturer should consider privacy in the product’s design.

The European Union has historically been very aggressive on privacy matters and recently mandated privacy-by-design in its new General Data Protection Regulation (GDPR), which will become enforceable in May 2018. The GDPR will require companies not only to design compliant privacy policies, procedures, and systems at the outset of any product or process development, but also to employ a data protection officer to ensure compliance.

Although the US has industry specific regulations for healthcare (HIPAA) and banking (GLBA) that require organizations to address privacy and security, and the Securities and Exchange Commission requires auditing and reporting of controls associated with information security and cybersecurity, until now, there has been no legislative rubric mandating privacy-by-design.

Recently, the Ohio Medical Marijuana Control Program (OMMCP) created mandates for privacy and information security that are among the strictest in the country.

The long and short is that all medical marijuana industry participants (cultivators, processors, dispensaries, or testing facilities) that use an “electronic system” for storing and retrieving records required by the regulations or related to medical marijuana in any way (including all patient data for dispensaries) shall implement a system that does the following:

  • Guarantees the confidentiality of the information stored in the system (emphasis on the emphasis);
  • Is capable of providing safeguards against erasures and unauthorized changes in data after the information has been entered and verified;
  • Is capable of placing a litigation hold or enforcing a records retention hold for purposes of conducting an investigation or pursuant to ongoing litigation; and
  • Is capable of being reconstructed in the event of a computer malfunction or accident resulting in the destruction of the data bank.
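Two of the requirements above, tamper-evident storage and a litigation hold, can be sketched in code. This is a purely hypothetical illustration of the concepts, not a compliant system; the class and field names are my own invention:

```python
# Hypothetical sketch: a hash chain makes after-the-fact edits detectable,
# and a hold flag blocks deletion.  Illustrative only, not a compliant system.
import hashlib
import json

class RecordStore:
    def __init__(self):
        self.records = []      # each entry chains to the previous entry's hash
        self.hold = False      # litigation / records-retention hold

    def add(self, data: dict) -> None:
        prev = self.records[-1]["hash"] if self.records else ""
        payload = json.dumps(data, sort_keys=True) + prev
        self.records.append({"data": data,
                             "hash": hashlib.sha256(payload.encode()).hexdigest()})

    def verify(self) -> bool:
        """Recompute the chain; any edit to a stored record breaks it."""
        prev = ""
        for rec in self.records:
            payload = json.dumps(rec["data"], sort_keys=True) + prev
            if hashlib.sha256(payload.encode()).hexdigest() != rec["hash"]:
                return False
            prev = rec["hash"]
        return True

    def delete(self, i: int) -> bool:
        if self.hold:
            return False       # deletion blocked while a hold is active
        del self.records[i]
        return True

store = RecordStore()
store.add({"patient": "ID-001", "dispensed": "10g"})
store.add({"patient": "ID-002", "dispensed": "5g"})
assert store.verify()
store.records[0]["data"]["dispensed"] = "100g"   # unauthorized change
assert not store.verify()                        # the chain detects it
```

Of course, detecting tampering is not the same as guaranteeing confidentiality; no data structure can deliver that.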

One of the above requirements clearly stands out.  If medical marijuana businesses use a computer to store medical marijuana related data (which will be most, if not all, of their data), the system must be capable of guaranteeing the confidentiality of the data. In other words, the Ohio medical marijuana industry must guarantee patient privacy and the security of its data systems.

The result is an entirely new, state-based industry which legally must be designed with privacy and security in mind.  Personally, I believe that guaranteed confidentiality is impossible, and any cybersecurity, physical security, or privacy professional worth their salt will tell you “there is no such thing as perfect security.”  In fact, most, if not all, federal and state privacy and information security laws require reasonable security, a standard which itself is continually evolving in the law.  Consequently, I also believe that the required guarantee will ultimately be amended, compelled by litigation, lobbying efforts, or both, and Ohio’s medical marijuana regulations will move toward something more akin to “reasonable security.”

However, I have resolved that this ridiculously high standard will be a good thing for the Ohio medical marijuana industry. It will make the entire industry put privacy, information security, and data protection on the short list of organizational imperatives.  An organization simply cannot ignore a regulation that requires a guarantee of confidentiality.  These fledgling companies must hardwire privacy and security into their businesses from the very start. Here are a few suggestions:

  1. Most privacy breaches are the result of human error. Develop a 21st century information governance program comprised of policies and procedures that clearly articulate how information will be handled within the organization.
  2. Regularly train all members of the organization on privacy and information and physical security. Training can be done in group settings or one-on-one, online, or in person. There are many privacy and security training options and most are not cost prohibitive.
  3. Document all your privacy and security incidents and all corrective measures taken.
  4. Engage legal counsel. Yes, I am an information security and privacy attorney who wants to help medical marijuana companies. Yes, I am self-interested. However, my self-interest doesn’t change the fact that one thing attorneys can do is provide virtually ironclad confidentiality related to client information under certain circumstances, particularly in anticipation of litigation or prosecution. With cannabis currently illegal on a federal level, wouldn’t all Ohio medical marijuana business be conducted under the threat of federal prosecution?

With the OMMCP taking such a bold stance on privacy and security, it will be interesting to see whether such rigorous requirements will be a help or a hindrance to the industry. Although, wouldn’t it be a sweet twist of fate if an industry imperiled by the stigma of the black market and “reefer madness” becomes a sterling example of privacy and security in the modern age? It is our goal at Ickes\Holt to see that happen.

Stay tuned for our upcoming article on the privacy and information security requirements for Ohio medical marijuana dispensaries, which must be prepared to comply with the Health Insurance Portability and Accountability Act of 1996 (HIPAA) and the Ohio Automated Rx Reporting System (OARRS), along with a whole host of particularized recordkeeping and reporting requirements.

Encryption Prescription


Regardless of the actual legitimacy of the HIMSS Study, it raises an important discussion point regarding encryption. So, with due respect to the pundits advocating caution, I will presume it to be reliable.  When viewed as reliable, the HIMSS Study presents compelling statistics with immediate impact to the healthcare industry.

The Numbers Regarding Encryption.

According to the HIMSS Study, approximately 32% of hospitals and 52% of non-acute providers do not encrypt data in transit.  Further, 39% of acute providers and 52% of non-acute providers do not encrypt data at rest.   The overarching gist of the HIMSS Study is that a significant percentage of healthcare organizations (“HCOs”) do not encrypt data, either at rest or in transit.  But, what’s the big deal?

The Rules Regarding Encryption.

HIPAA does not necessarily require encryption.  However, encryption is an addressable implementation specification.  See 45 CFR 164.312(a)(2)(iv).   Importantly, “addressable” does not mean “optional.”  Instead, “addressable” means that a covered entity must “[i]mplement the implementation specification if reasonable and appropriate” under the circumstances for that covered entity.  See 45 CFR 164.306(d)(3).  If a covered entity determines that an addressable item is not reasonable and appropriate, it must document why and implement an equivalent measure, if the substitute measure is reasonable and appropriate.  Clearly, if encryption is reasonable and appropriate for a covered entity, failure to implement encryption violates HIPAA’s Security Rule.  Thus, the operative question is whether encryption is reasonable and appropriate.
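As a purely illustrative sketch, the “addressable” analysis under 45 CFR 164.306(d)(3) boils down to a short decision flow. The function and labels below are my own shorthand, not an HHS tool:

```python
# Hypothetical sketch of the "addressable" decision logic in 45 CFR
# 164.306(d)(3).  All names here are illustrative, not an official HHS tool.

def addressable_spec_decision(reasonable_and_appropriate: bool,
                              documented_rationale: bool,
                              equivalent_measure: bool) -> str:
    """Return a covered entity's compliance posture for an addressable
    implementation specification such as encryption."""
    if reasonable_and_appropriate:
        # If the specification is reasonable and appropriate,
        # the entity must implement it.
        return "implement specification"
    if documented_rationale and equivalent_measure:
        # Otherwise, document why not, and implement an equivalent
        # alternative measure (if reasonable and appropriate).
        return "documented equivalent measure"
    return "non-compliant"

print(addressable_spec_decision(True, False, False))
# prints "implement specification"
```

The point of the sketch: “addressable” never collapses to “do nothing.”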

In 2016, encryption tools are readily available and there is no excuse for failing to encrypt data at rest.   For example, Windows OS includes BitLocker Drive Encryption onboard.  Further, there are numerous affordable encryption options for Windows.[v]   Mac offers FileVault 2 encryption standard with OS X.  FileVault 2 encrypts not only the hard drive, but removable drives as well.  FileVault is a respectably robust encryption tool, especially for individuals or small businesses.  Mac users also have additional options for encryption.[vi]

Data in transit is a bit more technical.  I do not claim to be a CISSP – my knowledge base is in the law, not hardware and software.  So, for purposes of this article, let’s just consider that “data in transit” entails methods with which we are all familiar – email, fax, and text.  All of these transmissions may be encrypted by employing various programs, services, and technology, many of which are readily available and affordable.
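As a minimal sketch of what “encrypting data in transit” looks like in practice, Python’s standard ssl module can wrap a connection in TLS. The mail server name below is hypothetical:

```python
# Minimal sketch of encrypting data in transit with TLS, using Python's
# standard library only.
import ssl

# A default context enables certificate validation and hostname checking,
# which is what makes the encrypted channel trustworthy.
context = ssl.create_default_context()
print(context.verify_mode == ssl.CERT_REQUIRED)   # True
print(context.check_hostname)                     # True

# To send, e.g., email over an encrypted channel (hypothetical host):
# import smtplib
# with smtplib.SMTP("mail.example-clinic.test", 587) as smtp:
#     smtp.starttls(context=context)   # upgrade the session to TLS
#     ...
```

The design point is that the transport, not the application, does the encrypting; the sender’s job is to refuse to transmit over an unprotected channel.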

People will undoubtedly argue about the viability of, and protection afforded by, these encryption tools.  For example, you can Google numerous articles discussing the security flaws in FileVault 2 and BitLocker.  Encryption options for faxing and texting usually fare no better.

The good news is that HIPAA does not demand that the encryption WORK – but only that covered entities “[i]mplement a mechanism to encrypt and decrypt” ePHI.  See 45 CFR 164.312(a)(2)(iv).   HIPAA defines encryption as “the use of an algorithmic process to transform data into a form in which there is a low probability of assigning meaning without use of a confidential process or key.”  See 45 CFR 164.304.  So, the mere fact that a covered entity implements encryption methods meeting technical requirements[vii] satisfies HIPAA’s basic requirement.  Of course, covered entities must also keep safeguards up to date and monitor overall effectiveness in protecting information assets.
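HIPAA’s definition can be illustrated with a toy transformation: an algorithmic process after which the data has a low probability of being assigned meaning without the confidential key. This sketch is for illustration only and is emphatically NOT production-grade cryptography; real deployments should use NIST-approved algorithms such as AES through a vetted library:

```python
# Toy illustration (NOT production crypto) of HIPAA's definition of
# encryption at 45 CFR 164.304: an algorithmic process that transforms data
# so it cannot be read without the confidential key.
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Derive n pseudo-random bytes from the key with SHA-256 in counter mode."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def transform(key: bytes, data: bytes) -> bytes:
    """XOR the data with the keystream; applying it twice restores the data."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

phi = b"Patient: Jane Doe, Dx: ..."
ciphertext = transform(b"confidential-key", phi)
assert ciphertext != phi                                  # unreadable as stored
assert transform(b"confidential-key", ciphertext) == phi  # key recovers it
```

Note how the sketch maps onto the regulatory text: the ciphertext is meaningless on its own, and only the “confidential process or key” restores it.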

Finally, it should be stated that encrypting data relieves a covered entity from data breach notification requirements in many states, including Ohio.  In Ohio, data breaches exposing “personal information” must, under certain circumstances, be reported to the individuals.  See R.C. 1349.19(B)(1).  Information is only “personal information” “when the data elements are not encrypted, redacted, or altered by any method or technology[.]”  R.C. 1349.19(A)(7)(a).

In closing, it is arguable that encryption is currently reasonable and appropriate for 100% of covered entities.  Under that postulation, then, according to the HIMSS Study, between 32% and 52% of HCOs are violating HIPAA and perhaps do not even realize they are doing so.  While HIPAA’s Privacy and Security Rules go far beyond encryption, perhaps it is a good, objective starting point for covered entities.  Stakeholders in covered entities (and business associates) should ask:

  • Do we store data? If so, do we encrypt that data?

  • Do we transmit data? If so, how?  Email, fax, or text?

  • Do we encrypt the data we transmit? How?

  • Is encryption reasonable and appropriate for our organization?

  • If not, do we have the justifications documented?

Based on this self-analysis, covered entities should contact an information security lawyer to help them: (1) conduct a thorough and confidential analysis of existing information security policies and procedures; and (2) develop and implement an information security regimen tailored to foster an organizational culture of security.


[ii] outpatient clinics, rehabilitation facilities and physicians’ offices.  See note iv, infra.

[iii] 2016 HIMSS Cybersecurity Survey, available at:

[iv] For example, the HIMSS Study was sponsored by FairWarning.  FairWarning is a provider of information security services and has a considerable market in … you guessed it … the healthcare industry.  Sure, it seems convenient that a study exposing a lack of information security in healthcare is sponsored by a seller of information security to healthcare. In fact, the lawyer in me demands the injection of a healthy dose of skepticism.

However, in fairness, as an information security attorney, I could be accused of the same sort of fear-mongering designed to scare people into hiring me.  But, I know this to be patently untrue.  No reasonable person would consider identification of critical issues and application of sound legal advice to mitigate those issues as “fear mongering.”  It is no different than advising a business owner to incorporate to avoid the risk of exposing personal assets to creditors.  So, because I know my motives are pure, I am inclined to extend the benefit of the doubt to others.



[vii] HHS has issued guidance on encryption standards, namely referring to NIST guidelines.  For example, encryption for data at rest must be consistent with NIST Special Publication 800-111.  Encryption for data in transit must comply with other specifications, including NIST Special Publications 800-52,


Morgan Stanley Smith Barney Not “Too Big to Fail” SEC Administrative Proceeding


In October 2008, Morgan Stanley received a $10 billion bailout from the U.S. Government.  Morgan Stanley, amongst other financial institutions, was simply “too big to fail.”  In 2016, however, the Securities and Exchange Commission (“SEC”) determined that one of Morgan Stanley’s subsidiaries, Morgan Stanley Smith Barney (“MSSB”), was not “too big to fail” an SEC administrative proceeding.  On June 8, 2016, the SEC issued an order against MSSB for its violation of the Safeguards Rule (Rule 30(a) of Regulation S-P).  The Order instituted an administrative cease and desist for violations of the Safeguards Rule and levied a $1 million civil penalty.[i] 

The gist of the underlying facts is as follows.  MSSB maintained substantial personally identifiable information (“PII”) in 2 specific Web applications accessible through MSSB’s intranet.  MSSB had adopted written policies and procedures intended to restrict employees’ access to, and handling of, customer PII. Under these policies, MSSB employees were prohibited from accessing PII other than what was necessary to perform specific responsibilities.  MSSB also installed technology controls, including: (1) authorization protocols designed to allow employees access only to PII belonging to that employee’s customers; (2) controls restricting employees from copying data onto removable storage devices; and (3) controls restricting employee access to certain categories of websites via MSSB computers.[ii]
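A hypothetical sketch of the kind of per-employee authorization protocol described in (1), along with the kind of design flaw the SEC cited (a code path that skips the check), might look like this. The names are illustrative, not MSSB’s actual systems:

```python
# Hypothetical sketch: employees may query only PII belonging to their own
# customers.  A module that bypasses the check reproduces the failure mode
# described in the Order.  All names are illustrative.

OWNERS = {"cust-1": "emp-A", "cust-2": "emp-B"}   # customer -> assigned employee

def fetch_pii(employee: str, customer: str, enforce: bool = True) -> dict:
    """Return customer PII only if the requesting employee is assigned
    to that customer (when enforcement is on)."""
    if enforce and OWNERS.get(customer) != employee:
        raise PermissionError(f"{employee} is not authorized for {customer}")
    return {"customer": customer, "ssn": "***-**-****"}

fetch_pii("emp-A", "cust-1")                            # authorized access
# A flawed module that passes enforce=False defeats the whole control:
leaked = fetch_pii("emp-A", "cust-2", enforce=False)    # no check, PII exposed
```

The lesson the sketch makes concrete: an authorization protocol is only as strong as its least-guarded code path, which is why auditing and testing the controls matters as much as writing them.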

In or about 2011, an MSSB employee (“Marsh”) discovered multiple flaws in the security of MSSB’s technology controls which ultimately allowed him to circumvent all restrictions and obtain unauthorized access to customer PII.    Marsh was able to download and transfer the PII by accessing his personal website from his MSSB computer and uploading the PII to his personal server.  MSSB’s filtering software did not prevent employees from accessing “uncategorized” websites from MSSB computers.  During a routine Internet sweep in December 2014, MSSB identified some of the PII for sale on the Internet.  Ultimately, MSSB determined that a third party hacked Marsh’s personal server and copied the PII.[iii]

The Safeguards Rule requires covered organizations to “adopt written policies and procedures reasonably designed to: (1) insure the security and confidentiality of customer records and information; (2) protect against any anticipated threats or hazards to the security or integrity of customer records and information; and (3) protect against unauthorized access to or use of customer records or information that could result in substantial harm or inconvenience to any customer.”[iv]   According to the Order, MSSB violated the Safeguards Rule because: (1) its existing policies and procedures were not reasonably designed to meet the Rule’s objectives; (2) its technology protocols contained design flaws which rendered them effectively useless; (3) it failed to reasonably audit/test the technology protocols in place; and (4) it failed to monitor and analyze employees’ access to the customer PII.[v]

There are multiple lessons to be taken from MSSB’s settlement:

Lesson 1:  The MSSB settlement provides valuable insight into what is clearly the SEC’s very strict definition of “reasonable” security.  By most standards, MSSB actually complied with the Safeguards Rule.  MSSB had written policies and procedures and technology controls meant to address the Safeguards Rule.  Moreover, unlike many companies out there, MSSB’s discovery of, and incident response to, the data breach was quick and  effective:

  • MSSB discovered the compromised data within what appears to be a matter of a week or so once it was posted for sale online.
  • MSSB discovered the exposed PII during a regular sweep of the Internet which demonstrates they have someone actively monitoring potential risks.
  • MSSB swiftly took steps to remove the PII from the Internet and notified proper authorities.
  • MSSB immediately started an investigation and within a few days of discovering the breach, procured an admission from Marsh.
  • MSSB began notifying affected customers by January 5, 2015, just 9 days after discovering the breach.

MSSB recognized its obligation under the Safeguards Rule, devoted resources to the issue, and took meaningful steps to comply.  In fact, the Federal Trade Commission declined to bring charges against MSSB under Section 5 for the exact same incident, citing MSSB’s “comprehensive policies designed to protect against insider theft[.]”[vi] Yet, the SEC found MSSB’s violation “willful” and levied its largest monetary sanction to date.  It is clear that what the SEC has lacked in terms of quantity of enforcement actions, it intends to make up for in terms of severity.

The MSSB settlement ultimately presents an unavoidable question for entities under SEC jurisdiction:  If MSSB’s robust (albeit flawed) policies, procedures, and protocols were insufficient to avoid SEC sanctions under the Safeguards Rule, what chance does an organization have when it adopts only minimal policies, procedures, and protocols, or fails to adopt any whatsoever?

Lesson 2:  Perhaps MSSB’s most crucial mistake was to rest on its laurels.  MSSB adopted policies and procedures and employed technological safeguards, but then inexplicably stopped.  In fact, according to the SEC, MSSB “failed to conduct any auditing or testing of the authorization [protocols] … at any point since their creation at least 10 years” prior.[vii]  That is astounding … and likely a contributing factor to the SEC’s determination that MSSB’s violation was willful.

From flawed controls on the Web applications, to the failure to install authorization protocols on certain applications, to inadequate Internet filters, to a breakdown in written policies and managerial oversight, it is safe to say that MSSB’s information security was a house of cards.  Further, the evidence indicates that MSSB did not follow its written policies and procedures and that employee training, accountability and supervision were not organizational priorities.   While there is no such thing as perfect security, these failings indicate that MSSB’s underlying procedural and technical flaws were exacerbated by an organizational culture of complacency.

Lesson 3:   It is dangerous to hyper-focus on external threats.  As pointed out repeatedly in this blog, internal threats and insiders (malign or benign) are an increasingly probable threat vector.  MSSB was exploited by a single insider, who was then exploited in turn by a single outsider.  MSSB managed to keep the external threat at bay, but handed the keys to the kingdom to an insider who then lost them anyway.  Organizations must split their focus and keep their own house in order.  Employee training and accountability must be meaningful and sustained.  Internal access controls must be in place, operational, and enforceable. Auditing, testing, and recalibrating must be an ongoing process.  Supervision and accountability from the executive level must be a priority.

Lesson 4:   The SEC is getting serious.  According to SEC Chair Mary Jo White, cybersecurity is the biggest risk facing the financial system.[viii]   Regulation S-P has been around since 2000, and the requirement of written policies has been in effect since 2005.  However, only recently has the SEC ramped up examinations and enforcement actions related to cybersecurity.  Cybersecurity compliance and controls, including governance, access controls, training, and incident response, were the focus of the Office of Compliance Inspections and Examinations 2015 Cybersecurity Examination Initiative.[ix]  Perhaps more importantly, as indicated in the MSSB settlement, the SEC is taking a hard line on its expectations of reasonable security and will not accept excuses or half measures.

ICKES \ HOLT is a full-service, team-driven, and client-focused law firm in Northeast Ohio concentrating on information security and governance. Information is the DNA of modern organizations, and ICKES \ HOLT is dedicated to advising clients on how to protect their information. Please contact us to discuss establishing or improving the information governance policies for your organization.



[iii] Id.

[iv] Id. at ¶3

[v] Id.


[vii] Id. at ¶8.



What’s App-Ening to Your Financial Data?


Recently, a friend asked me to pay him back for movie tickets via Venmo.  For those of you born before 1985, Venmo is a mobile app owned by PayPal which allows users to “[p]ay anyone with a Venmo account instantly using money you have in Venmo, or link your bank account or debit card quickly.” Simply, instead of “divvying up the check,” people can now electronically transfer funds back and forth through Venmo, using Venmo “wallets” or a direct link to their bank.  Suffice it to say, I refused.  While we joked about my age, “youngsters and their ‘future money’” and “financial black magic,” my refusal was not based in age, fear, or lack of understanding.  Instead, it was based on an informed and objective analysis of the interaction of mobile apps and security.

Well, it appears my fears were well founded.  According to PayPal’s first-quarter 2016 report to the SEC, PayPal admitted that it was under investigation by the Federal Trade Commission (“FTC”) for unfair or deceptive acts and practices related to Venmo.[i]  While PayPal does not elaborate on the nature of the investigation, it seems apparent that the FTC’s investigation is focused on a host of privacy violations.

In March 2016, the parties filed an “Assurance of Voluntary Compliance” (the “Assurance”) in In the Matter of State of Texas and Paypal, Inc. (the “Paypal Litigation”).  The Paypal Litigation derived from an investigation of Paypal by the Texas Attorney General for potential violations of Texas’ deceptive trade practices and consumer protection law.  The Assurance lays out a litany of privacy violations concerning Venmo, most notably:

  1. Auto-friending, which permits Venmo to access and assimilate a user’s contact list in order to add those contacts to the user’s Venmo Friends list, all without a deliberate action by the user or adequate choice. It appears that Venmo was also accessing users’ contacts lists without any real privacy notice.  See Assurance, ¶6(A)(i).
  2. Potential misrepresentations about the level of security provided by Venmo. See Assurance, ¶6(B).
  3. Venmo’s default “audience setting” is set to public – which publishes a “timeline” of your Venmo financial transactions. This setting can be changed to private, but according to the Assurance, it seems that this is not commonly known and Venmo doesn’t exactly make it easy to accomplish.[ii]  See Assurance ¶6(C) (“At the time of … any transaction, [Venmo] shall clearly and conspicuously disclose the audience setting for the transaction in close proximity beneath, beside, or adjacent to any field … or call to action.”).

If you look closely at the screen shot above, you will see how Venmo creates a crawling “ticker” of your financial transactions.   Think of a Twitter feed, but the updates are your financial transactions using Venmo.

Based on the PayPal Litigation and the Assurance, it seems a pretty safe bet that the FTC investigation of PayPal/Venmo lands smack dab in the wheelhouse of Section 5 of the FTC Act.

The three violations asserted in the PayPal Litigation are serious, especially considering the apparent lack of notice provided to Venmo users about the app’s information-sharing practices.  However, I have a couple of other concerns about Venmo that were not addressed by the Assurance: one practical and one policy.

First, the practical. Signing in with Google or Facebook accounts has become very popular.  After all, it’s easy, right?  Venmo advertises this feature on its website.  But have you ever stopped to consider HOW Venmo is able to create an account for you and log you in using your Facebook account?  Or is it just yet another mystical Internet transaction that doesn’t concern you?

In order for Venmo to log you in using Facebook, an authentication process called “OAuth” must occur.  Now, OAuth is, by all accounts, a pretty decent way to do this.  OAuth creates “tokens” which allow the third-party app to access your Facebook account and do the things you have allowed it to do.[iii]  However, some services don’t exactly tell you what permissions you are giving away, or instead bury them in hard-to-find-and-harder-to-understand privacy notices.
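To make the token handoff concrete, here is a minimal sketch of the first leg of an OAuth 2.0 authorization-code flow.  The endpoint, client ID, redirect URI, and scope names below are placeholders for illustration, not Venmo’s or Facebook’s actual values.  The point to notice is that whatever appears in the `scope` parameter is exactly what the user is granting away, whether or not the app ever surfaces it clearly.

```python
from urllib.parse import urlencode

# Placeholder values -- illustrative only, not real endpoints or credentials.
AUTH_ENDPOINT = "https://provider.example.com/dialog/oauth"
CLIENT_ID = "EXAMPLE_APP_ID"
REDIRECT_URI = "https://app.example.com/oauth/callback"

def build_authorization_url(scopes):
    """Build the URL the app redirects the user to.  The `scope` parameter
    lists every permission being requested; an over-broad scope list here
    is how an app quietly collects more than users realize."""
    params = {
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "response_type": "code",  # ask for an authorization code to swap for a token
        "scope": " ".join(scopes),  # scopes are space-delimited per RFC 6749
    }
    return AUTH_ENDPOINT + "?" + urlencode(params)

url = build_authorization_url(["email", "public_profile", "friends_list"])
print(url)
```

If the user approves, the provider redirects back to `redirect_uri` with a short-lived code, which the app’s server exchanges for an access token.  That token, not your password, is what the third-party app holds and uses on your behalf.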

For example, the first time anybody sees Venmo’s privacy notice is after they’ve chosen to start the Facebook login process.  Further, notice the tiny “privacy policy” link in the bottom left hand corner.  Like most privacy notices, it is not clear and conspicuous.  However, if one bothers to read the privacy notice, they will discover that Venmo collects the following information from its users:

  • Account Information – text-enabled cellular/wireless telephone number, machine or mobile device ID and other similar information.
  • Identification Information – your name, street address, email address, date of birth, and SSN.
  • Device Information.
  • Social Media Information.
  • Financial Information – bank account and routing numbers and credit cards linked to your Venmo account.

Finally, Venmo makes the incredible caveat that it “may collect additional information from or about you in other ways not specifically described here.”  That stipulation conveniently seems to counteract the entire purpose of a privacy notice.  But, that is another topic for another day.

Back to the issue at hand.  It seems insane to sign into Venmo using Facebook.  The whole point of Venmo is that it is a financial app with a direct link to your bank account or credit card information.  While Venmo makes it very clear that it “does not share financial information with third party social networking services,” it is not hard to imagine a hacker who infiltrates Facebook somehow “back-dooring” into Venmo and, thus, users’ financial information.

What’s more, Facebook suffered a security breach in 2013 in which the contact information of roughly six million users was exposed.  Facebook is one of the largest social media platforms and a high-profile target for hackers.  With all due respect, this layman will presume that logging into Venmo with my Facebook account could potentially expose my financial information.

Now the policy concern.  Venmo illustrates one of the barriers to comprehensive federal cybersecurity legislation: the allocation of risk.  This struggle has occurred across sectors, but is most evident between the retail and banking/financial sectors.  And, I believe, with good reason.

An app like Venmo needlessly puts users’ financial information at risk, and banks will ultimately be the ones left holding the proverbial bag should Venmo be hacked and that financial information used to infiltrate the banks’ networks.  If a bank is compromised through information obtained in a Venmo hack (think Target and Fazio, as I previously wrote about), then the bank, through no real fault of its own, will be subject to regulatory action and perhaps even civil liability.

Quite legitimately, we are talking about the potential exposure of: (1) Venmo users; (2) their banks; (3) their credit card companies; and (4) all of the OTHER customers of the banks and credit card companies. We are also talking about legal consequences for the banks and credit card companies for the disclosure. From a legal and policy perspective, it is problematic that the fate of a regulated entity may be so significantly intertwined with and affected by the security of an unregulated entity.

It’s no wonder that the banking and financial industries support federal data security and breach notification standards.  They are subject to heightened standards and are exposed when an unregulated entity fails to take security seriously.  In fact, according to a spokesperson: “Financial institutions have had this obligation for 15 years, and it’s long overdue for Congress to pass legislation ensuring that everyone has a similar mandate to keep customer data safe.”[iv]  Translation:  banks are mad as hell.

The moral of the story is that, until everyone is regulated, consumers have to be careful.  While the FTC does have jurisdiction over interstate commerce, it is limited to investigating unfair and deceptive trade practices.  A strong information security regulatory framework with a private right of action would go a long way toward ensuring that all entities collecting personal information have sufficient security.

Call me old and out of touch.  Call me a curmudgeon.  Mock my puritanical sensibilities.  I don’t care.  There is no chance that I will ever divvy up the bar bill using Venmo.

[i]                  “On March 28, 2016, we received a Civil Investigative Demand (“CID”) from the Federal Trade Commission (“FTC”) as part of its investigation to determine whether we, through our Venmo service, have been or are engaged in deceptive or unfair practices in violation of the Federal Trade Commission Act.”
[ii]                 Jeff John Roberts, Venmo Likely Investigated Over User Privacy Violations, May 24, 2016.

The ADA’s Dental Debacle

ADA dental debacle

Talk about the ever-changing world of information security and data privacy. Literally, something new, interesting, or terrible occurs daily.

The latest giant balloon in the “parade of horribles” is the American Dental Association (“ADA”) providing its members with a free, electronic copy of the 2016 Dental Procedure Codes – with one small catch.  The handy, searchable PDF was stored on malware-laced USB drives.  Whoops.

In other words:  Ransomware.  So to recap:  one benefit of a paid membership in the ADA is a potential malware infection.  According to Krebs on Security, “Mike” (presumably a dentist) was suspicious of the USB drive and took a look at the code.  Mike discovered that one of the files on the USB drive tried to open a well-known malware distribution website.  Apparently, this website “is used by crooks to infect visitors with malware that lets the attackers gain full control of the infected Windows computer.”

On the surface, the ADA’s idea is merely a bad idea.  Look deeper, however, and there is a next-level disconnect about protecting PHI.  Think about it.  According to the ADA’s instructions, a covered entity is supposed to: (1) “flip out” a USB drive obtained in the mail; (2) “plug [it] into the USB port” on their computer; and (3) “open … the file on your computer.”  WHAT?  A dental office’s computer contains PHI (and likely other provider-specific sensitive information).  While the meaning of “reasonable safeguards” under HIPAA is open to interpretation, I am pretty sure that it does not include plugging random USB drives into computers and networks containing PHI.

Let’s think about this.  HIPAA’s Security Rule requires “reasonable and appropriate administrative, technical, and physical safeguards.”  Covered entities must ensure the confidentiality and integrity of PHI, as well as “identify and protect against reasonably anticipated threats to the security or integrity of the information.”  The Security Rule also mandates that the information not be made available or disclosed to unauthorized persons.  While the Security Rule does not dictate specific measures, covered entities must consider certain factors, most notably the likelihood and possible impact of potential risks.
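The “likelihood and possible impact” analysis the Security Rule calls for can be sketched as a simple scoring exercise.  This is a toy illustration of the concept, not a compliance tool; the threat names and the 1-to-5 scales are invented for the example:

```python
# Toy risk-analysis sketch: score = likelihood x impact, each rated 1-5.
# Threat names and ratings are invented for illustration only.
threats = {
    "unknown USB drive plugged into PHI workstation": (4, 5),
    "phishing email opened by front-desk staff":      (4, 4),
    "lost encrypted backup tape":                     (2, 2),
}

def risk_score(likelihood, impact):
    """Combine the two Security Rule factors into a single ranking value."""
    return likelihood * impact

# Rank threats so remediation effort goes to the highest scores first.
ranked = sorted(threats.items(),
                key=lambda kv: risk_score(*kv[1]),
                reverse=True)

for name, (likelihood, impact) in ranked:
    print(f"{risk_score(likelihood, impact):>2}  {name}")
```

Even this crude ranking puts the unknown USB drive at the top of the list, which is exactly the consideration the ADA’s instructions skipped.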

It seems that “Mike” considered the “likelihood and possible impact” of inserting an unknown USB drive and opening unknown files.  But I am willing to bet that many, if not most, would not, whether from ignorance, inattention, or faith in the ADA.  In the current landscape, none of these is an acceptable reason for failing to consider the likelihood and possible impact.  Covered entities, and all organizations in general, must build an organizational culture of security in which, as with “Mike,” a natural suspicion arises when facing a seemingly harmless but unknown situation.  Please be like Mike.  Trust or do not trust.  But always verify.

One more thing.  The approximately 37,000 USB drives were “manufactured in China by a subcontractor of an ADA vendor[.]” [Insert forehead slap here].  So, let’s get this straight.  The ADA: (1) unknowingly sent malware-laced USB drives to its members; (2) provided them specific instructions that could infect their computers with ransomware; (3) failed to include in those instructions anything resembling steps to securely access the USB drive; and (4) obtained those USB drives from a subcontractor of a vendor in China.  If you’re keeping score at home, that’s strikes 1, 2, 3, and 4.  But the ADA didn’t stop there.

In an email statement, the ADA exacerbated the problem by committing the cardinal sin of incident response:  failing to take ownership of the problem and downplaying the threat:

“Upon investigation, the ADA concluded that only a small percentage of the manufactured USB devices were infected … Of note it is speculated that one of several duplicating machines in use at the manufacturer had become infected during a production run for another customer. That infected machine infected our clean image during one of our three production runs. Our random quality assurance testing did not catch any infected devices. Since this incident, the ADA has begun to review whether to continue to use physical media to distribute products ….  Your anti-virus software should detect the malware if it is present.”

Seems pretty specific for “speculation.”

In this statement the ADA essentially acted like its mistake was no big deal.  Further, it not so subtly transferred responsibility to its members.  Did you catch it?  “Your anti-virus software should detect the malware if it is present.”  Translation:  if you have proper cybersecurity in place, our mistake won’t hurt you.  If you don’t have proper cybersecurity in place, our mistake is your fault for not having proper cybersecurity.

Not only is this a peevish and puerile response to a serious screw-up, it is also not accurate.  According to Krebs on Security:

“It’s not clear how the ADA could make a statement that anti-virus should detect the malware, since presently only some of the many antivirus tools out there will flag the malware link as malicious.”

Nice job, ADA [golf clap].

What’s even more curious about the ADA’s post-incident position is that cheap USB drives manufactured in China containing malware are not a new threat.  They are, in fact, a very common threat.  According to one security consultant, this fact “… is why the ADA’s decision to use them is so disconcerting[.]”  The point is that, in 2016, untested USB drives should always be treated as suspicious – and, therefore, connecting them to information systems should warrant consideration of the “likelihood and possible impact[.]”  In fact, according to that same consultant, “connecting untested thumb drives to information systems containing sensitive data like personal health information violates the most fundamental rules of InfoSec[.]”

Now, you might be saying … “well, the ADA didn’t violate any rule.”  Perhaps this is true.  However, the ADA’s dental debacle clearly demonstrates the great divide between where we are and where we should be related to information security.  To say that the ADA does not have any culpability is ludicrous.  The ADA has a responsibility to its paying members.  At the very least the ADA shouldn’t contribute to the immense threats that its members already face.

Ickes Holt is a full-service, team-driven, and client-focused law firm in Northeast Ohio concentrating on information security and governance. Information is the DNA of modern organizations, and Ickes Holt is dedicated to advising clients on how to protect their information. Please contact us to discuss establishing or improving the information governance policies for your organization.


Internet of Things Institute: Day One Takeaways

computer scientists

Day 1 of the ABA Internet of Things Institute:  So, come to find out, the Internet of Things (“IoT”) is not the precursor to SkyNet or a rampant abuse of power by Big Brother.  It is fascinating, and yes, slightly frightening.  The simple fact is, the IoT is just like any other rapid advance in technology – it is power that can be used for good or ill.  It provides safer cars, more productive businesses, and cleaner, more efficient energy grids.  It also provides more pervasive avenues for malefactors to hack into our daily lives.  But the bottom line is, the IoT is not going away, so it is imperative to understand it and implement sound security practices.

Some takeaways from Day 1: 

  • The IoT is a broad term for a world where everyday objects are connected, have software and are networked.
  • Computer scientists predicted the IoT in the 1980’s.
  • The most commonly known examples of the IoT are consumer goods like thermostats and light bulbs with sensors to monitor how many people are in a room at a given time and software to interpret that data to more efficiently allocate energy consumption.
  • Consumer products are just the beginning:  more necessary and beneficial uses include smart energy grids, smart water solutions, smart cities and infrastructure, autonomous cars, agricultural improvements, and medical products like medicine pumps, defibrillators, and monitoring devices for the elderly (a population expected to double by 2050).
  • We need to understand that connected devices are nothing more than computers, and computers can be programmed to do whatever you want.  So yes, that smart refrigerator can be hacked to send out malicious emails.
  • Because of this threat, we need to rely on sound engineering principles and strong encryption when developing IoT devices.
  • Manufacturers of IoT devices need to remember that they are actually developing software and not just cool gadgets.
  • Consumer protection must always be at the forefront of development.
  • Computer scientists were able to convert first generation electronic voting machines into Pac-Man games.
  • Industry cannot rely on Congress to legislate IoT security.  We have to rely on Industry sector regulation and consumer protection laws.
  • You cannot regulate what you can’t define.  According to one U.S. Senator, the IoT is moving too fast, it’s too big, and it changes every day.
  • The IoT is currently a $2 Trillion economy and will grow to $11 Trillion by 2025.
  • Don’t fear autonomous cars – 95% of auto accidents are due to driver error.  Autonomous vehicles will make roads safer, including not only individual vehicles, but the trucking industry as well.
  • The IoT is expected to create a 10-25% savings in energy consumption and manufacturing processes for industry.  Business will have to implement IoT devices to remain competitive.
  • The IoT is the 4th industrial revolution and will fundamentally change organizational behavior, as well as perceptions of privacy, security, ownership and interpersonal relationships.
  • Good with the Bad:  the IoT will also unquestionably create difficult societal, business, and ethical problems, such as job loss or restructuring, privacy and security issues, cyber-terrorism threats, cross-border data flow issues, data ownership issues, and dangerous digital divides (access, literacy, and acceptance of IoT).
  • Abuses and abusers will evolve.  Bad actors will remain bad actors.  The IoT will not change human behavior, but will give bad actors new tools to be bad actors.
  • There will be an estimated 30 billion IoT devices by 2030.
  • The raw cost of utilizing encryption is approximately 2 cents per device.
  • HIPAA and HITECH require healthcare providers to encrypt patient personal health information.
  • Cloud computing raises significant legal and ethical issues for every organization that uses the Internet.
  • The key to safely navigating the IoT and protecting your organizational information and the information of those you serve is security by design and front end engineering.
  • Cyber liability insurance is a good idea, but not the cure – coverage is not always sufficient, insurance companies may seek to deny coverage, and insurance does not fix the problems caused by a breach or recover the information lost.
  • The value in the IoT is the aggregation of data that by itself is useless.
  • Privacy concern and policy discussions must be viewed in context with the beneficial uses of the IoT.
  • 42% of consumers believe that privacy concerns outweigh the benefits of the IoT because the focus is on the consumer products, not the societal benefits.
  • IoT devices are increasingly becoming threat vectors.
  • IoT devices and software that utilize the collected data could be protectable intellectual property even though the data itself is not.
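Several of the bullets above (“connected devices are nothing more than computers,” “rely on sound engineering principles and strong encryption”) reduce to one habit: treat every device message as untrusted until verified.  As a minimal, illustrative sketch (the key handling is deliberately simplified; a real deployment would use per-device provisioned keys and full authenticated encryption, not a hard-coded secret), here is a sensor reading authenticated with an HMAC:

```python
import hmac
import hashlib

# Illustrative shared key -- in practice, provisioned per device, never hard-coded.
DEVICE_KEY = b"example-device-key"

def sign_reading(payload: bytes) -> bytes:
    """Device side: append an HMAC-SHA256 tag so tampering is detectable."""
    tag = hmac.new(DEVICE_KEY, payload, hashlib.sha256).digest()
    return payload + tag

def verify_reading(message: bytes) -> bytes:
    """Server side: recompute the tag and compare in constant time."""
    payload, tag = message[:-32], message[-32:]  # SHA-256 tag is 32 bytes
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("message failed authentication")
    return payload

msg = sign_reading(b"thermostat:72.5F")
assert verify_reading(msg) == b"thermostat:72.5F"
```

Note that the check alone does not hide the reading from eavesdroppers; it only guarantees the message came from a key holder and was not altered in transit, which is the baseline a smart refrigerator sending emails plainly lacked.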

One thing is certain.  The IoT presents the greatest potential for human connectedness and technological advances in history while simultaneously presenting the greatest potential for security and privacy abuses.  The idea of a global community where information flows freely for the betterment of humanity is an exciting one.  However, we must temper that laudable goal with the stark reality that the same technology that frees us can also be used by bad actors to compromise that freedom.

In the immortal words of Peter Parker’s Uncle Ben:  with great power comes great responsibility.  Attorneys and other professionals specializing in information security and privacy must be at the forefront of the IoT.  So too must others (traditional attorneys, healthcare providers, financial services professionals, business owners, and governmental leaders) understand the benefits and threats posed by the IoT and seek advice from people best equipped to shepherd them through this new age.


Hungry, Hungry HIPAA

HIPAA compliance

One recent case that didn’t get much attention, but should have, clarifies Ohio health care providers’ potential exposure for the unauthorized disclosure of protected health information (“PHI”).  On August 14, 2015, the Second District Court of Appeals decided Sheldon v. Kettering Health Network. [i]   In Sheldon, the Second District addressed patients’ rights related to the unauthorized disclosure of PHI.  Although the plaintiff was ultimately unsuccessful, the court affirmatively held that the Health Insurance Portability and Accountability Act (“HIPAA”) does not prevent a patient from asserting a common law tort claim for unauthorized disclosure of medical information.  On February 10, 2016, the Ohio Supreme Court declined to review the Second District’s decision.  At that point, Sheldon effectively removed more than fifteen (15) years of gray area on the matter.[ii]

Prior to Sheldon, the Ohio Supreme Court decided Biddle v. Warren Gen. Hosp.[iii]  In Biddle, the Court held that, in Ohio, a physician can be held liable under Ohio common law for unauthorized disclosures of medical information.  The cause of the “gray area” was that the Supreme Court decided Biddle before HIPAA’s privacy-rule regulations were published on December 28, 2000 and before its security-rule regulations took effect on April 21, 2003.[iv]   The Sheldon case provides considerable clarity on exactly how HIPAA and the HITECH Act coexist with Ohio common law tort claims.

One point confirmed by Sheldon is that, under Ohio law, HIPAA does not provide a private cause of action.[v]  However, the Second District then concluded that HIPAA does not preempt an Ohio state-law claim for the independent tort recognized by the Ohio Supreme Court in Biddle:

“[T]he unauthorized, unprivileged disclosure to a third party of nonpublic medical information that a physician or hospital has learned within a physician-patient relationship.”

The Second District went on to refer to such actions as “Biddle claims,” and then went a step further in addressing how the standards delineated in the HIPAA regulations interact with those claims.

The Second District held that violation of HIPAA does not provide for negligence per se claims.  The Court reasoned that to allow such a claim would essentially override HIPAA’s explicit prohibition of private causes of action.[vi]   However, buried in the Sheldon decision is one sentence that should send a shiver down the spines of physicians and the attorneys who represent them:

“[T]he violation of an administrative rule does not constitute negligence per se; however such a violation may be admissible as evidence of negligence.”[vii]

Essentially, HIPAA may not allow for a private cause of action, but according to Sheldon, a health care provider’s HIPAA dirty laundry can still be heard by a jury in conjunction with a Biddle claim.

More troubling is that recent Federal case law, although only persuasive authority for Ohio state claims, will make it much easier to get these types of cases to a jury.

In July 2015, the Federal Seventh Circuit Court of Appeals decided Remijas v. Neiman Marcus Group, LLC,[viii] a case involving a massive data breach.  The Seventh Circuit reversed the trial court, holding that “injuries [of customers] associated with resolving fraudulent charges and protecting oneself against future identity theft do” provide sufficient standing to maintain a cause of action for those affected by a data breach.[ix]  Thus, in situations where a data breach has occurred but no actual identity theft has followed, Remijas establishes the framework for plaintiffs’ lawyers to overcome the heretofore solid defense of lack of standing due to intangible and speculative damages.  Although no Ohio court has applied the reasoning of Remijas, there is now a viable legal argument to be made in Ohio state law negligence claims.

With the spate of data breaches in the health care industry occurring around the country (including several in the state of Ohio), HIPAA covered entities must take action to ensure that information security processes and procedures are in place.  Not only because of the impending threat of litigation or the fact that the Department of Health and Human Services has announced that 200 new HIPAA audits are in the pipeline for 2016.[x]  It is simply the right thing to do.  Perhaps the Hippocratic oath, in our digital age, should extend to patients’ identities as well as their health and wellness.



[i] Sheldon v. Kettering Health Network, 40 N.E.3d 661 (2d Dist. 2015)

[iii] Biddle v. Warren Gen. Hosp., 86 Ohio St.3d 395, 401, 1999-Ohio-115, 715 N.E.2d 518 (1999)

[iv] Sheldon at 671

[v] Id. at 670, citing Henry v. Ohio Victims of Crime Comp. Program, S.D. Ohio No. 2:07-cv-0052, 2007 WL 682427 (Feb. 28, 2007)

[vi] Id. at 674

[vii] Id., citing Chambers v. St. Mary’s School, 82 Ohio St.3d 563, 1998-Ohio-184, 697 N.E.2d 198 (1998)

[viii] Remijas v. Neiman Marcus Group, LLC, 794 F.3d 688 (7th Cir. 2015)

[ix] Id.

[x] David Raths, OCR’s Samuels Describes Launch of Phase 2 of HIPAA Audit Program, Healthcare Informatics, March 19, 2016