Cybersecurity in America’s Dairyland


On November 7, 2016, ICKESHOLT attorneys Jim Ickes and Joel Holt journeyed to Joel’s home state of Wisconsin to record a webinar for the National Business Institute entitled “Cybersecurity: the Ultimate Guide.”

Suffice it to say, the name and scope of the seminar presented quite a challenge.  However, while neither Jim nor Joel would willingly label the presentation as the “ultimate guide”, they are proud of how it turned out.  The seminar is a robust, comprehensive, and rapid-fire excursion through the most prominent federal information security/cybersecurity laws, including HIPAA, GLBA, and FTC Act Section 5.  The seminar also touches on state laws, threat vectors, emerging technology, pending legislation, and the role of counsel in the information security space, including regulatory compliance and advocating an organizational culture of security.  It is a lot of information crammed into 3 hours.

Most importantly, Jim and Joel produced what has to be the granddaddy of all webinar written materials: a 70-page treatise on their topics.  Again, while trying to maintain some semblance of humility, Jim and Joel are quite proud of this document and are confident readers will come away with a good working knowledge of the topics covered.

Finally, Jim and Joel had the privilege to work with panelists Joe Carney and Victoria Ferrise, both of whom are attorneys at Brennan, Manna & Diamond, LLC, in Akron, Ohio, as well as Jeff Grady, Director of Security & Compliance at Three Pillars Technology in Madison, Wisconsin.  The result is 6 hours of informative, and hopefully not too monotonic, CLE credit.  If you are interested in participating in the seminar, it will broadcast nationally on Nov. 30, 2016.  Please check out this link:  Cybersecurity: the Ultimate Guide.

If you have questions about information security, feel free to contact us.  We love to talk about this stuff.

Encryption Prescription


Regardless of the actual legitimacy of the HIMSS Study, it raises an important discussion point regarding encryption. So, with due respect to the pundits advocating caution, I will presume it to be reliable.  When viewed as reliable, the HIMSS Study presents compelling statistics with immediate impact to the healthcare industry.

The Numbers Regarding Encryption.

According to the HIMSS Study, approximately 32% of hospitals and 52% of non-acute providers do not encrypt data in transit.  Further, 39% of acute providers and 52% of non-acute providers do not encrypt data at rest.   The overarching gist of the HIMSS Study is that a significant percentage of healthcare organizations (“HCOs”) do not encrypt data, either at rest or in transit.  But, what’s the big deal?

The Rules Regarding Encryption.

HIPAA does not necessarily require encryption.  However, encryption is an addressable implementation specification.  See 45 CFR 164.312(a)(2)(iv).   Importantly, “addressable” does not mean “optional.”  Instead, “addressable” means that a covered entity must “[i]mplement the implementation specification if reasonable and appropriate” under the circumstances for that covered entity.  See 45 CFR 164.306(d)(3).  If a covered entity determines that an addressable item is not reasonable and appropriate, it must document why and implement an equivalent measure, if the substitute measure is reasonable and appropriate.  Clearly, if encryption is reasonable and appropriate for a covered entity, failure to implement encryption violates HIPAA’s Security Rule.  Thus, the operative question is whether encryption is reasonable and appropriate.

In 2016, encryption tools are readily available and there is no excuse for failing to encrypt data at rest.   For example, Windows includes BitLocker Drive Encryption onboard (in its Pro and Enterprise editions).  Further, there are numerous affordable encryption options for Windows.[v]   Mac offers FileVault 2 encryption standard with OS X.  FileVault 2 encrypts not only the hard drive, but removable drives as well.  FileVault is a respectably robust encryption tool, especially for individuals or small business.  Mac users also have additional options for encryption.[vi]

Data in transit is a bit more technical.  I do not claim to be a CISSP – my knowledge base is in the law, not hardware and software.  So, for purposes of this article, let’s just consider that “data in transit” entails methods with which we are all familiar – email, fax, and text.  All of these transmissions may be encrypted by employing various programs, services, and technology, many of which are readily available and affordable.
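To put a little flesh on “encrypting data in transit”: the dominant mechanism underlying secure email, web, and API traffic is TLS.  The sketch below (a hypothetical illustration using Python’s standard ssl module, not legal or technical advice for any particular organization) builds a client-side TLS context that insists on certificate verification and refuses legacy protocol versions.

```python
import ssl

# A client-side TLS context with sane, modern defaults.
# create_default_context() enables certificate verification
# and hostname checking out of the box.
context = ssl.create_default_context()

# Refuse legacy protocol versions (SSLv3, TLS 1.0/1.1).
context.minimum_version = ssl.TLSVersion.TLSv1_2

# With these settings, any socket wrapped by this context will
# abort the connection rather than talk to an unverified peer.
assert context.verify_mode == ssl.CERT_REQUIRED
assert context.check_hostname is True
```

A context like this would then be passed to, e.g., `smtplib.SMTP.starttls(context=context)` so that email leaves the building encrypted rather than in the clear.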

People will undoubtedly argue about the viability of, and protection afforded by, these encryption tools.  For example, you can Google numerous articles discussing the security flaws in FileVault 2 and BitLocker.  Encryption options for faxing and texting usually fare no better.

The good news is that HIPAA does not demand that the encryption WORK – but only that covered entities “[i]mplement a mechanism to encrypt and decrypt” ePHI.  See 45 CFR 164.312(a)(2)(iv).   HIPAA defines encryption as “the use of an algorithmic process to transform data into a form in which there is a low probability of assigning meaning without use of a confidential process or key.”  See 45 CFR 164.304.  So, the mere fact that a covered entity implements encryption methods meeting technical requirements[vii] satisfies HIPAA’s basic requirement.  Of course, covered entities must also keep safeguards up to date and monitor overall effectiveness in protecting information assets.
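To make HIPAA’s definition concrete (an “algorithmic process” that leaves a “low probability of assigning meaning without use of a confidential process or key”), here is a deliberately simplified toy cipher in Python.  This is my own illustrative construction, emphatically not a production cipher and not one of the NIST-approved algorithms a covered entity should actually use; it exists only to show data being transformed so that it is meaningless without the key.

```python
import hashlib
import os

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream from the key and a nonce (toy construction)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def toy_encrypt(key: bytes, plaintext: bytes) -> bytes:
    """XOR the plaintext with a key-derived keystream; prepend the random nonce."""
    nonce = os.urandom(16)
    stream = keystream(key, nonce, len(plaintext))
    return nonce + bytes(p ^ s for p, s in zip(plaintext, stream))

def toy_decrypt(key: bytes, ciphertext: bytes) -> bytes:
    """Recover the plaintext — but only with the confidential key."""
    nonce, body = ciphertext[:16], ciphertext[16:]
    stream = keystream(key, nonce, len(body))
    return bytes(c ^ s for c, s in zip(body, stream))

key = os.urandom(32)
message = b"Patient: John Doe, DOB 01/01/1970"
sealed = toy_encrypt(key, message)
assert sealed[16:] != message              # unreadable without the key
assert toy_decrypt(key, sealed) == message # readable with it
```

In the real world, a covered entity would reach for a vetted implementation of AES consistent with the NIST guidance discussed in note [vii], not a homemade routine like this one.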

Finally, it should be stated that encrypting data relieves a covered entity from data breach notification requirements in many states, including Ohio.  In Ohio, data breaches exposing “personal information” must, under certain circumstances, be reported to the individuals.  See R.C. 1349.19(B)(1).  Information is only “personal information” “when the data elements are not encrypted, redacted, or altered by any method or technology[.]”  R.C. 1349.19(A)(7)(a).
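As a rough illustration of how that statutory carve-out operates, the following Python sketch models the R.C. 1349.19 threshold.  The element names and the logic are my own simplified assumptions for illustration; the statute’s actual definition of “personal information” is more detailed.

```python
# Data elements that, combined with an individual's name, can make
# "personal information" under a simplified reading of R.C. 1349.19.
# (Element names here are illustrative, not statutory text.)
SENSITIVE_ELEMENTS = {"ssn", "drivers_license_number", "account_number"}

def is_personal_information(has_name, elements, encrypted=False, redacted=False):
    """Return True if exposed data counts as "personal information":
    a name plus a sensitive element, neither encrypted nor redacted."""
    if encrypted or redacted:
        # R.C. 1349.19(A)(7)(a): protected elements do not count.
        return False
    return has_name and bool(set(elements) & SENSITIVE_ELEMENTS)

# An exposed-but-encrypted SSN does not trigger the definition:
assert is_personal_information(True, {"ssn"}, encrypted=True) is False
# The same SSN in the clear does:
assert is_personal_information(True, {"ssn"}) is True
```

The point of the sketch is the first branch: encryption, by itself, can take an incident outside the notification trigger entirely.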

In closing, it is arguable that encryption is currently reasonable and appropriate for 100% of covered entities.  Under that postulation, then, according to the HIMSS Study, between 32% and 52% of HCOs are violating HIPAA and perhaps do not even realize they are doing so.  While HIPAA’s Privacy and Security Rules go far beyond encryption, perhaps it is a good, objective starting point for covered entities.  Stakeholders in covered entities (and business associates) should ask:

  • Do we store data? If so, do we encrypt that data?

  • Do we transmit data? If so, how?  Email, fax, or text?

  • Do we encrypt the data we transmit? How?

  • Is encryption reasonable and appropriate for our organization?

  • If not, do we have the justifications documented?

Based on this self-analysis, covered entities should contact an information security lawyer to help them: (1) conduct a thorough and confidential analysis of existing information security policies and procedures; and (2) develop and implement an information security regimen tailored to foster an organizational culture of security.


[ii] outpatient clinics, rehabilitation facilities and physicians’ offices.  See note iv, infra.

[iii] 2016 HIMSS Cybersecurity Survey, available at:

[iv] For example, the HIMSS Study was sponsored by FairWarning.  FairWarning is a provider of information security services and has a considerable market in … you guessed it … the healthcare industry.  Sure, it seems convenient that a study exposing a lack of information security in healthcare is sponsored by a seller of information security to healthcare. In fact, the lawyer in me demands the injection of a healthy dose of skepticism.

However, in fairness, as an information security attorney, I could be accused of the same sort of fear-mongering designed to scare people into hiring me.  But, I know this to be patently untrue.  No reasonable person would consider identification of critical issues and application of sound legal advice to mitigate those issues as “fear mongering.”  It is no different than advising a business owner to incorporate to avoid the risk of exposing personal assets to creditors.  So, because I know my motives are pure, I am inclined to extend the benefit of the doubt to others.



[vii] HHS has issued guidance on encryption standards, namely referring to NIST guidelines.  For example, encryption for data at rest must be consistent with NIST Special Publication 800-111.  Encryption for data in transit must comply with other specifications, including NIST Special Publications 800-52,


What’s App-Ening to Your Financial Data?


Recently, a friend asked me to pay him back for movie tickets via Venmo.  For those of you born before 1985, Venmo is a mobile app owned by PayPal which allows users to “[p]ay anyone with a Venmo account instantly using money you have in Venmo, or link your bank account or debit card quickly.” Simply, instead of “divvying up the check”, people can now electronically transfer funds back and forth through Venmo, using Venmo “wallets” or a direct link to their bank.  Suffice it to say, I refused.  While we joked about my age, “youngsters and their ‘future money’” and “financial black magic”, my refusal was not based in age, fear, or lack of understanding.  Instead, it was based on an informed and objective analysis of the interaction of mobile apps and security.

Well, it appears my fears were well founded.  According to PayPal’s first-quarter 2016 report to the SEC, PayPal admitted that it was under investigation by the Federal Trade Commission (“FTC”) for unfair or deceptive acts and practices as related to Venmo.[i]  While PayPal does not elaborate on the nature of the investigation, it seems apparent that the FTC’s investigation is focused on a host of privacy violations.

In March 2016, the parties filed an “Assurance of Voluntary Compliance” (the “Assurance”) in In the Matter of State of Texas and PayPal, Inc. (the “PayPal Litigation”).  The PayPal Litigation derived from an investigation of PayPal by the Texas Attorney General for potential violations of Texas’ deceptive trade practices and consumer protection law.  The Assurance lays out a litany of privacy violations concerning Venmo, most notably:

  1. Auto-friending, which permits Venmo to access and assimilate a user’s contact list in order to add those contacts to the user’s Venmo Friends list, all without a deliberate action by the user or adequate choice. It appears that Venmo was also accessing users’ contacts lists without any real privacy notice.  See Assurance, ¶6(A)(i).
  2. Potential misrepresentations about the level of security provided by Venmo. See Assurance, ¶6(B).
  3. Venmo’s default “audience setting” is set to public – which publishes a “timeline” of your Venmo financial transactions. This setting can be changed to private, but according to the Assurance, it seems that this is not commonly known and Venmo doesn’t exactly make it easy to accomplish.[ii]  See Assurance ¶6(C) (“At the time of … any transaction, [Venmo] shall clearly and conspicuously disclose the audience setting for the transaction in close proximity beneath, beside, or adjacent to any field … or call to action.”).

If you look closely at the screenshot above, you will see how Venmo creates a crawling “ticker” of your financial transactions.   Think of a Twitter feed, but the updates are your financial transactions using Venmo.

Based on the PayPal Litigation and the Assurance, it seems to be a pretty safe bet that the FTC investigation of PayPal/Venmo settles smack dab in the wheelhouse of Section 5 of the FTC Act.

The three violations asserted in the PayPal Litigation are serious, especially considering the apparent lack of notice provided to Venmo users about the app’s information sharing practices.  However, I have a couple of other concerns about Venmo that were not addressed by the Assurance – one practical and one policy.

First, the practical. Signing in with Google or Facebook accounts has become very popular.  After all, it’s easy, right?   Venmo advertises this feature on its website.  But have you ever stopped to consider HOW Venmo is able to create an account for you and log in by using your Facebook account?  Or, is it just yet another mystical Internet transaction that doesn’t concern you?

In order for Venmo to log you in using Facebook, an authentication process must occur, called “OAuth.”  Now, OAuth is, by all accounts, a pretty decent way to do this.  OAuth creates “tokens” which allow the third-party app to access your Facebook account and do the things you have allowed it to do.[iii]  However, some services don’t exactly tell you what permissions you are giving away, or instead bury them in hard-to-find-and-harder-to-understand privacy notices.
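For readers curious what those “tokens” amount to, here is a toy Python simulation (my own simplified sketch, not the actual OAuth 2.0 wire protocol) of an authorization server issuing a scoped token and enforcing only the permissions the user actually granted:

```python
import secrets

class ToyAuthServer:
    """Minimal simulation of OAuth-style scoped tokens: the user grants a
    third-party app specific permissions, and the token carries only those."""

    def __init__(self):
        self._tokens = {}  # token string -> grant details

    def grant(self, user, app, scopes):
        """Issue an opaque token tied to a user, an app, and a set of scopes."""
        token = secrets.token_hex(16)
        self._tokens[token] = {"user": user, "app": app, "scopes": set(scopes)}
        return token

    def allows(self, token, scope):
        """Check whether a presented token authorizes a given action."""
        info = self._tokens.get(token)
        return info is not None and scope in info["scopes"]

server = ToyAuthServer()
token = server.grant("alice", "venmo-like-app", {"read_profile", "read_contacts"})
assert server.allows(token, "read_contacts")      # granted by the user
assert not server.allows(token, "post_messages")  # never granted
```

The sticking point in practice is the `scopes` line: if the consent screen and privacy notice are vague, users have no real idea how broad that set is when they click “Log in with Facebook.”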

For example, the first time anybody sees Venmo’s privacy notice is after they’ve chosen to start the Facebook login process.  Further, notice the tiny “privacy policy” link in the bottom left hand corner.  Like most privacy notices, it is not clear and conspicuous.  However, if one bothers to read the privacy notice, they will discover that Venmo collects the following information from its users:

  • Account Information – text-enabled cellular/wireless telephone number, machine or mobile device ID and other similar information.
  • Identification Information – your name, street address, email address, date of birth, and SSN.
  • Device Information.
  • Social Media Information.
  • Financial Information – bank account and routing numbers and credit cards linked to your Venmo account.

Finally, Venmo makes the incredible caveat that it “may collect additional information from or about you in other ways not specifically described here.”  That stipulation conveniently seems to counteract the entire purpose of a privacy notice.  But, that is another topic for another day.

Back to the issue at hand.  It seems insane to sign into Venmo using Facebook.  The whole point of Venmo is that it is a financial app with a direct link to your bank account or credit card information.  While Venmo makes it very clear that it “does not share financial information with third party social networking services”, it is entirely plausible that a hacker infiltrating Facebook could somehow “back-door” into Venmo, and thus, users’ financial information.

What’s more, Facebook had an epic security breach in 2013 in which 6 million users were compromised.  Facebook is one of the largest social media platforms and is a high-profile target for hackers.  With all due respect, this layman will presume that logging into Venmo with my Facebook account could potentially expose my financial information.

Now the policy concern.  Venmo illustrates one of the barriers to comprehensive federal cybersecurity legislation – the allocation of risk.  This struggle has occurred across sectors, but is especially evident between the retail and banking/financial sectors.  And, I believe, with good reason.

An app like Venmo needlessly puts users’ financial information at risk, and banks will ultimately be the ones left holding the proverbial bag should Venmo get hacked and that financial information be used to infiltrate the banks’ networks.  If a bank is compromised through information obtained in a Venmo hack (think Target and Fazio, as I previously wrote about), then the bank, through no real fault of its own, will be subject to regulatory action and perhaps even civil liability.

Quite legitimately, we are talking about the potential exposure of: (1) Venmo users; (2) their banks; (3) their credit card companies; and (4) all of the OTHER customers of the banks and credit card companies. We are also talking about legal consequences for the banks and credit card companies for the disclosure. From a legal and policy perspective, it is problematic that the fate of a regulated entity may be so significantly intertwined with and affected by the security of an unregulated entity.

It’s no wonder that the banking and financial industry is supporting federal data security and breach notification standards.  Banks are subject to heightened standards and are exposed when an unregulated entity fails to take security seriously. In fact, according to an industry spokesperson: “Financial institutions have had this obligation for 15 years, and it’s long overdue for Congress to pass legislation ensuring that everyone has a similar mandate to keep customer data safe.”[iv]  Translation:  banks are mad as hell.

The moral of the story is that, until everyone is regulated, consumers have to be careful.  While the FTC does have jurisdiction over interstate commerce, it is limited to investigating unfair and deceptive trade practices.  A strong information security regulatory framework with a private right of action would go a long way toward ensuring that all entities collecting personal information have sufficient security.

Call me old and out of touch.  Call me a curmudgeon.  Mock my puritanical sensibilities.  I don’t care.  There is no chance that I will ever divvy up the bar bill using Venmo.

[i]                  “On March 28, 2016, we received a Civil Investigative Demand (“CID”) from the Federal Trade Commission (“FTC”) as part of its investigation to determine whether we, through our Venmo service, have been or are engaged in deceptive or unfair practices in violation of the Federal Trade Commission Act.”
[ii]                 Venmo Likely Investigated Over User Privacy Violations, Jeff John Roberts, May 24, 2016, available at

Clapper Claptrap…Data Breach Class Actions are Alive and Kicking


While attending the recent ABA Internet of Things Institute, I heard something troubling from a particular panelist, a data breach class action defense attorney. This attorney, from a monolithic law firm, proclaimed that data breach class actions were, essentially, on life support as a result of the U.S. Supreme Court’s (“SCOTUS”) decision in Clapper v. Amnesty Int’l USA, 133 S. Ct. 1138 (2013). I was a bit astonished by the certainty of the panelist’s position. I would respectfully, and vigorously, disagree. Data breach class actions are alive and well. Moreover, based on the latest case law and the uptick in security incidents every year, I posit that data breach class actions are coming to a courthouse near you.

Clapper involved a lawsuit in which a group of attorneys and human rights, labor, legal, and media organizations alleged that the Federal Government had intercepted their private communications in conjunction with counterterrorism surveillance. SCOTUS correctly held that the alleged injury was too speculative to support legal standing to challenge the Foreign Intelligence Surveillance Act (“FISA”), because the plaintiffs possessed no actual evidence that their private communications were actually intercepted.

A handful of federal district courts around the country have applied Clapper to data breach class actions. These courts dismissed several of the cases, holding that in the absence of identity theft or other manifestation of damage, the plaintiffs did not have standing. These cases have created a false sense of “security” amongst security front-liners, including, apparently, some defense attorneys.

Not. So. Fast. In back-to-back decisions, the Seventh Circuit turned the tables and changed the fortunes of data breach litigants. First, in Remijas v. Neiman Marcus Grp., LLC, 794 F.3d 688 (7th Cir. 2015), the high-end department store Neiman Marcus experienced a data breach that potentially exposed payment-card data of all customers who paid with cards during the previous year. The plaintiff class consisted of customers who had shopped at Neiman Marcus during the time the information was exposed to the invader.

In Remijas, the court stated “there is ‘no need to speculate as to whether [the Neiman Marcus customers’] information has been stolen and what information was taken.’” The court concluded that the plaintiffs’ injuries were concrete and particularized enough to support Article III standing. The court identified two future injuries that were sufficiently imminent: (1) the increased risk of fraudulent credit or debit card charges; and (2) the increased risk of identity theft. The court further opined that such risks were not mere “allegations of possible future injury,” but instead were the type of “certainly impending” future harm that SCOTUS requires to establish standing.

Two weeks ago, the Seventh Circuit doubled down on its Remijas holding in Lewert v. P.F. Chang’s China Bistro, Inc., No. 14-3700 (7th Cir. 2016), a case involving data breaches at 33 P.F. Chang’s restaurant locations. In Lewert, the Seventh Circuit impliedly relaxed the standing requirements for data breach cases even further. P.F. Chang’s attempted to distinguish the case from Remijas by arguing that the Lewert plaintiffs had dined at a Northbrook, Illinois, restaurant that was not among the 33 locations subject to the breach.

The Seventh Circuit rejected P.F. Chang’s argument and concluded that a lawsuit could compensate for the costs of purchasing credit-monitoring services, lost points on a debit card, or unreimbursed fraudulent charges (though the panel raised doubts about whether the costs of plaintiffs’ meals or the right to their identities constituted injuries). Citing Remijas, the court held that the plaintiffs were at risk for future fraudulent charges given that the breach had already occurred.

“They describe the same kind of future injuries as the Remijas plaintiffs did: the increased risk of fraudulent charges and identity theft they face because their data has already been stolen,” wrote Chief Judge Diane Wood. “These alleged injuries are concrete enough to support a lawsuit.”

So, in my opinion, Clapper does not constitute the death knell of data breach class action lawsuits.[i] In fact, Clapper is well reasoned and, ultimately, correctly decided. The Clapper Court held that the plaintiffs’ injuries were too speculative because there was no evidence that a breach or disclosure (i.e., intercepted communications) had even occurred. This holding comports perfectly with traditional notions of subject matter jurisdiction and Article III standing. Conversely, in Remijas and Lewert, the plaintiffs established that a breach or disclosure had actually occurred. Therefore, the court reasoned, the plaintiffs had a substantive and concrete injury in the potential financial consequences of the breach.

Effectively, the Seventh Circuit has established that the mere occurrence of a data breach or disclosure constitutes actionable injury, regardless of whether identities are stolen or fraudulent charges are incurred. For once, it seems that the courts are actually in lockstep with the practical realities of law (albeit a little late to the party). The breach IS the injury. The breach is a bell that cannot be un-rung. In a climate where government officials have conceded to an inability to protect information, data collectors must be held accountable at the first instance where malefactors obtain personal information. We, as a government, society, and legal profession, cannot allow these entities to breathe a sigh of relief and go on their merry way just because a hacker does not use the stolen information. To do so allows a free pass and misses a chance to teach accountability and make information security a top priority.

[i] This article does not address the potential for state court actions in negligence and intentional tort. State court actions will be addressed in a future article.

Internet of Things Institute: Day One Takeaways


Day 1 of the ABA Internet of Things Institute:  So, come to find out, the Internet of Things (“IoT”) is not the precursor to SkyNet or a rampant abuse of power by Big Brother.  It is fascinating, and yes, slightly frightening.  The simple fact is, the IoT is just like any other rapid advance in technology – it is power that can be used for good or ill.  It provides safer cars, more productive businesses, and cleaner, more efficient energy grids.  It also provides more pervasive avenues for malefactors to hack into our daily lives.  But the bottom line is, the IoT is not going away, so it is imperative to understand it and implement sound security practices.

Some takeaways from Day 1: 

  • The IoT is a broad term for a world where everyday objects are connected, have software and are networked.
  • Computer scientists predicted the IoT in the 1980s.
  • The most commonly known examples of the IoT are consumer goods like thermostats and light bulbs with sensors to monitor how many people are in a room at a given time and software to interpret that data to more efficiently allocate energy consumption.
  • Consumer products are just the beginning:  more necessary and beneficial uses include smart energy grids, smart water solutions, smart cities and infrastructure, autonomous cars, agricultural improvements, and medical products like medicine pumps, defibrillators, and monitoring devices for the aged (a population that will double by 2050).
  • We need to understand that connected devices are nothing more than computers, and computers can be programmed to do whatever you want.  So yes, that smart refrigerator can be hacked to send out malicious emails.
  • Because of this threat, we need to rely on sound engineering principles and strong encryption when developing IoT devices.
  • Manufacturers of IoT devices need to remember that they are actually developing software and not just cool gadgets.
  • Consumer protection must always be at the forefront of development.
  • Computer scientists were able to convert first generation electronic voting machines into Pac-Man games.
  • Industry cannot rely on Congress to legislate IoT security.  We have to rely on Industry sector regulation and consumer protection laws.
  • You cannot regulate what you can’t define.  According to one U.S. Senator, the IoT is moving too fast, it’s too big, and it changes every day.
  • The IoT is currently a $2 Trillion economy and will grow to $11 Trillion by 2025.
  • Don’t fear autonomous cars – 95% of auto accidents are due to driver error.  Autonomous vehicles will make roads safer, including not only individual vehicles, but the trucking industry as well.
  • The IoT is expected to create a 10-25% savings in energy consumption and manufacturing processes for industry.  Business will have to implement IoT devices to remain competitive.
  • The IoT is the 4th industrial revolution and will fundamentally change organizational behavior, as well as perceptions of privacy, security, ownership and interpersonal relationships.
  • Good with the Bad:  the IoT will also unquestionably create difficult societal, business, and ethical problems, such as job loss or restructuring, privacy and security issues, cyber-terrorism threats, cross-border data flow issues, data ownership issues, and dangerous digital divides (access, literacy, and acceptance of IoT).
  • Abuses and abusers will evolve.  Bad actors will remain bad actors.  The IoT will not change human behavior, but will give bad actors new tools to be bad actors.
  • There will be an estimated 30 billion IoT devices by 2030.
  • The raw cost of utilizing encryption is approximately 2 cents per device.
  • HIPAA and HITECH effectively require healthcare providers to encrypt patient personal health information (strictly speaking, encryption is an “addressable” specification under the Security Rule).
  • Cloud computing raises significant legal and ethical issues for every organization that uses the Internet.
  • The key to safely navigating the IoT and protecting your organizational information and the information of those you serve is security by design and front end engineering.
  • Cyber liability insurance is a good idea, but not the cure – coverage is not always sufficient, insurance companies may seek to deny coverage, and insurance does not fix the problems caused by a breach or recover the information lost.
  • The value in the IoT is the aggregation of data that by itself is useless.
  • Privacy concern and policy discussions must be viewed in context with the beneficial uses of the IoT.
  • 42% of consumers believe that privacy concerns outweigh the benefits of the IoT because the focus is on the consumer products, not the societal benefits.
  • IoT devices are increasingly becoming threat vectors.
  • IoT devices and software that utilize the collected data could be protectable intellectual property even though the data itself is not.

One thing is certain.  The IoT presents the greatest potential for human connectedness and technological advances in history while simultaneously presenting the greatest potential for security and privacy abuses.  The idea of a global community where information flows freely for the betterment of humanity is an exciting one.  However, we must temper that laudable goal with the stark reality that the same technology that frees us can also be used by bad actors to compromise that freedom.

In the immortal words of Peter Parker’s Uncle Ben:  with great power comes great responsibility.  Attorneys and other professionals specializing in information security and privacy must be at the forefront of the IoT.  So too must others (traditional attorneys, healthcare providers, financial services professionals, business owners, and governmental leaders) understand the benefits and threats posed by the IoT and seek advice from people best equipped to shepherd them through this new age.

Ickes Holt is a full-service, team-driven, and client-focused law firm in Northeast Ohio concentrating on information security and governance. Information is the DNA of modern organizations, and Ickes Holt is dedicated to advising clients on how to protect their information. Please contact us to discuss establishing or improving the information governance policies for your organization.

Hungry, Hungry HIPAA


One recent case that didn’t get much attention, but should have, clarifies Ohio health care providers’ potential exposure for the unauthorized disclosure of patient health information (“PHI”).  On August 14, 2015, the Second District Court of Appeals decided Sheldon v. Kettering Health Network. [i]   In Sheldon, the Second District addressed patients’ rights related to the unauthorized disclosure of PHI.  Although the plaintiff was ultimately unsuccessful, the court affirmatively held that the Health Insurance Portability and Accountability Act (“HIPAA”) does not prevent a patient from asserting a common law tort claim for unauthorized disclosure of medical information.  On February 10, 2016, the Ohio Supreme Court declined to review the correctness of the Second District’s decision.  At that point, Sheldon effectively removed more than fifteen (15) years of gray area on the matter.[ii]

Prior to Sheldon, the Ohio Supreme Court decided Biddle v. Warren Gen. Hosp.[iii]  In Biddle, the Court held that, in Ohio, a physician can be held liable under Ohio common law for unauthorized disclosures of medical information.  The cause of the “gray area” was that the Supreme Court decided Biddle before HIPAA’s privacy-rule regulations were published on December 28, 2000 and before its security-rule regulations took effect on April 21, 2003.[iv]   The Sheldon case provides considerable clarity on exactly how HIPAA and the HITECH Act coexist with Ohio common law tort claims.

One point verified by Sheldon is that, according to Ohio law,  HIPAA does not allow a private cause of action.[v]  However, the Second District then concluded that HIPAA does not preempt an Ohio state law claim for the independent tort recognized by the Ohio Supreme Court in Biddle:

“[T]he unauthorized, unprivileged disclosure to a third party of nonpublic medical information that a physician or hospital has learned within a physician-patient relationship.”

The Second District went on to refer to such actions as “Biddle claims.”  The court then took a further step, addressing how the standards delineated in the HIPAA regulations interact with Biddle claims.

The Second District held that a violation of HIPAA does not support a negligence per se claim.  The court reasoned that allowing such a claim would essentially override HIPAA’s explicit prohibition of private causes of action.[vi]  However, buried in the Sheldon decision is one sentence that should send a shiver down the spines of physicians and the attorneys who represent them:

“[T]he violation of an administrative rule does not constitute negligence per se; however such a violation may be admissible as evidence of negligence.”[vii]

Essentially, HIPAA may not allow for a private cause of action, but according to Sheldon, a health care provider’s HIPAA dirty laundry can still be heard by a jury in conjunction with a Biddle claim.

More troubling is that recent Federal case law, although only persuasive authority for Ohio state claims, will make it much easier to get these types of cases to a jury.

In July 2015, the federal Seventh Circuit Court of Appeals decided Remijas v. Neiman Marcus Group, LLC,[viii] a case involving a massive data breach.  The Seventh Circuit reversed the trial court, holding that “injuries [of customers] associated with resolving fraudulent charges and protecting oneself against future identity theft do” provide sufficient standing to maintain a cause of action for those affected by a data breach.[ix]  Thus, in situations where a data breach has occurred but no actual identity theft has followed, Remijas establishes a framework for plaintiffs’ lawyers to overcome the heretofore solid defense of lack of standing due to intangible and speculative damages.  Although no Ohio court has applied the reasoning of Remijas, there is now a viable legal argument to be made in Ohio state law negligence claims.

With the spate of data breaches in the health care industry occurring around the country (including several in Ohio), HIPAA covered entities must take action to ensure that information security processes and procedures are in place.  Not only because of the impending threat of litigation, or because the Department of Health and Human Services has announced that 200 new HIPAA audits are in the pipeline for 2016,[x] but because it is simply the right thing to do.  Perhaps the Hippocratic oath, in our digital age, should extend to patients’ identities as well as their health and wellness.

Ickes Holt is a full-service, team-driven, and client-focused law firm in Northeast Ohio concentrating on information security and governance. Information is the DNA of modern organizations, and Ickes Holt is dedicated to advising clients on how to protect their information. Please contact us to discuss establishing or improving the information governance policies for your organization.


[i] Sheldon v. Kettering Health Network, 40 N.E.3d 661 (App. 2d Dist. 2015)

[iii] Biddle v. Warren Gen. Hosp., 86 Ohio St.3d 395, 401, 1999-Ohio-115, 715 N.E.2d 518 (1999)

[iv] Sheldon at 671

[v] Id. at 670, citing Henry v. Ohio Victims of Crime Comp. Program, S.D.Ohio No. 2:07-cv-0052, 2007 WL 682427 (Feb. 28, 2007)

[vi] Id. at 674

[vii] Id., citing Chambers v. St. Mary’s School, 82 Ohio St.3d 563, 1998-Ohio-184, 697 N.E.2d 198 (1998)

[viii] Remijas v. Neiman Marcus Group, LLC, 794 F.3d 688 (7th Cir. 2015)

[ix] Id.

[x] Raths, David, OCR’s Samuels Describes Launch of Phase 2 of HIPAA Audit Program, Healthcare Informatics, March 19, 2016

Information Security and Privacy Round-Up: Memphis Neurology & Fazio Mechanical


Information security and privacy is an incredibly broad and pervasive topic.  It spans across industries, relates to private and public sectors, affects small business to publicly traded companies, is governed by federal and state legislation, is enforced by regulators and courts, and incorporates IT and legal solutions.  Information is the DNA of the modern world.  It is everywhere – our computers, our phones, our cars, our homes, our businesses, the cloud.  We have unprecedented access to each other, and as a result, other people have unprecedented access to our information. The boundaries of information security are continually being stretched by the dramatic leaps in technology and ever shifting societal norms.

Events in the information security realm occur so quickly that it is difficult, even for privacy professionals, to keep current.  This article provides an overview of two recent information security cases, both of which illustrate that small to mid-sized businesses are the most vulnerable to, and least equipped to prevent, information security attacks.

Memphis Neurology Case:  In February, the U.S. Attorney’s Office indicted Jeremy Jones on charges of identity theft, fraud, and conspiracy.  Jones is accused of conspiring to steal the identities of more than 145 patients of Memphis Neurology, as well as customers of car dealerships and other people he knew.  Jones allegedly used the stolen identities to apply for loans and credit cards, and to open bank accounts in the victims’ names.  The estimated loss to the defrauded financial institutions is $1,660,587.30.

The Memphis Neurology case presents significant information security concerns, namely insider threats and access controls.  Memphis Neurology is a regional, private neurological practice with five locations.[i]  The practice has been in business since the 1970s.  Jones allegedly conspired with an employee of Memphis Neurology to steal patient information from the practice’s database.[ii]  The scheme allegedly began in 2011 and continued through 2015.[iii]

This case underscores the importance of: (1) training employees about information security; (2) clearly communicating to employees the consequences of intentional and unintentional security breaches; (3) properly screening potential employees during the hiring process; (4) conducting periodic audits of information security practices for efficacy and potential breaches; and (5) ensuring access to patient information is properly limited to authorized employees, including through organizational and physical security controls.  These items are crucial components of an overall information security governance program, which is required by HIPAA and the FTC Act, as well as necessitated by the modern world in which small to mid-sized medical practices operate.

Jeremy Jones is facing criminal charges.  The financial institutions are facing the loss of $1,660,587.30.  But what about Memphis Neurology?  What are the potential consequences to the practice?  First, the practice almost certainly lost existing and future patients.  Second, it faces potential investigation and enforcement by the Federal Trade Commission and/or the Department of Health and Human Services.  An investigation and enforcement action will cost Memphis Neurology significantly in legal fees and lost productivity.  Further, the FTC and HHS are not averse to levying heavy financial penalties for violations.  Finally, while neither the FTC Act nor HIPAA provides a private right of action, there is an increasing trend of state courts adopting federal statutory/regulatory frameworks as the “standard of care” in common law negligence actions.[iv]  This trend could expose Memphis Neurology to state court negligence lawsuits brought by the patient victims.

Target Breach: Fazio Mechanical.  Most people are aware of the Target breach in 2013.  In fact, many probably held their breath waiting for notice from the retail giant that their information had been compromised.  The fallout from the Target breach has been staggering:

  • 110 million customers’ information exposed
  • Immediate 50% drop in profits at the time of the breach from the previous year
  • Consumer and media backlash
  • Approximately $252 million spent to manage the breach
  • An escrow account of $10 million set aside for compromised customers
  • Ongoing litigation and regulatory action
  • Target CEO ousted
  • Potential personal exposure to fines and monetary damages for Target executives[v]

What is not commonly known is the source of the hack leading to the Target breach.  According to Krebs on Security, hackers gained access to Target’s network via one of its vendors, Fazio Mechanical, a Pennsylvania-based refrigeration company.[vi]  According to investigators, the Target breach “traces back to network credentials” issued to Fazio by Target.  Fazio has stated that its data connection to Target “was exclusively for electronic billing, contract submission and project management[.]”[vii]

It appears that Target’s network credentials were stolen by means of an email “phishing” attack on employees at Fazio.  Facts indicate that one or more Fazio employees opened the phishing email, infecting Fazio’s systems and delivering Target’s network credentials to the hackers.  The hackers then planted malware on Target’s system and began stealing credit card data from thousands of Target registers nationwide.

Target receives and retains an immense amount of customer information.  As the recipient of this information, Target had a duty to ensure that the third-party vendors with which it works have adequate security controls.  There is no question that Target should have done a better job of auditing Fazio’s information security controls and ultimately bears responsibility for the breach.  However, while Target is certainly culpable (namely, for failing to timely act on the breach[viii] and for sending out inadequate data breach notifications[ix]), it was undoubtedly prepared for the possibility of an attack.  Six months prior to the breach, the retailer had begun installing a $1.6 million malware detection tool designed by FireEye, a leading cybersecurity firm that provides services to the CIA and the Pentagon.  Target also employed a security squad in Bangalore to monitor its systems 24/7.[x]  Despite these measures and obscene financial resources, Target was hacked and is now facing reputational damage, lawsuits, and regulatory enforcement.

And it is, in large part, Fazio’s fault.

True, if it wasn’t Fazio, it likely would have been another vendor.  Or perhaps malefactors could have penetrated Target’s system directly.  However, the facts surrounding the Target breach point blame directly at an unremarkable, “mom and pop” business lacking any information security policies and practices.  In stark contrast to Target’s measures, Fazio primarily relied on the free version of Malwarebytes Anti-Malware (“MBytes”) to detect malicious software on its systems.[xi]  It is unknown whether Fazio employed any actual information security protocols, but based on its use of MBytes, it seems likely that it did not.

Even more inexplicable was Fazio’s response to its role in the Target breach.  In a press release, Fazio stated it was “the victim of a sophisticated cyber attack operation,” and further that its “IT system and security measures are in full compliance with industry practices.”[xii]  Clearly, Fazio was out of its depth concerning the technical aspects of information security, and willfully or unintentionally ignorant of its duties under applicable state and federal law.

First, phishing attacks are not “sophisticated.” Phishing attacks are common.  They are not targeted, but instead use a “blast” approach to distribute the poison pill email as widely as possible.  In fact, email phishing attacks are so unsophisticated that they can be defeated by simply ignoring and deleting the email.[xiii]

Second, while MBytes is a reputable malware program, it is seriously limited.  The free version is an on-demand scan-and-kill program, which means a user must actually run the scanner or set it to run at scheduled times.  The free version of MBytes also does not offer real-time protection against threats.  Real-time protection means that the software actually blocks or stops malware that is actively trying to infect a system.  Imagine a pop-up blocker, which is a real-time protector: a pop-up blocker that did not protect in real time would allow the pop-up to appear, and then remove it only when the user prompted it to do so.  Essentially, a non-real-time malware program is ineffective at preventing malware infections.
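To make the distinction concrete, here is a toy sketch in Python contrasting the two models. All class and method names are illustrative assumptions for this example, not any real anti-malware API:

```python
# Toy contrast between on-demand and real-time malware scanning.
# All names here are illustrative; this is not a real anti-malware API.

MALICIOUS = {"dropper.exe"}  # stand-in for a signature database


class OnDemandScanner:
    """Finds infections only when a scan is explicitly run."""

    def __init__(self):
        self.disk = []

    def write_file(self, name):
        self.disk.append(name)  # nothing stops the write

    def scan(self):
        # Malware has been sitting on disk until this call.
        found = [f for f in self.disk if f in MALICIOUS]
        self.disk = [f for f in self.disk if f not in MALICIOUS]
        return found


class RealTimeMonitor:
    """Checks every write as it happens and blocks bad files."""

    def __init__(self):
        self.disk = []
        self.blocked = []

    def write_file(self, name):
        if name in MALICIOUS:
            self.blocked.append(name)  # never reaches disk
        else:
            self.disk.append(name)


on_demand = OnDemandScanner()
on_demand.write_file("invoice.pdf")
on_demand.write_file("dropper.exe")   # infection succeeds...
infected = on_demand.scan()           # ...and is caught only now

real_time = RealTimeMonitor()
real_time.write_file("invoice.pdf")
real_time.write_file("dropper.exe")   # blocked immediately
```

In the on-demand model, the dropper sits on disk until the next scheduled scan runs; that window is all an attacker needs.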

Third, Fazio clearly was not in compliance with industry practices.  We have already discussed the limited capabilities of free MBytes above.  Further, the free version of MBytes is made explicitly for individual users, and its license prohibits corporate use.[xiv]  Fazio violated this license, which is definitely not an industry standard.  Finally, there is no evidence that Fazio employed any reasonable information security policies and procedures, let alone a written program including preventative measures, training, an incident response strategy, and a data breach notification plan.  Thus, Fazio quite literally failed to meet the requirements of state and federal information security laws, which ARE the industry standard.[xv]

Information security is not just a problem for “big” companies.  Information security is not IT’s problem.  Information security is everyone’s problem.  Do you think your organization is somehow protected from phishing attacks?  It happened to Fazio Mechanical.  Fazio’s role in the Target breach proves that the “little guys” cannot ignore their place in the global marketplace.  According to the Privacy Rights Clearinghouse, 621,955,664 records have been breached in the U.S. since state data breach notification laws went into effect in 2005.  Those are only the breaches that have been reported; experts think the actual figure is much larger.[xvi]

In this modern age, it is best practice to assume that your organization has already been breached or will be breached in the future.  The best defense is to put solid information security policies and procedures into place, train your employees, and regularly test your network security.

Ickes Holt is a full-service, team-driven, and client-focused law firm in Northeast Ohio concentrating on information security and governance. Information is the DNA of modern organizations, and Ickes Holt is dedicated to advising clients on how to protect their information. Please contact us to discuss establishing or improving the information governance policies for your organization.


















CFPB’s Dwolla Enforcement Action: A Warning to Small Financial Institutions


For the first time since its inception, the Consumer Financial Protection Bureau (CFPB) brought the regulatory hammer down on an organization for allegedly misrepresenting the robustness of its data security program to consumers.   Recently, the CFPB targeted Dwolla, Inc., a provider of an online payment platform and agent of financial institutions Veridian Credit Union and Compass Bank.  As of May 2015, Dwolla had approximately 653,000 members and had transferred as much as $5,000,000 per day.  See CFPB Consent Order.

According to the Consent Order, Dwolla persistently represented to customers that its network and transactions were safe and secure.  It further represented that its data-security practices exceeded industry standards and set “a new precedent for the industry for safety and security.”  Interestingly, according to Dwolla’s recent press release, the organization had not detected any evidence of a data breach or received any notification of a breach in its five years of operation.  Despite the lack of a breach, the CFPB levied a substantial penalty against Dwolla, including a $100,000.00 fine and implementation of mandatory audits.

In the Dwolla case, the CFPB adopted an extraordinarily aggressive enforcement posture, especially considering it is the agency’s first data security enforcement action.  Further, the CFPB’s enforcement came prior to any known breach and, in fact, without any evidence of a breach.  Thus, the Dwolla case establishes an immediate precedent: the CFPB intends to initiate enforcement actions based upon an organization’s representations concerning its data security practices and is willing to mete out serious consequences for an organizational failure to live up to those representations.  It is only logical to speculate that the CFPB will pursue many future enforcement actions against organizations that lack sound data security practices, regardless of the existence of a breach.  Consequently, the Dwolla case is an opening salvo from the CFPB to which financial services organizations should pay close attention.

The Dwolla case is a cautionary tale presenting many lessons.  One lesson to be learned is that where an organization represents to consumers that it has brawny information security practices, those practices must be the reality.  If the representation does not match reality, the CFPB will wield the might of the Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act) to bring enforcement actions against financial services organizations for “unfair, deceptive and abusive practices.”  “Puffery” or “mere marketing” are no longer viable defenses.  Review of the Consent Order (see link above), which is essentially a negotiated settlement between the CFPB and Dwolla, demonstrates that the CFPB based the enforcement action on Dwolla’s alleged deceptive practices in violation of 12 U.S.C. §§ 5531(a) and 5536(a)(1)(B).

According to the Consent Order, Dwolla made the following representations on its website and through direct communications to consumers:

  • its data security practices “exceeded industry standards.”
  • the company “sets a new precedent for industry safety and security.”
  • that “all” information was “securely encrypted and stored” and utilized the same encryption standards as the federal government.
  • it complied with the payment card (Visa/Mastercard/AMEX) standards for data security, commonly referred to as the “PCI DSS”.

As explicitly stated in the Consent Order, Dwolla, among other things, “failed” to do the following:

  • adopt and implement data-security policies and procedures reasonable and appropriate for the organization.
  • use appropriate measures to identify reasonably foreseeable security risks.
  • ensure that employees who have access to or handle consumer information received adequate training and guidance about security risks.
  • use encryption technologies to properly safeguard sensitive consumer information.


Apparently, prior to 2012, Dwolla’s employees received little to no data-security training, including training on their responsibilities when handling and protecting the security of consumers’ personal information.  In 2012, an independent auditor conducted a penetration test of Dwolla’s systems, which included a spear phishing email attack.

The Consent Order described the penetration test:

In December 2012, [Dwolla] hired a third-party auditor to perform the first penetration test of [Dwolla’s systems].  In that test, a phishing e-mail attack was distributed to [Dwolla’s] employees that contained a suspicious URL link.  Nearly half of [Dwolla’s] employees opened the e-mail, and of those, 62% of employees clicked on the URL link.  Of those that clicked the link, 25% of employees further attempted to register on the phishing site and provided a username and password.

These results are disturbing and underscore both the reality of insider threats and the importance of employee training.  Despite the poor test results, the CFPB found that “Dwolla failed to address the results of this test or educate its personnel about the dangers of phishing.”  Then, “Dwolla did not conduct its first mandatory employee data-security training until mid-2014.”
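The compounding effect of those percentages is worth spelling out. A quick back-of-the-envelope calculation, assuming a hypothetical headcount of 100 (the Consent Order reports only percentages), shows how “nearly half opened the e-mail” cascades into credential compromise:

```python
# Phishing funnel from the Dwolla penetration test (Consent Order figures).
# The headcount of 100 is a hypothetical; the Order gives only percentages.
employees = 100

opened = employees * 0.50      # "nearly half" opened the e-mail
clicked = opened * 0.62        # 62% of those who opened clicked the link
registered = clicked * 0.25    # 25% of clickers submitted credentials

print(f"Opened: {opened:.0f}, clicked: {clicked:.0f}, "
      f"submitted credentials: {registered:.2f}")
# Roughly 7 to 8 employees out of every 100 handed over a username
# and password -- more than enough footholds for an attacker.
```

Put differently, about 7.75% of the entire workforce surrendered credentials to a single simulated attack, which is why the CFPB’s finding that Dwolla “failed to address the results of this test” is so striking.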

Moreover, according to the Consent Order, despite industry standards requiring encryption of sensitive data, Dwolla did the following:

In numerous instances, [Dwolla] stored, transmitted, or caused to be transmitted the following consumer personal information without encrypting that data:

  1. First and last names;
  2. Mailing addresses;
  3. Dwolla 4-digit PINs;
  4. Social Security numbers;
  5. Bank account information; and
  6. Digital images of driver’s licenses, Social Security cards and utility bills.

It is further stated that Dwolla “also encouraged consumers to submit sensitive information via e-mail in clear text, including Social Security numbers and scans of driver’s licenses, utility bills, and passports, in order to expedite the registration process for new users.”

These are significant missteps by an organization in the business of handling confidential information and financial transactions.

Another lesson present in the Dwolla case is that the CFPB’s aggressiveness mandates proactivity on the part of organizations to establish, implement, and adhere to sound information security practices.  For its foibles (and bearing in mind that there is no indication a single consumer was harmed), the CFPB fined Dwolla $100,000.00 and also required Dwolla, among other things, to “establish, implement, and maintain a written, comprehensive data-security plan … reasonably designed to protect the confidentiality, integrity, and availability of sensitive consumer information.”  The CFPB further required Dwolla to “designate a qualified person to coordinate and be accountable for the data-security program.”  Additionally, Dwolla must “conduct data-security risk assessments twice annually” as well as “conduct regular, mandatory employee training on a) the Company’s data-security policies and procedures; b) the safe handling of consumers’ sensitive personal information; and c) secure software design, development and testing.”  These broad sanctions will ultimately cost Dwolla much more than an information security plan would have prior to the enforcement action, without even taking into account the damage to its reputation, loss of productivity, and cost of legal defense.

In response to the Consent Order, CFPB Director Richard Cordray issued the following statement: “With data breaches becoming commonplace and more consumers using these online payment systems, the risk to consumers is growing.” He further commented, “[i]t is crucial that companies put systems in place to protect this information and accurately inform consumers about their data security practices.”

The CFPB is sending a clear message that it intends to hold financial services organizations accountable for their representations about data security practices, regardless of breaches or actual harm to customers.   If the CFPB finds an organization’s data security lacking, or its representations too grandiose, it will levy heavy sanctions, including monetary fines, compulsory implementation of a defined information security program, and mandatory ongoing oversight.

It makes sense that organizations should heed the lessons of Dwolla, and proactively institute sound information security policies and procedures.  Many, if not most, organizations are governed by federal and state information security and privacy laws.  These laws apply regardless of whether the organization realizes it or not.  Organizations need the assistance of knowledgeable and skilled information security and privacy attorneys and other experts to help them navigate the regulatory minefields and develop and implement best practices in their organizations.