Guidance on Disclosures to First Responders During COVID-19


The Office for Civil Rights (OCR), the HIPAA enforcement arm of the U.S. Department of Health and Human Services (HHS), issued guidance today on how entities subject to HIPAA (covered entities) may disclose protected health information (PHI) about an individual who has been exposed to COVID-19 to law enforcement, paramedics, other first responders, and public health authorities in compliance with the HIPAA Privacy Rule.

In its guidance, OCR explains the circumstances under which a covered entity may disclose PHI, such as the name or other identifying information about individuals, without their HIPAA authorization, and provides examples including:

· When needed to provide treatment;

· When required by law;

· When first responders may be at risk for an infection; and

· When disclosure is necessary to prevent or lessen a serious and imminent threat.

Today, OCR clarified the regulatory permissions that a covered entity may use to disclose PHI to first responders and others so that they can take necessary precautions or use personal protective equipment. OCR is also careful to remind all covered entities to take reasonable steps to limit the PHI used or disclosed to the “minimum necessary” to accomplish the purpose of the disclosure, which is frankly a good recommendation for all PHI-related disclosures, pandemic or not. Even though these are extraordinary times, we must be sure to protect one another’s privacy while also striving to protect the health of our first responders during this crisis. OCR is careful to strike that balance in today’s guidance.

Clients and friends can find the guidance here: https://www.hhs.gov/sites/default/files/covid-19-hipaa-and-first-responders-508.pdf

Stay safe and healthy!


If you need further information, contact us here.

Is Ohio Getting Its Cybersecurity Act Together?


When state senators Bob Hackett and Kevin Bacon introduced Senate Bill 220, I for one felt a sense of relief that Ohio would finally take much-needed action on the issue of cybersecurity. The bill is far from perfect, but it is a START toward what will hopefully become meaningful, comprehensive cybersecurity legislation.

What does the bill accomplish? It incentivizes Ohio companies to adopt a risk-based framework by providing a “safe harbor”, which is an “affirmative defense”, to tort claims arising out of data breaches caused by third-party malefactors.  The bill indicates that all covered entities (any Ohio business that “…accesses, maintains, communicates, or handles personal information”, or, essentially, all Ohio companies) may seek a safe harbor under the law provided the company has a “written cybersecurity program that contains administrative, technical, and physical safeguards for the protection of personal information” that complies with the NIST Cybersecurity Framework or another industry cybersecurity framework (such as the Center for Internet Security Critical Security Controls or ISO 27000).

For health care entities complying with the Health Insurance Portability and Accountability Act (HIPAA), banks and other financial institutions complying with the Gramm-Leach-Bliley Act (GLBA) and government contractors complying with the Federal Information Security Modernization Act (FISMA), the bill allows for a safe harbor for those entities who have developed their own frameworks to comply with industry regulations.

The bill requires that the written cybersecurity programs of covered entities seeking safe harbor be designed to do the following:

(1) Protect the security and confidentiality of personal information;

(2) Protect against any anticipated threats or hazards to the security or integrity of personal information; and

(3) Protect against unauthorized access to and acquisition of personal information that is likely to result in a material risk of identity theft or other fraud to the individual to whom the information relates.

The bill takes into consideration that not all entities have the same security challenges, acknowledging that a covered entity’s cybersecurity program may take into account the following:

(1) The size and complexity of the covered entity;

(2) The nature and scope of the activities of the covered entity;

(3) The sensitivity of the personal information to be protected;

(4) The cost and availability of tools to improve information security and reduce vulnerabilities; and

(5) The resources available to the covered entity.

Now for the rub.

For a covered entity to successfully assert the affirmative defense afforded by the bill, it must demonstrate “substantial compliance” with its chosen risk-based framework, or with HIPAA, GLBA, or whatever regulatory rubric applies to it.  To a lawyer, the term “substantial compliance” automatically means “litigable issue.”  What does “substantial” mean?  It is wholly subjective, and it may take years for Ohio courts, if they ever do, to develop a case-law definition.  From a cybersecurity standpoint, we do not have years to shore up Ohio’s networks.

I guess what I’m really driving at is that Ohio needs a law with more teeth in it. How about a law that simply mandates that you have a written cybersecurity program and follow a risk-based framework if you maintain sensitive personal information as part of your business?  Operators in health care, banking, and any publicly traded company understand such a mandate. Entities that do not obey the law would be held accountable on the basis of negligence per se in the event they sustain a breach without a risk-based framework in place. Litigation will result either way.  A clear mandate would bring more clarity to questions of liability, and presumably more businesses would adopt a risk-based framework in the face of a mandate.

In the end, isn’t this more about security than liability?

Ickes Holt Featured Speakers at All Ohio Counselors Conference


Ickes Holt had the pleasure of presenting a seminar at the All Ohio Counselors Conference (“AOCC”) in Columbus. From the conference’s website:

Supported by the Ohio Counseling Association (OCA) and the Ohio School Counselors Association (OSCA), the All Ohio Counselors Conference is the leading professional development conference in the state of Ohio for licensed counselors, counseling students, supervisors, and counselor educators who work in a clinical/community, school, college, addiction, private practice, or other related setting.

Ickes Holt presented a 90-minute seminar on Information Security and Privacy for Mental Health Professionals. The seminar focused on educating mental health professionals about the Privacy and Security Rules of HIPAA and associated regulations, as well as external and internal threats, ethical handling of subpoenas, and how to accept and bear the legal and ethical obligations imposed by HIPAA. It also covered implementing a written information security program and best practices regarding psychotherapy notes and addressable items, such as encryption.

Ickes Holt believes that mental health professionals remain under-educated and under-served when it comes to information security and privacy law. A unique sector of healthcare, mental health professionals maintain some of the most important and sensitive information possible about their patients. Further, the patient-counselor relationship is grounded in confidentiality and trust; it is imperative that patients trust their counselor so they can receive the help they need. Finally, mental health professionals keenly understand the sensitive nature of the patient-counselor relationship and greatly value confidentiality.

For these reasons, Ickes Holt is committed to helping mental health professionals maintain and safeguard patient privacy. If your mental health care practice has questions about HIPAA, information security, patient privacy, or issues regarding lawful disclosures of patient information, please feel free to contact us today. We are happy to help.

Regarding Privacy, Ohio Sets a High Bar for Medical Marijuana


Over the last few years, agencies such as the Federal Trade Commission have fostered a movement to encourage industry to implement the concept of privacy-by-design.  The idea behind privacy-by-design is that when developing new software, hardware, medical devices, or other products that collect personal information, such as personally identifiable information (PII), health care information, or geo-tracking data, the manufacturer should consider privacy in the product’s design.

The European Union has historically been very aggressive on privacy matters and recently mandated privacy-by-design in its new General Data Protection Regulation (GDPR), which becomes enforceable in May 2018. The GDPR will require companies not only to design compliant privacy policies, procedures, and systems at the outset of any product or process development, but also to employ a data protection officer to ensure compliance.

Although the US has industry-specific regulations for healthcare (HIPAA) and banking (GLBA) that require organizations to address privacy and security, and the Securities and Exchange Commission requires auditing and reporting of controls associated with information security and cybersecurity, until now there has been no legislative rubric mandating privacy-by-design.

Recently, the Ohio Medical Marijuana Control Program (OMMCP) created mandates for privacy and information security that are among the strictest in the country.

The long and short of it is that all medical marijuana industry participants (cultivators, processors, dispensaries, and testing facilities) that use an “electronic system” for storing and retrieving records required by the regulations or related to medical marijuana in any way (including all patient data for dispensaries) shall implement a system that does the following:

  • Guarantees the confidentiality of the information stored in the system (emphasis added);
  • Is capable of providing safeguards against erasures and unauthorized changes in data after the information has been entered and verified;
  • Is capable of placing a litigation hold or enforcing a records retention hold for purposes of conducting an investigation or pursuant to ongoing litigation; and
  • Is capable of being reconstructed in the event of a computer malfunction or accident resulting in the destruction of the data bank.
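
Purely as an illustration of the second requirement (and not anything prescribed by the OMMCP rules), a hash-chained audit log is one common way to make erasures and unauthorized changes detectable after data has been entered and verified: altering any entry breaks every hash that follows it. A minimal sketch in Python:

    import hashlib
    import json
    import time

    class ChainedAuditLog:
        """Append-only log; each entry embeds the hash of the prior entry,
        so any after-the-fact erasure or edit breaks the chain."""

        def __init__(self):
            self.entries = []
            self._last_hash = "0" * 64  # genesis value

        def append(self, record: dict) -> None:
            entry = {"ts": time.time(), "record": record,
                     "prev_hash": self._last_hash}
            digest = hashlib.sha256(
                json.dumps(entry, sort_keys=True).encode()).hexdigest()
            entry["hash"] = digest
            self.entries.append(entry)
            self._last_hash = digest

        def verify(self) -> bool:
            """Recompute every hash; False means an entry was altered."""
            prev = "0" * 64
            for e in self.entries:
                body = {k: e[k] for k in ("ts", "record", "prev_hash")}
                recomputed = hashlib.sha256(
                    json.dumps(body, sort_keys=True).encode()).hexdigest()
                if e["prev_hash"] != prev or recomputed != e["hash"]:
                    return False
                prev = e["hash"]
            return True

Note that a mechanism like this addresses integrity and auditability, not confidentiality, which is where the real trouble begins.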

One requirement on the list clearly stands out.  If medical marijuana businesses use a computer to store medical marijuana-related data (which will be most, if not all, of their data), the system must be capable of guaranteeing the confidentiality of that data. In other words, the Ohio medical marijuana industry must guarantee patient privacy and the security of its data systems.

The result is an entirely new, state-based industry which legally must be designed with privacy and security in mind.  Personally, I believe that guaranteed confidentiality is impossible, and any cybersecurity, physical security, or privacy professional worth their salt will tell you “there is no such thing as perfect security.”  In fact, most, if not all, federal and state privacy and information security laws require reasonable security, a standard which itself is continually evolving in the law. Consequently, I also believe that the required guarantee will ultimately be amended, compelled by litigation, lobbying efforts, or both, and that Ohio’s medical marijuana regulations will move toward a standard more akin to “reasonable security”.

However, I have resolved that this ridiculously high standard will be a good thing for the Ohio medical marijuana industry. It will make the entire industry put privacy, information security, and data protection on the short list of organizational imperatives.  An organization simply cannot ignore a regulation that requires a guarantee of confidentiality.  These fledgling companies must hardwire privacy and security into their businesses from the very start. Here are a few suggestions:

  1. Most privacy breaches are the result of human error. Develop a 21st-century information governance program composed of policies and procedures that clearly articulate how information will be handled within the organization.
  2. Regularly train all members of the organization on privacy and information and physical security. Training can be done in group settings or one-on-one, online, or in person. There are many privacy and security training options and most are not cost prohibitive.
  3. Document all your privacy and security incidents and all corrective measures taken.
  4. Engage legal counsel. Yes, I am an information security and privacy attorney who wants to help medical marijuana companies. Yes, I am self-interested. However, my self-interest doesn’t change the fact that attorneys can provide virtually ironclad confidentiality related to client information under certain circumstances, particularly in anticipation of litigation or prosecution. With cannabis currently illegal on a federal level, wouldn’t all Ohio medical marijuana business be conducted under the threat of federal prosecution?

With the OMMCP taking such a bold stance on privacy and security, it will be interesting to see whether such rigorous requirements will be a help or a hindrance to the industry. And wouldn’t it be a sweet twist of fate if an industry imperiled by the stigma of the black market and “reefer madness” became a sterling example of privacy and security in the modern age? It is our goal at Ickes\Holt to see that happen.

Stay tuned for our upcoming article on the privacy and information security requirements for Ohio medical marijuana dispensaries, which must be prepared to comply with the Health Insurance Portability and Accountability Act of 1996 (HIPAA) and the Ohio Automated Rx Reporting System (OARRS), along with a whole host of particularized recordkeeping and reporting requirements.

Wireless Routers – Buyer Beware


I imagine by now most households have at least one wireless router.  Heck, my mom and dad have one, and they are 70-year-olds in rural Wisconsin.  While Wi-Fi greatly increases convenience in the modern world, consumers should be aware that setting up a wireless router (or other device) straight out of the box using factory settings poses security risks.

In February 2016, ASUS settled charges with the Federal Trade Commission (“FTC”) stemming from “critical security flaws in its routers [which] put the home networks of hundreds of thousands of consumers at risk.” ASUS touted its routers as including numerous security features that the company claimed could “protect computers from any unauthorized access, hacking, and virus attacks” and “protect [the] local network against attacks from hackers.” Instead of the robust security measures advertised, the routers allegedly contained “pervasive security bugs” in the web-based control panel that allowed attackers to “change any of the router’s security settings without the consumer’s knowledge” and, more egregiously, the routers shipped with the “same default login credentials on every router: username ‘admin’ and password ‘admin’.”

On January 5, 2017, the FTC issued a release announcing it had filed charges against another prominent wireless router manufacturer, D-Link, based on poor default security measures in its routers and webcams.  According to the complaint, the security flaws exposed consumers to the hacking of confidential information and live video feeds.  The primary security flaws alleged by the FTC include default “hard-coded” login credentials such as “guest”; public exposure of private key codes; and leaving users’ login credentials for D-Link’s mobile app unsecured in clear, readable text on their mobile devices.

The security flaws alleged against ASUS and D-Link are significant and, frankly, egregious.  First, there is no way that these default login credentials constitute “reasonable” security, let alone “robust” security.  Second, there is absolutely no reason that manufacturers cannot ship their products with more secure, randomly generated 12-14 character login credentials.  Why ASUS and D-Link (allegedly) did not defies comprehension.

Now, the common consumer reaction is that security is not all that important because “I won’t be hacked.”  The first flaw in that reasoning is that the underlying premise is not true.  There is no reason to think that you will not be hacked.  As discussed in the FTC’s complaint, hacking a router provides hackers with multiple avenues of unauthorized use.  For example, they can: (1) obtain documents and information from the router’s onboard storage; (2) redirect users to fraudulent websites; and (3) attack or compromise devices connected to the Wi-Fi network.

Secondly, hacks don’t always involve stealing passwords and identity theft. As demonstrated in 2016, hackers will infiltrate systems to commandeer connected devices as part of a larger agenda to attack specific targets.  The unprecedented distributed denial of service (“DDoS”) attacks against the website of security analyst Brian Krebs and Internet performance management company Dyn in October 2016 were conducted by use of the Mirai botnet.  In these attacks, Mirai enslaved hundreds of thousands of connected devices by scanning the Internet for devices with vulnerabilities and then infecting them with malware.

Thirdly, once a hacker gets into a consumer’s home network, they have an open door to come back any time they like.  So, even if a hacker does not steal any consumer information the first time, that does not mean they won’t steal information the next time.  Most consumers will be completely unaware that their network has been compromised.

Whether to protect their own information and privacy, or whether to avoid being used by a malefactor for other nefarious purposes, consumers must be aware of potential security flaws with their wireless routers, webcams, and other connected devices.  At the very least, consumers should change all login credentials for routers and connected devices to secure, randomly generated 12-14 character usernames and passwords.
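
For the technically inclined, generating such credentials takes only a few lines. Here is a minimal sketch using Python’s standard secrets module, with the length and character set simply tracking the 12-14 character suggestion above:

    import secrets
    import string

    def generate_credential(length: int = 14) -> str:
        """Return a cryptographically random credential of the given length."""
        alphabet = string.ascii_letters + string.digits + "!@#$%^&*-_"
        return "".join(secrets.choice(alphabet) for _ in range(length))

    print(generate_credential())  # e.g., a new router username
    print(generate_credential())  # e.g., a new router password

A password manager accomplishes the same thing without writing any code.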

https://www.ftc.gov/news-events/press-releases/2016/02/asus-settles-ftc-charges-insecure-home-routers-cloud-services-put

https://www.ftc.gov/news-events/press-releases/2017/01/ftc-charges-d-link-put-consumers-privacy-risk-due-inadequate

Cybersecurity in America’s Dairyland


On November 7, 2016, Ickes Holt attorneys Jim Ickes and Joel Holt journeyed to Joel’s home state of Wisconsin to record a webinar for the National Business Institute (http://www.nbi-sems.com/Home.aspx) entitled “Cybersecurity: the Ultimate Guide.”

Suffice it to say, the name and scope of the seminar presented quite a challenge.  However, while neither Jim nor Joel would willingly label the presentation as the “ultimate guide”, they are proud of how it turned out.  The seminar is a robust, comprehensive, and rapid-fire excursion through the most prominent federal information security and cybersecurity laws, including HIPAA, GLBA, and Section 5 of the FTC Act.  The seminar also touches on state laws, threat vectors, emerging technology, pending legislation, and the role of counsel in the information security space, including regulatory compliance and advocating an organizational culture of security.  It is a lot of information crammed into three hours.

Most importantly, Jim and Joel produced what has to be the granddaddy of all webinar written materials: a 70-page treatise on their topics.  Again, while trying to maintain some semblance of humility, Jim and Joel are quite proud of this document and are confident readers will come away with a good working knowledge of the topics covered.

Finally, Jim and Joel had the privilege of working with panelists Joe Carney and Victoria Ferrise, both of whom are attorneys at Brennan, Manna & Diamond, LLC, in Akron, Ohio, as well as Jeff Grady, Director of Security & Compliance at Three Pillars Technology in Madison, Wisconsin.  The result is six hours of informative, and hopefully not too monotonic, CLE credit.  If you are interested in participating in the seminar, it will broadcast nationally on Nov. 30, 2016.  Please check out this link:  Cybersecurity: the Ultimate Guide.

If you have questions about information security, feel free to contact us.  We love to talk about this stuff.

Encryption Prescription


Regardless of the actual legitimacy of the 2016 HIMSS Cybersecurity Survey (the “HIMSS Study”), it raises an important discussion point regarding encryption. So, with due respect to the pundits advocating caution, I will presume it to be reliable.  When viewed as reliable, the HIMSS Study presents compelling statistics with immediate impact on the healthcare industry.

The Numbers Regarding Encryption.

According to the HIMSS Study, approximately 32% of hospitals and 52% of non-acute providers do not encrypt data in transit.  Further, 39% of acute providers and 52% of non-acute providers do not encrypt data at rest.   The overarching gist of the HIMSS Study is that a significant percentage of healthcare organizations (“HCOs”) do not encrypt data, either at rest or in transit.  But what’s the big deal?

The Rules Regarding Encryption.

HIPAA does not necessarily require encryption.  However, encryption is an addressable implementation specification.  See 45 CFR 164.312(a)(2)(iv).   Importantly, “addressable” does not mean “optional.”  Instead, “addressable” means that a covered entity must “[i]mplement the implementation specification if reasonable and appropriate” under the circumstances for that covered entity.  See 45 CFR 164.306(d)(3).  If a covered entity determines that an addressable item is not reasonable and appropriate, it must document why and implement an equivalent measure, if the substitute measure is reasonable and appropriate.  Clearly, if encryption is reasonable and appropriate for a covered entity, failure to implement encryption violates HIPAA’s Security Rule.  Thus, the operative question is whether encryption is reasonable and appropriate.

In 2016, encryption tools are readily available and there is no excuse for failing to encrypt data at rest.   For example, Windows includes BitLocker Drive Encryption onboard.  Further, there are numerous affordable encryption options for Windows.[v]   Mac offers FileVault 2 encryption standard with OS X.  FileVault 2 encrypts not only the hard drive, but removable drives as well.  FileVault is a respectably robust encryption tool, especially for individuals or small businesses.  Mac users also have additional options for encryption.[vi]
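
To give a sense of how accessible file-level encryption has become, here is a minimal sketch using Python and the widely used third-party cryptography package (the record is invented, and this is an illustration, not a compliance recommendation):

    from cryptography.fernet import Fernet  # pip install cryptography

    # Generate the key once and store it apart from the data
    # (e.g., in a key manager, never alongside the ciphertext).
    key = Fernet.generate_key()
    f = Fernet(key)

    # Encrypt a record before writing it to disk ...
    ciphertext = f.encrypt(b"patient: Jane Doe, DOB 01/01/1970")

    # ... and decrypt it only when an authorized process needs it.
    plaintext = f.decrypt(ciphertext)
    assert plaintext == b"patient: Jane Doe, DOB 01/01/1970"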

Data in transit is a bit more technical.  I do not claim to be a CISSP; my knowledge base is in the law, not hardware and software.  So, for purposes of this article, let’s just consider that “data in transit” entails methods with which we are all familiar: email, fax, and text.  All of these transmissions may be encrypted by employing various programs, services, and technology, many of which are readily available and affordable.
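
As a flavor of what encrypting data in transit can look like, here is a sketch of sending email over a TLS-encrypted channel using only Python’s standard library (the server, addresses, and credentials are placeholders):

    import smtplib
    import ssl
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "clinic@example.com"        # placeholder addresses
    msg["To"] = "specialist@example.com"
    msg["Subject"] = "Referral"
    msg.set_content("Referral letter to follow (no PHI in this demo).")

    context = ssl.create_default_context()    # verifies the server certificate

    # STARTTLS upgrades the connection to an encrypted channel before
    # credentials or message content cross the wire.
    with smtplib.SMTP("smtp.example.com", 587) as server:
        server.starttls(context=context)
        server.login("clinic@example.com", "app-password")  # placeholder
        server.send_message(msg)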

People will undoubtedly argue about the viability of, and protection afforded by, these encryption tools.  For example, you can Google numerous articles discussing the security flaws in FileVault 2 and BitLocker.  Encryption options for faxing and texting usually fare no better.

The good news is that HIPAA does not demand that the encryption WORK – but only that covered entities “[i]mplement a mechanism to encrypt and decrypt” ePHI.  See 45 CFR 164.312(a)(2)(iv).   HIPAA defines encryption as “the use of an algorithmic process to transform data into a form in which there is a low probability of assigning meaning without use of a confidential process or key.”  See 45 CFR 164.304.  So, the mere fact that a covered entity implements encryption methods meeting technical requirements[vii] satisfies HIPAA’s basic requirement.  Of course, covered entities must also keep safeguards up to date and monitor overall effectiveness in protecting information assets.

Finally, it should be stated that encrypting data relieves a covered entity from data breach notification requirements in many states, including Ohio.  In Ohio, data breaches exposing “personal information” must, under certain circumstances, be reported to the individuals.  See R.C. 1349.19(B)(1).  Information is only “personal information” “when the data elements are not encrypted, redacted, or altered by any method or technology[.]”  R.C. 1349.19(A)(7)(a).

In closing, it is arguable that encryption is currently reasonable and appropriate for 100% of covered entities.  Under that postulation, then, according to the HIMSS Study, between 32% and 52% of HCOs are violating HIPAA and perhaps do not even realize they are doing so.  While HIPAA’s Privacy and Security Rules go far beyond encryption, perhaps it is a good, objective starting point for covered entities.  Stakeholders in covered entities (and business associates) should ask:

  • Do we store data? If so, do we encrypt that data?

  • Do we transmit data? If so, how?  Email, fax, or text?

  • Do we encrypt the data we transmit? How?

  • Is encryption reasonable and appropriate for our organization?

  • If not, do we have the justifications documented?

Based on this self-analysis, covered entities should contact an information security lawyer to help them: (1) conduct a thorough and confidential analysis of existing information security policies and procedures; and (2) develop and implement an information security regimen tailored to foster an organizational culture of security.

[i] http://www.itworld.com/article/3110506/healthcare-it/many-hospitals-transmit-your-health-records-unencrypted.html

[ii] Outpatient clinics, rehabilitation facilities, and physicians’ offices.  See note iv, infra.

[iii] 2016 HIMSS Cybersecurity Survey, available at: http://www.himss.org/sites/himssorg/files/2016-cybersecurity-report.pdf

[iv] For example, the HIMSS Study was sponsored by FairWarning.  FairWarning is a provider of information security services and has a considerable market in … you guessed it … the healthcare industry.  Sure, it seems convenient that a study exposing a lack of information security in healthcare is sponsored by a seller of information security to healthcare. In fact, the lawyer in me demands the injection of a healthy dose of skepticism.

However, in fairness, as an information security attorney, I could be accused of the same sort of fear-mongering designed to scare people into hiring me.  But I know this to be patently untrue.  No reasonable person would consider identification of critical issues and application of sound legal advice to mitigate those issues as “fear-mongering.”  It is no different than advising a business owner to incorporate to avoid the risk of exposing personal assets to creditors.  So, because I know my motives are pure, I am inclined to extend the benefit of the doubt to others.

[v] http://www.toptenreviews.com/software/security/best-encryption-software/

[vi] http://www.toptenreviews.com/software/security/best-mac-encryption-software/

[vii] HHS has issued guidance on encryption standards, namely referring to NIST guidelines.  For example, encryption for data at rest must be consistent with NIST Special Publication 800-111.  Encryption for data in transit must comply with other specifications, including NIST Special Publication 800-52, among others.


What’s App-Ening to Your Financial Data?

[Screenshot: Venmo’s public feed of users’ financial transactions]

Recently, a friend asked me to pay him back for movie tickets via Venmo.  For those of you born before 1985, Venmo is a mobile app owned by PayPal which allows users to “[p]ay anyone with a Venmo account instantly using money you have in Venmo, or link your bank account or debit card quickly.” Simply, instead of “divvying up the check”, people can now electronically transfer funds back and forth through Venmo, using Venmo “wallets” or a direct link to their bank.  Suffice it to say, I refused.  While we joked about my age, “youngsters and their ‘future money’”, and “financial black magic”, my refusal was not based in age, fear, or lack of understanding.  Instead, it was based on an informed and objective analysis of the interaction of mobile apps and security.

Well, it appears my fears were well founded.  According to PayPal’s first-quarter 2016 report to the SEC, PayPal admitted that it was under investigation by the Federal Trade Commission (“FTC”) for unfair or deceptive acts and practices related to Venmo.[i]  While PayPal does not elaborate on the nature of the investigation, it seems apparent that the FTC’s investigation is focused on a host of privacy violations.

In March 2016, the parties filed an “Assurance of Voluntary Compliance” (the “Assurance”) in In the Matter of State of Texas and PayPal, Inc. (the “PayPal Litigation”).  The PayPal Litigation derived from an investigation of PayPal by the Texas Attorney General for potential violations of Texas’ deceptive trade practices and consumer protection law.  The Assurance lays out a litany of privacy violations concerning Venmo, most notably:

  1. Auto-friending, which permits Venmo to access and assimilate a user’s contact list in order to add those contacts to the user’s Venmo Friends list, all without a deliberate action by the user or adequate choice. It appears that Venmo was also accessing users’ contacts lists without any real privacy notice.  See Assurance, ¶6(A)(i).
  2. Potential misrepresentations about the level of security provided by Venmo. See Assurance, ¶6(B).
  3. Venmo’s default “audience setting” is public, which publishes a “timeline” of your Venmo financial transactions. This setting can be changed to private, but according to the Assurance, this is not commonly known, and Venmo doesn’t exactly make it easy to accomplish.[ii]  See Assurance ¶6(C) (“At the time of … any transaction, [Venmo] shall clearly and conspicuously disclose the audience setting for the transaction in close proximity beneath, beside, or adjacent to any field … or call to action.”).

If you look closely at the screenshot above, you will see how Venmo creates a crawling “ticker” of your financial transactions.   Think of a Twitter feed, but the updates are your financial transactions using Venmo.

Based on the PayPal Litigation and the Assurance, it seems a pretty safe bet that the FTC investigation of PayPal/Venmo lands smack dab in the wheelhouse of Section 5 of the FTC Act.

The three violations asserted in the PayPal Litigation are serious, especially considering the apparent lack of notice provided to Venmo users about the app’s information sharing practices.  However, I have a couple of other concerns about Venmo that were not addressed by the Assurance: one practical and one policy.

First, the practical. Signing in with Google or Facebook accounts has become very popular.  After all, it’s easy, right?   Venmo advertises this feature on its website.  See https://venmo.com/.  But have you ever stopped to consider HOW Venmo is able to create an account for you and log you in by using your Facebook account?  Or is it just yet another mystical Internet transaction that doesn’t concern you?

In order for Venmo to log you in using Facebook, an authorization process called “OAuth” must occur.  Now, OAuth is, by all accounts, a pretty decent way to do this.  OAuth creates “tokens” which allow the third-party app to access your Facebook account and do the things you have allowed it to do.[iii]
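
For the curious, here is roughly what that token step looks like under the hood: a generic OAuth 2.0 authorization-code exchange sketched in Python. The endpoint, client ID, and secret are placeholders, not Facebook’s or Venmo’s actual values.

    import requests  # pip install requests

    TOKEN_URL = "https://provider.example.com/oauth/token"  # placeholder
    CLIENT_ID = "your-app-id"                               # placeholder
    CLIENT_SECRET = "your-app-secret"                       # placeholder

    def exchange_code_for_token(auth_code: str, redirect_uri: str) -> str:
        """Trade the short-lived authorization code (issued after the user
        consents) for an access token limited to the granted permissions."""
        resp = requests.post(TOKEN_URL, data={
            "grant_type": "authorization_code",
            "code": auth_code,
            "redirect_uri": redirect_uri,
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
        })
        resp.raise_for_status()
        return resp.json()["access_token"]

The token, not your Facebook password, is what the app ends up holding, which is why the permissions attached to it matter so much.  However, some services don’t exactly tell you what permissions you are giving away, or instead bury them in hard-to-find-and-harder-to-understand privacy notices.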

For example, the first time anybody sees Venmo’s privacy notice is after they’ve chosen to start the Facebook login process.  Further, notice the tiny “privacy policy” link in the bottom left-hand corner.  Like most privacy notices, it is not clear and conspicuous.  However, if one bothers to read the privacy notice, one will discover that Venmo collects the following information from its users:

  • Account Information – text-enabled cellular/wireless telephone number, machine or mobile device ID and other similar information.
  • Identification Information – your name, street address, email address, date of birth, and SSN.
  • Device Information.
  • Social Media Information.
  • Financial Information – bank account and routing numbers and credit cards linked to your Venmo account.

Finally, Venmo includes the incredible caveat that it “may collect additional information from or about you in other ways not specifically described here.”  That stipulation conveniently seems to defeat the entire purpose of a privacy notice.  But that is another topic for another day.

Back to the issue at hand.  It seems insane to sign into Venmo using Facebook.  The whole point of Venmo is that it is a financial app with a direct link to your bank account or credit card information.  While Venmo makes it very clear that it “does not share financial information with third party social networking services”, it is not hard to imagine that a hacker infiltrating Facebook could somehow “back-door” into Venmo and, thus, users’ financial information.

What’s more, Facebook had an epic security breach in 2013 in which 6 million users’ information was compromised.  Facebook is one of the largest social media platforms and a high-profile target for hackers.  With all due respect, this layman will presume that logging into Venmo with my Facebook account could potentially expose my financial information.

Now the policy concern.  Venmo illustrates one of the barriers to comprehensive federal cybersecurity legislation: the allocation of risk.  This struggle has occurred across sectors, but is most evident between retail and banking/financial.  And, I believe, with good reason.

An app like Venmo needlessly puts users’ financial information at risk, and banks will ultimately be the ones left holding the proverbial bag should Venmo get hacked and that financial information be used to infiltrate the banks’ networks.  If a bank is compromised through information obtained in a Venmo hack (think Target and Fazio, as I previously wrote about: https://informationsecurity.attorney/2016/03/20/information-security-and-privacy-round-up-memphis-neurology-fazio-mechanical/#more-133), then the bank, through no real fault of its own, will be subject to regulatory action and perhaps even civil liability.

Quite legitimately, we are talking about the potential exposure of: (1) Venmo users; (2) their banks; (3) their credit card companies; and (4) all of the OTHER customers of the banks and credit card companies. We are also talking about legal consequences for the banks and credit card companies for the disclosure. From a legal and policy perspective, it is problematic that the fate of a regulated entity may be so significantly intertwined with and affected by the security of an unregulated entity.

It’s no wonder that the banking and financial industry supports federal data security and breach notification standards.  Banks are subject to heightened standards and are exposed when an unregulated entity fails to take security seriously. In fact, according to an industry spokesperson: “Financial institutions have had this obligation for 15 years, and it’s long overdue for Congress to pass legislation ensuring that everyone has a similar mandate to keep customer data safe.”[iv]  Translation:  banks are mad as hell.

The moral of the story is that, until everyone is regulated, consumers have to be careful.  While the FTC does have jurisdiction over interstate commerce, it is limited to investigating unfair and deceptive trade practices.  A strong information security regulatory framework with a private right of action would go a long way toward ensuring that all entities collecting personal information have sufficient security.

Call me old and out of touch.  Call me a curmudgeon.  Mock my puritanical sensibilities.  I don’t care.  There is no chance that I will ever divvy up the bar bill using Venmo.

[i] “On March 28, 2016, we received a Civil Investigative Demand (‘CID’) from the Federal Trade Commission (‘FTC’) as part of its investigation to determine whether we, through our Venmo service, have been or are engaged in deceptive or unfair practices in violation of the Federal Trade Commission Act.”
[ii] Jeff John Roberts, Venmo Likely Investigated Over User Privacy Violations, May 24, 2016, available at Fortune.com.
[iii] http://lifehacker.com/5918086/understanding-oauth-what-happens-when-you-log-into-a-site-with-google-twitter-or-facebook
[iv] http://thehill.com/policy/cybersecurity/280905-financial-industry-spars-with-retailers-over-data-breach-bill

Keep it Like a Secret


With the passage of the Defend Trade Secrets Act (DTSA), the federal government handed businesses a lethal new weapon to protect trade secrets in federal court. There should be champagne popping in boardrooms everywhere. Why, you ask?

Access to federal courts in and of itself is a major boon for businesses. Any seasoned litigator knows that in federal court, deadlines and dates are set quickly and are firm. Further, federal courts have more judges, more resources, and less cluttered dockets. Accordingly, federal litigation customarily moves at lightning speed compared to state court. Also, anecdotally speaking, the federal judiciary and its staff are the cream of the legal crop. Federal judges aren’t encumbered with running for re-election (as judges are in Ohio), so they take the time to understand complex legal issues and have the wherewithal to deal with them. Their staff attorneys usually enjoy digging into the meat of legal issues and complex fact patterns. This isn’t to say that state court judges and their staffs are substandard. Rather, state courts generally lack the resources and time to put together a stellar legal team to review your case. Thus, when dealing with complicated trade secret cases, federal courts will be a welcome arbiter for practitioners and clients alike.

But … there is always a but … if one seeks to enforce trade secret rights in federal court, one must bring one’s “A” game. A federal court will expect a party to be able to prove its case, and motion practice is more effective. To provide an example, the “trade secret” had better be a trade secret. At first blush, that seems an obvious statement. However, beneath the obvious is the point I am driving at: if you have a trade secret, you had better keep it like a secret. Let me explain.

The DTSA adopts the Economic Espionage Act’s (EEA) definition of a trade secret. According to the EEA, a trade secret is defined as follows:

“[A]ll forms and types of financial, business, scientific, technical, economic or engineering information, including patterns, plans, compilations, program devices, formulas, designs, prototypes, methods, techniques, processes, procedures, programs, or codes, whether tangible or intangible, and whether or how stored, compiled, or memorialized physically, electronically, graphically, photographically, or in writing.”

But that’s not all. To qualify as a trade secret, the owner must: (1) have “taken reasonable measures” to keep the information secret; and (2) “derive independent economic value, actual or potential, from not being generally known to, and not being readily ascertainable through proper means by, the public.” See 18 U.S.C. § 1839(3).

Reasonable measures? Sounds like another way of saying “gray area.” What are the federal courts likely to do with such a gray area? I predict the federal courts will paint it with the many-splendored colors of information security. Simply put, federal courts will look to set quantifiable standards for the reasonableness of measures taken to protect trade secrets. I believe the easiest method of judging the reasonableness of a plaintiff’s security measures is to view them through the lens of information security. Because many organizations still lack adequate information security, proving the existence of a trade secret in federal court will become increasingly problematic. Although scant at this juncture, federal case law seemingly bears this theory out.

In US v. Shiah, No. SA CR 06-92 DOC (C.D. Cal. Feb. 19, 2008), a case addressing trade secrets under the EEA, the defendant copied 4,700 computer files belonging to his employer, Broadcom, to an external hard drive shortly before leaving to start a new job with a competitor. In Shiah, the district court engaged in a lengthy discussion of the “reasonable measures” requirement. The measures taken by Broadcom to maintain the confidentiality of its secrets included confidentiality agreements signed by its employees that explained the value placed on confidentiality and attempted to indicate which documents were considered confidential. The confidentiality agreement also prohibited employees from taking confidential information with them upon their departure. The court noted Broadcom’s use of IT-managed firewalls, file transfer protocols, intrusion detection software, passwords to access the company’s intranet, a layer of protection between its intranet and the Internet, and selective storage of files. Broadcom further required non-disclosure agreements, tracked sharing through a program called DocSafe, and marked documents as confidential. Finally, Broadcom maintained a high security physical facility. Seems pretty good, huh?

Despite Broadcom’s security measures, the court found them “barely sufficient” to qualify as reasonable under the EEA. The court opined that Broadcom should have provided education, training, or guidance to employees regarding the information it considered confidential. The court stated that the training should have been “regular” and included methods for ensuring information remained protected. The court also noted that if Broadcom had a “comprehensive system in place designating which documents were and were not confidential”, it would have been easier for employees to identify confidential information. Regarding the confidentiality agreement signed by Shiah, the court stated that it was overly broad in designating nearly all information as confidential, making it difficult for employees to understand what information was actually confidential.

The court also criticized Broadcom’s off-boarding process with Shiah. The court indicated that Broadcom was overly concerned about “sending a message” as opposed to actually protecting its information. The court indicated that Broadcom should have had Shiah’s supervisor present to thoroughly explain the terms of the confidentiality agreement, identify the information the company determined to be particularly sensitive, and inquire as to what information he was taking with him. The court also stated that Broadcom should have taken steps to inspect Shiah’s computer to determine what information Shiah had accessed and when. The court indicated that had Broadcom simply inspected Shiah’s computer, it would have learned that Shiah copied thousands of files and would have been able to investigate immediately.

Lastly, what I find most interesting about the Shiah case is the court’s dicta regarding “reasonable measures”. The court presciently stated:

“The Court is also basing its determination on what would have been considered reasonable at the time, in 2003; the Court notes that the reasonableness standard will become more and more stringent as time passes. Over time, there will and have been improvements in technology, information, and knowledge pertaining to data secrecy[.]”

The conduct in Shiah happened 13 years ago. In terms of information technology, 13 years is an eternity. The threats to information are more formidable and pervasive than ever. Furthermore, with the development of various forums on the “Deep Web” and the rise of cryptocurrency, it has never been easier to sell information such as trade secrets to willing buyers, and to do so anonymously. A comprehensive information security regime that emphasizes trade secret management will be the best prescription for protection in this new age of federal trade secret litigation.

Read Menzies Aviation v. Wilcox, 978 F.Supp.2d 983 (D. Minn. 2013) for a more recent take on federal trade secret litigation as it relates to the consideration of “reasonable measures”. In Wilcox, the U.S. District Court for the District of Minnesota held that the trade secret owner failed to employ reasonable measures when the employer was aware that the subject employee used personal email and a personal computer for work matters and that much of the confidential information was shared with a third-party vendor.

Does the employer in Wilcox sound like your organization? Do you allow employees to access their personal email or use a work computer for personal matters? Conversely, do you allow employees to use personal computers or devices (phones or tablets) for work matters? Does your organization even meet the security standard set by Broadcom, which was ultimately determined to be barely sufficient? If your answers are yes, it seems that you may be unwittingly undermining your own ability to enforce your trade secret rights in federal court.


ICKES \ HOLT is a full-service, team-driven, and client-focused law firm in Northeast Ohio concentrating on information security and governance. Information is the DNA of modern organizations, and ICKES \ HOLT is dedicated to advising clients on how to protect their information. Please contact us to discuss establishing or improving the information governance policies for your organization.

The ADA’s Dental Debacle


Talk about the ever-changing world of information security and data privacy. Literally, something new, interesting, or terrible occurs daily.

The latest giant balloon in the “parade of horribles” is the American Dental Association (“ADA”) providing its members with a free, electronic copy of the 2016 Dental Procedure Codes – with one small catch.  The handy, searchable PDF was stored on malware-laced USB drives.  Whoops.

In other words: ransomware.  So, to recap: one benefit of a paid membership in the ADA is a potential malware infection.  According to Krebs on Security, “Mike” (presumably a dentist) was suspicious of the USB drive and took a look at the code.  Mike discovered that one of the files on the USB drive tried to open a well-known malware distribution website.  Apparently, this website “is used by crooks to infect visitors with malware that lets the attackers gain full control of the infected Windows computer.”

On the surface, the ADA’s idea is merely a bad idea.  If one looks deeper, however, there is a next-level disconnect about protecting PHI.  Think about it.  According to the ADA’s instructions, a covered entity is supposed to: (1) “flip out” a USB drive obtained in the mail; (2) “plug [it] into the USB port” on their computer; and (3) “open … the file on your computer.”  WHAT?   A dental office’s computer contains PHI (and likely other provider-specific sensitive information).  While “reasonable safeguards” under HIPAA is up for interpretation, I am pretty sure it does not include plugging random USB drives into computers and networks containing PHI.

Let’s think about this.  HIPAA’s Privacy Rule requires “reasonable and appropriate administrative, technical, and physical safeguards.”  Covered entities must ensure the confidentiality and integrity of PHI, as well as “identify and protect against reasonably anticipated threats to the security or integrity of the information.”  HIPAA’s Security Rule mandates that the information not be made available or disclosed to unauthorized persons.  While the Security Rule does not dictate specific measures, covered entities must consider certain factors, most notably the likelihood and possible impact of potential risks.

It seems that “Mike” considered the “likelihood and possible impact” of inserting an unknown USB drive and opening unknown files.  But I am willing to bet that many or most would not, either from ignorance, inattention, or explicit faith in the ADA.  In the current landscape, none of these are acceptable reasons for failing to consider the likelihood and possible impact.  Covered entities, and all organizations in general, must build an organizational culture of security where, like “Mike”, a natural suspicion arises when faced with a seemingly harmless, but unknown, situation.   Please be like Mike.  Trust or do not trust.  But always verify.

One more thing.  The approximately 37,000 USB drives were “manufactured in China by a subcontractor of an ADA vendor[.]” [Insert forehead slap here.]  So, let’s get this straight.  The ADA: (1) unknowingly sent malware-laced USB drives to its members; (2) provided them specific instructions that could infect their computers with ransomware; (3) failed to include in those instructions anything resembling steps to securely access the USB drive; and (4) obtained those USB drives from a subcontractor of a vendor in China.  If you’re keeping score at home, that’s strikes 1, 2, 3, and 4.  But the ADA didn’t stop there.

In an email statement, the ADA exacerbated the problem by committing the cardinal sin of incident response:  failing to take ownership of the problem and downplaying the threat:

“Upon investigation, the ADA concluded that only a small percentage of the manufactured USB devices were infected … Of note it is speculated that one of several duplicating machines in use at the manufacturer had become infected during a production run for another customer. That infected machine infected our clean image during one of our three production runs. Our random quality assurance testing did not catch any infected devices. Since this incident, the ADA has begun to review whether to continue to use physical media to distribute products ….  Your anti-virus software should detect the malware if it is present.”

Seems pretty specific for “speculation.”

In this statement the ADA essentially acted like its mistake was no big deal.  Further, it not so subtly transferred responsibility to the members.  Did you catch it?  “Your anti-virus software should detect the malware if it is present.”  Translation:  if you have proper cyber security in place our mistake won’t hurt you.  If you don’t have proper cyber security in place, our mistake is your fault for not having proper cyber security.

Not only is this a peevish and puerile response to a serious screw-up, it is also not accurate.  According to Krebs on Security:

“It’s not clear how the ADA could make a statement that anti-virus should detect the malware, since presently only some of the many antivirus tools out there will flag the malware link as malicious.”

Nice job, ADA [golf clap].

What’s even more curious about the ADA’s post-incident position is that cheap USB drives manufactured in China containing malware are not a new threat.  They are, in fact, a very common threat.  According to one security consultant, this fact “… is why the ADA’s decision to use them is so disconcerting[.]”   The point is that, in 2016, use of untested USB drives should always arouse suspicion, and connecting them to information systems should therefore warrant consideration of the “likelihood and possible impact[.]”  In fact, according to that same consultant, “connecting untested thumb drives to information systems containing sensitive data like personal health information violates the most fundamental rules of InfoSec[.]”

Now, you might be saying … “well, the ADA didn’t violate any rule.”  Perhaps this is true.  However, the ADA’s dental debacle clearly demonstrates the great divide between where we are and where we should be with respect to information security.  To say that the ADA does not have any culpability is ludicrous.  The ADA has a responsibility to its paying members.  At the very least, the ADA shouldn’t contribute to the immense threats that its members already face.[i][ii]

Ickes Holt is a full-service, team-driven, and client-focused law firm in Northeast Ohio concentrating on information security and governance. Information is the DNA of modern organizations, and Ickes Holt is dedicated to advising clients on how to protect their information. Please contact us to discuss establishing or improving the information governance policies for your organization.

[i] http://krebsonsecurity.com/2016/04/dental-assn-mails-malware-to-members/
[ii] http://www.healthcareitnews.com/news/american-dental-association-sends-malware-infected-usb-drives-its-members