Category: Editorial

Health IT Startup: Modio Health

Founded in 2014, Modio Health is a cloud-based credentialing and career management solution for healthcare providers and organizations.

Elevator pitch

Modio Health makes physician career management easier, replacing outdated, time-consuming credentialing processes, expensive middlemen and pushy recruiters with a technology platform that serves both physicians and healthcare organizations. Our goal is to streamline hospital operations, from straightforward, cost-effective credentialing to transparent physician staffing.

Product/service description

The Modio platform is home to thousands of healthcare providers, as well as many larger healthcare organizations and practices. By integrating with government agencies, public databases, and private sources, Modio has built a centralized practitioner database, called the Unified Provider Record, for healthcare providers and their affiliated organizations. Case studies show that the Modio platform decreases both provider credentialing time and the associated costs, reducing administrative burdens and eliminating lapsed licensure.

Origin story/founder story

Modio Health was born from the firsthand experiences of our team of doctors. Our founders had all been stung by the inefficiencies they encountered in their years of practicing medicine. The hassle of credentialing, the constant, nagging contact from recruiters, and high fees for licensing and job placements encouraged them to create a solution to these pain points. After heading a successful EHR implementation business in the early 2010s, they left their full-time jobs to get Modio off the ground. With the help of a Bay Area network of technology and production experts, and their own connections with healthcare providers, our founders launched Modio in July 2015. Modio immediately gained traction with large ambulatory surgery centers (ASCs), medical groups, and hospitals. Just nine months after its initial launch, Modio is already an integral part of many healthcare practices.

Marketing/promotion strategy

Our marketing strategy is built largely on our extensive network of providers. Whether that's our in-house team of physicians or the providers we've helped get credentialed or find jobs, our network is constantly growing through referrals and simple word-of-mouth communication. We also promote the Modio name through targeted media, conferences, and mail campaigns.

Market opportunity (in your particular space—numbers, competitors, etc. are helpful)

Modio offers a scalable solution for healthcare management in a chaotic landscape. Few platforms aim for the level of comprehensiveness that we do; Modio is the only service that combines credentialing services, an open job marketplace, and practice management in one place. In an industry that wastes more than $200 billion every year on hospital administration costs, our efficient, inexpensive system is the first step toward solving the problem.


Does the US Technology Gap Push Med Errors into the Third Leading Cause of Death?

Guest post by Thanh Tran, CEO, Zoeticx, Inc.


Hardly a day goes by without some new revelation of a US IT mess, in what seems like an endless round of the old radio show joke contest "Can You Top This?", except increasingly the joke is on us. The examples range from nuclear weapons systems still updated with floppy disks to needless medical deaths, many of which are caused by preventable interoperability and communication errors, as has been the case for decades.

According to a report released to Congress, the Government Accountability Office (GAO) found that the US government last year spent 75 percent of its technology budget maintaining aging computers, some of which still rely on floppy disks, including one system for US nuclear forces that is more than 50 years old. A previous GAO report is equally alarming: it affects the healthcare of millions of Americans and could be the smoking gun behind a study from the British Medical Journal citing medical errors as the third leading cause of death in the United States, after heart disease and cancer.

The GAO interoperability report, requested by Congressional leaders, reported on the status of efforts to develop infrastructure that could lead to nationwide interoperability of health information. The report described a variety of efforts being undertaken to facilitate interoperability, but most of the efforts remain “works in progress.” Moreover, in its report, the GAO identified five barriers to interoperability.

CMS Pushing for “Plug and Play” Interoperability Tools that Already Exist

Meanwhile, in a meeting with the Massachusetts Medical Society, Andy Slavitt, acting administrator of the Centers for Medicare & Medicaid Services (CMS), acknowledged that in the CMS interoperability effort "we are not sending a man to the moon."

“We are actually expecting (healthcare) technology to do the things that it already does for us every day. So there must be other reasons why technology and information aren’t flowing in ways that match patient care,” Slavitt stated. “Partly, I believe some of the reasons are actually due to bad business practices. But, I think some of the technology will improve through the better use of standards and compliance. And I think we’ll make significant progress through the implementation of APIs in the next version of EHRs (electronic health records), which will spur innovation by allowing for plug and play capability. The private sector has to essentially change or evolve their business practices so that they don’t subvert this intent. If you are a customer of a piece of technology that doesn’t do what you want, it’s time to raise your voice.”

He claims that CMS has “very few higher priorities” other than interoperability. It is also interesting that two different government entities point their fingers at interoperability, yet “plug and play” API solutions have been available through middleware integration for years, the same ones successfully used in the retail, banking and hospitality industries. As a sign of growing healthcare middleware popularity, Black Book Research recently named the top ten middleware providers as Zoeticx, HealthMark, Arcadia Healthcare Solutions, Extension Healthcare, Solace Systems, Oracle, Catavolt, Microsoft, SAP and Kidozen.
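To make the "plug and play" idea concrete, here is a minimal sketch of what a standards-based EHR API call looks like, assuming a hypothetical server that speaks FHIR-style REST; the base URL and patient ID are illustrative only, not a real system or vendor endpoint.

```python
# Minimal sketch of "plug and play" interoperability through a standards-based API.
# The base URL and patient ID are hypothetical; any FHIR-conformant server
# would accept the same request shape.
import requests

FHIR_BASE = "https://ehr.example.org/fhir"  # hypothetical EHR endpoint

def fetch_patient(patient_id: str) -> dict:
    """Retrieve a Patient resource as JSON from a FHIR-style server."""
    resp = requests.get(
        f"{FHIR_BASE}/Patient/{patient_id}",
        headers={"Accept": "application/fhir+json"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    patient = fetch_patient("12345")  # illustrative identifier
    print(patient.get("name"))
```

Because the same request shape works against any conformant server, an application written once can in principle exchange data with systems from different vendors, which is the essence of the plug-and-play argument.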

Medical Errors Third Leading Cause of Death in US 

The British Medical Journal recently reported that medical error is the third leading cause of death in the United States, after heart disease and cancer. As such, medical errors should be a top priority for research and resources, say authors Martin Makary, MD, MPH, professor of surgery, and research fellow Michael Daniel, from Johns Hopkins University School of Medicine. However, accurate, transparent information about errors is not captured on death certificates, which are the documents the Centers for Disease Control and Prevention (CDC) uses for ranking causes of death and setting health priorities. Death certificates depend on International Classification of Diseases (ICD) codes for cause of death, but causes such as human and EHR errors are not recorded on them.
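To illustrate the authors' point about how the ranking is built, here is a deliberately simplified sketch: each death certificate is reduced to an ICD cause-of-death code, and the ranking is essentially a tally of those codes, so any cause without a code never appears. The certificates and counts below are made up for illustration only.

```python
# Illustrative only: a cause-of-death ranking built from ICD codes can never
# surface "medical error," because no ICD cause-of-death code captures it.
from collections import Counter

# Hypothetical death certificates, each reduced to its ICD-10 cause code.
death_certificates = [
    "I25.1",  # atherosclerotic heart disease
    "C34.9",  # malignant neoplasm of lung
    "I25.1",
    "J44.9",  # chronic obstructive pulmonary disease
    # A death caused by a medication mix-up is still coded to the underlying
    # disease being treated, not to the error itself.
    "C34.9",
]

for code, count in Counter(death_certificates).most_common():
    print(code, count)
# However common medical error may be, it cannot appear in this tally,
# which is the gap Makary and Daniel argue distorts funding priorities.
```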

According to the World Health Organization (WHO), 117 countries code their mortality statistics using the ICD system. The authors call for better reporting to help capture the scale of the problem and create strategies for reducing it. “Top-ranked causes of death as reported by the CDC inform our country’s research funding and public health priorities,” says Makary in a press release. “Right now, cancer and heart disease get a ton of attention, but since medical errors don’t appear on the list, the problem doesn’t get the funding and attention it deserves. It boils down to people dying from the care that they receive rather than the disease for which they are seeking care.”

The Root Cause of Many Patient Errors

Better coding and reporting are a no-brainer and should be required to get to the bottom of these errors so they can be identified and resolved. However, in addition to the underreporting of causes of death, there are other roadblocks behind this frighteningly sad statistic, such as the lack of EHR interoperability. Unfortunately, the vast majority of medical devices, EHRs and other healthcare IT components lack interoperability, meaning a built-in or integrated ability to exchange information across vendors, settings, and device types.

Various systems and equipment are typically purchased from different manufacturers. Each comes with its own proprietary interface technology, as in the days before the client and server ever met. Moreover, hospitals often must invest in separate systems to pull together all these disparate pieces of technology and feed data from bedside devices to EHR systems, data warehouses, and other applications that aid in clinical decision making, research and analytics. Many bedside devices, especially older ones, don’t connect at all and require manual reading and data entry.

Healthcare providers are sometimes forced to keep track of various pieces of information in their heads to draw conclusions. This is time-consuming and error-prone. The cognitive load, especially in high-stress situations, increases the risk of errors such as accessing information on the wrong patient, performing the wrong action or placing the wrong order. Because information can be entered into various areas of the EHR, the possibility of duplicating or omitting information arises. In the EHR, physicians are often presented with a list of documentation scattered across different folders that can run many computer screens long, and information can be missed.

The nation’s largest health systems employ thousands of people dedicated to dealing with “non-interoperability.” The abundance of proprietary protocols and interfaces that restrict healthcare data exchange takes a huge toll on productivity. In addition to the technical limitations of EHRs themselves, tactics such as data blocking and hospital IT contracts that prevent data sharing by EHR vendors are also used to prevent interoperability. Healthcare overall has experienced negative productivity in this area over the past decade.


The Broken Promise of Healthcare IT

Guest post by Paul Brient, CEO, PatientKeeper, Inc.


Vice President Joe Biden recently took the stage at Health Datapalooza in Washington, D.C. to discuss where healthcare technology currently stands, and he didn’t hold back. Among other things, he chastised the industry for poor health IT system interoperability and the resulting difficulties it causes providers and patients. “We have to ask ourselves, why are we not progressing more rapidly?” Biden lamented.

Biden’s criticism is only the latest high-profile commentary about the unfulfilled promise of information technology in healthcare. AMA leaders and individual physicians have been grousing about it for years. We’ve seen technology increase efficiency, reduce costs and improve productivity in every other industry – but why not healthcare?

Ironically, seven years after the passage of the HITECH Act of 2009, doctors are less productive than they were before, and IT is the culprit. Rather than enabling a better, more streamlined workflow, IT has become a burden.

The drag that IT is placing on healthcare providers is a principal reason why U.S. Health and Human Services (HHS) Secretary Sylvia Burwell announced with great fanfare at the HIMSS16 conference an “interoperability pledge,” which vendors and providers alike are encouraged to take. Its purpose in part is “to help consumers easily and securely access their electronic health information, direct it to any desired location, learn how their information can be shared and used, and be assured that this information will be effectively and safely used to benefit their health and that of their community.”

This call resonates because the promise of better healthcare through technology has been broken. Technology has changed the way we communicate, the way we shop, the way we watch TV, the way we drive, and the way we interact with our homes. As an industry, healthcare is lagging way behind. The consequences are drastic. In order for us to deliver the kind of holistic care that will truly improve people’s health, it’s time not only to talk about the potential, but to make it a reality for users and providers across the healthcare continuum.

Here’s the reality: physicians today have in front of them what 10 years ago would have been called a supercomputer – a device that knows virtually everything about the patient – but it isn’t helping out in ways that we take for granted in our everyday lives when we shop online, use Google Maps or order an Uber.


Three Trends Shaping Health Informatics

Guest post by Justin Sotomayor, pharmacy informatics director, CompleteRx.


The field of health informatics has grown exponentially over the past 50 years. From Robert Ledley’s work paving the way for the use of electronic digital computers in biology and medicine in the 1950s, to the founding of the American Medical Informatics Association in the 1990s, to the launch of the Medicare/Medicaid Electronic Health Record Incentive Program in the 2000s, it continues to mark new milestones at an astounding pace, presenting both challenges and opportunities for the healthcare industry.

Three trends – in particular – will have a marked impact on patients and practitioners, and are certain to define health informatics in the near future, if not for years to come.

The end of Meaningful Use

In 2009, the passage of the Health Information Technology for Economic and Clinical Health (HITECH) Act brought the launch of the Meaningful Use program – and the related requirement that healthcare providers show “meaningful use” of a certified EHR to qualify for incentive payments. With both Stage 1 (adoption) and Stage 2 (coordination of care and exchange of information) behind them, hospitals are fully responsible for Stage 3 (improved outcomes) by 2018. While the program has undoubtedly improved EHR adoption – in many cases streamlining and enhancing patient care – it has been widely criticized. In a 2015 news release, the American Medical Association described Stage 2 as a “widespread failure,” suggesting it monopolized staff attention without commensurate benefit to patients and hampered innovation.

Most recently, following highly publicized remarks in January by CMS Acting Administrator Andy Slavitt that Meaningful Use would be replaced, the U.S. Department of Health and Human Services has proposed transitioning Meaningful Use for Medicare physicians to the “Advancing Care Information” (ACI) program under the Medicare Access and CHIP Reauthorization Act (MACRA). According to Mr. Slavitt, this program is designed to be “far simpler, less burdensome, and more flexible,” primarily by loosening the requirements to qualify for extra payments and by incentivizing providers based on treatment merit through the Merit-based Incentive Payment System (MIPS). While this update doesn’t yet affect hospitals or Medicaid providers, and these groups should continue to prepare for full Meaningful Use implementation, it’s an indication that industry concerns over Meaningful Use are being heard and responded to, and that additional changes may be forthcoming.

The rise in cybersecurity threats


Could Holography Be The Future Of Medicine?

Guest post by Nic Widerhold, owner, Ghost Productions.

To the average person, holography is the stuff of science fiction. Many people were first exposed to the concept of practical holography in the original “Star Wars” film, released in 1977. Although the apparent 3D images represented in the film were of relatively low resolution, the possibilities were undeniably intriguing — and undoubtedly inspirational to a generation of budding scientists. Subsequent portrayals of the inherent possibilities of this technology were explored on television series, such as “Star Trek: The Next Generation,” in the late 1980s and early 1990s.

Holography: From Science Fiction to Scientific Fact

In that imagined world, holography was vastly superior to the grainy, static-filled images portrayed in “Star Wars.” Entire interactive worlds were recreated in a special space. The unimaginably advanced technology was primarily used for recreation. This fictional technology more closely resembled the 3D interactive “worlds” promised by various recently introduced virtual reality (VR) systems. Although actual VR technology is arguably in its infancy, and interactive content is still largely lacking, these systems come closest to reproducing the experience of entering a “holodeck,” where fully realized, interactive, imagined worlds can be explored at will.

A Brief History

Of course, none of these imagined uses of holographic technology reflect present, real-world applications. That’s not to say holography doesn’t exist. It does, and has since before the time of the original “Star Trek” series, which debuted in 1966. Although that seminal science fiction series made no mention of holography, the technology already existed in the real world, having begun conceptual development as early as the 1940s. In 1971, the Hungarian-British physicist Dennis Gabor was awarded the Nobel Prize in Physics for his invention of the holographic method. His success with optical holography was only made possible by the invention of the laser in 1960.

In essence, a hologram is a photographic recording of a light field. The recording is subsequently projected to create a faithful 3D representation of the holographed subject. Technically speaking, it involves the encoding of a light field as an interference pattern. The pattern diffracts light to create a reproduction of the original light field. Any objects present in that original light field appear to be present, viewable from any angle.
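For readers who want a slightly more formal picture, the recording and reconstruction steps can be sketched in two lines; this is a simplified, textbook-style description rather than anything specific to a particular system, with O denoting the object wave and R the reference wave.

```latex
% Recording: the plate stores the interference pattern of object and reference waves.
\[
  I(x, y) = |O + R|^{2} = |O|^{2} + |R|^{2} + O R^{*} + O^{*} R
\]
% Reconstruction: illuminating the developed plate with the reference wave R
% reproduces, among other terms, a scaled copy of the original object wave O.
\[
  R \, I = R \left( |O|^{2} + |R|^{2} \right) + |R|^{2} O + R^{2} O^{*}
\]
```

The |R|^2 O term is the reconstructed object wave, which is why the recorded scene appears fully three-dimensional and remains viewable from different angles.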

Depth cues, such as parallax and perspective, are retained, changing as expected depending on the viewpoint of the observer. Holograms have been compared to sound recordings. When a musician performs, the vibrations he produces are encoded, recorded, stored and later reproduced to evoke the original vibrations a listener would have experienced.

Of course, other forms of practical holography have been in common usage for decades. The so-called embossed hologram, which appears on many credit cards and even paper checks, was widely introduced in the mid-1980s. National Geographic magazine, which featured an image of a holographic eagle on its cover in 1984, marks the event among its most notable milestones.

The 2D embossed hologram image retains some of the characteristics of a traditional hologram, in that the image changes somewhat depending on one’s angle of view. It’s primarily used as a security measure, or as a marketing novelty (these mass-produced holograms have even appeared on boxes of children’s cereal). However, these illusions are not true holograms. While the National Geographic eagle was impressive, one could not simply examine the animal from any conceivable angle.


CMS Proposes Extension of RAC Audit to Medicare Advantage Plans

Guest post by Rajeev Rajagopal, president, Outsource Strategies.


The proposal of the Centers for Medicare & Medicaid Services (CMS) to expand its Recovery Audit Program to Medicare Part C or Medicare Advantage (MA) plans is a new step in its efforts to fight fraud, waste and abuse in the Medicare program. The move is aimed at identifying overpayments and underpayments made on claims for services provided to Medicare beneficiaries. For physician practices, the expanded Recovery Audit Program means they will have to take proactive steps to reduce their risk of falling prey to recovery audits by ensuring error-free submission of claims for MA patients. Outsourcing medical billing and coding is a great option to accomplish this task.

Medicare Advantage (MA) Plans and Allegations of Billing Fraud

MA plans, or Medicare Part C, are offered by private insurance companies approved by Medicare, which receive payment from Medicare for the coverage they provide. There are different types of MA plans, all of which provide a Medicare patient’s Part A (Hospital Insurance) and Part B (Medical Insurance) coverage. Part C plans differ from standard Medicare in that they are paid a set fee every month for each patient based on a complex formula called a risk score; CMS pays higher rates for sicker MA beneficiaries than for those in good health. CMS scrutinizes the diagnosis information reported by MA organizations and calculates a risk score for each enrollee using the Hierarchical Condition Category risk adjustment model. The risk score is based on the enrollee’s demographic characteristics and health conditions. This practice aims to improve the accuracy of Medicare’s payments to MA organizations and to reduce the incentive for plans to select only the healthiest beneficiaries.
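As a rough, purely illustrative sketch of how a risk score scales the monthly payment, consider the following; the base rate and the demographic and condition weights are hypothetical stand-ins, not actual CMS figures or HCC coefficients.

```python
# Illustrative sketch of Medicare Advantage risk-adjusted payment.
# The base rate and HCC-style weights below are hypothetical, not CMS figures.

BASE_MONTHLY_RATE = 800.00  # hypothetical county base payment, in dollars

# Hypothetical demographic and condition (HCC-style) adjustment weights.
DEMOGRAPHIC_WEIGHTS = {"female_70_74": 0.40}
CONDITION_WEIGHTS = {"diabetes_with_complications": 0.30, "heart_failure": 0.35}

def risk_score(demographics, conditions):
    """Sum demographic and condition weights into a single risk score."""
    score = sum(DEMOGRAPHIC_WEIGHTS[d] for d in demographics)
    score += sum(CONDITION_WEIGHTS[c] for c in conditions)
    return score

def monthly_payment(demographics, conditions):
    """The plan is paid the base rate scaled by the enrollee's risk score."""
    return BASE_MONTHLY_RATE * risk_score(demographics, conditions)

# A sicker enrollee carries more condition weights, hence a higher payment:
print(monthly_payment(["female_70_74"],
                      ["diabetes_with_complications", "heart_failure"]))
# 800 * (0.40 + 0.30 + 0.35) = 840.0
```

This set-fee, risk-scored structure is also why overstated or unsupported diagnoses inflate payments, which is exactly the behavior the expanded recovery audits are meant to catch.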

Identifying Improper Medicare Payments with Recovery Audits

However, in recent years there have been various reports of MA plan overbilling, costing taxpayers billions of dollars more than warranted. On Jan. 1, 2010, the government set up the Recovery Audit Program to fight fraud, waste and abuse in the Medicare program. It detects overpayments and underpayments on Medicare claims so that CMS can implement actions to prevent improper payments in all 50 states. Under the program, Recovery Audit Contractors (RACs), private companies hired by CMS, have the authority to review medical records on short notice. RACs notify healthcare providers of the outcomes of their reviews via demand letters. A RAC demand letter contains details of the problem with a claim, such as the coverage, coding or payment policy that was violated, a description of the overpayment, recommended corrective actions, and an explanation of the provider’s right to submit a rebuttal statement prior to recoupment of any overpayment and to appeal.


Prescription Drug Costs: In Washington’s Line of Fire

Guest post by Ken Perez, vice president of healthcare policy, Omnicell.


At two recent healthcare conferences run primarily for provider organizations, speakers spent a considerable amount of time highlighting the sharply increased U.S. spending on prescription drugs in 2014 (+12.5 percent versus 2013) and 2015 (+7.8 percent versus 2014)—about double overall healthcare cost inflation for those two years. In 2015, prescription drugs accounted for one-sixth of all the money spent on personal healthcare services. While drug spending growth is expected to moderate in the coming years, the attendees at the conferences were left with the lingering impression that pharmaceutical companies may have gotten away with inappropriate levels of profiteering in recent years.

Of course, that impression was made, and some would say cemented, last year when Martin Shkreli, former CEO of Turing Pharmaceuticals, famously hiked the price of Daraprim, a 62-year-old treatment for parasitic infections, by 5,455 percent overnight, from $13.50 a tablet to $750. Similarly, Michael Pearson, outgoing CEO of Valeant Pharmaceuticals, raised the prices of two drugs used to treat cancer-related skin conditions by 1,800 percent: Targretin gel, a topical treatment for cutaneous T-cell lymphoma, and Carac cream, used to treat precancerous skin lesions called actinic keratoses. A 2012 report by Ipsos Public Affairs concluded that the U.S. pharmaceutical sector had a “net negative” favorability score with consumers, and the much-publicized actions of Shkreli and Pearson three years later obviously did not improve the public’s view of pharma.

As expected, the aforementioned price hikes by Turing and Valeant were denounced by numerous presidential candidates, and drug prices became a popular political football. Both former Secretary of State Hillary Clinton and Vermont Senator Bernie Sanders have made lowering prescription drug costs significant planks of their respective policy platforms. They both advocate allowing Medicare to negotiate drug prices with pharmaceutical companies. Sanders goes even further—to the brink of outright drug price controls—pledging to require pharmaceutical companies to publicly disclose information regarding drug pricing and research and development costs—with the obvious implication that there should be some reasonable relationship between the two.


Why Medical Translators Need to be Professionals

Guest post by Sean Patrick Hopwood, MBA, president, Day Translations, Inc.


While no one can deny that electronic technology has made giant leaps towards protecting patient information and preventing errors and misuse, you may be surprised to learn that many health care practitioners and facilities are still cutting corners when it comes to medical translations or interpretation.

Inaccurate medical interpreting or incorrectly performed translations have the potential to put patients’ lives at risk, and there have been several documented cases of medical mistranslation that led to severe complications, incorrect diagnoses, and even death.

Check out the following cases, which could have had different outcomes had correctly trained language professionals been used:

  1. Willie Ramirez

In 1980, a young baseball player named Willie Ramirez was taken to a hospital in South Florida in a comatose state. A medical interpreter was called in to translate the family’s explanation of events; however, the interpreter was not familiar with the Cuban Spanish word “intoxicado,” which was translated as “intoxicated,” and the doctors assumed that Mr. Ramirez had taken a drug overdose.

This is one of the most famous cases in the history of medical interpreting errors, as what seemed like a fairly small mistake led to the young baseball player waking up a quadriplegic. How? Well, in Cuban Spanish, the word “intoxicado” means to be sick after ingesting something, which could be a food, drug, liquid, or anything else that could make a person unwell.

When Ramirez’s doctors dismissed his case as an overdose, they failed to consider other possibilities and overlooked the fact that the patient was actually experiencing bleeding in the brain. By the time the mistake was detected and the proper course of treatment initiated, it was too late. The damage had been done and no amount of neurosurgery could reverse Ramirez’s quadriplegia.

  2. Maria Guevara

It would be hard to imagine a more heart-wrenching tale than Ms. Guevara’s, and this tragic example of a medical facility’s negligence has been hotly disputed over the last couple of years. Because of the absence of a medical interpreter, Ms. Guevara was accidentally given a prescription to induce abortion after apparently replying “yes” to a question she did not understand.

While the doctor was asking whether she wanted to abort the baby, Maria thought the question was whether she wanted to keep the baby. With no professional medical interpreter to translate between her and the doctor, the outcome was the loss of the baby, in a California hospital where almost half of all patients are Spanish speakers with English as their second language.

Guevara’s case, which occurred in 2013, has forced the medical profession to come to terms with the real risk of inaccurate interpretation, or of having no professional linguists on hand to avoid liability and save lives, especially with the Hispanic population projected to outnumber white Caucasians by 2060.
