Category Archives: Analytics

Patient data there for the asking, not the taking


The importance of using health data to target and optimise the care we deliver and to advance our understanding of medicine, health and care is undeniable, and this is something we must do, but we really do have to secure public confidence in doing so and stop scoring so many “own goals”.

As my good friend and colleague Dr Joe McDonald said recently in his column on digitalhealth.net, “Patient data there for the asking, not the taking”, and this brilliantly takes us to the heart of the issue.

When asked, most citizens would be happy to have their health data used for a broad range of research purposes that bring health or economic benefit, but they do want to be asked. Not asking them is a great way to trigger bloody-mindedness and push up the extent to which people actively seek to opt out, as has been demonstrated by the 1.2 million opt-outs generated by the crass mishandling of care.data.

We seem to be repeating these mistakes with the Royal Free giving data to Google without an adequate opportunity for patients to opt-out. Sources in the NHS tell me that the Royal Free are not the only NHS Trust to do this although no more names have yet been mentioned.

To have damaged public support and confidence in the way we have is both unforgivable and avoidable: the result of the arrogance and ignorance of those making the decisions, their failure to listen to the advice given to them, and their failure to learn from the experience of others.

Firstly, it is necessary to acknowledge that we are talking about sharing potentially identifiable data. The work of Prof Paul Ohm has graphically illustrated that even apparently very anonymous datasets can be re-identified. In the case of rich datasets like those in EHRs, re-identification is trivially easy for those with a mind to do so. It provides little comfort that this is probably the last thing most researchers want to do.
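
To make the re-identification risk concrete, here is a minimal, illustrative sketch of the kind of linkage attack Ohm describes; the datasets, column names and values are all hypothetical, and in practice the attacker’s auxiliary data might be an edited electoral roll or a marketing list.

```python
# Minimal, illustrative sketch of a linkage (re-identification) attack.
# All data, column names and values below are hypothetical.
import pandas as pd

# A "de-identified" research extract: direct identifiers removed, but
# quasi-identifiers (birth year, sex, partial postcode) retained.
extract = pd.DataFrame([
    {"pseudo_id": "a91f", "birth_year": 1947, "sex": "F", "postcode_out": "LS6", "diagnosis": "type 2 diabetes"},
    {"pseudo_id": "c302", "birth_year": 1982, "sex": "M", "postcode_out": "M14", "diagnosis": "asthma"},
])

# A publicly or commercially available dataset containing names and the
# same quasi-identifiers.
public = pd.DataFrame([
    {"name": "Jane Example", "birth_year": 1947, "sex": "F", "postcode_out": "LS6"},
    {"name": "John Sample",  "birth_year": 1982, "sex": "M", "postcode_out": "M14"},
])

# Joining on the quasi-identifiers re-attaches names to "anonymous" records.
reidentified = extract.merge(public, on=["birth_year", "sex", "postcode_out"])
print(reidentified[["name", "diagnosis"]])
```

In a rich EHR extract the number of quasi-identifiers is far larger, which is why re-identification is so much easier than intuition suggests.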

Generating and maintaining public confidence is possible. Most people already understand the value of their data for research purposes and are willing to share even identifiable data if approached correctly. We only need to look to the likes of UK Biobank, which has successfully persuaded over half a million people to share sensitive identifiable health data and to actively participate by providing blood samples to support Biobank’s research work, with no prospect of direct personal benefit.

In my view the key things that those wishing to use patient data for purposes other than those very directly related to the delivery of care to the data subject must do are:

  • Acknowledge the re-identification and privacy risks associated with shared health data.
  • Take all reasonable steps to mitigate these risks with appropriate governance and the use of privacy enhancing technologies (making the effort to find out what these are and what they can do) – a minimal sketch of one such technique follows this list.
  • Allow those who for whatever reason may wish to do so an informed opportunity to easily opt out.
  • Invest in technology and approaches that allow us to move towards an opt-in approach.
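
As one illustration of the second point, a common privacy enhancing building block is keyed pseudonymisation: replacing the NHS number with a keyed hash so that records can still be linked across extracts but cannot be reversed without the secret key. This is a minimal sketch, with simplified key handling and hypothetical field values; it reduces, but does not remove, the re-identification risk discussed above.

```python
# Minimal sketch of keyed pseudonymisation using HMAC-SHA256.
# Key management is deliberately simplified; in practice the key would be
# held by a trusted party under a formal governance framework.
import hashlib
import hmac
import secrets

SECRET_KEY = secrets.token_bytes(32)  # held by the data controller, never shared

def pseudonymise(nhs_number: str) -> str:
    """Return a stable pseudonym that cannot be reversed without the key."""
    return hmac.new(SECRET_KEY, nhs_number.encode(), hashlib.sha256).hexdigest()

record = {"nhs_number": "9434765919", "observation": "HbA1c 48 mmol/mol"}  # hypothetical values
shared_record = {
    "pseudo_id": pseudonymise(record["nhs_number"]),  # linkable but not directly identifying
    "observation": record["observation"],
}
print(shared_record)
```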

The Centre can’t say they weren’t told. Had they read and heeded “Fair Shares for All”, produced by the BCS Primary Health Care Group under the leadership of Ian Herbert in 2012, things might have been different (I have since discovered that those making the decisions never read anything longer than 140 characters).

It’s a long document, because there are no short answers to the complex issues it addresses, but to draw out a single paragraph that will give you a flavour:

“In summary we want to encourage patients and their clinicians to provide their data for laudable research purposes, and acknowledge the need to use it to administer and manage the NHS, but we must seek to retain public confidence while doing so. Patients accept the electronic processing of their health data for primary purposes, but should have reason to feel confident that it is protected and used properly”

The document will need some updating, particularly as new privacy enhancing technologies (e.g. blockchains and homomorphic encryption) have become practical tools over the past four years, but it still remains highly relevant.
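
To give a flavour of what these technologies make possible, here is a toy sketch of additively homomorphic encryption using the open source python-paillier package (assumed to be installed as `phe`); the readings are invented, and a production deployment would involve far more careful key management and governance.

```python
# Toy illustration of additively homomorphic (Paillier) encryption: an analyst
# can compute an aggregate over encrypted values without ever seeing the
# individual readings. Requires the "phe" package (python-paillier).
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()

# Hypothetical individual systolic blood pressure readings, encrypted at source.
readings = [128, 142, 151, 136]
encrypted = [public_key.encrypt(r) for r in readings]

# The analyst works only with ciphertexts: sums can be computed without decryption.
encrypted_total = sum(encrypted[1:], encrypted[0])

# Only the key holder (e.g. the data controller) can decrypt, and then only the aggregate.
mean = private_key.decrypt(encrypted_total) / len(readings)
print(mean)  # 139.25
```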

openEHR a Game Changer Comes of Age


I’ve been watching openEHR over more than fifteen years, and although I have always been impressed by its potential to enable us to do things differently, I must admit it has been a slow burn, and take up has been limited, particularly in the UK where it was invented. However, due to some recent developments, I think this is about to change, and that openEHR is going to take off in a big way. This is going to revolutionise how we think about and do digital health, and it should increase the speed at which we can do it by at least two orders of magnitude. Why do I say this, and what evidence is there to support my assertion?

openEHR has come of age with a large number of successful small implementations, and a few much larger ones (1) which have proven the approach works at scale. We have also seen the use of openEHR by governments and major health providers across the globe, including the NHS (2), as the mechanism for the creation and curation of clinical content standards in their territories. In addition, changes to the openEHR Foundation have made it unarguably an open source organisation with a global user community; a growing vendor community has developed, offering both open source and proprietary tools and components supporting the standard; and there is serious interest from major system integrators. These changes make openEHR look like a much better alternative to the hegemony of the big US megasuite providers, who still want to shape health and care systems in their image and to own the platforms on which health and care providers will increasingly depend.

UPDATE (12 April 16)

Possibly the best explanation of openEHR I’ve seen: “openEHR technical basics for HL7 and FHIR users”. Well worth reading.

UPDATE (15 March 16)

Some great videos here which provide an easy way to understand various aspects of openEHR.

In particular:

Clinician-led e-health records – An introduction to openEHR for clinicians

National governance of openEHR archetypes in Norway – A national approach to building information models with openEHR. Many lessons here for the HSCIC, who have just started doing something similar.

UPDATE (13 July 15)

In their PQQ, which kicks off the procurement for the Datawell to support Devo Manc, Manchester have mandated openEHR along with other established standards including IHE-XDS. This could potentially lead to the largest implementation of openEHR in the UK, with Manchester building on pioneering work in Moscow and Leeds.

UPDATE (24 Apr 15) Further information and news about the growing interest in openEHR can be found here.

Firstly, everything I read, and everyone I talk to across the globe about digital health, agrees on a couple of things.

  • Firstly, we need to move towards a platform architecture into which we can plug the thousands of apps and hundreds of traditional systems that we currently use in health and care; an architecture which will enable all of these to interoperate and work together.

  • Secondly, we need to separate content (the data, information and knowledge that applications consume and update) from the applications that process it; and that content needs to be expressed in a modular, computable and reusable format.

Beyond this, agreement breaks down – people do argue about business models (who should own and control the platform), and also the details of the particular standards and technology to be used – but on the core principles, everyone with any credibility agrees.

When it comes to business models, some would like to own the platform, because doing so would create a massive commercial opportunity. And while some still pursue this goal, most significantly Apple, others have decided, as I have, that ownership of the platform is neither achievable (competitive and customer pressures mean even the mighty Apple can’t win this battle) nor is it desirable from the perspective of citizens, health and care providers and payers – none of whom wish to be locked in or to pay the ‘fruit tax’ or its equivalent.

Others, including me, and more significantly some big players, have come to the view that while it might be great to own the platform, that isn’t going to happen and so we need to move to an open platform which nobody owns (in the sense that nobody owns the Internet). As for commercial opportunities, they will still exist higher up the value chain, and the existence of the platform will create such opportunities by the spadeful. Surely it’s more fruitful to concentrate on these, rather than waste time and resource on a battle no one can win.

On the details of implementation, disagreement is less significant. The two major contenders, openEHR and the Healthcare Consortium, both have similar approaches, and they are already converging through the Clinical Information Modeling Initiative (CIMI) to reduce their differences to the point where they really don’t matter and can be dealt with at a purely technical level, with their components being easily interchangeable.

So, if we want to create an open platform, what do we need? We need openEHR or something like it – and frankly there is nothing else as mature or as well supported as openEHR.

openEHR is not software, nor is it a particular technology. It’s an open specification or standard for the representation of a key bit of content – the health and care record. The specification is open source (insofar as you can apply this term to something that is not software), and it’s curated by the openEHR Foundation, which is a not-for-profit company democratically controlled by those who choose to be part of the global openEHR community (and anybody can). The community is truly global and growing, and consists of both users and developers; and is supported by a number of vendors who can offer tools, components and services supporting the standard.

openEHR provides a simple, robust and stable over-arching reference model (3) which defines a formalism for the representation of the modular components of a health and care record. openEHR calls these ‘archetypes’ and they define the elements of a record, their properties, and how they are represented (including bindings to terminologies and classifications). Archetypes are intended to represent a superset of all those properties that might be associated with the concept they represent (at a high level these will be either an observation, an evaluation, an instruction or an action). Archetypes can then be constrained and/or combined in a ‘template’ to provide practical interoperable components for use in a particular context or system.
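
The relationship between the reference model, archetypes and templates can be illustrated with a deliberately simplified sketch. This is not the real openEHR reference model or the published blood pressure archetype; the class names, fields and constraints are invented to show the pattern: an archetype defines the maximal set of properties for a clinical concept, and a template constrains it for a particular context of use.

```python
# Deliberately simplified sketch of the archetype/template pattern.
# Not the openEHR reference model: names and constraints are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Archetype:
    concept: str                                  # e.g. "blood_pressure" (an OBSERVATION)
    fields: dict = field(default_factory=dict)    # field name -> allowed constraint

@dataclass
class Template:
    archetype: Archetype
    included: list                                # subset of fields used in this context

# Archetype: the superset of everything one might record about blood pressure.
blood_pressure = Archetype(
    concept="blood_pressure",
    fields={
        "systolic":  {"units": "mm[Hg]", "min": 0, "max": 1000},
        "diastolic": {"units": "mm[Hg]", "min": 0, "max": 1000},
        "cuff_size": {"values": ["adult", "paediatric", "thigh"]},
        "position":  {"values": ["sitting", "standing", "lying"]},
    },
)

# Template: a constrained selection for, say, a GP hypertension review form.
gp_review = Template(archetype=blood_pressure,
                     included=["systolic", "diastolic", "position"])

def validate(template: Template, data: dict) -> bool:
    """Accept only fields the template includes and values the archetype allows."""
    for name, value in data.items():
        if name not in template.included:
            return False
        constraint = template.archetype.fields[name]
        if "min" in constraint and not constraint["min"] <= value <= constraint["max"]:
            return False
        if "values" in constraint and value not in constraint["values"]:
            return False
    return True

print(validate(gp_review, {"systolic": 142, "diastolic": 88, "position": "sitting"}))  # True
```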

The tools available for the creation of archetypes and templates are open source (as are the vast majority of the archetypes and templates created with them), and this makes openEHR easily accessible to clinicians and other domain experts while also providing system developers with robust components to handle many of the technical complexities. openEHR enables clinicians to concentrate on the clinical stuff, and developers to concentrate on the technical stuff, without needing to understand more about the other domain than they want to.

By building systems using openEHR, system development work shifts from the technical level to the domain level. A repository that has been built to store an openEHR health and care record does not need to take account of the particular content of a given archetype. Whatever that archetype might represent, the repository will be able to store it, and you will be able to query that repository about its content. This feature of openEHR is the key enabler of much faster application development, because the addition of new features will not require changes to database schemas (with all the associated testing and data migration that entails). Instead, all that is needed is the addition of some archetypes and/or templates – and these may already be available as the result of work by others in the community, or else they can be created rapidly by a relevant domain expert – plus the creation of some new user interface components, and these can often be generated automatically from the underlying templates. In this way changes can be made by end users, or by people close to them. This will reduce the time to add new features from months to hours, and the time to build new systems from years to weeks.
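
A crude way to see why no schema change is needed is to think of the repository as storing archetype instances as path/value pairs and querying them by path. The snippet below is an illustrative toy, not a real openEHR repository or an AQL engine.

```python
# Toy illustration of why an archetype-based repository needs no schema change
# when new content is added: every entry is stored as (archetype, path, value)
# and queried by path. A sketch only, not a real openEHR repository or AQL.
from collections import defaultdict

class ToyRepository:
    def __init__(self):
        self.entries = defaultdict(list)   # patient_id -> list of (archetype, path, value)

    def store(self, patient_id, archetype, data):
        for path, value in data.items():
            self.entries[patient_id].append((archetype, path, value))

    def query(self, archetype, path):
        """Return (patient_id, value) for every stored instance matching the path."""
        return [(pid, v) for pid, rows in self.entries.items()
                for (a, p, v) in rows if a == archetype and p == path]

repo = ToyRepository()
repo.store("patient-1", "blood_pressure", {"systolic/magnitude": 142, "diastolic/magnitude": 88})
# A brand-new archetype can be stored tomorrow with no change to the repository code:
repo.store("patient-1", "smoking_summary", {"status": "ex-smoker"})

print(repo.query("blood_pressure", "systolic/magnitude"))  # [('patient-1', 142)]
```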

openEHR is also technology independent. Applications don’t need to concern themselves with the technology of a particular implementation of an openEHR repository – that’s purely a matter for the implementer, who can choose whatever technology works best for them at a particular time and in a particular context. The applications that use it will not be affected, so long as they remain compliant with the standard. We can see this happening in the dozen or so existing implementations of openEHR repositories: they use different operating systems, different databases (SQL and NoSQL) and various development tools to create both open source and proprietary implementations of the standard. Compliant implementations of the standard from different vendors are interchangeable, and a single query can be executed across multiple implementations. openEHR is vendor independent, and it eliminates vendor lock-in.

Suppliers of openEHR repositories will have to compete on performance, security, robustness, value and service – they cannot rely on customer lock-in, as the vendors of many traditional EHR systems have in the past. From the perspective of health and care providers, openEHR puts them back in charge of their own destiny. This contrasts with most of the current successful approaches to the delivery of enterprise-wide EHR, where customer institutions have adopted one of the four big US megasuites, and then have had to adapt internal processes and organisation to fit with the chosen system – in effect, you become an EPIC, Cerner, Allscripts or Meditech institution, rather than a customer who calls the shots.

The ‘megasuite model’ has worked spectacularly well (if expensively) in a number of big US hospitals, particularly for EPIC, but that model starts to break down when you seek to extend the scope of a system from an institution to an integrated health and care community. It also fits badly with UK and other European models of health and care, which are not so close to the US model as the megasuite vendors might hope them to be.

Of course European health and care providers don’t want to remodel their processes along American lines – why would relatively successful European providers want to adopt systems designed primarily for the inequitable and unsustainable US system? According to the well-respected US Commonwealth Fund, the United States ranks last among eleven leading developed countries on measures of access, equity, quality, efficiency, and healthy lives (and, by the way, the UK’s NHS takes the number one spot).

Much of my conviction about openEHR comes from work I’ve been involved in with HANDI, in building HANDI-HOPD – the HANDI Open Platform Demonstrator, which has now been adopted by NHS England as the NHS England Code4Health Platform. This platform provides a simulation environment for any system or service that wants to expose an API (interface) within an open ecosystem, and it includes an openEHR repository loaded with test data from the Leeds Lab Project.

We have exposed SMART and FHIR APIs, as well as the native openEHR service API, on top of the repository; we have used this to build a number of apps, and also demonstrated how you can simply plug in apps that were developed elsewhere using the SMART API. We have also used this platform to prototype a UK localisation of an open source ePrescribing product (www.openep.org), and the speed at which we have been able to carry out the localisation and meet some special mental health requirements has been impressive – indeed so impressive that we will shortly be announcing the first NHS Trusts who will be taking the system live.
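
To give a flavour of what exposing a FHIR API on top of the repository means for an app developer, here is a minimal client-side sketch. The base URL is a placeholder, not the real Code4Health endpoint, and the example assumes any standard FHIR REST server.

```python
# Minimal sketch of a client calling a FHIR REST API exposed over a repository.
# The base URL is a placeholder; any standard FHIR server is assumed.
import requests

FHIR_BASE = "https://example-platform.test/fhir"   # hypothetical endpoint

# Standard FHIR search: Observations with a given LOINC code for one patient.
response = requests.get(
    f"{FHIR_BASE}/Observation",
    params={"patient": "example-patient-id",
            "code": "http://loinc.org|8480-6"},    # 8480-6 = systolic blood pressure
    headers={"Accept": "application/fhir+json"},
    timeout=10,
)
response.raise_for_status()
bundle = response.json()                            # a FHIR Bundle resource

for entry in bundle.get("entry", []):
    obs = entry["resource"]
    quantity = obs.get("valueQuantity", {})
    print(obs.get("effectiveDateTime"), quantity.get("value"), quantity.get("unit"))
```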

Work is currently being completed to re-brand the HANDI platform as the NHS Code4Health Platform, and this will shortly be available for those who want to learn more and experiment with this and other open technologies.

openEHR has come of age – if you don’t believe me, give it a try.

Notes:

This is a slightly updated version of the original, with a few minor changes to make it more readable to the general reader and to correct some typos; my thanks for this to my friend and colleague Conrad Taylor.

1) Large scale implementations of openEHR include:

Moscow – Integrated health and social care 12 million population 

Slovenia – Country wide 2 million population

Brazil – Unimed Medical cooperative

2) Health systems using openEHR to create, curate and publish clinical content:

NHS HSCIC

NHS Scotland

Australia

Norway

Slovenia

Brazil

openEHR Foundation 

Applications built on openEHR platform

OPENeP EPMA product www.openep.org

Marand Think!Med Clinical, Ljubljana Children’s Hospital http://www.marand-thinkmed.com

Ocean Multiprac Infection control, Queensland Health, Australia http://www.multiprac.com/?portfolio_4=infection-control-2

Ocean LinkedEHR, Western Sydney, Australia  http://openehr.org/news_events/industry_news.php?id=121

DIPS Arena, Norway http://openehr.org/news_events/industry_news.php?id=97

mConsole, Mental Health patient portal, Code24, Netherlands

Clinical Decision Support, Cambio, Sweden http://www.cambio.lk/News-and-facts/Produktnytt/COSMIC-Clinical-Decision-Support1/

See also http://www.openehr.org/who_is_using_openehr/healthcare_providers_and_authorities

3) Some key documents on openEHR

openEHR Architecture Overview

openEHR Reference Model

What makes an Open Source community?

There has been a lot of interest in the role of Open Source software in the UK over recent months, initially stimulated by NHS interest in the American VistA Open Source EHR, but now taking on a broader scope including some of the exciting home grown initiatives.

Included amongst these are a number of projects that started in a closed source environment, where the IPR owner has decided to shift to an Open Source model. From a narrow technical perspective, making software Open Source is easy – you just release it under a recognised Open Source licence and make it freely available for download. However, Open Source is about much more than the licensing model, and much more needs to be done to achieve the benefits of Open Source than what the Open Source community disparagingly call a “Code Dump”.

Open Source is about an approach and philosophy that at its heart believes that by creating a community who can freely use and contribute to a product we can create better software and release new commercial and social value not available from other approaches. Open Source enshrines some important freedoms and principles which are defined and maintained by the Open Source Initiative, which also provides guidance on licences that meet these principles.

To be effective, an Open Source community has to be diverse and well supported, containing all of those stakeholders needed to ensure a sustainable business model for the product’s ongoing development and use, in which no single entity has effective monopoly control. It also requires governance structures around a particular distribution or version of the source code (often called a “Distro” in the Open Source world), so that users can have confidence in the safety, security and quality of that Distro, including changes and new contributions made to it by the community – something that is particularly important in the context of health and care software.

Stakeholders include:

  • Those that gain financial value from the existence of the Distro – These might be organisations that use the software or the data it generates (like the NHS, researchers and other health and care commissioners and providers) or organisations that sell services to the community made possible by the existence of the Distro (including developers, implementers and maintainers) – It is this group of stakeholders who will be the main source of resources to sustain the development and use of the Distro.
  • End users of systems and those who they seek to serve using the software – It is only by involving end users in an agile, user-centred design process that we can build systems that truly unlock the potential of digital technology – Too often the poor design of the tools that people are expected to use is a barrier to doing what’s important. In the context of health and care this means involving frontline clinicians, other health and care professionals, managers and administrators – Their needs are often not well understood by policy makers, senior management and IT departments. Most important of all, it means working with patients, service users and their informal carers, who are too often the victims of poor service resulting from poor design.
  • Academics and technologists who are able to educate the community with regard to those things they know that might enable the community to improve the Distro and/or the effectiveness of its deployment, and help the community critically evaluate its use. This might include ensuring that the community is aware of existing and emerging standards, technology and theoretical frameworks of potential value to the community.
  • Policy makers and senior management who need to understand how the Distro can be deployed to improve services and how such use can both shape and support policy.
  • A vibrant market of individuals and organisations who can provide a range of services to support the development, implementation and use of the system, as well as relevant add-on products and services. This market should ideally include individual consultants and contractors, SMEs, social enterprises and large global system integrators. It is vital for the health of the community that there is a competitive market in the products and services needed to improve, deploy and exploit the Distro, so that user organisations have a choice of who they contract to provide these services.

The Distro needs a custodian, owned and controlled by the community, who will promote, nurture and protect the Distro, provide mechanisms to encourage, manage and quality-control changes and improvements to it by the community, and commission the delivery of enhancements and other services on behalf of the community. The custodian needs to set and maintain source code and documentation standards and ensure that documentation is available of sufficient quality to enable a competent developer without prior knowledge of the product to work with the source code; ideally it should also be able to provide additional guidance and training to enable those who want to work with the software to do so as quickly as possible.

A key aim of the custodian is to try and keep the community together on a common Distro. Too often, short-term pragmatism results in changes to source code somewhere that break something somewhere else, creating a “fork” in the source code tree. While some limited forking might be healthy, if too many users “fork off” the benefits of Open Source are diminished. Avoiding this requires that the custodian provides support for people to make changes to meet their needs without breaking things important to others, in a rapid, agile and responsive way. Making changes in this way will still be slower, in terms of achieving immediate local priorities, than simply forking, but forking has damaging medium and long-term sequelae. The custodian has to close the gap between the two approaches and educate developers about the benefits of doing things for the longer term.

Additionally, the custodian has a role in providing assurance and warranties to users that deployments based on the Distro, supported by organisations accredited by the custodian, will be safe and secure to deploy in live health and care settings.

Enabling the custodian to deliver its responsibilities will require that it is funded by the community to do so. To facilitate this, the custodian is probably best constituted as a not-for-profit Community Interest Company (CIC) whose control is vested in the community, such that no single class of stakeholder can determine its actions.

If we can build effective communities, then the wider introduction of Open Source software in the NHS, as part of a mixed economy alongside proprietary products, will help drive better value and frontline user engagement and commitment across the board; just dumping source code under an open source licence (or, worse, some bowdlerised licence) will not.


The power of information and digital technology to transform the NHS

I believe passionately in the power of information and digital technology to transform the way we deliver health and care; indeed, I consider it essential if we are to meet the growing demands the health and care system faces within the resources likely to be available.

We need to mobilise information to help us redesign services and target the resources available most effectively. We need to use digital technology to deliver higher quality, more convenient services more efficiently, and we need to make information about how services perform transparent, so that the public, patients and health and care professionals can see how services are performing and how they can be improved.

However, we need to leave decisions about how local health and care communities realise this potential to them, measure their success in terms of the health outcomes and efficiencies they achieve, and avoid mandating particular approaches. While I think it is inevitable that the effective use of digital technology will lead to a reduction in reliance on paper and an increase in the use of electronic record systems, it is not true that a move away from paper towards electronic records will necessarily lead to an improvement in the quality of care; indeed, when the emphasis is on implementing particular systems rather than improving the processes of care, experience tells us that the opposite is more likely. Focusing on becoming paperless and implementing EPRs is a dangerous distraction which potentially provides local health communities with an excuse to fail at their core task of delivering higher quality care.

The primary focus needs to be on how we apply digital technologies to mobilise information and knowledge at the point of care to improve the experience and outcomes for patients and health and care professionals at the frontline, while an important secondary focus should be on how we use information and knowledge to design, target, evaluate and improve care.

We should have zero tolerance for systems that slow down or make tasks at the frontline more difficult (as is so often currently the case). Our expectation must be that good design can create systems that meet upstream information needs without additional frontline burdens.

The incremental upgrading of digital technology in line with the incremental redesign of care processes is more likely to bring about positive-only changes in care quality than radical big-bang implementations, which at best typically result in a substantial negative impact before any net positive benefit is achieved (which in health and care means avoidable death and suffering).

This requires a new approach from the health IT industry, but one that current technologies can deliver and which can be successfully built on top of the substantial, and in many places excellent, IT already in place. This approach will draw heavily on app and portal technology, open systems, open interfaces, open standards and data transparency. It will require the extension and opening up of existing systems and infrastructure to create an open health IT ecosystem, creating a mixed economy of open source and proprietary components. My experience with both the established health IT vendors and the rapidly growing app community convinces me they are more than up to the challenge.

The Centre does have a role to play in creating an environment in which local health and care communities are encouraged and enabled to embrace information-driven, digital ways of working, but it has to be careful to balance this against the risk of creating unintended consequences and sub-optimising behaviour in local health and care communities. The Centre needs to ensure that personal and organisational incentives are aligned with the need to deliver integrated, patient-centred services in ways that improve overall quality and drive down overall cost (which they currently are not), and it also has a role in creating the technical, cultural and commercial environment in which successful innovation can be translated into widespread adoption, creating a vibrant market.

In playing its part, the Centre needs to have a clear understanding of the history – in the NHS this history has many clear examples of both spectacular success and failure – and needs to engage with those who not only share its vision, but who also understand this history, what life is really like on the front line of the NHS, and the practical implementation challenges of achieving the vision.

A Paperless NHS

Given the usual scant regard for the commonly accepted meaning of words that seems to be the norm in the NHS, “paperless” is something that we now have in the majority of general practices, and that many have had for some years. I’m really enthusiastic about the desire from the centre to drag the rest of the NHS into the digital age, but am really concerned that the initiative is being driven by a leadership who are, to be frank, clueless.

Firstly, the focus is wrong – creating electronic records and/or removing paper, desirable as these may be, should not be the objective. The objective should be to use technology to support and coordinate the processes of care so that the patient sees an integrated service that delivers greater convenience and quality.

This inevitably means more digital services, and will result in the creation of electronic records and the removal of paper from many processes, but in a way where these changes support process improvement. This is what happened in general practice – processes were digitised one by one, and the data needed to support this digitisation was stored in electronic records. Over time these records became comprehensive and GP practices became “paperlite”. This approach gave quick wins and avoided the risk of suddenly trying to go to fully electronic records. See Lessons from GP Computing.

Secondly, we don’t have the infrastructure. Already nurses and junior doctors fight over ward terminals and COWs (Computers on Wheels) for access to IT systems, and when they finally wrest a keyboard from colleagues they typically find themselves using obsolete hardware, operating systems and browsers, accessing poorly integrated multiple systems over inadequate networks. We need to address the infrastructure issues before we can go paperless (or paperlite). Every health and care professional needs their own device, connected with ubiquitous LAN (WiFi) and WAN (3/4G) coverage across the whole NHS estate and out into the community for those that work there. A paperless NHS without the infrastructure to support it will be worse than the current paper-based system.

I’m all for putting pressure on NHS Trusts to embrace a digital future, and have some sympathy with the approach attributed to Richard Nixon: “When you have them by the short and curlies their hearts and minds will follow”. However, to secure the benefits that I believe are possible from the digitisation of the NHS we do have to win the hearts and minds of frontline staff, and I don’t think this will be achieved by exhortations to do the impossible, which will end up with ill-considered and poorly implemented EPR systems running on wholly inadequate infrastructure, damaging morale and undermining patient care.

Creating a truly digital NHS requires careful design involving the public, patients, health and care professionals and digital engineers working together to create digital services to deliver truly holistic care. It requires infrastructure that is fit for purpose and needs the support of a health IT ecosystem that ensures all of the components play nicely together. See the HANDI Vision and the work of OpenGPSoC for more about what this ecosystem might look like.

I’m a firm believer that it is only by using information and information systems in innovative ways, both to support the way we directly deliver services and through analytics to effectively target and evaluate what we do, that we can hope to meet the challenges that the NHS and healthcare systems across the globe face. Headline political targets like “a Paperless NHS” have their place in stimulating debate, but unless they are followed by meaningful action from those who have sound insight into how digitisation might transform the way we deliver care, and unless they involve the public and patients in the process, they are little more than a distraction.

The Commissioning Board should focus on the commissioning of care (both directly and through CCGs) and on making sure that personal and organisational incentives for all the actors in the system are aligned with the imperative to deliver better quality, more convenient services for less. They also need to ensure that the information flows required to support individual care and monitor the performance of the system as a whole are available, by making these the basis on which providers are paid for their services. I can’t see how providers can achieve the transformation required without embracing IT, widespread digitisation and social media, but hold them to account for their outputs, not how they achieve them. By setting ill-considered targets about a paperless NHS and EPRs the Commissioning Board is just giving providers excuses to fail at their core task.

These issues were discussed on the #CCIO tweetchat on Wed 20 Feb, 7-8pm. The best bits and the full transcript can be found here.

There are also themes here to pick up at the PHCSG UnConference on 6th June

NHS-Life Sciences Partnership

“The NHS should be “opened up” to private healthcare firms under plans which include sharing anonymous patient data, David Cameron is due to announce”
http://www.bbc.co.uk/news/uk-16026827

25 years ago I launched AAH Meditel. My plan was to give GPs free computers in return for anonymised patient data, which I planned to sell, primarily for life-sciences research. Today’s endorsement of this concept by Prime Minister David Cameron is therefore one that I welcome, but with some critical reservations.

AAH Meditel was successful in establishing a large database of over 5 million patient records, and one competitor, VAMP (now part of INPS), who launched at the same time, did something very similar. The commercial models didn’t work (we were too far ahead of our time in so many ways), but it is the process we started, later built upon by others (notably EMIS), that has provided the foundations on which today’s announcement is made.

Over the past 25 years I and others in the primary care informatics community have learnt a great deal about the issues associated with building a longitudinal “cradle – grave” record and in particular those that arise when you start to share it and use it for both primary and secondary purposes distant from those purposes in the minds of those who created the record.

The value of this record is created by the willingness of patients to divulge often sensitive information to healthcare professionals. They do this primarily to get the care they need, but we also know that when asked, the vast majority are happy for it to be used for other purposes, particularly medical research, as long as all practical steps to protect their privacy have been taken. David Cameron has made it clear that such steps will be taken, but I have little confidence that Government understands what is necessary and possible, or that the research community goes much beyond lip-service in its attempts to address these issues. It is clear to me that while the research community has no need or desire to compromise patient privacy, it has shown little willingness to take the problem seriously, and so risks creating a public backlash and, worse, undermining patient confidence in the doctor-patient relationship that lies at the heart of health care.

I want to see health data used to support the British life sciences industry, but more importantly I want to protect patients’ confidence in their relationship with those who provide their healthcare. I believe if we get it right we can have both, but to do so we have to protect certain key principles:

1. The use of patient data for research is a privilege that patients grant, not a right for researchers to take. Patients must be able to opt out; we know that very few will choose to do so, and by denying those who wish to the opportunity we create much unnecessary conflict.

2. It is not a simple matter to protect personal information, and even comprehensively anonymised data can often be easily re-identified. It is important that those concerned properly understand the risks and how privacy enhancing technologies can mitigate these risks if applied as part of an appropriate governance framework.

3. There must be an acknowledgement by the research community that their first duty is to respect the wishes of patients and the privacy of their data, not to their research.

4. That we recognise that while health data is a valuable resource, its fitness for purposes distant from those for which it was collected is not as great as some might believe. We have much work to do to understand and improve the quality of data (see my blogs http://wp.me/p1orc5-15 and http://wp.me/p1orc5-13 ).

The BCS Primary Health Care Group published a discussion paper in March this year which I think provides a good starting point http://www.phcsg.org/main/documents/PrivacyandConsent.pdf

BCS Health have a much longer document in preparation, “Fair Shares for All”, which should appear soon. This provides an extensive review of the issues, including a comprehensive review of patient attitudes, on which I draw in making some of my statements above.

Let’s make the most of the opportunity, but please, be careful out there. Privacy is a fundamental human right, and should not be treated as an inconvenience by those wishing to use patient data for purposes other than care.

Analytics – Whose data is it anyway?

There are a growing number of techniques which might be described by the term “health analytics”, which are able to use patient data (generally pseudonymised) for a range of valuable purposes that can help identify opportunities to deliver more appropriate, better quality and more cost-effective care. With the challenges healthcare faces, using information more intelligently is not optional – we need to do all we can to facilitate the development and application of better health analytics.

There are many governance issues associated with using data for these purposes, which are not the topic of this piece, but suffice it to say that there are real concerns, albeit concerns which can be addressed to ensure patients’ privacy and wishes are respected.

The application of analytics typically requires the extraction and linkage of data from more than one source, and this requires the cooperation of application designers and those organisations that host systems to facilitate access and the extraction of data. Designers and hosting companies (often one and the same) have some legitimate concerns with regard to risks to the integrity of their systems and the operational impact of data extraction, but I’m concerned that some are less cooperative than they might be, sometimes to the point of being obstructive, going well beyond what can be justified by their legitimate concerns. My particular experience is in primary care, where access to practice-hosted systems has generally been possible where the practice wishes it, but with the growth of hosted systems control seems to be shifting to system suppliers.

It seems to me that it is the customer (more specifically the customer’s Data Controller or Caldicott Guardian) who should be in control of who is allowed to extract data from systems, after satisfying themselves of the appropriateness of the data extract and that all patient privacy and any other governance issues have been appropriately addressed. Purchasers of IT systems should ensure that suppliers are contractually required to provide facilities to support approved extractions in a timely manner, but should understand that this may have an impact on the cost and/or service levels in a hosted environment. The basic facilities required should be no more than those any adequate system should provide as part of its standard reporting tools, but some of the requirements particular to analytics purposes (e.g. pseudonymisation, or the ability to run standard queries like HQL (MIQUEST, GPES)) might reasonably require additional facilities which might attract additional charges.

The requirements of health analytics are sometimes better met by third-party tools rather than the native reporting tools of individual systems, and purchasers of systems should ensure that APIs are available that will allow third-party tools to connect efficiently.

Many suppliers see commercial opportunities in the exploitation of data in the customer systems that they supply or host, and I have no problem with their exploiting such opportunities, subject to the following caveats:

• In general, patients should be the final arbiter of how their data is used for secondary purposes. They should be made aware of such uses and have an opportunity to object (as required by both the NHS Code of Confidentiality and GMC Guidance).

• The suppliers’ customers, not the suppliers, should be in full control of how data in their systems is used, and they are responsible for ensuring such use is appropriate, respects patients’ confidentiality and wishes, and meets other governance requirements.

• While suppliers may work with their customers to develop services based on secondary uses of data, they should not seek to restrict customers from working with any other party they may choose.

The actions of some suppliers in creating artificial technical barriers to data extraction (e.g. by imposing arbitrary limits on the number of records that can be extracted, or refusing to make available appropriate APIs to allow third parties to connect to their systems) are unacceptable, and customers should ensure that contracts exclude such anti-competitive behaviour.

Opening up information to health analysis and scrutiny by all those with an interest in doing so is central to Government policy and the key to identifying opportunities to deliver more appropriate, better quality and more cost-effective care. Subject always to respect for patients’ wishes and privacy, other barriers to access to information need to be swept aside.

(Declaration of interest: my company, Woodcote Consulting, has a number of clients whom we advise in relation to the extraction of data for analytic purposes.)

Secondary Uses of Data – A Poacher’s Tale

Early in my career in health informatics I had plans to make myself fabulously rich by selling pseudonymised patient data from GP systems for a range of secondary purposes. I managed to spend £15 million of my backers’ money giving away 1,000 GP systems and established a database of 6 million patient records; it all ended in tears (at least from the financial perspective) in 1992.

In those early days I had a naïve view of the extent to which pseudonymisation could protect patient privacy and the ease with which data could be used for secondary purposes, so in this world I am very much a poacher turned gamekeeper, but one that still believes in the massive benefits that could flow from intelligent secondary use of patient data.

In this blog piece I won’t dwell on issues of patient privacy; suffice it to say for now that I don’t now believe that pseudonymisation of rich datasets is fully effective, but I do believe that with a sophisticated approach we can adequately protect patient privacy when we use their data for secondary purposes. What I want to concentrate on here are the challenges of using data for secondary purposes that have nothing to do with the need to protect patient privacy.

There are two ways in which we might consider a use of data to be secondary: the first is that the use is not directly connected with the care of the individual patient whose data it is; the second is that it is a use not of direct concern to the person collecting the data. Here I want to concentrate on the second definition: uses with which the collector of the data is not concerned, and of which they may even be ignorant. (There are clearly some secondary uses in the sense of the first definition with which the data collector is very concerned – perhaps their own research interest.)

There are a number of issues that need to be considered when using data for secondary purposes.

• What were the primary purposes for which the data were collected and how do the requirements of these primary purposes fit with the proposed secondary uses?

• Is there a conflict over how something is best recorded for the secondary purposes? The requirements of the primary use should, and will, prevail.

• How aware is the data recorder of the secondary purpose? Awareness may encourage the recorder to take more care that the data is fit for the secondary purpose, or may result in a range of gaming activities when they have a motivation to “spin” the results of the secondary use, either to their own benefit or to that of the patient – e.g. blood pressure readings clustering just below the QOF cut-off point (see the sketch after this list).

• How important is accuracy in the recording of data to the recorder? This is a particular issue where users are forced to record data by system design or management pressure. If you have to record something but the accuracy of the record has no direct impact on you, then you may guess, make up data, or just type any old rubbish to get past a mandatory field for which you don’t have valid data. For example, a GP recording prescribing details will take great care to record the information accurately, as this will be used to produce the prescription and errors would create a serious patient risk, whereas they might be tempted to just guess to complete a mandatory dataset where they don’t see value in recording the data.

• Are definitions shared between the primary and secondary purpose, and between different recorders, and have they even been told what assumptions about definitions have been made? Researchers are typically much tighter than frontline data recorders: e.g. some clinicians will record a diagnosis of “asthma” on the basis of limited clinical findings, just because it is probably right, while others will want further confirmation and just record it as “wheezing”.

• System design and configuration can have a profound effect on what and how people record data and the extent to which they code data. Most work using data from multiple GP systems assumes data across different systems are directly compatible, when the evidence suggests this is often not the case – work by Professor Simon de Lusignan based on video observation of many consultations shows a four-fold difference between the major systems in the number of consultations with no coded data, and a two-fold difference in the average number of codes used http://bit.ly/kO5tgw . He also found that the way different systems manage picking lists had a significant effect on the data entered http://1.usa.gov/itbtc5 . Secondary uses have to take account of these system biases.
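
The blood pressure example above is easy to check for in an extract: if recorders are “spinning” readings to hit a target, values will heap just below the cut-off. A minimal sketch, assuming a systolic target of 150 mmHg (the exact QOF threshold varies by indicator and year) and an invented list of readings:

```python
# Minimal sketch of checking for "heaping" of recorded values just below a
# payment threshold. The 150 mmHg target and the readings are illustrative
# assumptions, not real QOF rules or data.
from collections import Counter

THRESHOLD = 150  # assumed systolic target (mmHg)

# Hypothetical recorded systolic readings extracted from GP systems.
readings = [128, 135, 142, 144, 146, 148, 148, 149, 149, 149, 151, 158, 162, 170]

counts = Counter(readings)
just_below = sum(c for v, c in counts.items() if THRESHOLD - 5 <= v < THRESHOLD)
just_above = sum(c for v, c in counts.items() if THRESHOLD <= v < THRESHOLD + 5)

print(f"Readings in the 5 mmHg band below the target: {just_below}")
print(f"Readings in the 5 mmHg band at or above the target: {just_above}")
# A large excess just below the target suggests recording to hit the target
# rather than measurement; a real analysis would test this formally.
```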

This brings us to van der Lei’s law, coined by the eponymous Dutch health informatician: “Data should not be used other than for the purposes for which it was collected.” While I would not take this extreme position (and I suspect van der Lei said it to emphasise the point, rather than to be taken literally), there are significant challenges in using data where the use is not one that was in the mind of the recorder when they recorded it.

There is a massive growth of interest in health analytics based on data extracted from GP systems. Data quality is adequate for many of these purposes, but not as good or consistent as some secondary users seem to assume. While there can be dangers in telling recorders about the secondary uses to which the data they enter will be put, in most cases these are greatly outweighed by the benefits of making recorders aware of those uses and trying to secure their cooperation in making sure that what they enter is fit for the purposes to which it will be put.

Users of data for secondary purposes, beware.

Beyond the Hawking Horizon

The idea that a single shared electronic health record (SSEHR), operating over a wide geography and serving many care settings and diverse professional groups, is a good one has some currency in the NHS. However, evidence seems to be growing that this approach does not lead to more effective care and communication, and brings new problems of its own.

My colleagues in the British Computer Society Primary Health Care Group (PHCSG) and I have been struggling to untangle the issues that flow from SSEHRs and have contributed to guidance on their use, intended to help achieve a better balance between the benefits and problems they bring. However, after much debate, I and those of my colleagues involved in this work have concluded that the SSEHR is a fundamentally flawed idea and one that we should not pursue further.

As always with our debates, we have struggled with the semantics of our discourse. What is a record? What is an EHR? What do we mean by an SSEHR, and what differentiates it from an EHR? So, first, some definitions: there are various terms in use for EHRs, with subtle differences in meaning that are not always agreed or understood – EHR, EMR, EPR, PHR and HER (the last created by the default auto-correct setting in MS Office). I’ve wasted too much of my life on these definitions, so I am going to call them all ExRs and let others botanise about them.

So what, then, do I mean by an SSEHR? Sadly, applying the common meaning to the name is misleading. It is single, in that it is the main record of prime entry and reference for those that use it (so it’s not a summary record or a consolidated record created from other records of prime entry). It’s shared, but then, with a few very limited exceptions, all records are shared (indeed the facilitation of sharing is one of a record’s main purposes); to meet our definition of an SSEHR, though, it has to be shared widely, both geographically and functionally – certainly beyond a single organisation or care setting, and also across diverse users. It is this degree of sharing that differentiates an SSEHR from other ExRs, and which is the root of its problems.

SSEHRs are shared beyond a single domain of trust, beyond a single homogenous record culture, and over too broad a scope for a single set of governance arrangements to be meaningfully applied, and it is this broad scope of use that is at the heart of the problems with the SSEHR. The first set of issues is around data security, privacy and consent; the second around record quality; and the third around innovation and choice. The first gets the most attention, but while important I don’t think these problems represent the biggest challenge for the SSEHR, so in this blog piece I’m going to concentrate on the second set of problems, around record quality. I shall come back to the other two sets of issues in a later blog.

I’ll pick up a more detailed discussion of the definition of record quality and the purposes of ExRs another time, but for now let’s just say that quality is about fitness for purpose, and that ExRs have a wide range of purposes. Even within a single organisation with a shared record culture and governance framework these purposes are not fully compatible, and the record needs to be a compromise between them which reflects the weight given to each by the users of the record. As the scope of sharing increases, the dissonance between the various purposes becomes greater, the extent to which all users understand the purposes of all other users reduces, and we reach a point where the utility of sharing starts to fall as the scope of sharing increases. I call this the Hawking Horizon, in acknowledgment of my friend and colleague Mary Hawking, who is responsible for so much of the best thinking about this problem.

Where the Hawking Horizon lies is open to debate, and its position can certainly be affected by the quality of system design, governance arrangements and user training, but it is clearly closer than the boundaries of many SSEHRs we are attempting to implement today. Probably, to keep within the Hawking Horizon, a record’s scope should not extend beyond a single service or domain of trust (i.e. a GP practice, hospital department or community service), and we should look to other mechanisms to share and communicate over the Hawking Horizon (other types of shared record, i.e. vertical and horizontal summaries, and purposeful clinical communication – more about these in a later blog).

What then are the practical problems that arise when we try and push the scope of a shared record beyond the Hawking Horizon? Firstly, we get conflicts of purpose, with users recording information in ways fit for their purpose but actively damaging to the purposes of other users. Some examples reported to the PHCSG include:

• The recording of a rogue high blood pressure, taken during an out-of-hours emergency, for a patient whose blood pressure is otherwise normal, undermining the QOF target for the GP.
• The use of the diagnostic label “stroke” for every encounter between a patient and physiotherapists for rehabilitation treatment following a single stroke, distorting incidence data.
• The referral management centre that recorded a hysterectomy, as this was the reason for referral, which, if not spotted, would have inappropriately excluded the patient from further cytology screening.

Secondly, we get irresolvable differences between users, with no governance arrangements in place to resolve them. Again, examples reported to the PHCSG include:

• The podiatrist who refused to remove a diagnosis of diabetes from a patient’s record where the GP had biochemistry results which proved conclusively that the patient was not an untreated diabetic, even though she had a leg ulcer that the podiatrist reasonably considered to be a classic diabetic leg ulcer.

• The GP and social worker who could not agree on a diagnosis of bipolar disorder, because the patient would not accept the diagnosis, which the social worker considered to be a social construct.

All of these issues are potentially resolvable through better system design, clear governance arrangements and better user training, but in practice they become irresolvable when the scope of the record gets too great; much better that each user shares their primary record only with those within their Hawking Horizon and uses other methods (described briefly above) to communicate beyond it.

When the record quality issues of an SSEHR are added to the security, privacy and consent issues associated with such records and considered alongside the ossifying effect they have on competition, choice and innovation, we really have to think again.

I shall return to this and associated issues in future blogs, and try to describe some alternative approaches that make it easier to get the better, more appropriate sharing of information and communication that can lead to better care.