How does the review process work?
The Review Process is set out here. If you would like a more detailed explanation of the process, please contact us and we will be in touch to discuss any specific requirements or queries you have.
Who does your Reviews?
Our Reviewers are recruited and trained by ORCHA from a wide array of backgrounds and roles. They are not necessarily experts in any particular field of App assessment, but they rapidly become expert at interrogating Apps to answer the questions posed by our Review Development Team, who are all experts in the relevant areas such as clinical, technical, regulation, user experience and design.
The Review Development Team are responsible for the questions our Reviewers are looking to answer. They set out very clearly what evidence the Reviewers need to find to answer a question affirmatively, and they also determine the consequences of each answer in terms of the positive or negative points awarded, which drives our scoring process.
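To illustrate the points-based approach described above, here is a minimal sketch in Python. The question names and point values are entirely hypothetical; ORCHA's actual questions and weightings are not public.

```python
# Illustrative sketch only: the question names and point values below are
# hypothetical, not ORCHA's actual review criteria or weightings.

def score_review(answers, questions):
    """Sum points: positive where evidence was found, negative where not."""
    total = 0
    for name, points in questions.items():
        if answers.get(name):
            total += points  # evidence found: award positive points
        else:
            total -= points  # evidence missing: apply the penalty
    return total

# Hypothetical questions with their point values
questions = {"privacy_policy_present": 10, "ce_mark_if_device": 15}
# A Reviewer's findings for one App
answers = {"privacy_policy_present": True, "ce_mark_if_device": False}

print(score_review(answers, questions))  # 10 - 15 = -5
```

In practice each answer would feed a percentage score per assessment area, but the principle is the same: clearly specified evidence drives clearly specified point consequences.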
Our Reviewers are guided through each review by our online review tool, which ensures that all the relevant questions are investigated. Where Reviewers encounter scenarios that the Review Development Team haven't specified, they refer these back to the relevant member of the Review Development Team for guidance.
This combination of highly trained, dedicated Reviewers and the wider expert group in the Review Development Team means that we can review large numbers of Apps in a timely and cost-effective manner.
If you’re interested in becoming an ORCHA Reviewer, please contact us here.
How detailed is your Review?
Our Review aims to provide useful indications of whether an App appears to be safe and of value across a range of areas such as Data and Security and Clinical Assurance. It involves an evaluation of more than 130 elements of an App and validates crucial information such as compliance with relevant Data and Privacy regulations and with key Medical Device regulations (our process is currently being updated to reflect the new regulations), and it also assesses an App against a range of other standards and best practice. It is not, however, the same as a full user test of an App, and it does not validate all the claims made by App Developers, many of which would require detailed clinical and other testing and, in many cases, extensive user testing and trials.
Outside of a small number of national accreditation schemes, there is no other extensive review process that covers as many elements of an App’s delivery as ours. One such process is delivered in the English NHS via NHS England/Digital, whose assessment process covers many of the elements we do but also undertakes a more detailed analysis of other elements that goes beyond our current process (albeit we also operate – but currently don’t publish – a secondary review stage that is aligned to these additional stages). We do, however, capture and note all the Apps that have achieved an ‘approval’ through this process and the few other equivalent national processes in operation.
Whilst these national schemes aspire to increase the number of Apps they look at, at present they have reviewed only a tiny fraction of the Apps available and a small percentage of the number of Apps we have assessed. This means that the level of coverage and choice these approaches offer is currently quite limited.
For some Apps there is a further level of scrutiny if they are classified as Medical Devices, although how much additional scrutiny they are subjected to depends heavily on the classification they reach. Class I medical devices, for example, are largely assessed through a self-certification approach.
Even Apps that have undertaken and published clinical trials or equivalent studies and have been subjected to relevant peer review, whilst clearly showing a great commitment to establishing their clinical credentials and efficacy, may not properly comply with other crucial aspects of an App’s overall delivery, such as data security or user configuration, which we also evaluate.
Against this backdrop, we believe our process is as robust as it is possible to be without undertaking a much more detailed and costly evaluation that would limit the range and scope of Apps we are able to consider. We also believe, and constantly monitor and test our process to confirm, that the combination of measures we consider gives a very accurate view of an App, one which usually aligns with the view that other more detailed assessments deliver and which is, as such, a reliable overall indicator of an App’s quality.
Why doesn’t ORCHA have medical experts looking at each App?
Our review process looks for proof points across a wide spectrum of App characteristics. From a clinical evaluation perspective, it would be very difficult and costly to undertake detailed analysis of the clinical accuracy and effectiveness of all Apps given the vast array of clinical and health related conditions and issues Apps now support. Even for suitably expert clinicians the evaluation of many elements of an App would require more than a simple ‘read through’ or user test. This type of evaluation is therefore not practical for a ‘baseline’ review of the hundreds of thousands of Apps that are out there.
We therefore use indicators of clinical quality as a proxy for this level of detailed analysis, to gauge whether an App appears to have the key ingredients of clinical safety and effectiveness. These include an investigation of whether there is a suitably qualified individual or body behind the App, consideration of whether an App should be and is treated as a Medical Device with appropriate CE marking, and an assessment of the evidence base that the Developer has presented to support any efficacy claims.
These easily and practically assessable elements provide, in most cases, a good indication of whether an App can be considered clinically safe and sound. The approach is not 100% accurate, and there are many instances where an App has scored poorly in this section simply because the Developer has not published this detail. We have not yet come across the reverse scenario, where a Developer has scored well in this section and subsequently been found to have significant clinical safety issues or concerns, but as with all but the most stringent and resource-intensive assessment processes, this could theoretically happen.
Does the Review take account of other analysis of an App that exposes issues or concerns?
It isn’t practical for our review to monitor all other analysis of an App in all media. Our review has to rely on solid and easily identifiable data points to ensure that it is fair and equitable to all. If concerns about an App are raised publicly, it isn’t necessarily clear that these are fair, and they often relate to aspects of an App that are subject to regulatory oversight by relevant bodies such as the MHRA, the Information Commissioner or the CQC. It is not for ORCHA to form a view, and to reflect that view in our score, ahead of appropriate investigation by these authorities. Where ORCHA identifies issues or concerns through our review, such as a lack of CE marking or GDPR compliance issues, we will, after appropriate notifications have been issued to the relevant Developer, inform those bodies of these concerns through the relevant processes.
How does ORCHA analyse medical device compliance?
ORCHA is not a Notified Body for the purposes of the Medical Device Regulations. Our review process therefore uses the guidance provided by the MHRA to assess whether an App falls under the definition of a medical device. Where we consider it does and no CE mark is present, we highlight this to the relevant Developer and seek further information from them. If the Developer doesn’t engage with us or is unable to provide satisfactory additional information, such as evidence from a Notified Body that the App has been assessed as not being a medical device, we inform the Developer that we will notify the MHRA of our concerns. In the forthcoming new version of our Review, which takes account of the new Medical Device Regulations, these issues are also flagged prominently in the body of the ORCHA Review and the ORCHA score is heavily reduced to reflect this compliance risk.
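The escalation flow described above can be sketched as a simple decision function. This is purely illustrative: the function name, inputs and action strings are assumptions made for the example, not ORCHA's actual tooling.

```python
# Hypothetical sketch of the escalation flow described above; the inputs and
# action strings are illustrative assumptions, not ORCHA's actual process code.

def medical_device_escalation(looks_like_device, has_ce_mark, evidence_ok):
    """Return the escalation steps for an App that may be an unmarked device.

    looks_like_device: App appears to meet the MHRA definition of a device.
    has_ce_mark:       a CE mark is present on the App.
    evidence_ok:       the Developer provided satisfactory evidence
                       (e.g. a Notified Body assessment that it is not a device).
    """
    actions = []
    if looks_like_device and not has_ce_mark:
        actions.append("highlight to Developer and seek further information")
        if not evidence_ok:
            actions.append("inform Developer that MHRA will be notified")
            actions.append("flag Review prominently and reduce ORCHA score")
    return actions

# A compliant App (CE marked) triggers no escalation:
print(medical_device_escalation(True, True, True))   # []
# An unmarked apparent device with no satisfactory evidence escalates fully:
print(medical_device_escalation(True, False, False))
```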
It should be noted, however, that with the introduction of the new Medical Device Regulations, many more health Apps are being captured and classified as medical devices. The health App market is struggling to catch up with these changes, and the significant capacity issues in the Notified Bodies who support this space mean that considerable non-compliance issues are likely to arise, capturing the good and the bad alike. A balance needs to be struck between flagging and suitably penalising non-compliance and the wider impact on public perception of this nascent but hugely valuable emerging therapeutic sector.
How do you assess directly whether an App works?
Our review doesn’t test all of the claims an App Developer might make about what the App can or might do. Whilst for some Apps this wouldn’t be a particularly onerous task, for others it would involve a huge amount of testing and trials to validate all aspects of safety and effectiveness.
For some Apps that fall within the definition of medical devices, this is something they will potentially undertake as part of their certification as a medical device, where, dependent on their classification, they will need to provide evidence to a Notified Body of the App’s compliance with specified standards.
Our Reviews aim to rapidly assess an App’s overall compliance with regulation, standards and best practice to provide a proxy view of whether it is something to engage with or not. High levels of compliance do tend to indicate a higher level of quality in the end product, and vice versa.
Why do you publish App Reviews with very low scores?
Our review process is designed to help inform people who are looking for Apps about the issues, challenges and benefits an App offers. It uses an assessment of an App’s compliance with relevant regulations, standards and best practice to evaluate the App based on publicly available information on the face of the App, the relevant App Store and any associated Developer website.
It is, in our view, as crucial to highlight Apps that have issues and challenges as it is to showcase Apps that are, on the face of it, highly compliant and offer great value. In most cases, very low scoring Apps do not appear on the first page of search results on our sites, as search results are always ordered highest scoring first. Extensive user testing and engagement shows that users fully appreciate what a ‘good score’ looks like and rarely search beyond the top 3-5 Apps in a given search area or consider Apps with scores below 65%.
Removing low scoring Apps from our platforms would take away users’ ability to discover the flaws and challenges of a given App and would increase, not decrease, the risk that they engage with an unsuitable App. Similarly, our health care professional community often report that patients present findings from Apps they have been using in support of self-diagnosis. Clinicians find the ability to search for that App on our site useful in this scenario, and where the App in question has achieved a low score, this information allows the professional to discuss it with the patient and ultimately direct them to a more reliable or suitable App for their needs.
How does your Review link to other Review or Assessment Processes such as those in the UK NHS?
Our Review process is independent of any other process or accreditation approach but does align and dovetail with many of them. In the English NHS, for example, our review aligns to many aspects of the emerging Digital Assessment Questionnaire process and, in combination with our ‘Advanced Review’ process (which is due to go live shortly), covers all of the relevant evaluation areas. Our review can therefore give Developers a very good view of their potential compliance with the standards expected within the English NHS.
We also capture, as part of our review, whether an App has been ‘approved’ through schemes such as the NHS scheme or others that are emerging around the world, and these ‘accreditations’ will shortly be shown in our reviews and available as a ‘filter’ option.
As new schemes and approaches emerge, we will continue to update our process to reflect these and track Apps that have achieved any new ‘accreditations’.
How do I query or flag a concern about a Review?
If you are a Developer, you can log in to your account and follow the process outlined there.
We collate end user opinions through our ‘Leave a Review’ questionnaire (note: this requires you to have downloaded the App via our sites), which has been designed to ensure that this essentially subjective data can be developed into statistically relevant and robust user experience data.
If you wish to flag an inaccuracy or specific concern outside of these processes, please contact us at firstname.lastname@example.org and we will investigate as soon as possible.
How do you keep your Review up to date?
We monitor all the Apps available on the iTunes platform and Google Play on a weekly basis, tracking both new Apps that have come to market and updates and new versions of Apps that are already available. Where we detect an update or new version of an App that we have already reviewed, we automatically flag the existing Review with a notification that it relates to a prior version, and we put the new version of the App back into our Review Queue to be reassessed. It is important to rely only on Reviews that relate to current versions of an App, as updates and new versions can materially change an App’s performance, risks and issues.
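The weekly monitoring workflow described above can be sketched as follows. The data shapes and field names here are assumptions made for illustration; they do not reflect ORCHA's actual systems or the App Stores' real APIs.

```python
# Hypothetical sketch of the weekly update check described above; the data
# shapes and identifiers are illustrative assumptions, not ORCHA's systems.

def check_for_updates(store_listings, reviewed_versions, review_queue):
    """Compare current store listings against reviewed versions.

    store_listings:    {app_id: current_version} as seen in the App Stores.
    reviewed_versions: {app_id: version} for Apps we have already reviewed.
    review_queue:      list of (app_id, version) awaiting (re)assessment.
    """
    for app_id, version in store_listings.items():
        reviewed = reviewed_versions.get(app_id)
        if reviewed is None or reviewed != version:
            # New App, or a version newer than the one reviewed: queue it.
            # (An existing Review would also be flagged as relating to a
            # prior version at this point.)
            review_queue.append((app_id, version))
    return review_queue

listings = {"app.one": "2.1", "app.two": "1.0"}   # this week's store scan
reviewed = {"app.one": "2.0"}                     # versions already reviewed
print(check_for_updates(listings, reviewed, []))
# [('app.one', '2.1'), ('app.two', '1.0')]
```

Here `app.one` is re-queued because version 2.1 superseded the reviewed 2.0, while `app.two` is queued as a newly discovered App.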