
How Do I Know If My Medical Communications Program Has Been Successful?

So how do you mitigate the risk of something going wrong? I would suggest getting guidance or support from those who have run similar programs, be they internal or external. In many cases they will be able to reduce these risks to near zero.

If you choose to go with a consultancy, I would expect them to have people who know what makes a good program, because they will have measured and evaluated plenty of them; what’s more, they should be able to provide examples of how they have used program evaluation to avoid or overcome potential issues.

Having run a number of these programs, we have often used the evaluation process to better understand how a program is being received: what’s working and what’s not.

To give you an example: I worked on an RACGP-accredited education program for a number of years, and we encountered and overcame just about every medical communication challenge you could imagine.

To assess the program and determine whether we were meeting our learning objectives, we evaluated every dinner meeting and every workshop. It was also important to gather feedback that would help us improve what we were doing and inform the development of the next phase.

Our initial priority was to evaluate the dinner meetings so we could compare one meeting to another, and the results in one state to those in another. If a meeting scored outside what we considered an acceptable margin of error, we would investigate and, where we could, address any identified problem. This proved particularly valuable as the meetings progressed from the eastern seaboard to the other side of the country.
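To make the idea of an acceptable margin concrete, here is a minimal sketch of the kind of flagging rule involved, assuming a simple mean-and-spread check; the cities, ratings and tolerance below are hypothetical, not figures from the actual program:

```python
# Hypothetical sketch: flag any meeting whose mean attendee rating falls
# outside an acceptable margin around the program-wide average.
from statistics import mean, stdev

# Illustrative mean ratings (1-5 scale) per dinner meeting.
meeting_scores = {
    "Sydney": 4.4,
    "Melbourne": 4.2,
    "Brisbane": 4.5,
    "Adelaide": 4.3,
    "Perth": 3.1,  # the kind of outlier that prompts investigation
}

scores = list(meeting_scores.values())
avg, spread = mean(scores), stdev(scores)
tolerance = 1.5 * spread  # an assumed margin; each program would set its own

for city, score in meeting_scores.items():
    if abs(score - avg) > tolerance:
        print(f"{city} scored {score:.1f}, outside {avg:.1f} +/- {tolerance:.1f}: investigate")
```

A real evaluation would track presenter, venue and content scores separately rather than a single mean, but the principle of comparing each meeting against a program-wide baseline is the same.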

One meeting in particular scored poorly and fell well outside our margin of error. Was it the content, the presenter, the technology, the venue, or something else? 

Closer scrutiny of the anonymous evaluation forms did not reveal any obvious issues. When we checked with the program sponsor, however, it became apparent that a confluence of circumstances had led to a less than optimal outcome.

First, the weather was miserable that night, resulting in a lower-than-anticipated turnout; second, the presenter was probably not on their game; and third, a couple of the attendees marked the program harshly. The relatively low turnout amplified the unfavourable feedback. Further scrutiny turned up local field intelligence suggesting there might have been some competitive tension between a couple of the attendees and the presenter.
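To see why a low turnout amplifies a couple of harsh scores, consider this illustrative arithmetic (the attendee numbers and ratings are made up, not from the program):

```python
# Illustrative only: the same two harsh ratings (1 out of 5) drag the mean
# much further at a sparsely attended meeting than at a well-attended one.
from statistics import mean

small_meeting = [4] * 8 + [1, 1]   # 10 attendees, two harsh scores
large_meeting = [4] * 38 + [1, 1]  # 40 attendees, the same two harsh scores

print(f"Small meeting mean: {mean(small_meeting):.2f}")  # 3.40
print(f"Large meeting mean: {mean(large_meeting):.2f}")  # 3.85
```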

The process proved quite valuable because we could explain the distortion as a ‘geo-political’ anomaly rather than a content problem. There are some things you can never control, and that is fine, so long as you have a way to evaluate and identify them.

Notwithstanding this local anomaly, the program continued as planned, because the ongoing positive feedback constantly reaffirmed our confidence in its ability to deliver quality education. By the time we hit the tipping point, we had a good handle on the variables we had to watch.

It is probably worth taking a minute to cover off the tipping point, because it is important to the success of a program. The tipping point generally occurs when the positive effects of an education program lead to exponential uptake or engagement. 

The most effective way to bring the tipping point closer is to ensure you are meeting, and preferably exceeding, the expectations of those attending. Measurement is therefore fundamental to confirming progress towards the tipping point.