This study examines two cohorts of students sitting a mid-semester test using the Blackboard Multiple Choice Computer Aided Assessment (CAA) package, but with vastly different content to be assessed. One test used the simple Multiple Choice Question (MCQ) format of a stem followed by four short text options, whilst the other used the MCQ format to present complex SQL scripts as options, requiring students to select the script that achieved the outcome stated in the stem. In total, 465 students completed a questionnaire as part of the standard subject evaluation for the two subjects. The questions were designed to gauge their opinion of the testing procedure and to highlight their concerns. The results were statistically analyzed using the Chi-squared test for significant differences between the two student cohorts, producing some interesting results. We conclude that a CAA package should be well matched to the content being assessed; in this study there was a clear mismatch between the content being tested and the CAA type chosen for the task. The study also highlights that prior student exposure to the CAA is integral to the success of the exercise. In addition, we observe that students generally accept CAAs as a reliable, time-efficient and trustworthy assessment mechanism, which encourages the use of CAAs. We also observed some interesting responses regarding gender, but have not attempted to draw any conclusions in this area at this stage.
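The Chi-squared analysis mentioned above can be sketched as follows. This is a minimal illustration only: the response counts below are hypothetical placeholders, not data from the study, and it assumes a standard chi-squared test of independence on a cohort-by-response contingency table (here via SciPy's `chi2_contingency`).

```python
# Illustrative sketch of a chi-squared test of independence comparing two
# student cohorts' questionnaire responses. The counts are HYPOTHETICAL,
# not taken from the study reported in this paper.
from scipy.stats import chi2_contingency

# Rows: cohorts (simple-MCQ test, SQL-script MCQ test).
# Columns: counts of "agree" / "neutral" / "disagree" for one question.
observed = [
    [120, 40, 30],
    [90, 60, 75],
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")

# Decide significance at the conventional 5% level.
if p < 0.05:
    print("Significant difference between the two cohorts")
else:
    print("No significant difference between the two cohorts")
```

For a 2x3 table the test has (2-1) x (3-1) = 2 degrees of freedom; a p-value below the chosen threshold indicates the response distribution differs between cohorts.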