Archive for December 2011

When we started doing the Big Five personality traits in our groups, I felt a little odd being one of the few yelling them all out 😛  So I promised my group I’d share a great BBC documentary, “Child of Our Time – The Big Personality Test”, that I learned so much from years ago!  Whilst lying ill in bed, hoping that the cysts on my ribcage don’t mutate and absorb my body, I decided to track down the videos in the meantime (honestly, this is the reason why I wasn’t in today!  Doctor’s orders not to aggravate them and to move as little as possible until they go down – score!)  There are two episodes, and, in my opinion, they’re really interesting!  (Dr. Robert Winston was my psychology inspiration whilst growing up!  What a sad fact!)

So here’s the list:

Episode 1 – Part 1/6

Episode 1 – Part 2/6

Episode 1 – Part 3/6

Episode 1 – Part 4/6

Episode 1 – Part 5/6

Episode 1 – Part 6/6

Episode 2 – Part 1/6

Episode 2 – Part 2/6

Episode 2 – Part 3/6

Episode 2 – Part 4/6

Episode 2 – Part 5/6

Episode 2 – Part 6/6

Hope you guys enjoy them as much as I did 🙂 They helped me remember a lot about personality traits (particularly with development!)

Also A BIG THANK YOU TO THANDI!  Who has been an amazing TA!  I hope she sticks with us – it’s sad to hear she could be moving groups, but I wish her all the best!

Have a good Christmas!


1) “I am scientific!!” – nirapsy

2) No Ambiguities – tlf23

3) Are Qualitative Methods Always Subjective? – psychology invaders

4) Is it dishonest to remove outliers and/or transform data? – bannee diu

5) Qualitative research isn’t as scientific as quantative methods – jessywessywoooo



Final blog of the semester!

Likert or not, here I come! 😀


There has always been debate as to whether it is best to use a Likert or forced-choice scale when creating a survey/questionnaire and gathering data (Bartlett, 1960).  But what are the differences between these scales that make them such an arguable topic within quantitative data collection?

With a Likert scale questionnaire, we get a wide variety of options, most often something like: ‘Do you agree with this statement?’

  • Strongly Agree
  • Agree
  • Neither Agree nor Disagree
  • Disagree
  • Strongly Disagree

and then we get to select an option…

But with a forced scale, the number of answer choices is even, essentially missing out the “Neither Agree nor Disagree” option so that we are forced to select an answer (see what they did with the name there!  Very clever!).  So we get a set of answer choices that looks a lot like this for the same question, ‘Do you agree with this statement?’:

  • Strongly Agree
  • Agree
  • Disagree
  • Strongly Disagree

Essentially, using a forced option means cutting out the middle man – the “maybe”s, the “not sure”s and the “don’t know”s – so that a definitive answer can be collected.  This method of collecting results is also known as an ‘ipsative’ measure, which is more Google-friendly to look up, but let’s just stick with the word ‘forced’ for now…
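For anyone curious what the two scales look like once the answers are coded up for analysis, here’s a minimal sketch (Python, with made-up responses – the 1–5 and 1–4 numeric codes are just a common coding convention, not from any particular source):

```python
# Minimal sketch: coding Likert vs forced-choice responses numerically.
# The five-point Likert scale keeps a neutral midpoint; the forced
# (ipsative) scale simply has no middle code to pick.

LIKERT = {"Strongly Disagree": 1, "Disagree": 2,
          "Neither Agree nor Disagree": 3, "Agree": 4, "Strongly Agree": 5}
FORCED = {"Strongly Disagree": 1, "Disagree": 2,
          "Agree": 3, "Strongly Agree": 4}

responses = ["Agree", "Neither Agree nor Disagree", "Strongly Agree"]

likert_scores = [LIKERT[r] for r in responses]                 # [4, 3, 5]
# On the forced scale the neutral response cannot be recorded at all:
forced_scores = [FORCED[r] for r in responses if r in FORCED]  # [3, 4]
```

Notice the neutral answer just vanishes from the forced version – that’s the whole point (and the whole controversy).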

Johnson et al. (1988) claim that using forced-choice testing is unethical, purely because the participant is stripped of free will and forced to assume an option (or a second-best option), which in itself introduces inaccuracy into the results.  If participants are simply given a “yes” or “no” answer, this could generate the possibility of ‘static’ results.  But gaining a definitive answer is what most researchers look for; what good is a “maybe” when we want to discover a “yes” or a “no”?  In essence, Parker et al. (2002) go as far as to say that “maybe” answers are (in some cases) just as useful as outliers when we are looking for definitive answers in research, and that casting away these “middle” choices is, in some cases, better for the research as a whole.  But of course, this option also walks a fine line when it comes to generating ethically correct data, and really puts the reliability at risk.
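To illustrate that “cast away the middle choices” idea, here’s a hypothetical sketch (Python, invented data) that treats neutral answers like outliers – dropping them before summarising – so only the definitive responses count:

```python
# Hypothetical sketch: discard neutral answers, summarise the rest.

raw = ["Agree", "Neither Agree nor Disagree", "Disagree",
       "Agree", "Neither Agree nor Disagree", "Strongly Agree"]

# Keep only the definitive (non-neutral) responses:
definitive = [r for r in raw if r != "Neither Agree nor Disagree"]
agreeing = sum(r in ("Agree", "Strongly Agree") for r in definitive)

print(f"{agreeing}/{len(definitive)} definitive answers agree")
# prints "3/4 definitive answers agree"
```

Whether throwing away a third of your participants’ answers like this is good practice is, of course, exactly the ethical question above!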

According to Martin et al.’s (1995) study on faking personality questionnaires, however, forced options are much more empirically sound, as they stop participants from faking or exaggerating their scores.  Participants may also “shy away” from the extreme ends of rating scales (particularly females), and this may affect the overall scores.  But being given options such as “I like to go out and party” or “I’d rather stay in and read a book” means a participant places themselves in an either-or category, relating the most likely option to themselves, rather than placing themselves lower or higher than deserved on a trait-rating scale.

Ultimately, although there is debate over using ‘forced’ choice questionnaires, Likert scales must also be carefully written, as when participants are given multiple choices, the way the answers are worded can also have an impact on the answers given.  I, however, would much rather have a definitive, to-the-point answer of “this” or “that” to collect my data, so I’m thinking forced-choice answers are definitely something I will be looking into when compiling any questionnaires in the future!  There is no clinically ‘right’ or ‘wrong’ method, as long as surveyors bear in mind the positive and negative impacts of using either scale to collect data, so that the reliability (and honesty) of the research is little affected!

Happy surveying!