Friday, February 7, 2014

My Criticisms of the Big 5, and an Alternative Dynamic Systems Paradigm



 (Disclaimer: this started as a really long Facebook post I made during an argument, and it is still a work in progress.  I just want to get some of the ideas out there.  This is basically a summary of my objections to the Big 5 and an alternative Dynamic Systems Theory model.)

This is an article I reference just to show an example of the common criticisms of the MBTI.

OK, here is what I think.  First, from what I understand, all of the studies that have been used to check the validity of the Myers-Briggs do not use the theory correctly.  As JT Cove points out, the author of this article does not use the theory even remotely correctly, and every academic discussion I have seen does the same thing.  They ignore the role of the cognitive functions and instead use the simplistic idea that MBTI models personality as what I can best describe as a Cartesian, 4-dimensional "pseudo-binary" space.  If you follow that line of thinking (which is incorrect), you would expect bimodal distributions for E-I, S-N, F-T, and J-P.  That is not what the data shows, though: the data always comes out unimodal, with a mean in the middle.  The problem is that MBTI theory does not model personality in a Cartesian sense at all.  The mathematical framework that I currently think best matches Jungian theory is something more like a Markov chain or a probabilistic graphical network, where the states in the model are the cognitive functions.  If nothing else, I think some kind of dynamic system model is much more appropriate for modeling human personality.  In fact, one of the main strengths of MBTI over the Big 5, in my opinion, is that it takes into account that all people have thinking, feeling, and so on; they are just used in different preference orders (sort of).  The author of the article totally misses this.  The Big 5 just places people at a static point on a continuum.

To the best of my knowledge, no psychologist in academia has ever tried to fit any kind of dynamic system model to personality, outside of perhaps some work with infants.  Dario Nardi at UCLA has also mentioned dynamic systems models, but I can find no peer-reviewed papers of his on the subject.  If humans really are better modeled as dynamic systems, I do not know what distributions these models would produce when projected onto the Cartesian questionnaires used to check the validity of psychological tests.  It is hard to say whether the scores should come out bimodal or unimodal.  I could actually make a hand-wavy central-limit-theorem argument to explain why the data keeps coming out unimodal: personality is pretty complicated, and a score that aggregates many small contributions could easily pile up in the middle.  That hunch is not totally unreasonable, but I really have no idea; it would need to be studied more deeply.  This leads me to my criticisms of the Big 5.
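Before I get to those criticisms, here is a toy sketch (in Python with numpy) of what I mean by a dynamic, function-based model.  Everything in it is invented for illustration: the eight-function state space comes from Jungian theory, but the bias and stickiness numbers and the "T minus F usage" score are just placeholders, not a validated model.  The point is only that a population of little Markov chains, each with its own preferred functions, can still produce a single-humped score distribution when you project it onto a questionnaire-style dichotomy.

```python
# A toy illustration, not a validated model: each "person" is a sticky Markov
# chain over the eight cognitive functions, biased toward two favored functions.
# The bias and stickiness values are made up.  We then project each person onto
# a questionnaire-style T-F dichotomy score and look at the population histogram.
import numpy as np

FUNCTIONS = ["Ni", "Ne", "Si", "Se", "Ti", "Te", "Fi", "Fe"]

def simulate_tf_score(rng, favored, n_steps=200, bias=2.0, stickiness=3.0):
    """Walk a sticky chain biased toward `favored`; return a T-minus-F usage score."""
    base = np.ones(len(FUNCTIONS))
    for f in favored:
        base[FUNCTIONS.index(f)] *= bias          # favored functions are visited more often
    state = rng.integers(len(FUNCTIONS))
    counts = np.zeros(len(FUNCTIONS))
    for _ in range(n_steps):
        weights = base.copy()
        weights[state] *= stickiness              # tendency to keep using the current function
        state = rng.choice(len(FUNCTIONS), p=weights / weights.sum())
        counts[state] += 1
    t = counts[FUNCTIONS.index("Ti")] + counts[FUNCTIONS.index("Te")]
    f = counts[FUNCTIONS.index("Fi")] + counts[FUNCTIONS.index("Fe")]
    return (t - f) / n_steps

rng = np.random.default_rng(0)
scores = []
for _ in range(800):                              # a mixed population of simulated people
    favored = [str(f) for f in rng.choice(FUNCTIONS, size=2, replace=False)]
    scores.append(simulate_tf_score(rng, favored))

hist, edges = np.histogram(scores, bins=15)
print(hist)   # a single central hump, not two separated clumps, in this toy setup
```

So a unimodal histogram, by itself, does not rule out a type-like dynamic model underneath; it just tells you what the Cartesian projection of that model looks like.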
First, we need to take a step back and look at what the Big 5 actually is.  The Big 5 is really a data-driven, static, 5-dimensional Cartesian model of human personality.  From what I can tell, psychology researchers basically took a bunch of surveys, performed factor analysis on them, got out five main factors, and declared that a good way to model human personality.  The fact that it is purely data-driven means no theory was ever used to guide the investigation, other than perhaps the implicit idea that humans can be broken into independent factors.  This makes it arguable whether it is even "scientific," because there is no hypothesis or model you are testing against; it is more like a modified form of an observation.  This criticism is common, and I think the Big 5 article on Wikipedia also mentions it.  My main question, though, is: why in the world does anyone think a static Cartesian model is a good way to capture human personality?  What about humans is static?  Why did they do this?  I will give my theory.  Psychology is often considered a "soft" science, and as a result I think the academic field has a sort of "inferiority complex."  In order to maintain credibility, researchers try to tie everything to data as much as possible.  That is reasonable enough, but I see at least one problem in the implementation.  Psychology education appears to have chosen statistical factor analysis as its weapon of choice for data analysis, and researchers use it for pretty much everything.  The problem is that in doing so they assume everything can be modeled in a static Cartesian space, which is kind of ridiculous.  Many important phenomena are much better described by other types of models, dynamic system models for instance (my current preference for modeling humans).  I am pretty sure psychology researchers do not have the same kind of mathematical maturity that an engineering or computer-science researcher has.  My background was full of dynamic system models, and techniques do exist for fitting data to them.  I think psychology is in a mode where researchers really like factor analysis and either are not aware of other types of models or just do not want to bother with them.  They seem caught in a case of "when your only tool is a hammer, the whole world becomes a bunch of nails."
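To make the procedure I am criticizing concrete, here is a minimal sketch of the factor-analysis pipeline as I understand it.  The survey answers here are randomly generated, so the extracted factors mean nothing, and the item and respondent counts are placeholders; the point is the shape of the output.

```python
# A minimal sketch of the data-driven pipeline, not a reproduction of any actual
# Big 5 study.  The "survey answers" are random 1-5 Likert responses, so the
# factors extracted here are meaningless; the point is only what kind of object
# the procedure hands back: one static 5-D coordinate per person.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
n_respondents, n_items = 500, 40
answers = rng.integers(1, 6, size=(n_respondents, n_items)).astype(float)

fa = FactorAnalysis(n_components=5, random_state=0)
scores = fa.fit_transform(answers)    # (500, 5): each person as a point in "trait space"
loadings = fa.components_             # (5, 40): how strongly each item loads on each factor

print(scores.shape, loadings.shape)
```

Nothing in that pipeline ever asks whether a static coordinate is the right kind of object to describe a person in the first place, which is exactly my complaint.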

One quick mention on the tests.  I do not think the current survey-based tests are very good; they are not really repeatable or reliable.  I am currently exploring the possibility of using different types of tests to measure personality.  I am wondering, for instance, whether tracking the kinds of moves people make in certain games might provide a better measure.  Fe might be measurable by looking at how much a person's vital signs change when they are exposed to certain images.  I am also curious whether S and N people remember details from scenes differently; that might be a measurable difference.  I think better testing techniques need to be developed before progress can be made.
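The repeatability complaint is easy to state precisely.  Here is a sketch, using entirely made-up numbers, of the kind of check I mean: score the same people twice on one dichotomy and see how often the binary label flips.  With a unimodal score distribution and a cut point in the middle, people near the middle flip labels easily, which is part of why survey-based type assignments look unreliable.

```python
# Made-up numbers, just to illustrate the repeatability check: two noisy
# measurements of the same hypothetical E-I score, then test-retest correlation
# and how often the binary E/I label comes out the same both times.
import numpy as np

rng = np.random.default_rng(2)
true_scores = rng.normal(0.0, 1.0, size=500)          # hypothetical "true" E-I scores
session_1 = true_scores + rng.normal(0.0, 0.6, 500)   # measurement noise, first sitting
session_2 = true_scores + rng.normal(0.0, 0.6, 500)   # measurement noise, second sitting

retest_r = np.corrcoef(session_1, session_2)[0, 1]
label_agreement = np.mean((session_1 > 0) == (session_2 > 0))
print(f"test-retest correlation: {retest_r:.2f}")
print(f"same E/I label both sittings: {label_agreement:.0%}")
```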
I believe my thoughts on this could actually be used to build much more life-like machines.  I think many algorithms you see in computer science map to cognitive functions.  For instance, search algorithms seem sort of like Ne.  Design of experiments is like Se.  Simulations are like Ni.  Pattern recognition is like Si.  POMDPs seem a lot like Te.  Hierarchical deep learning and PCA are sort of like aspects of Ti.  Neural networks seem somewhat similar to Fe (you can make fast decisions on complicated data, but you really do not know how you did it).  Fe is in some ways like using your own system as an analog computer to calculate results about external events.  I think you could build a machine with current algorithms by arranging their use according to some kind of dynamic system model, such as a Markov chain.  I have seen some cutting-edge personality engineering work for robotics, and in my opinion it is pretty crude.  Engineering researchers are blindly adopting the Big 5 as well, and I think it is to their detriment.  Engineering is not constrained, as science is, by concepts such as searching for "truth"; it just has to work.  I think I could make a better human-like machine using MBTI theory, and I am laying down a theoretical framework for this in my spare time.
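I have only sketched this architecture on paper, but here is roughly the shape of it in code.  Everything is hypothetical: the module names, the mapping of modules to functions, and the transition weights (loosely meant to mimic an Ni-dominant, Te-auxiliary preference) are invented for illustration.  The idea is simply a controller that switches between algorithmic modes according to a Markov chain rather than running one monolithic algorithm.

```python
# A hypothetical sketch: a controller that switches between algorithmic "modes"
# according to a Markov chain.  Module names, the function mapping, and the
# transition weights are all invented for illustration.
import numpy as np

MODES = ["Ni_simulate", "Te_plan", "Fi_evaluate", "Se_probe"]

# Rows: current mode.  Columns: probability of switching to each mode next.
TRANSITIONS = np.array([
    [0.50, 0.30, 0.10, 0.10],   # from Ni_simulate
    [0.40, 0.35, 0.10, 0.15],   # from Te_plan
    [0.45, 0.25, 0.20, 0.10],   # from Fi_evaluate
    [0.50, 0.30, 0.10, 0.10],   # from Se_probe
])

def run_controller(n_steps=10, seed=0):
    """Sample a sequence of modes from the chain; a real system would dispatch
    each step to the corresponding module (a simulator, a planner, a learned
    evaluator, a sensor routine)."""
    rng = np.random.default_rng(seed)
    mode = 0                                   # start in the dominant mode
    trace = []
    for _ in range(n_steps):
        trace.append(MODES[mode])
        mode = rng.choice(len(MODES), p=TRANSITIONS[mode])
    return trace

print(run_controller())
```

In a real system the transition weights themselves could be learned or made context-dependent, which is where the dynamic-system framing would actually earn its keep.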

At the end of the day, MBTI is just a model like anything else; the pertinent question is how useful it is.  I know, however, that a person is going to have a hard time convincing me that there is no such phenomenon as the "INTJ" or "Fe."  I think, if nothing else, Jung was really onto something.  It may not be perfect, but there is something there worth taking a look at.  A dynamic systems approach may be the paradigm shift the personality research field needs to advance.
