Keith McCormick's Blog, page 6

June 19, 2013

My take on IBM’s new SPSS Analytic Catalyst

IBM has just released a new SPSS-branded product. I have numerous friends in the SPSS community, and I have been a frequent beta tester, but I didn't know in advance about this release. It does resemble something that I saw demonstrated at last year's IOD. What to make of this product? It is web-based and looks pretty slick: Analytic Catalyst. There is also a video on YouTube. I like the visuals, and I agree that it looks easy to use. I'm anxious to try it, and might recommend it in certain client situations.


The lion's share of a Data Mining project's labor is spent on Data Prep, and I've never been on a project that didn't need Data Prep, so I think a tool like this is most useful after a successful Data Mining project is complete. For instance, I worked like crazy on a recent churn project, but once it was done the marketing manager still had to explore high-churn segments to come up with intervention strategies. Analytic Catalyst could be used for that purpose, or perhaps 'repurposed' for it, since the video seems to indicate that it is meant for the early stages of a project.


My reaction, not a concern exactly, is to the premise. It seems to assume that the problem is business users tapping directly into Big Data to explore it, searching for 'insight'. I don't think most organizations need more insight. I think they need more deployed solutions – validated solutions that are inserted into the day-to-day running of the business. My two cents.

Published on June 19, 2013 11:29

May 20, 2013

New Role at QueBIT Consulting

As of today, I have joined QueBIT Consulting as VP and General Manager of the Advanced Analytics team. I will have the exciting task of building a world-class team of SPSS Experts. Joining the team with me will be Scott Mutchler of Big Sky Analytics.


Here is today's press release.

Published on May 20, 2013 10:19

December 5, 2011

Reflections on Statistical Non-Significance

Statistical hypothesis testing does an OK job of avoiding false claims that an effect is present, but it does a mediocre job (or worse) of disproving effects. There are a lot of reasons for this, poor training among them, but it is largely systemic. I spent my Thanksgiving morning watching the “Vanishing of the Bees,” and my mind kept drifting to thoughts of Type II error. I know. I can grasp the obvious … maybe I need a break.


I don’t have any biological expertise for evaluating, in detail, the research on either side of the fascinating Colony Collapse Disorder debate, but I am always suspicious of negative findings of any kind unless I can read the research. In the case of this documentary, the filmmakers claim (perhaps with some bias) that the pesticides were determined to be safe after researchers administered a fairly large dose to an adult bee and observed that it did not die during the research period. Was that enough? I can’t speak to the biology/ecology research, but it got me thinking about Type II.


We know well the magnitude of the risk we face in committing Type I error, and it is trained into us to the point of obsession. When I meet analysts wearing this obsession on their sleeve, reminding everyone who will listen, leveling their wrath on marketing researchers who dare to use exploratory techniques, I am often tempted to ask how they control for Type II. I am often underwhelmed by the reply. There are just so many things that can go wrong when you get a non-significant result. Although I wrote about something similar in my most recent post, I am compelled to reduce my thoughts to writing again:


1) The effect can be too small for the sample size. Ironically, the problem is usually the opposite: researchers often don’t have enough data even though the effect is reasonably big. In this case, I was persuaded by the documentary’s argument that bee “birth defects” would be a serious effect. Maybe short-term adult death was too blunt a measure, and a subtler effect would require more data (see the rough sketch after this list).


2) The effect can be delayed. My own work doesn’t involve bees, but what about the effect of marketing? Do we always know when a promotion will kick in? Are we still experiencing the effects of last quarter’s campaign? Does that cloud our ability to measure the current campaign? Might the effects overlap?


3) The effect could be hidden in an untested interaction (AKA your model is too simple). The bee documentary proposed an easy-to-grasp hypothesis – that the pesticide accumulates over time in the adult bee. Maybe a proximity * time interaction? We may never know, but was the sample size sufficient to test for interactions, or was the Power Analysis done assuming only main effects? Since they were studying bee autopsies, the sample size was probably small. I don’t know the going rate for a bee autopsy, but they are probably a bit expensive since the expertise would seem rare.


4) Or it’s hidden in a tested interaction (AKA your model is too complex). I had a traumatic experience years ago when a friend asked me what “negative degrees of freedom” were. Since she was not able to produce a satisfactory answer to a query regarding her hypothesized interactions, her dissertation committee required her to “do all of them”. Enough said. It was horrible.


5) The effect might simply be, and what could be more obvious, not hypothesized. This, we might agree, is the real issue regarding the adult bee death hypothesis. It may not have been the real problem at all.
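

To put rough numbers behind point 1, here is a minimal Python sketch (using statsmodels, with invented effect sizes standing in for a blunt effect and a subtle one) of how the required sample size grows as the effect you are chasing gets smaller:

# How many subjects per group does a two-sample t-test need at alpha = .05 and power = .80?
# The effect sizes (Cohen's d) below are invented purely for illustration.
from statsmodels.stats.power import TTestIndPower

power_calc = TTestIndPower()
for label, effect_size in [("blunt effect", 0.8), ("subtle effect", 0.2)]:
    n_per_group = power_calc.solve_power(effect_size=effect_size, alpha=0.05, power=0.80)
    print(f"{label} (d = {effect_size}): about {n_per_group:.0f} per group")
# Roughly 26 per group suffices for d = 0.8, but roughly 394 per group for d = 0.2.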


Statistics doesn’t help you find answers. Not really. It only helps you prove a hypothesis. When you are lucky, you might be able to disprove one. Often, we have to simply “fail to prove”. In any case, I recommend the documentary. Now that I’ve been able to vent a bit about Type II, I should watch it again and focus more of my attention on the bees.

Published on December 05, 2011 13:01

October 15, 2011

Your Statistical Result is Below .05: Now What?

When you get a statistical result, it is tempting to jump immediately to the conclusion that the finding “is statistically significant” or “is not statistically significant.” While that is literally true, since we use those words to describe results below .05 and above .05, it does not mean that there are only two conclusions to draw about our finding. Have we ruled out the possible ways that our statistical result might be tricking us?


Things to think about if it is below .05

Real: You might have a Real Finding on your hands. Congrats. Consider the other possibilities first, but then start thinking about who needs to know about your finding.


Small Effect: Your finding is Real, but it is of no practical consequence. Did you definitively prove a result with an effect so small that there is no real-world application of what you have found? Did you prove that a drug lowers cholesterol at the .001 level, but by an amount so small that no doctor or patient will care? Is your finding of a large enough magnitude to prompt action or to get attention?
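

To see how a trivial effect can still clear the .05 bar when the sample is huge, here is a minimal Python sketch on simulated data (the drug effect, units, and sample sizes are all invented for illustration):

# Simulated example: a tiny difference becomes "significant" with enough N.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
control = rng.normal(loc=200.0, scale=30.0, size=200_000)  # cholesterol in made-up units
treated = rng.normal(loc=199.5, scale=30.0, size=200_000)  # the drug "lowers" it by 0.5

t_stat, p_value = stats.ttest_ind(treated, control)
cohens_d = (treated.mean() - control.mean()) / 30.0
print(f"p = {p_value:.4g}, Cohen's d = {cohens_d:.3f}")
# p will typically come in far below .05, yet d is only about -0.017, an effect so small
# that no doctor or patient would notice it.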


Poor Sample: Your data does not represent the population, and there is nothing you can do about that at this point. Are you sure you have a good sample? Did you start with a ‘Sampling Frame’ that accurately reflects the population? What was your response rate on this particular variable? Would the finding hold up if you had more complete data? Have you checked whether respondent versus non-respondent status on this ‘significant’ variable is correlated with any other variable you have? Maybe you have a census, or you are Data Mining – are you sure you should be focused on p values at all?



Rare Event: You have encountered that 5% thing. It is going to happen. The good news is that we know how often it is going to happen. If you are like everyone else, you are probably operating at 95% confidence, and each test, by definition, has a 5% chance of coming in below .05 from random forces alone. So you have a dozen findings – which ones are real? Was choosing 95% confidence a deliberate and thoughtful decision? Have you ensured that Type I error will be rare? If you have a modest sample size, did you choose a level of confidence that gave you enough Statistical Power (see below)? If you are doing lots of tests (perhaps Multiple Comparisons), did you take this into account, or did you use 95% confidence out of habit?
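

To make the 'dozen findings' arithmetic concrete, here is a minimal Python sketch, pure arithmetic on no real data, of the chance of at least one false positive across several independent tests at 95% confidence, along with the simple Bonferroni adjustment:

# Chance of at least one false positive among k independent tests at alpha = .05,
# plus the Bonferroni-adjusted per-test threshold. Purely illustrative arithmetic.
alpha = 0.05
for k in (1, 12, 100):
    familywise = 1 - (1 - alpha) ** k
    print(f"{k:>3} tests: P(at least one p < .05 by chance) = {familywise:.2f}, "
          f"Bonferroni threshold = {alpha / k:.4f}")
# With a dozen tests there is roughly a 46% chance of at least one spurious "significant" result.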


Too Liberal: You have violated an assumption which has made your result Liberal: your p value only appears to be below .05. For instance, did you use the usual Pearson Chi-Square when the Continuity Correction would have been better? Maybe Pearson was .045, Likelihood Ratio was .049, and Continuity Correction was .051. Did you choose wisely? Did you use an Independent Samples T-Test when a non-parametric test would have been better? Having good Stats books around can help, because they will often tell you that a particular assumption violation tends to produce Liberal results. You could always consider a Monte Carlo simulation or an Exact Test and make this problem go away. (An interesting ponderable: are we within a generation of abandoning distributional assumptions altogether as ordinarily outfitted computers get more powerful?)
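

Here is a minimal Python sketch of that kind of borderline situation, using scipy and a 2x2 table of invented counts, where the Pearson Chi-Square, its continuity-corrected version, and an exact test land on different sides of .05:

# Invented 2x2 counts chosen to sit right on the .05 boundary.
from scipy.stats import chi2_contingency, fisher_exact

table = [[20, 30],
         [30, 20]]

pearson_p = chi2_contingency(table, correction=False)[1]  # Pearson Chi-Square
yates_p = chi2_contingency(table, correction=True)[1]     # with continuity correction
exact_p = fisher_exact(table)[1]                          # Fisher's Exact Test

print(f"Pearson:              p = {pearson_p:.3f}")  # about .046 -- below .05
print(f"Continuity corrected: p = {yates_p:.3f}")    # about .072 -- above .05
print(f"Exact:                p = {exact_p:.3f}")    # also above .05 for these counts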


Things to think about if it is above .05


Negative Finding: You might have disproven your hypothesis. (I know that, strictly speaking, you have merely ‘failed to reject’ your ‘Null Hypothesis’, but does anyone talk that way outside of a classroom?) Congrats might be in order. Consider the other possibilities and then start thinking about who needs to know about your negative finding. If it is the real thing, a negative finding can be valuable. Be careful, however, before you shout that the literature was wrong. Make sure it is a bona fide finding.


Power: You may simply have lacked enough data. Did you do a Power Analysis before you began? Was your sample size commensurate with your number of Independent Variables? Did you begin with a reasonable amount of data, but then attempt every interaction term under the sun? Did you thoughtlessly include effects like 5-way interactions without measuring the impact they had on your ability to detect true effects? If you aren’t sure what a Power Analysis is, it is best that you describe your negative results using phrases like “We failed to prove X”, not “We were able to prove that the claim of X, believed to be true for years, was disproved by our study (N=17)”. You can also Google Jacob Cohen’s wonderful “Things I Have Learned (So Far)” to learn more about Power Analysis. I mention it in my Resources section, and it has influenced my thinking for years. Its influence is certainly present in this post.
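

For a sense of what a study like that hypothetical N=17 example could actually detect, here is a minimal Python sketch (statsmodels again, with invented effect sizes) of the achieved power of a small two-group comparison:

# Achieved power for a small two-group study (about 17 subjects total, so roughly 9 per group).
# Effect sizes are invented for illustration.
from statsmodels.stats.power import TTestIndPower

power_calc = TTestIndPower()
for label, effect_size in [("medium effect", 0.5), ("large effect", 0.8)]:
    achieved = power_calc.solve_power(effect_size=effect_size, nobs1=9, alpha=0.05, power=None)
    print(f"{label} (d = {effect_size}): power = {achieved:.2f}")
# With so little data even a large effect has well under a 50/50 chance of being detected,
# so a non-significant result "fails to prove"; it does not disprove.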


Poor Sample: Your data is not representative of the population. This one can get your p value to move, incorrectly, in either direction.


Too Conservative: You have violated an assumption which has made your result Conservative: your p value only appears to be above .05. Did you use an adjusted test in an instance when no adjustment was needed? Did you use Scheffe for Multiple Comparisons, but aren’t quite sure how to justify your choice? Most assumption violations make our tests lean Liberal, coming in too low, but the opposite can occur.


 


This list has served me well for a long time. Always best to report your findings thoughtfully. Statistics, at first, seems like a system of Rule Following. It is more subtle than that. It is about extracting meaning, and then persuading an audience with data. Without an audience, there would be no point. They deserve to know how certain (or uncertain) we are.

Published on October 15, 2011 13:49

September 25, 2011

Seminar Series in Kuala Lumpur, Malaysia

I will be speaking in Kuala Lumpur, Malaysia next week on the subject of Data Mining. I will be discussing Data Mining in general, and then participants will get a chance to try it using the resources provided by the excellent tool-neutral Elder, Miner, and Nisbet book. I believe the event is at capacity, but there are already tentative plans to try this format again in January 2012, also in Kuala Lumpur. The event organizer is in charge of the details, but if you are interested in finding out more about the January four-day event, please email me.

Published on September 25, 2011 05:46

September 24, 2011

Essential Elements of Data Mining

This is my attempt to clarify what Data Mining is and what it isn’t. According to Wikipedia, “In philosophy, essentialism is the view that, for any specific kind of entity, there is a set of characteristics or properties all of which any entity of that kind must possess.” I do not seek the Platonic form of Data Mining, but I do seek clarity where it is often lacking. There is much confusion surrounding how Data Mining is distinct from related areas like Statistics and Business Intelligence. My primary goal is to clarify the characteristics that a project must have to be a Data Mining project. By implication, Statistical Analysis (hypothesis testing), Business Intelligence reporting, Exploratory Data Analysis, etc., do not have all of these defining properties. They are highly valuable, but have their own unique characteristics. I have come up with ten. It is quite appropriate to emphasize the first and the last. They are the bookends of the list, and they capture the heart of the matter.


1) A Question
2) History
3) A Flat File
4) Computers
5) Knowledge of the Domain
6) A lot of Time
7) Nothing to Prove
8) Proof that you are Right
9) Surprise
10) Something to Gain


1) A Question: Data Mining is not an unfocused search for anything interesting. It is a method for answering a specific question, meeting a particular need. Getting new customers is not the same as keeping the customers you already have. Of course, they are similar, but different in both big and subtle ways. The bottom line is that every decision that you make about the data that you select and assemble flows from the business question.


2) History: Data Mining is not primarily about the present tense, which contrasts it with Business Intelligence reporting. It is about using the past to predict the future. How far into the past? Well, if your customers sign a 12-month contract, then the history you need probably goes back more than 12 months. It must be old enough to include a cohort of customers who have started and ended a process that is ongoing. Did they renew? Did they churn? You need a group of records for which the outcome of the process is known historically. This outcome status usually takes the form of a Target or Dependent Variable. It is the cornerstone of the data set that one must create, and it is the key to virtually all Data Mining projects.



3) A Flat File: Data Miners are not in the Dark Ages. They work with relational databases on a daily basis, but the algorithms that are used are designed to run on flat files. Software vendors are proud to tout “in-database modeling,” and it is exciting for its speed, but you still have to build a flat file that has all of your records and characteristics in one table. The Data Miner and author Gordon Linoff calls this a “customer signature.” I rather prefer the idea of a customer “footprint,” as it always involves an accumulation of facts over time. The resulting flat file will be unique to the project, specifically built to allow the particular questions of the Data Mining project to be answered. (A rough sketch after this list illustrates the idea.)


4) Computers: Data Mining data sets are not always huge. Sometimes they are in the low thousands, and sometimes a carefully selected sample of a few percent of your data is plenty to find patterns. So, despite all the talk of Big Data, the size of the data file is not really a limiting factor on today’s machines. Statistics software packages were capable of running a plain vanilla regression on larger data sets decades ago. The real thing that separates Data Mining from R. A. Fisher and his barley data set is that Data Mining algorithms are highly iterative. Considerable computing power is needed to find the best predictors and try them in all possible combinations using a myriad of different strategies. Data Mining is not simply Statistics on Big Data. Data Mining algorithms were created in an era of abundant computing to solve the problems of that era. They are qualitatively different from traditional statistical techniques in fundamental and important ways, and even when traditional techniques are used, they are used in the service of substantively different purposes.


5) Knowledge of the Domain: A sales rep once told me a story, probably apocryphal, about the early days of the Data Mining software I use. A banking client wanted to put them to the test, so the client said: “Here are some unlabeled variables. We are going to keep the meaning of them secret. Tell us which are the best predictors of variable X. If you answer ‘correctly’, we will buy.” What a horrible idea! The Data Mining algorithms play an important role in guiding the model building process, but only the human partner in the process can be the final arbiter of what best meets the need of the business problem. There must be business context, and if the nature of the data requires it, that context might involve Doctors, Engineers, Call Center Managers, Insurance Auditors or a host of other specialists.


6) A Lot of Time: Data Mining projects take time, a lot of time. They take many weeks, and perhaps quite a few months. If someone asks a Data Miner if they can have something preliminary in a week, they are thinking about something other than Data Mining. Maybe they really mean generating a report, but they don’t mean Data Mining. Problem definition takes time because it involves a lot of people, assembled together, hashing out priorities, figuring out who is in charge of what. With this collaboration, the project lead can’t easily make up lost time by burning the midnight oil. Data Preparation takes much of the time. Perhaps you assume that you will be Mining the unaltered contents of your Data Warehouse. It was created to support BI Reporting, not to support Data Mining, so that is not going to happen. Finally, when you’ve got something interesting, you have to reconvene a lot of people again, and you aren’t done until you have deployed something, making it part of the decision management engines of the business. (See Element 10.)


7) Nothing to Prove: If you are verifying an outcome, certain that you are right, having carefully chosen predictors in advance, and simply curious how well the model fits, you aren’t doing Data Mining. Perhaps you are merely exploring the data in advance, biding your time, waiting until your deadline approaches, and then using hypothesis testing to congratulate yourself on how well your model fits data that you have already explored. This is, of course, the worst possible combination of Statistics and Data Mining imaginable, and it violates the most basic assumptions of hypothesis testing. Neither of these approaches is Data Mining.


8) Proof that you are Right: Data Mining, by its very nature, does not have a priori hypotheses, but it does need proof. A contradiction? The most fundamental requirement of Data Mining is that the same data which was used to uncover a pattern must never be used to prove that the pattern applies to future data. The standard way of ensuring this is to divide one’s data randomly into two portions, building the model on the Train data set and verifying it on the Test data set. In this is found the essence of Data Mining, because it gives one the freedom to explore the Train data set, uncovering its mysteries, while awaiting the eventual judgment of the Test data set. (The sketch after this list includes a simple Train/Test split.)


9) Surprise: A common mistake in Data Mining is being too frugal with predictors, leaving out this or that variable because “everyone knows” that it is not a key driver. Not wise. Even if that is true, it discounts the insight that an unanticipated interaction might provide, and it is a needless precaution because Data Mining algorithms are designed to be resilient to large numbers of related predictors. This is not to say that feature selection is not important – it is a key skill – but rather that Data Miners must be cautious when removing variables. Each of those variables costs the business money to record, and the insights they might offer have monetary value as well. Doing variable reduction well in Data Mining is in striking contrast with doing variable reduction well in Statistics.


10) Something to Gain: It might be somewhat controversial, but I think not overly so, to establish an equivalence: Data Mining equals Deployment. Without deployment, you may have done something valuable, perhaps even accompanied by demonstrable ROI, but you have fallen short. You may have reached a milestone. You may even have met the specific requirements of your assignment, but it isn’t really Data Mining until it is deployed. The whole idea of Data Mining is taking a carefully crafted snapshot, a chunk of history, establishing a set of Best Practices, and inserting them into the flow of Decision Making of the business.
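

To make Elements 2, 3, and 8 a bit more concrete, here is a minimal Python sketch, with an invented transaction table and made-up column names, of building a flat-file “customer signature” with a historical churn Target and then splitting it randomly into Train and Test portions. It is only an illustration of the shape of the work, not a prescription:

# Illustrative sketch: transactions -> one-row-per-customer flat file -> Train/Test split.
# All column names and values are invented.
import pandas as pd

transactions = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 2, 3, 4, 4],
    "amount":      [20.0, 35.0, 15.0, 10.0, 25.0, 80.0, 5.0, 12.0],
    "month":       [1, 3, 1, 2, 3, 2, 1, 3],
})
# Historical outcomes, known because these customers' contracts have already run their course.
outcomes = pd.DataFrame({"customer_id": [1, 2, 3, 4],
                         "churned":     [0, 1, 0, 1]})

# Build the flat file: one row per customer, columns accumulating facts over time.
signature = (transactions
             .groupby("customer_id")
             .agg(total_spend=("amount", "sum"),
                  n_purchases=("amount", "count"),
                  last_month=("month", "max"))
             .reset_index()
             .merge(outcomes, on="customer_id"))  # attach the Target variable

# Random Train/Test split: the Test rows never inform model building.
train = signature.sample(frac=0.7, random_state=42)
test = signature.drop(train.index)
print(signature)
print(f"Train: {len(train)} rows, Test: {len(test)} rows")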


The issue of clarifying what Data Mining is (and what to call it) comes up often in conversation among Data Miners, so I hope the community of data analysts will find this a worthy enterprise. I intend to present this list to new Data Miners when I meet them in a tool-neutral setting. Please do provide your feedback. Would you add to the list? Do you think there are any properties listed here that are not required to call a project Data Mining?

Published on September 24, 2011 17:47