Part of the introductory series Python for Vision Researchers brought to you by the GestaltReVision group (KU Leuven, Belgium).
In this part we introduce an advanced package, psychopy_ext, that helps you tie together the entire research cycle. It is based on the following paper:
Kubilius, J. (2014). A framework for streamlining research workflow in neuroscience and psychology. Frontiers in Neuroinformatics, 7, 52. doi:10.3389/fninf.2013.00052
Author: Jonas Kubilius
Year: 2014
Copyright: Public Domain as in CC0 (except for figures that, technically speaking, need an attribution as in CC BY because they are part of the publication mentioned above)
So far we've discussed how to code experiments. But research is more than just making an experiment! You have to analyze data, possibly compare them to simulated data, present them at conferences and publish them in journals. You should also organize and verify your scripts. Ultimately, the goal is to have your entire project completely reproducible, such that anybody could start from scratch and redo your experiments, regenerate your figures, posters, and papers, and directly build on your work -- this is how knowledge accumulates, and that is the whole Open Science concept that has been taking over academia in recent years.
I even made a figure to illustrate that:
So that is what you might want to do. But this is what you and I do instead:
This is not reproducible at all. We're too often relying on:
I do not blame researchers for relying on all these ad-hoc solutions. While in theory it would be nice to code everything from A to Z, in practice we don't have the time to play these silly games. Why we don't have the time is a topic for a separate discussion, but why we have to play these silly games is a problem of software. Simply put, we lack the tools that would seamlessly enact good coding and sharing standards. We need tools that act clever.
Technology rarely has this quality, unfortunately. If you don't agree, you have obviously never tried to explain to a newbie how to run a Python script: "OK, now run it. I mean, open the command line... it's in... um... OK, click on the Start button, type 'cmd', hit Enter. OK, now navigate to where the script is. OK, open Windows Explorer and get the path...". This is not clever -- this is developers not caring. A smartphone that my grandfather cannot figure out is not clever -- it's pretentious.
There is a reason why people stick to spreadsheets -- they're simple, intuitive, and the data is right there, as opposed to being available only when you run your analysis script (Bret Victor's point). They're still stupid, of course -- have you ever tried making figures look nice in Excel? -- and we'd rather have tools that:
psychopy_ext
Let's build something better, something that would:
Please give a warm welcome to psychopy_ext, a package that has these aims in mind, though it probably does not live up to them quite yet. Psychopy_ext is nothing but a collection of wrapper scripts to a number of useful packages:
Then psychopy_ext is for you!
Let's go through a simple demo to understand what it gives you.
Note that in this demo we import fix so that the code could run from the notebook. In real life you don't do that, and Exp1 inherits from exp.Experiment.
from psychopy import visual
from psychopy_ext import exp
from collections import OrderedDict
import scripts.computer as computer
PATHS = exp.set_paths('trivial', computer)
class Exp1(exp.Experiment):
"""
Instructions (in reST format)
=============================
Press **spacebar** to start.
**Hit 'j'** to advance to the next trial, *Left-Shift + Esc* to exit.
"""
def __init__(self,
name='exp',
info=OrderedDict([('subjid', 'quick_'),
('session', 1),
]),
rp=None,
actions='run'
):
super(Exp1, self).__init__(name=name, info=info,
rp=rp, actions=actions,
paths=PATHS, computer=computer)
# user-defined parameters
self.ntrials = 8
self.stimsize = 2 # in deg
def create_stimuli(self):
"""Define your stimuli here, store them in self.s
"""
self.create_fixation()
self.s = {}
        self.s['fix'] = self.fixation
self.s['stim'] = visual.GratingStim(self.win, mask='gauss',
size=self.stimsize)
def create_trial(self):
"""Define trial composition
"""
self.trial = [exp.Event(self,
dur=.200, # in seconds
display=[self.s['stim'], self.s['fix']],
func=self.idle_event),
exp.Event(self,
dur=0,
display=self.s['fix'],
func=self.wait_until_response)
]
def create_exp_plan(self):
"""Put together trials
"""
exp_plan = []
for trialno in range(self.ntrials):
exp_plan.append(OrderedDict([
('trialno', trialno),
('onset', ''), # empty ones will be filled up
('dur', ''), # during runtime
('corr_resp', 1),
('subj_resp', ''),
('accuracy', ''),
('rt', ''),
]))
self.exp_plan = exp_plan
if __name__ == "__main__":
Exp1(rp={'no_output':True, 'debug':True}).run()
Oopsies, that's complex! Let me parse that for you step-by-step:
Here's a pic to illustrate that (focus on the class Experiment for now):
1. The computer module is imported; this is where parameters of your computer (screen size etc.) are defined. Feel free to edit it.
2. The experiment is a class that inherits from the Experiment template in psychopy_ext. You then only have to define or redefine methods that are not in that template. For example, looping through trials is in there, so if you're happy with it, you don't have to write it again.
3. Basic parameters are defined in __init__(). You can provide instructions on how to run the experiment just above this method; if you format them using the reST syntax, as done in the example, they will render as nicer-looking instructions.
4. Stimuli are defined in create_stimuli(). A fancy fixation spot is available from psychopy_ext.
5. Trial structure is defined in create_trial(). Each trial is composed of a series of Events that have a particular duration, stimuli that need to be displayed, and a particular function describing what to do (e.g., how to present stimuli).
6. The experimental plan is defined in create_exp_plan() as a list of dict entries. Importantly, all the fields you provide here are written to the output file, and this is the only information that is written out. You can see that some fields, like accuracy, are empty; they are filled in as the experiment progresses.

And that is all you need to create a full experiment. OK, but where is run()? It's in the Experiment template, so you don't have to do anything extra.
It may seem that you could have easily written a similar experiment using the same old PsychoPy but don't underestimate how many things are happening behind the scenes:
And that's only the beginning!
Let's make sure you understand how classes work. What is the output of the following code?
def myfunc():
print 'stuff'
class Output(object):
def __init__(self):
print 'init'
def run(self):
print 'run'
How about this one?
class Output(object):
def __init__(self):
print 'init'
def run(self):
print 'run'
Output()
And this?
class Output(object):
def __init__(self):
print 'init'
def go(self):
print 'go'
def run(self):
print 'run'
class Child(Output):
def run(self):
print 'child'
Child().run()
The best way to learn how to use psychopy_ext is to build your own experiment based on the demo above (or on the more complex demos that come with the package). So let us reenact the Change Blindness Experiment from Part 2 using the psychopy_ext framework. It may be a good idea to keep both notebooks open, as we are going to mostly copy/paste code.
The first thing, as usual, is to import all relevant modules. But note that since psychopy_ext extends PsychoPy, we don't have to import most of PsychoPy's modules as in Part 2.
import os
import numpy.random as rnd # for random number generators
from psychopy import visual, core, event
from psychopy_ext import exp
from collections import OrderedDict
import computer
PATHS = exp.set_paths('.', computer) # '.' means that the root directory for saving output is here
PATHS['images'] = 'images'
All modules should be more or less familiar, except the mysterious computer. Well, that's the user-defined module where settings specific to your computers are defined (example settings are here). This is super handy when you have several machines with different setups (e.g., one in your office, another in the testing room, and yet another at home for those of us who have no life).
Also note that we set the paths where all output files are supposed to be saved. This is done to help you organize your project better. Since we set up paths here, it also makes sense to define the path to the images folder here too. (See the example below or check the default paths here.)
Next we define the ChangeDet class with its properties. This class is derived from exp.Experiment which, in turn, is nothing but the same old TrialHandler. Thus we ought to pass the relevant parameters here, as we do with order='sequential'. (Other options that __init__ takes are explained in the documentation.)
The idea of __init__ is to define all (I mean, all) parameters here so that you can easily find and change them later.
There are several kinds of parameters you can define:

- info: parameters that you want a user to be able to change on the go, e.g., participant ID
- rp: parameters controlling the behavior of the program that you want a user to be able to change on the go, e.g., whether to save output or not
- self.var_name: variables prefixed with self are shared within the ChangeDet class -- you can access them from any other function in that class.

info and rp are in fact used in a GUI similar to the dialog box we used before (but more elaborate). You don't have to create the GUI yourself -- it all happens automatically, and we'll demonstrate that later.
This is also where we define the keys used to respond, via self.computer.valid_responses, in the format {'key name': correct or incorrect response}. By default, Shift+Esc is used for escape and spacebar to advance from instructions to testing, so here we only need to define what counts as a correct response to advance to the next trial. Since everything in this experiment is "correct", we set {'space': 1}.
Note that we're defining instructions right at the top here. That serves a twofold purpose. On the one hand, it is natural to explain the experiment that the rest of the code enacts. On the other hand, this is also the docstring that is encouraged as good programming practice, so you're documenting your code at the same time. Trying to act clever here!
We're also omitting writing the date string to the output file because psychopy_ext creates a log file (you'll see it later) with all this information and more.
Given all this information, psychopy_ext also automatically knows how to create output files and place them in a convenient location. So a large chunk of code is not necessary anymore.
class ChangeDet(exp.Experiment):
"""
Change Detection Experiment
===========================
In this experiment you will see photographs flickering with a tiny detail in them changing.
    Your task is to detect where the change is occurring.
To make it harder, there are bubbles randomly covering the part of the photos.
Hit **spacebar to begin**. When you detect a change, hit **spacebar** again.
"""
def __init__(self,
name='exp',
info=OrderedDict([('exp_name', 'Change Detection'),
('subjid', 'cd_'),
('gender', ('male', 'female')),
('age', 18),
('left-handed', False)
]),
rp=None,
actions='run',
order='sequential'
):
super(ChangeDet, self).__init__(name=name, info=info,
rp=rp, actions=actions,
paths=PATHS, computer=computer)
# user-defined parameters
self.imlist = ['1','2','3','4','5','6'] # image names without the suffixes
self.asfx = 'a.jpg' # suffix for the first image
self.bsfx = 'b.jpg' # suffix for the second image
self.scrsize = (900, 600) # screen size in px
self.stimsize = (9, 6) # stimulus size in degrees visual angle
self.timelimit = 30 # sec
self.n_bubbles = 40
self.changetime = .500 #sec
self.computer.valid_responses = {'space': 1}
self.trial_instr = ('Press spacebar to start the trial.\n\n'
'Hit spacebar again when you detect a change.')
The window is usually created automatically for us, but in this particular case we want to be able to define its size, so we have to override the window creation routine with our custom function. This example is also useful for seeing how to change the default behavior of psychopy_ext.
def create_win(self, *args, **kwargs):
super(ChangeDet, self).create_win(size=self.scrsize, units='deg',
*args, **kwargs)
This should be straightforward by now, except that all stimuli are kept in a dict called self.s. Moreover, the window is defined in terms of degrees of visual angle, so stimuli implicitly use these units too.
def create_stimuli(self):
"""Define your stimuli here, store them in self.s
"""
self.s = {}
self.s['bitmap1'] = visual.ImageStim(self.win, size=self.stimsize)
self.s['bitmap2'] = visual.ImageStim(self.win, size=self.stimsize)
self.s['bubble'] = visual.Circle(self.win, fillColor='black', lineColor='black')
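By the way, since the window is created with units='deg' (see create_win above), sizes like self.stimsize are in degrees of visual angle. Below is a minimal sketch of the standard conversion from degrees to pixels (the function name and example numbers are mine, for illustration only; PsychoPy performs this conversion for you once a monitor is configured):

```python
import math

def deg2pix(size_deg, dist_cm, pix_per_cm):
    """Convert a size in degrees of visual angle to pixels for a viewer
    sitting dist_cm away from a screen with pix_per_cm pixel density.
    (Standard textbook formula; illustrative names.)"""
    size_cm = 2 * dist_cm * math.tan(math.radians(size_deg) / 2)
    return size_cm * pix_per_cm

# e.g., a 2-deg stimulus viewed from 57 cm on a screen with 35 pix/cm
print(round(deg2pix(2, 57, 35), 1))  # about 69.6 pixels
```

Incidentally, at 57 cm viewing distance 1 deg corresponds to almost exactly 1 cm on the screen, which is why that distance is so popular in vision labs.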
Remember, each trial consists of events of a certain duration, and we can pass a custom function for what should happen during the trial. Here we create a trial structure with a single event that lasts the maximum duration (i.e., 30 sec) and calls a custom function, show_stim, that will control flipping between images, drawing bubbles etc.
def create_trial(self):
"""Define trial composition
"""
self.trial = [exp.Event(self,
dur=self.timelimit, # in seconds
display=[self.s['bitmap1'], self.s['bitmap2'], self.s['bubble']],
func=self.show_stim)
]
Here we put all information about stimuli and so on that will be recorded in the output files.
def create_exp_plan(self):
"""Put together trials
"""
# Check if all images exist
for im in self.imlist:
if (not os.path.exists(os.path.join(self.paths['images'], im+self.asfx)) or
not os.path.exists(os.path.join(self.paths['images'], im+self.bsfx))):
raise Exception('Image files not found in image folder: ' + str(im))
# Randomize the image order
rnd.shuffle(self.imlist)
# Create the orientations list: half upright, half inverted
        orilist = [0,180]*(len(self.imlist)/2)
# Randomize the orientation order
rnd.shuffle(orilist)
exp_plan = []
for im, ori in zip(self.imlist, orilist):
exp_plan.append(OrderedDict([
('im', im),
('ori', ori),
('onset', ''), # empty ones will be filled up
('dur', ''), # during runtime
('corr_resp', 1),
('subj_resp', ''),
('accuracy', ''),
('rt', ''),
]))
self.exp_plan = exp_plan
We need to show instructions before each trial and decide whether stimuli will be upright or inverted. To be more efficient, we first load images (it may take some time) and only when that is ready, show instructions.
def before_trial(self):
"""Set up stimuli prior to a trial
"""
im_fname = os.path.join(self.paths['images'], self.this_trial['im'])
self.s['bitmap1'].setImage(im_fname + self.asfx)
self.s['bitmap1'].setOri(self.this_trial['ori'])
self.s['bitmap2'].setImage(im_fname + self.bsfx)
self.s['bitmap2'].setOri(self.this_trial['ori'])
self.bitmap = self.s['bitmap1']
if self.thisTrialN > 0: # no need for instructions for the first trial
self.show_text(text=self.trial_instr, wait=0)
Finally, we define what happens during each trial. It's mostly copy/paste from our previous implementation with one significant change: we use the last_keypress() function to record user responses. This function is aware of the keys that we accept as responses as well as of special keys, such as Shift+Esc for exit. We therefore do not have to check manually whether the participant pressed the spacebar or an exit key. Moreover, the information about responses needs to be passed on (for writing responses to files etc.), so we have to include the return keys statement at the end.

Also notice that since everything is defined in terms of degrees of visual angle, we have to adjust the bubble size accordingly.
def show_stim(self, *args, **kwargs):
"""Control stimuli during the trial
"""
# Empty the keypresses list
event.clearEvents()
keys = []
change_clock = core.Clock()
# Start the trial
# Stop trial if spacebar or escape has been pressed, or if 30s have passed
while len(keys) == 0 and self.trial_clock.getTime() < self.this_event.dur:
# Switch the image
if self.bitmap == self.s['bitmap1']:
self.bitmap = self.s['bitmap2']
else:
self.bitmap = self.s['bitmap1']
self.bitmap.draw()
# Draw bubbles of increasing radius at random positions
for radius in range(self.n_bubbles):
self.s['bubble'].setRadius(radius/100.)
self.s['bubble'].setPos(((rnd.random()-.5) * self.stimsize[0],
(rnd.random()-.5) * self.stimsize[1] ))
self.s['bubble'].draw()
# Show the new screen we've drawn
self.win.flip()
# For the duration of 'changetime',
# Listen for a spacebar or escape press
change_clock.reset()
while change_clock.getTime() <= self.changetime:
keys = self.last_keypress(keyList=self.computer.valid_responses.keys(),
timeStamped=self.trial_clock)
if len(keys) > 0:
print keys
break
return keys
Notice that you did not have to do many things here anymore:
%load scripts/changedet.py
import os
import numpy.random as rnd # for random number generators
from psychopy import visual, core, event
from psychopy_ext import exp
from collections import OrderedDict
import scripts.computer as computer
PATHS = exp.set_paths('change_detection', computer)
PATHS['images'] = '../Part2/images/'
class ChangeDet(exp.Experiment):
"""
Change Detection Experiment
===========================
In this experiment you will see photographs flickering with a tiny detail in them changing.
    Your task is to detect where the change is occurring.
To make it harder, there are bubbles randomly covering the part of the photos.
Hit **spacebar to begin**. When you detect a change, hit **spacebar** again.
"""
def __init__(self,
name='exp',
info=OrderedDict([('exp_name', 'Change Detection'),
('subjid', 'cd_'),
('gender', ('male', 'female')),
('age', 18),
('left-handed', False)
]),
rp=None,
actions='run',
order='sequential'
):
super(ChangeDet, self).__init__(name=name, info=info,
rp=rp, actions=actions,
paths=PATHS, computer=computer)
# user-defined parameters
self.imlist = ['1','2','3','4','5','6'] # image names without the suffixes
self.asfx = 'a.jpg' # suffix for the first image
self.bsfx = 'b.jpg' # suffix for the second image
self.scrsize = (900, 600) # screen size in px
self.stimsize = (9, 6) # stimulus size in degrees visual angle
self.timelimit = 30 # sec
self.n_bubbles = 40
self.changetime = .500 #sec
self.computer.valid_responses = {'space': 1}
self.trial_instr = ('Press spacebar to start the trial.\n\n'
'Hit spacebar again when you detect a change.')
def create_win(self, *args, **kwargs):
super(ChangeDet, self).create_win(size=self.scrsize, units='deg',
*args, **kwargs)
def create_stimuli(self):
"""Define your stimuli here, store them in self.s
"""
self.s = {}
self.s['bitmap1'] = visual.ImageStim(self.win, size=self.stimsize)
self.s['bitmap2'] = visual.ImageStim(self.win, size=self.stimsize)
self.s['bubble'] = visual.Circle(self.win, fillColor='black', lineColor='black')
def create_trial(self):
"""Define trial composition
"""
self.trial = [exp.Event(self,
dur=self.timelimit, # in seconds
display=[self.s['bitmap1'], self.s['bitmap2']],
func=self.show_stim)
]
def create_exp_plan(self):
"""Put together trials
"""
# Check if all images exist
for im in self.imlist:
if (not os.path.exists(os.path.join(self.paths['images'], im+self.asfx)) or
not os.path.exists(os.path.join(self.paths['images'], im+self.bsfx))):
raise Exception('Image files not found in image folder: ' + str(im))
# Randomize the image order
rnd.shuffle(self.imlist)
# Create the orientations list: half upright, half inverted
orilist = [0,180]*(len(self.imlist)/2)
# Randomize the orientation order
rnd.shuffle(orilist)
exp_plan = []
for trialno, (im, ori) in enumerate(zip(self.imlist, orilist)):
exp_plan.append(OrderedDict([
('im', im),
('ori', ori),
('onset', ''), # empty ones will be filled up
('dur', ''), # during runtime
('corr_resp', 1),
('subj_resp', ''),
('accuracy', ''),
('rt', ''),
]))
self.exp_plan = exp_plan
def before_trial(self):
"""Set up stimuli prior to a trial
"""
im_fname = os.path.join(self.paths['images'], self.this_trial['im'])
self.s['bitmap1'].setImage(im_fname + self.asfx)
self.s['bitmap1'].setOri(self.this_trial['ori'])
self.s['bitmap2'].setImage(im_fname + self.bsfx)
self.s['bitmap2'].setOri(self.this_trial['ori'])
self.bitmap = self.s['bitmap1']
if self.thisTrialN > 0: # no need for instructions for the first trial
self.show_text(text=self.trial_instr, wait=0)
def show_stim(self, *args, **kwargs):
"""Control stimuli during the trial
"""
# Empty the keypresses list
event.clearEvents()
keys = []
change_clock = core.Clock()
# Start the trial
# Stop trial if spacebar or escape has been pressed, or if 30s have passed
while len(keys) == 0 and self.trial_clock.getTime() < self.this_event.dur:
# Switch the image
if self.bitmap == self.s['bitmap1']:
self.bitmap = self.s['bitmap2']
else:
self.bitmap = self.s['bitmap1']
self.bitmap.draw()
# Draw bubbles of increasing radius at random positions
for radius in range(self.n_bubbles):
self.s['bubble'].setRadius(radius/100.)
self.s['bubble'].setPos(((rnd.random()-.5) * self.stimsize[0],
(rnd.random()-.5) * self.stimsize[1] ))
self.s['bubble'].draw()
# Show the new screen we've drawn
self.win.flip()
# For the duration of 'changetime',
# Listen for a spacebar or escape press
change_clock.reset()
while change_clock.getTime() <= self.changetime:
keys = self.last_keypress(keyList=self.computer.valid_responses.keys(),
timeStamped=self.trial_clock)
if len(keys) > 0:
print keys
break
return keys
if __name__ == "__main__":
ChangeDet(rp={'no_output':True, 'debug':True}).run()
psychopy_ext is not meant only for helping to run experiments. As we discussed above, there are many other tasks a researcher needs to do. One of them is data analysis. You may be used to doing it in Excel, SPSS, or R, but Python is actually sufficient to carry out many simple and more complex analyses. And it may also be nice to have your experimental and analysis code together in a single file.
Python's pandas package offers great data analysis capabilities. psychopy_ext wraps it with the stats and plot modules to help you do typical analyses efficiently. For more power, you may want to use statsmodels.
So let's look at how to analyze data from your experiment. For this example, we will use data from a paper by de-Wit, Kubilius et al. (2013).
Reading in data is done by a clever read_csv method that can get data both from local sources (your computer) and the internet. In this example, we fetch data for 12 control participants (so that is twelve files) and concatenate them into a single large structure, called a DataFrame, as seen in the output.
import pandas
# get data from de-Wit, Kubilius et al. (2013); will take some time
path = 'https://bitbucket.org/qbilius/df/raw/aed0ac3eba09d1d688e87816069f5b05e127519e/data/controls2_%02d.csv'
data = [pandas.read_csv(path % i) for i in range(1,13)]
df = pandas.concat(data, ignore_index=True)
df
If you did your experiment using psychopy_ext, then there is a helper function in the exp module, called get_behav_data(), that will find and import the relevant data from your experiment.
Typically we want to average data across participants and plot them comparing several conditions. Aggregating data in pandas is not too bad but can still take some effort, and plotting them nicely is definitely not trivial. Let's see how that can be done in psychopy_ext. Let's first compute reaction times using the stats.aggregate() function:
from psychopy_ext import stats
rt = stats.aggregate(df, values='RT', cols='context')
rt
If you are used to Excel PivotCharts, this should look familiar. We simply specify the data source (df), which column we want to aggregate (values) and how it should be structured (cols). Here we say that we want to split the data by the context column. If you look at that column, you'll see there are three unique values in it: 'Whole', 'Parts' and 'Fixation', thus in the output you see an average for each of these values. There were no responses during fixation, so the average is coded as 'NaN' ('not a number').
Don't want this fixation? Let's filter it out:
df = df[df.context != 'Fixation']
rt = stats.aggregate(df, values='RT', cols='context')
rt
The way it works is by first evaluating which elements in the 'context' column are not 'Fixation' (df.context != 'Fixation'). The output of this is a boolean vector, which we then use to filter the entire DataFrame.
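Here is the same boolean-mask trick on a tiny invented frame, so you can see the intermediate vector explicitly:

```python
import pandas as pd

df = pd.DataFrame({'context': ['Whole', 'Parts', 'Fixation'],
                   'RT': [0.5, 0.6, float('nan')]})

mask = df.context != 'Fixation'   # boolean vector, one entry per row
print(list(mask))                 # [True, True, False]

df = df[mask]                     # keep only rows where the mask is True
print(list(df.context))           # ['Whole', 'Parts']
```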
Now let's compute these averages for each participant separately (this will be used to compute error bars in plotting later):
rt = stats.aggregate(df, values='RT', cols='context', yerr='subjID')
rt
Also for more conditions:
rt = stats.aggregate(df, values='RT', cols=['pos', 'context'])
rt
But what if you want to compute accuracy? There's a function for that too, called accuracy(). For it to work, we need to specify which values are considered "correct" and which are considered "incorrect":
acc = stats.accuracy(df, values='accuracy', cols='context', yerr='subjID', correct='Correct', incorrect='Incorrect')
acc
Because we aggregated the data using psychopy_ext, plotting them is super quick now with the plot() function:
%matplotlib inline
from psychopy_ext import plot
plt = plot.Plot()
plt.plot(acc, kind='bar')
plt.show()
Notice how you get error bars for free, and you can even see whether the two conditions are significantly different from each other!
It can also produce other kinds of plots (see the Gallery). One of the nicer ones is called a bean plot. It cleverly combines all data points (as these horizontal bars; if several data points coincide, the line is longer) and the estimated density of the measurements, so that you can quickly see the distribution of your data and spot any outliers or non-normality.
plt = plot.Plot()
plt.plot(acc, kind='bean')
plt.show()
You can also easily plot several subplots:
rt = stats.aggregate(df, values='RT', cols='context', subplots='pos', yerr='subjID')
plt = plot.Plot()
plt.plot(rt, kind='bean')
plt.show()
There are many more options available in this module, so check out its documentation.
Also, I hope you have noticed by now that the plots in this tutorial are beautiful. They are so pretty by default thanks to a great design by the Seaborn package, so you may want to check out that library too.
So far we've looked at examples where a single experiment is implemented. But often we have several experiments in the same study -- how could we accommodate this? psychopy_ext has a concept of a Task: every experiment is composed of several Tasks that we ask participants to perform. Thus, if you have two tasks, it would look something like the following (borrowing code from the twotasks.py demo):
class TwoTasks(exp.Experiment):
def __init__(self, ...):
self.tasks = [Train, Test]
class Train(exp.Task):
def __init__(self, parent):
...
class Test(exp.Task):
def __init__(self, parent):
...
Here the TwoTasks class knows about Train and Test because we put them in the self.tasks variable. Train and Test know about TwoTasks through the parent argument that is passed when these classes are instantiated at runtime.
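This parent link is plain Python, nothing psychopy_ext-specific. A stripped-down sketch of the same pattern (class and attribute names here are illustrative, not psychopy_ext's actual API):

```python
class Experiment(object):
    def __init__(self):
        self.name = 'exp'
        # each task receives a reference back to this experiment
        self.tasks = [TrainTask(self), TestTask(self)]

class Task(object):
    def __init__(self, parent):
        self.parent = parent  # the task can now reach experiment-wide settings

class TrainTask(Task):
    pass

class TestTask(Task):
    pass

experiment = Experiment()
print(experiment.tasks[0].parent.name)  # prints: exp
```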
We can also have several separate experiments, like Study 1 and Study 2. You simply make two files in the scripts folder, study1.py and study2.py. The data for these experiments are by default saved in separate locations (called, guess what, study1 and study2).
As secretly mentioned before, psychopy_ext can automatically produce rather complex GUIs so that you can fully customize your experiment before running it. These GUIs are constructed from the information you provide at the top of the Experiment class in the info and rp parameters. It looks like this:
Because this GUI is so convenient, it is actually the default mode of running code in psychopy_ext. Calling any command is unified within the run.py file, providing a very easy, replicable way to run code and analyses. It looks something like this:
%load run.py
#! /usr/bin/env python
from psychopy_ext import ui
__author__ = "Jonas Kubilius"
__version__ = "0.1"
exp_choices = [
ui.Choices('scripts.trivial', name='Quick demo'),
ui.Choices('scripts.changedet', name='Change Detection Experiment')
]
# bring up the graphic user interface or interpret command line inputs
# usually you can skip the size parameter
ui.Control(exp_choices, title='Demo Project', size=(560,550))
Here we have two important bits: defining Choices (tabs on the left side of the GUI) that correspond to different experiments (not tasks), and Control, which creates the GUI itself.
It is not possible to demonstrate this functionality from a notebook directly so we will use IPython magic commands to execute a shell command. Try this:
%run run.py
But not everybody is keen on GUIs. Thus, psychopy_ext also offers a command-line interface in the following manner (running it from the Terminal, PowerShell, cmd or a similar program):
python run.py myproject exp run --subjid subj_01 --n
Here we provide the name of the project (in case there are several), the name of a task in it (experiment, analysis, simulation etc.), the function we want to call (run), and parameters for info and rp. Look at the figure above for a graphical illustration. Notice how you can abbreviate parameters: --n instead of --no_output.
Try it in practice:
%run run.py changedet exp run --subjid subj_01 --debug --n
You don't want your experiment to fail with your first participant or after a small tweak in the middle of a pilot run, do you? Imagine running an experiment for an hour only to learn later that no data were recorded! But the only way to know whether it is really ready is to run it yourself -- which is reasonable to do several times, but definitely not after every little tweak that "shouldn't change anything". People with a long enough history in development know that these small, innocent-looking tweaks can sometimes lead to accidental issues such as output files not being saved or the script breaking in the middle of a run...
To prevent such unforeseen problems from occurring, the best strategy is to have automated tests, called unit tests, that quickly check whether everything is in order. For experiments, this means being able to run the experiment automatically to make sure it works and produces meaningful output. psychopy_ext comes with this functionality out of the box. Simply choose the "unittest" option in the GUI or pass --unittest in the command line.
Let's try that for the Change Detection experiment:
%run run.py changedet exp run --d --n --unittest
You'll notice that the whole experiment runs on its own at a very high pace -- or you may not even see anything really because it's so short. But you see that it prints out what it can see on the screen and thus you can easily verify it went through the entire experiment without any errors.
So that's cool and good for a quick reassurance that all is in order! But sometimes, especially for longer experiments composed of multiple tasks, you actually want to run the experiment half manually, such that you can read the instructions and advance to testing, then quickly go through trials, then read the instructions again etc. For this, there is an autorun option that also allows you to choose how quickly to run through trials ('1' means the actual speed, '100' would be 100x faster).
Notice that the program is actually performing the experiment just like a participant would, so in the end we get an output file that we can use to meaningfully test our analysis scripts. In fact, it is strongly encouraged to write your data analysis scripts at the same time as your experimental scripts. By running the analysis on such simulated data, you will often discover that a particular piece of information about a stimulus or condition is missing from the output and would be useful for the analysis.
psychopy_ext also has a prototype for quick data analysis, drawing ideas from Excel's PivotChart and, consistent with PsychoPy's Builder and Coder modules, named the Analyzer. It is really an early prototype and not even documented yet, but here's a quick preview:
# first get the data from de-Wit, Kubilius et al., (2013) again
import pandas
path = 'https://bitbucket.org/qbilius/df/raw/aed0ac3eba09d1d688e87816069f5b05e127519e/data/controls2_%02d.csv'
data = [pandas.read_csv(path % i) for i in range(1,13)]
df = pandas.concat(data, ignore_index=True)
df.to_csv('data.csv') # save to a file
# now open the Analyzer GUI
from psychopy_ext import analyzer
analyzer.run()
Suppose you run an experiment and find that people can tell if there is an animal in an image based on a very brief presentation. Obviously, you may want to make claims that people process this high level object and scene information very quickly, perhaps even in a feedforward manner. But you have to be careful here. Maybe people are able to do this task based on some low level information, such as a particular power spectrum difference between animal and non-animal stimuli.
A good strategy to address these concerns, to a certain extent at least, is to process your stimuli with a model of early visual cortex, and use some sort of categorization algorithm (such as computing a distance between the two categories, applying a support vector machine, or using a number of other machine learning techniques). Typically this is a tedious procedure, but psychopy_ext comes with several simple models included in the models module, such as Pixelwise (for pixelwise differences), GaborJet (a very simplistic V1 model) from the Biederman lab, and HMAX'99, an early implementation of the HMAX model.
In the example below, we use the images from the Change Detection experiment to see how different they appear to the HMAX'99 model. You should see that some stimuli are much more different from the others (dark red spots), but on the diagonal images are quite similar to each other, as they should be, since version a and version b differ only slightly.
import glob
from scipy import misc
from psychopy_ext import models
import matplotlib.pyplot as plt
# read images from the Change Detection experiment
ims = glob.glob('../Part2/images/*.jpg')
ims = [misc.imread(im) for im in ims]
# crop and resize them to (128, 128)
ims = [misc.imresize(im[:, :im.shape[0]], (128, 128)) for im in ims]
hmax = models.HMAX()
hmax.compare(ims)
Often you want to be able to export the stimuli used in the experiment for use in a paper. One possibility is to capture the display using PsychoPy's getMovieFrame and saveMovieFrames functionality, which captures what is presented on the screen. However, the resolution of this export is going to be low, and you will often be unable to use these images on a poster or in a paper.
A better approach is to export stimuli in the SVG (scalable vector graphics) format, which exports objects rather than pixels, and, as the name implies, you can scale them as much as you like without losing quality in programs like Inkscape, Scribus, and Adobe Illustrator. To help you with that, psychopy_ext provides an undocumented (read: not fully functional) feature: an SVG class in the exp module that will try to export your stimuli to the SVG format as much as possible. Note that currently it only works with shape and text stimuli (lines, circles, etc.) and images (which are, of course, not really scalable). This is how it works:
from IPython.display import SVG, display
from psychopy import visual, event
from psychopy_ext import exp
win = visual.Window(size=(400,400))
stim1 = visual.Circle(win)
stim2 = visual.TextStim(win, "Ceci n'est pas un cercle!", height=.1)
# write to svg
svg = exp.SVG(win, filename='stimuli.svg')
svg.write(stim1)
svg.write(stim2)
svg.svgfile.save()
display(SVG('stimuli.svg'))
# optional: show stimuli on the screen too
stim1.draw()
stim2.draw()
win.flip()
event.waitKeys()
win.close()