Friday, April 6, 2012

Defining co-authorship

Here is an important article on how co-authorship should (and should not) be determined:

And here is an interesting scoring algorithm on authorship criteria:

Saturday, March 31, 2012

Post-Pre-Pre-GLOW memory mechanisms workshop dinner, March 26, 2012, Potsdam

Last week, the whole linguistic world descended on Potsdam in connection with the GLOW conference. Embedded within that world is another world that works on memory mechanisms. So we parasitically organized an open-ended discussion on some major open issues. It was basically all of Maryland and all of Potsdam, plus Philip Hofmeister, Patrick Sturt, and John Hale.

The meeting was very, very productive, much better than the talks+discussion model.

We will do this again next year!

Monday, March 12, 2012

(Attempted) replication drama unfolding on the web

Discussed here. One side issue is the inherent value of replication. In psycholinguistics too, replications (including failed replications) tend to be treated as somehow less valuable than "new" studies; in my opinion they are just as valuable, only in a different way.

Saturday, March 3, 2012

Continuity of Mind by Spivey

Here is a fantastic book of great relevance to people like us: Continuity of Mind, by Michael Spivey. It has an excellent chapter on language comprehension, a high-level overview of the constraint-based view (inter alia), ideal for entry-level psycholinguists. The chapter also has some very important ideas hidden in it that have not yet been exploited in sentence comprehension research.

Friday, March 2, 2012

What use are journal impact factors?

This article on arXiv discusses several important facts about journal impact factors. This is an interesting issue for psycholinguistics: I have encountered a person in our field who said he would not cite a paper because it did not appear in a high-impact-factor journal. Another, an editor at a high-impact journal (a high impact factor in our field means 3 or more ;), once told me that he only reads papers that come out in a specific journal X, because if a paper didn't appear there, it is by definition not worth reading.

Educating people is hard. If you are too busy to read the article linked above, I can summarize the advice for you: read the paper to decide whether it has anything of importance; don't look at the impact factor of the journal to decide whether to read it.

Of course, it is possible that what holds for applied mathematics has no relevance for psycholinguistics.

Monday, February 27, 2012

On reviewers and reviewing

"As an editor, I observed that sometimes otherwise congenial/collegial individuals would become rather brutal (Blackwell, 2004) in their role as manuscript reviewer. This is somewhat akin to the person who is usually quite considerate and polite until he/she gets behind the wheel of a car.  This is the kind of reviewer who makes personal, snide comments about the author, is sarcastic, and tends to be quite destructive in tone.  Such behavior is a blatant violation of the ethic of reciprocity and sabotages the goal of  adding value to the manuscript by offering ways to improve it."
Charles C. Fischer

Everyone is familiar with the reviewer who writes unfairly aggressive reviews, which often prompts the editor (and the reviewer knows this will be the outcome) to tell the authors: never show your face in this journal with this paper again. Why do reviewers review this way? I have some guesses:

1. It's a blood-sport. It's always fun to draw blood, and we are trained to do it in grad school. See my earlier blog entry citing the article on pit-bull reviewers.
2. It's an easy way to eliminate perceived competitors. This motivation drives researchers who firmly believe that the perceived scientific importance of researchers is a zero-sum game: for one person to become more important, the nearest neighbors have to be brought down. Result: reject all papers from the competitors.
3. Ideas that the reviewer does not believe in or does not support must be prevented from entering the public domain. Reject.

Here's a much more comprehensive document than mine on this topic:

Friday, February 10, 2012

Reviewers of journal articles (plus future reviewers aka grad students) need to read this

A Stanford professor tells it like it is:

If you are reading this and love to destroy papers that come to your desk for review, read the above article to discover---this may surprise you!!!---where you are really coming from.

[Thanks to Titus for reminding me of this article; I think bibcite or some such software helpfully provides this as an example article.]

Wednesday, February 1, 2012

CUNY 2012 and other lab output for January 2012

We had a good start to the year, with four journal article submissions (and at least four more to go in the next few weeks, from Felix, Bruno, Lena Benz, and me), five CUNY 2012 posters, and two GLOW talks. Great work!

Monday, January 30, 2012

The cumulative dissertation---a five-step program to success

Here is an informative text from kisswin on the subject of cumulative dissertations:
A special type of doctorate is the so called cumulative dissertation. While a traditional doctorate finishes with one doctoral thesis, a cumulative dissertation consists of various publications which are then combined to one complete work and evaluated.The publications are normally papers, articles etc. which have been published in renowned (“peer reviewed”) professional journals. The presented publications are evaluated and chosen by qualified experts (“peer reviewer”). Thereby the review secures that the publications meet the standards of renowned professional journals and conferences.Furthermore the prestige of professional journals is connected with the Journal Impact Factor (JIL). The JIL does not make statements about the quality of an article but measures the frequency with which an article in a journal has been cited in a given period of time in other journals. Nevertheless it is important to note that a comparison can be difficult because of different citation rates within different research fields. In spite of that a high JIL increases the prestige of professional journals.If scientific work is published depends on many different factors. Therefore a cumulative dissertation is often less calculable concerning time schedules than a traditional dissertation. In principle, publications are important for a traditional dissertation as well, but they are not as important for a traditional dissertation as for a cumulative dissertation.Up to now cumulative dissertations are rather rare at German universities and no standardised method does exist for German-speaking countries. The respective conditions can be found in the regulations for doctoral studies of the respective university.
This topic is of great interest to me, and it should be of great interest to any PhD student in Germany where cumulative PhDs are offered. The cumulative dissertation is the best possible way to do a dissertation. Two of my students, Titus and Umesh, just got done with cumulatives, and it's amazing: three papers each.
When I got done with my PhD, I had zero publications stemming directly from the PhD work. It took me one more year to get a book published, and then another three years before I got my first major paper published. Compared to that, having three papers published or under review (in both Titus's and Umesh's cases, one is published and two are under review) at the moment of submission is amazing and worth replicating. Even if the submitted papers are rejected, it is great to hit the ground running.

The only problem is: it's very hard to replicate this kind of performance. In order to do a cumulative, you need to get started real early with writing and submitting articles, and in a typical PhD, one cannot get enough data for a publication until one is well into the second year (in my own case, I had data within 3 months of becoming all-but-dissertation, but I had gone to India with five laptops and gathered 250 subjects' data in a month or two).

Also, in psycholinguistics, journal articles are long-drawn-out affairs, with ritualistic introductions that go on and on, and the obligatory long General Discussion, usually consisting of large amounts of waffling and wild speculation (I exaggerate wildly, but GDs are too long for my taste; in my own journal articles I try to create a new culture by keeping them short, but reviewers always complain about too-short GDs). It would be awesome if journals would encourage pithiness rather than verbiage (and I use this word in its correct sense, cf. Sarah Palin's illiteracy on display). But that's not going to happen any time soon.

So students have to be able to write; even if the advisor (i.e., me) is intensively involved in the writing, they still have to be able to do the core writing themselves. This is hard enough for native speakers of English.

Also, students have to be ready to put up with the ugliness and fundamental lack of friendliness of the review process. Reviewers are often former graduate students who (even after they have become extremely-former grad students) were trained in green-beret universities to become attack dogs, and never learnt to become human again. Many reviewers like to nail a paper just because they can. No paper is ever perfect (at least, I have never read one yet that I would call perfect), but for many reviewers this is no reason to let a paper through.

Getting eviscerated by a nasty reviewer is not a pleasant introduction to the scientific process. The cumulative student has to develop nerves of steel when the rejection letter comes in, and that's one more thing to learn. But it takes many years to develop the thick skin necessary to be able to subsist within this kind of review culture. Unless one is born tough, it's hard to recover from this shock right away.

Considering that a paper often goes through many revisions before acceptance, it can take three years or more to get a single paper accepted. If it's controversial (as most good papers are), the chances of an argument with the reviewers are much greater.

So how can a student deliver three papers (this is informally our requirement at Potsdam) as part of a cumulative, and within three years? Is this possible to achieve?

My advice for a successful three year cumulative:

Five-step success program for a cumulative dissertation 
Results guaranteed if you drink the potion:

1. Don't waste two or more years trying to figure out what to do for your PhD. German PhDs require no coursework; you are expected to start on research right away. My experience has been that whoever loses time in the beginning pays a heavy price towards the end (well, duh).
2. Forget about holidays, parties and the like, and forget about the 9-to-5 work-life-balance stuff that you may have seen in Work-Life-Balance folders lying around the university; it is relevant for a lot of people, but not for a PhD student. A PhD is a 24-hours-a-day, 7-days-a-week job, and a German PhD with a three-year window even more so (in the US, students typically take five years, often much more, to get a PhD done in linguistics). Anyone who thinks they can do a PhD in three years and have a life is doomed. You should only be doing a PhD if you enjoy it anyway. It should be your entire life, for those few years. Never forget that the three letters in the abbreviation PhD stand for Total Immersion Program. Of course, I exaggerate a bit here; you have to sleep and eat. But the point is that if you are doing a PhD, it has to be the priority in your life for the few years that you are busy doing it.
3. As soon as you know what your first paper will be on, and as soon as the data comes in and you know you have a publishable result, don't waste time. The Daily Show is not going anywhere, it's always going to be online; the Colbert Report too. It's possible to have a rough draft of a normal-sized paper ready within two to three weeks of full-time work. You just have to have the discipline to do it. If you are in my lab, I envy you, because your advisor usually reads papers very quickly and responds to drafts with lightning speed. He can help you get that draft ready for submission. But you have to produce a draft first.
4. If you worked hard the first year, you have data. By the first quarter of your second year, you should have a paper under review, or under revision after an eviscerating rejection. By the end of the second year, you should have a second paper under review/rejection. By the end of the third year, you should have your third paper under review. That's it. It doesn't matter how many times the papers get rejected (revise and resubmit, to a different journal if they tell you, "we never want to see your face again"); they need to be in the pipeline. Just before you submit the cumulative dissertation, you should have the papers under review (for the 100th time, if need be).
5. That's it, you have submitted a cumulative with three publications either published or under review. As a last step, after submission, have a ritualized party: print out all those rejection letters and burn them (not in the Besprechungsraum please).

Open data initiatives

There has always been a vague desire on the part of experimentalists to have publicly obtainable data: data that has already been published and is in the public domain.

In psycholinguistics, the first such database I know of is Reinhold Kliegl's PMR2.  

In my lab, we are (or rather I am) also thinking of some way to provide easy access to our published lab data (in addition to listing it on PMR2, it would be good to have a local copy so that one is not 100% dependent on an archive maintained by someone else). 

I just heard about another data repository, directed more to linguistics (but also to psychologists): CLARIN-D.

In this context, I've been thinking about what properties our local repository should have (this is only about our own public repository). Here is a preliminary list:

1. Data access should require login and registration, as well as an "I agree" button to record agreement to the terms of the data release (below).
2. The data should be released on the condition that any new result derived from the data should be uploaded there so that people can follow the history of what happened with that data. The people downloading the data should cite the original work where the data were reported.
3. Once the re-analysis of the original data is in the public domain and the new analysis has been uploaded, it would be ideal if there were room for comments from others (e.g., a response from the original authors). I.e., this would be like a blog, but the blog should be integrated seamlessly with the repository, and not be a separate interface (see PMR2 for an example of what I would not like to have). Downloaders and users of our data should also agree to show us their reanalysis before publishing it, so that we have the chance to respond if we find something we disagree about (e.g., how to remove extreme values).
4. We should release the full data for the published study: e.g., all items used, all fillers (excluding filler experiments that are not published yet), and the raw as well as the analyzed data (which could be non-raw, e.g., aggregated). People often dislike releasing all their data (I have had several people refuse to release the raw data, making re-analysis effectively impossible); they limit the release to the data in exactly the format that allows the analysis already done, and nothing further. What's the use of that kind of data release? Suppose I want to look for a particular kind of confound in the data, and I can only do so with the raw data; then a release of the reduced dataset is useless (I have been in this situation, and I could not use the released data).
5. Our own analysis for a particular dataset should be an Sweave'd document, with the .Rnw and .R source, the data itself, and a PDF. Ideally the paper itself should be the Sweave file. If every downloadable item has this collection of files, it will have a completely predictable structure, easy to understand for an outsider. I know that developing standards is hard, even within the confines of our own lab, but it might be worth it.
6. The data should not be in .Rda files, but rather in plain text files. I have had problems opening .Rda files created with an older version of R in a newer version of R.
7. There has to be a contact person locally whom people from outside can contact (and it's not gonna be me!). That's the central problem with good ideas; they always require some work.
8. There should be a possibility to upload a new, improved data analysis even after the data is published. For example, I published a paper in 2004, when the state of my statistical knowledge was even more miserable than it is right now. I would like to post a revised analysis, done to the best of my current ability. There should be space for that in the interface. This would not count as a re-analysis of the dataset by an outside third party, and should therefore be presented as part of the lab dataset, but marked as a "revised data analysis" or something like that.
9. What about our re-analyses of *other* people's data? For example, Titus reanalyzed the Meseguer et al. dataset; this should not be presented as original data from our lab. There should be a separate section for showcasing reanalyses that we did.
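To make point 6 concrete, here is a minimal R sketch of what I have in mind (the data frame and file name below are invented for illustration): write the data as tab-separated text with write.table() rather than with save()/.Rda, so that it stays readable across R versions and by non-R tools.

```r
# Hypothetical example dataset (invented for illustration):
rt_data <- data.frame(subject   = c(1, 1, 2),
                      condition = c("a", "b", "a"),
                      rt        = c(450, 512, 430))

# Instead of save(rt_data, file = "rt_data.Rda"),  # tied to R's serialization format
# release the data as plain, human-readable text:
write.table(rt_data, file = "rt_data.txt",
            sep = "\t", row.names = FALSE, quote = FALSE)

# Anyone, with any version of R (or no R at all), can read it back:
rt_data2 <- read.table("rt_data.txt", header = TRUE, sep = "\t")
stopifnot(identical(dim(rt_data2), dim(rt_data)))
```

The same file can also be opened in a spreadsheet or parsed by any scripting language, which is the whole point of releasing text rather than .Rda.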

I'm looking forward to suggestions for further improvements (from anyone, not just lab members, that's why it's in the public domain).
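For point 5 above, a per-dataset bundle could be organized around a Sweave document along the following lines (all file and variable names here are invented for illustration); compiling the .Rnw weaves the output of the R chunks directly into the PDF, so the analysis and the write-up cannot drift apart:

```latex
% analysis.Rnw -- minimal Sweave sketch (hypothetical file names)
\documentclass{article}
\begin{document}

<<read-data>>=
dat <- read.table("data.txt", header = TRUE, sep = "\t")
summary(dat$rt)
@

\end{document}
```

Running Sweave("analysis.Rnw") in R produces analysis.tex, which is then compiled to PDF as usual.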

Tuesday, January 24, 2012

Titus von der Malsburg has submitted his dissertation

We inaugurate this lab blog with the news that Titus has submitted his dissertation! This is a special occasion for me too, since he's the first person I'll be graduating. Congratulations Titus!