
Posts Tagged ‘journal’

[Comic: ‘Evolution of Intellectual Freedom’ by Jorge Cham. Source: PhD Comics]

Figuratively speaking, what is the ‘worth’ of a given academic? Between two academics, which one has had the greater positive academic impact? How do you rank academics, and award grants, promotion, tenure etc. to the best* ones?

I’m not going to answer these questions but would like to chip in with some food for thought and suggestions.

Well, one may say: “It’s easy! Just compare their h-indexes and total number of citations!”

This may be an effective way to go about answering the question. Surely someone with an h-index of 30 has had more positive academic impact than someone with, let’s say, an h-index of 15 – and is the better candidate?

Maybe – that is, if all things are equal regarding the way citations and the h-index work, i.e. if both academics:

  • are in similar fields – as papers in certain fields receive more citations overall than papers in other fields,
  • are in similar stages in their careers – as comparing an early-career postdoc with an established “Prof.” wouldn’t be fair,
  • have similar numbers of first/equal-first or last author papers – as an academic with many middle-authorships can have an excessively inflated h-index,
  • have similar numbers of co-authors – as it may be easier to be listed as a co-author in some fields than others, and/or more co-authors can mean more people presenting and citing the paper as their own, and
  • have a similar distribution of citations across their papers – as the h-index ignores the excess citations of highly influential papers, and the total citation count can be heavily influenced by even just one of these (see figure below).

I may have missed other factors, but I think these are the main ones (please add a comment below).

[Figure: citation counts of my 14 publications, sorted from most to least cited, with the h-index core highlighted in red]

Calculating my h-index: Although problematic (discussed here), the h-index has become the standard metric for measuring the output of an academic. It is calculated by sorting an academic’s publications from most to least cited, then finding the largest h such that he/she has h papers with at least h citations; e.g. if an academic has 10 papers with ≥10 citations but not 11 papers with ≥11 citations, then their h-index is 10. It was proposed as a way to summarise, with a single number, both the number of publications an academic has and their academic impact (via citations). The above citation counts were obtained from my Google Scholar page.
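To make the definition concrete, here is a minimal Python sketch of the calculation just described (the citation counts in the example are made up for illustration):

```python
# A minimal sketch of the h-index: sort citation counts from highest to
# lowest, then find the largest h such that the h-th paper has at least
# h citations. (The counts below are invented.)
def h_index(citations):
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([90, 48, 30, 25, 12, 8, 6, 4, 3, 2, 1, 0]))  # -> 6
```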

As of 31st July 2018, I have 14 published papers – including 5 as first/equal-first author – under my belt. I have a total citation count of 316 and an h-index of 6 (225 and 5 respectively, when excluding publications marked with an asterisk in the above figure). It is fair to say that these numbers are above average for a 29-year-old postdoc. But even I’m not content with my h-index – and many established academics are definitely right not to be.

I’ll try and explain why: the figure above shows the citation distribution of my 14 publications sorted by ‘number of times cited’ from left (highest) to right (lowest). One can easily see that the h-index (red box) captures only a small portion of the general picture (effectively, 6 x 6 i.e. 36 citations) and ignores both the peak (>6 on the y-axis) and the tail (>6 on the x-axis) of the publication–citation distribution. I have also included the publication year of each paper and added an asterisk (*) against the publications where I haven’t provided much input, e.g. I have done almost nothing for the Warren et al (2017) paper, yet it constitutes almost a third of my total citations (90/316)**. The ‘ignored peak’ contains three highly cited papers to which I have made significant contributions, and the ‘ignored tail’ contains research papers that (i) I am very proud of (e.g. Erzurumluoglu et al, 2015) or (ii) have only just been published – and thus haven’t had time to accumulate citations.

What is entirely missing from this figure are my (i) non-peer-reviewed publications (e.g. reports, articles in general science magazines), (ii) correspondence/letters to the editor (e.g. my reply to a Nature News article), (iii) blog posts where I review papers or explain concepts (e.g. journal clubs), (iv) shared code/analysis pipelines, (v) PhD thesis with potentially important unpublished results, and (vi) other things in my CV (e.g. peer-review reports, some blog posts) – all academia-related things I am very proud of. I have seen other people’s contributions in these categories (e.g. Prof. Graham Coop’s blog) and thought they were more useful than even some published papers in my field. These contributions should somehow be incorporated into ‘academic output’ measures.

It is also clear that “just compare their h-index and total no of citations!” isn’t going to be fair on academics who:

  • do a lot of high-quality supervision at different levels (PhD, postdoc, Masters, undergrad project – which all require different skill sets and arrangements),
  • spend extra time making their lectures inspiring and as educative as possible for undergrad and Masters students,
  • present at a lot of conferences,
  • do ‘admin work’ which benefits early-career researchers (e.g. workshops, discussion sessions),
  • do a lot of blogging to explain concepts, review papers, and offer personal views on their field,
  • have a strong social media presence (to give examples from my field, i.e. genetic epidemiology: academics such as Eric Topol, Daniel MacArthur and Sek Kathiresan take time out from their busy schedules to discuss, present and debate the latest papers in their fields – which I find intellectually stimulating),
  • give a lot of interviews (TV, online media, print media) to correct misconceptions,
  • take part in public engagement events (incl. public talks),
  • organise (inter-disciplinary) workshops,
  • inspire youngsters to become academics working for the benefit of humankind,
  • publish reliable reports for the public and/or corporations to use,
  • provide pro bono consultation and other pro bono work,
  • take part in expert panels and try very hard to make the right decisions,
  • do their best to change bad habits in academic circles (e.g. by sharing code, advocating open access publication, standing up to unfair/bad decisions whether or not it affects them),
  • extensively peer-review papers, and
  • help everyone who asks for help and/or reply to their emails…

The list could go on but I think I’ll stop there…

I acknowledge that some of the above may indirectly help increase an individual’s h-index and total citations, but I don’t think any of them are valued as much as they should be per se by universities – and something needs to change. Academics should not be treated like ‘paper machines’ until the REF*** submission period, and then like ‘cash cows’ that continually bring in grant money until the next REF submission cycle starts. As a result, many academics have made ‘getting their names onto as many papers as possible’ their main aim – which is especially easy for senior academics, many of whom have a tonne of middle-authorships for which they have done virtually nothing****. This is not how science and scientists should work, and universities are ultimately disrespecting taxpayers’ and donors’ money. Some of the above-mentioned factors are easier to quantify than others, but thought should go into acknowledging work other than (i) published papers, (ii) grant money brought in, and maybe (iii) appearing on national TV channels.

Unless an academic publishes a ‘hot paper’ as first or corresponding author – which very few have the chance and/or luck to do – and becomes very famous in their field, their rank is usually dictated by the h-index and/or total citations. In fact, many scientists with very high h-indexes (e.g. because of many middle-author papers) put this figure at the top of their publication list to prove that they’re top scientists – and unfortunately, they contribute to the problem.

People have proposed that the contributions of each author be explicitly stated on each paper, but this is going to mean a lot of work when analysing the academic output of tens of applicants – especially as the number of publications per individual increases. Additionally, in papers with tens or even hundreds of authors, general statements such as “this author contributed to data analysis” are going to be assigned to many authors without explicitly stating what they did to earn co-authorship – so the utility of this proposition could also be less than expected in practice.

It’s not going to solve all the problems, but I humbly propose that a figure such as the one above be provided by Google Scholar and/or similar bibliometric databases (e.g. SCOPUS, CrossRef, Microsoft Academic, Loop) for all academics, where the papers for which the respective academic is not the first author are marked with an asterisk. The asterisks could then be manually removed by the respective academic on publications where he/she has made significant contributions (i.e. equal-first, corresponding author, equal-last author or other prominent role) but wasn’t the first author. Metrics such as the h-index and total citations could then become better measures by giving funders/decision makers the chance to filter accordingly.
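To illustrate, here is a rough sketch of how such filtering could work in practice; all of the data, field names and the flag for a manually removed asterisk below are hypothetical, not an existing Google Scholar feature:

```python
# Hypothetical illustration of the proposed filter: drop papers where the
# academic was neither first author nor flagged (asterisk removed) as
# having made a significant contribution, then recompute the metrics.
papers = [
    {"citations": 90, "first_author": False, "significant_role": False},  # middle author
    {"citations": 48, "first_author": True,  "significant_role": True},
    {"citations": 30, "first_author": False, "significant_role": True},   # e.g. equal-first
    {"citations": 12, "first_author": True,  "significant_role": True},
    {"citations": 6,  "first_author": False, "significant_role": False},  # middle author
]

core = [p["citations"] for p in papers
        if p["first_author"] or p["significant_role"]]

# h-index of the filtered set: count ranks where the rank-th paper
# (sorted by citations, descending) has at least rank citations.
ranked = sorted(core, reverse=True)
filtered_h = sum(c >= rank for rank, c in enumerate(ranked, start=1))

print("total citations (all papers):", sum(p["citations"] for p in papers))  # 186
print("total citations (filtered):  ", sum(core))                            # 90
print("filtered h-index:            ", filtered_h)                           # 3
```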

Thanks for reading. Please leave your comments below if you do not agree with anything or would like to make a suggestion.

[Figure: the heuristic I think people use when calculating the ‘worth’ of an early-career researcher]

The heuristic that I think people use when calculating the worth of an early-career researcher (but which generally applies to all levels): ‘CV’ and ‘Skills’ are the two main contributors, with the factors highlighted in red carrying enormous weight in determining whether someone gets the job/fellowship or not. Virtually no one cares about anything outside what is written here – as mentioned in the post. Directly applicable: some technical skill that the funder/Professor thinks is essential for the job; Prestige of university: where you did your PhD and/or undergrad; Funded PhD: whether your PhD was fully funded or not; Female/BME: being female and/or of BME background – this can be an advantage or a disadvantage depending on the regulations/characteristics of the university/panel, as underrepresented groups can be subjected to both positive and negative discrimination. NB: this is a simplified version and there are many factors that affect outcomes, such as “who you know” and “being at the right place at the right time”.

 

Added on 30/10/18: I just came across ‘No, it’s not The Incentives—it’s you’ by Tal Yarkoni, about common malpractices in academic circles, and I think it’s well worth a read.

 

*Making sure there’s a gender balance and that academics from BME backgrounds are not excluded from the process – as they’ve usually had to overcome more obstacles to reach the same heights.

**I have been honest about this in my applications and put this publication under “Other Publications” in my CV.

***REF stands for the ‘Research Excellence Framework’, and is the UK’s system for assessing the quality of research in higher education institutions. The last REF cycle finished in 2014 and the next one will finish in 2021 (every 7 years). Universities start planning for this 3-4 years before the submission dates and the ones ranked high in the list will receive tens of millions of pounds from the government. For example, University of Oxford (1st) received ~£150m and University of Bristol (8th) received ~£80m.

****Sometimes it’s not their fault; people add senior authors to their papers to increase the chances of getting them accepted, and it’s only human nature not to decline authorship. It sounds nice when one is introduced at a conference etc. as having “published >100 papers with >10,000 citations” – when in reality they’ve not made significant (if any!) contributions to most of them.

 

PS: I also propose that acknowledgements at the bottom of papers and PhD theses be screened in some way. I’ve had colleagues who’ve helped me out a lot when learning some concepts who then moved on and did not have the chance to be a co-author on my papers. I have acknowledged them in my PhD thesis and would love to see my comments be helpful to these colleagues in some way when they apply for postdoc jobs or fellowships. Some of them did not publish many papers and acknowledgements like these could show that they not only have the ability to be of help (e.g. statistical, computational expertise), but are also easy to work with and want to help their peers.

Read Full Post »

[Image: screenshot of the BBC News article on sperm counts]

BBC News article published on the 18th March 2018. According to the article, men with low sperm counts are at a higher risk of disease/health problems. However, this is unlikely to be a causal relationship and more likely to be a spurious correlation. It may even turn out to be the other way round due to “reverse causality”, a bias we encounter a lot in epidemiological studies. The following sounds more plausible (to me at least!): “Men with disease/health problems are likely to have low sperm counts” (a likely mechanism: men with health problems tended to smoke more in general, and this caused the low sperm counts in those individuals).

As an enthusiastic genetic epidemiologist (keyword here: epidemiologist), I try to keep in touch with the latest developments in medicine and epidemiology. However, it is impossible to read every article that comes out, as a lot of epidemiology and/or medicine papers are published daily (in fact, too many!). For this reason, instead of reading the original academic papers (excluding papers in my specific field), I try to skim-read reputable news outlets such as the BBC, The Guardian and Medscape (mostly via Twitter). However, health news even in these respectable media outlets is full of wrong and/or oversensationalised titles: they either oversensationalise what the scientist has said or take the word of the scientist they contact – who is not infallible and can sometimes believe in his/her own hypotheses too much.

It wouldn’t harm us too much if the message of an astrophysics-related publication were misinterpreted, but we can’t say the same for health-related news. Many people take these news articles as gospel truth and make lifestyle changes accordingly. Probably the best example of this is the Andrew Wakefield scandal of 1998 – where he claimed that the MMR vaccine caused autism and gastro-intestinal disease, but later investigations showed that he had undeclared conflicts of interest and had faked most of the results (click here for a detailed article on the scandal). Many “anti-vaccination” (aka anti-vax) groups used his paper to strengthen their arguments and – although now retracted – the paper’s influence can still be felt today, as many people, including my friends, do not allow their children to be vaccinated because they falsely believe it may cause conditions such as autism.

The first thing we’re taught in our epidemiology course is “correlation does not mean causation.” However, a great deal of epidemiology papers published today report correlations (aka associations) without bringing in other lines of evidence to support a causal relationship. Some of the “interesting ones” amongst these findings are then picked up by the media, and we see a great deal of news articles with titles such as “coffee causes cancer” or “chocolate eaters are more successful in life”. There have been instances when I read the opposite claim from the same paper a couple of months later (example: wine drinking is protective/harmful for pregnant women).

The problem isn’t caused only by a lack of training in the scientific method on the media side, but also by health scientists who are eager to make a name for themselves in the lay media without making sure they have done everything they could to ensure that the message they’re giving is correct (e.g. triangulating using different methods). As a scientist who analyses a lot of genetic and phenotypic data, I can easily observe that the size of the data we’re analysing has grown massively in the last 5-10 years. However, in general, we scientists haven’t received the computational and statistical training required to handle these ‘big data’. Today’s datasets are so massive that if we take the approach of “let’s analyse everything we’ve got!”, we will find a tonne of correlations in our data whether they make sense or not.
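As a quick sanity check of that last claim, here is a toy simulation (all numbers invented) of what happens when 200 purely random variables are tested against a random outcome:

```python
# Toy demonstration of the 'analyse everything' problem: with 200 null
# variables tested against a random outcome at p < 0.05, roughly 5% come
# out 'significant' by chance alone.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_people, n_variables = 1_000, 200

outcome = rng.normal(size=n_people)
false_positives = 0
for _ in range(n_variables):
    exposure = rng.normal(size=n_people)        # pure noise, no real effect
    _, p_value = stats.pearsonr(exposure, outcome)
    false_positives += p_value < 0.05

print(f"{false_positives}/{n_variables} null variables reached p < 0.05")
# Expect ~10 - each one a publishable-looking 'correlation' that isn't real.
```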

To provide a simple example for illustrative purposes: let’s say that amongst the data we have in our hands, we also have each person’s coffee consumption and lung cancer diagnosis data. If we were to do a simple linear regression analysis between the two, we’d most probably find a positive correlation (i.e. increased coffee consumption is associated with increased risk of lung cancer). Ten more scientists will identify the same correlation if they also get their hands on the same dataset; three of them will believe the correlation is worthy of publication and submit a manuscript to a scientific journal; and one (the other two are rejected) will make it past the “peer review” stage of the journal – and this will probably be picked up by a newspaper. Result: “coffee drinking causes lung cancer!”

However, there’s no causal relationship between coffee consumption and lung cancer (not that I know of, anyway :D). The reason we find a positive correlation is that there is a third (confounding) factor associated with both of them: smoking. Since coffee drinkers smoke more in general and smoking causes lung cancer, if we do not control for smoking in our statistical model, we will find a correlation between coffee drinking and lung cancer. Unfortunately, it is not very easy to eliminate such spurious correlations, therefore health scientists must make sure they use several different methods to support their claims – and not try to publish everything they find (see “publish or perish” for an unfortunate pressure to publish more in scientific circles).
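A minimal simulation of this exact scenario, with made-up effect sizes, shows how the spurious coffee coefficient vanishes once smoking enters the model:

```python
# Confounding in miniature: smoking drives both coffee consumption and
# lung-cancer risk, so the naive model finds a coffee 'effect' that
# disappears once smoking is adjusted for. (All numbers are invented.)
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

smoking = rng.poisson(5, size=n).astype(float)    # cigarettes/day
coffee = smoking + rng.normal(0, 2, size=n)       # cups/day, driven by smoking
risk = 0.5 * smoking + rng.normal(0, 1, size=n)   # risk score, driven by smoking only

# Naive model: risk ~ coffee (confounder omitted)
X1 = np.column_stack([np.ones(n), coffee])
b1, *_ = np.linalg.lstsq(X1, risk, rcond=None)

# Adjusted model: risk ~ coffee + smoking
X2 = np.column_stack([np.ones(n), coffee, smoking])
b2, *_ = np.linalg.lstsq(X2, risk, rcond=None)

print(f"naive coffee coefficient:    {b1[1]:+.3f}")  # clearly positive
print(f"adjusted coffee coefficient: {b2[1]:+.3f}")  # ~0 after controlling for smoking
```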

[Figure: countries’ annual per capita chocolate consumption vs number of Nobel laureates per 10 million population]

A figure showing the incredible correlation between countries’ annual per capita chocolate consumption and the number of Nobel laureates per 10 million population. Should we then give out chocolate in schools to ensure that the UK wins more Nobel prizes? However, this is likely not a causal relationship as it makes more sense that there is a (confounding) factor that is related to both of them: (most likely) GDP per capita at purchasing power parity. To view even quirkier correlations, I’d recommend this website (by Tyler Vigen). Image source: http://www.nejm.org/doi/full/10.1056/NEJMon1211064.

As a general rule, I keep repeating to friends: the more ‘interesting’ a ‘discovery’ sounds, the more likely it is to be false.

It’s hard to explain why I think like this, but I’ll try: for a result to sound ‘interesting’ to me, it should be an unexpected finding resulting from a radical idea. There are just so many brilliant scientists today that finding unexpected things is becoming less and less likely – as almost every conceivable idea arises in, and is tested by, several groups around the world, especially in well-researched areas such as cancer research. For this reason, the idea of a ‘discovery’ has changed since the days of the Newtons and Einsteins. Today, ‘big discoveries’ (e.g. Mendel’s pea experiments, Einstein’s general relativity, Newton’s laws of motion) have given way to incremental discoveries, which can be just as valuable. So with each (well-designed) study, we’re getting closer and closer to cures/therapies or to a full understanding of the underlying biology of diseases. There are still big discoveries being made (e.g. the CRISPR-Cas9 gene editing technique), but if they weren’t made by that respective group, they would probably have been made within a short space of time by another group, as the discoverers built their research on a lot of previously published papers. Before, elite scientists such as Newton and Einstein were generations ahead of their time and did most things on their own; today, even the top scientists are probably not too far ahead of a good postdoc, as most of the scientific literature is out there for all to read in a timely manner (and is more democratic compared to the not-so-distant past) and is advancing so fast that everyone is left behind – we’re all dependent on each other to make discoveries. The days of the lone wolf are virtually over, as they will be left behind by those who work in groups.

To conclude: without carefully reading the scientific paper that a newspaper article refers to – hopefully they’ve included a link/citation at the bottom of the page! – or seeking out what an impartial epidemiologist says about it, it’d be wise to take any health-related finding we read in newspapers with a pinch of salt. There are many things that can go wrong when looking for causal relationships – even scientists struggle to distinguish between correlations and causal relationships.

[Image: Amy Cuddy’s ‘power posing’ TED talk]

Amy Cuddy’s very famous ‘Power posing’ talk, which was the most-watched video on the TED website for some time. In short, she states that adopting powerful/dominant-looking poses induces hormonal changes which make you confident and relieve stress. However, subsequent studies showed that her ‘finding’ could not be replicated and that she did not analyse her data in the manner expected of a scientist. If a respectable scientist had found such a result, they would have tried to replicate it, or at least would have followed it up with studies bringing other lines of concrete evidence. What does she do? Write a book about it, bringing in anecdotal evidence at best, and give a TED talk as if it were all proven – as becoming famous (by any means necessary) is the ultimate aim for many people; and many academics are no different. Details can be found here. TED talk URL: https://www.ted.com/talks/amy_cuddy_your_body_language_shapes_who_you_are

PS: For readers interested in reading a bit more, I’d like to add a few more sentences. We should apply the below four criteria – as much as we can – to any health news that we read:

(i) Is it evidence based? (e.g. supported by a clinical trial, different experiments) – homeopathy is a bad example in this regard, as it is not supported by clinical trials, hence the name “alternative medicine” (I’m not saying all such treatments are ineffective – further research is always required – but most are very likely to be);

(ii) Does it make sense epidemiologically? (e.g. the example mentioned above i.e. the correlation observed between coffee consumption and lung cancer due to smoking);

(iii) Does it make sense biologically? (e.g. if gene “X” causes eye cancer but the gene is only expressed in the pancreatic cells, then we’ve most probably found the wrong gene)

(iv) Does it make sense statistically? (e.g. was the correct data quality control protocol and statistical method used? See figure below for a data quality problem and how it can cause a spurious correlation in a simple linear regression analysis)

[Figure: scatter plot with a single extreme outlier and a fitted regression line]

Wrong use of a statistical (linear regression) model. If we were to ignore the outlier data point at the top right of the plot, it becomes easy to see that there is no correlation between the two variables on the X and Y axes. However, since this outlier data point has been left in and a linear regression model has been used, the model identifies a positive correlation between the two variables – we would not have seen that this was a spurious correlation had we not visualised the data.
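A tiny simulation of the caption’s point (numbers invented) makes the effect easy to reproduce:

```python
# One extreme outlier is enough to make an otherwise null linear
# regression report a strong positive slope.
import numpy as np

rng = np.random.default_rng(7)
x = rng.normal(size=50)
y = rng.normal(size=50)                  # no true relationship

slope_clean = np.polyfit(x, y, deg=1)[0]

x_out = np.append(x, 10.0)               # single extreme point, top right
y_out = np.append(y, 10.0)
slope_out = np.polyfit(x_out, y_out, deg=1)[0]

print(f"slope without outlier: {slope_clean:+.2f}")  # ~0
print(f"slope with outlier:    {slope_out:+.2f}")    # strongly positive
# Plotting the data first (e.g. with matplotlib) exposes the outlier at once.
```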

PPS: I’d recommend reading “Bad Science” by Ben Goldacre and/or “How to Read a Paper – The basics of evidence based medicine” by Trisha Greenhalgh – or, if you’d like a much better article on this subject with a bit more technical jargon, have a look at this highly influential paper by Prof. John Ioannidis: Why Most Published Research Findings Are False.

References:

Wakefield et al, 1998. Ileal-lymphoid-nodular hyperplasia, non-specific colitis, and pervasive developmental disorder in children. The Lancet. URL: http://www.thelancet.com/journals/lancet/article/PIIS0140-6736%2897%2911096-0/abstract

Editorial, 2011. Wakefield’s article linking MMR vaccine and autism was fraudulent. BMJ. URL: http://www.bmj.com/content/342/bmj.c7452

Read Full Post »

[Figure 1 from the FfA Annual Report 2017]
Research outputs of Turkey-based academics in relation to the previous year. This Freedom for Academia (FfA) study identified a significant reduction (11.5% on average) in the research output of Turkey-based academics in 2017 compared to 2016. When the average increase of 6.7% per year observed in the research output of Turkey-based academics between 2008 and 2015 is taken into account, this translates to a shortfall of over 7,000 papers relative to the expected 2017 figure in journals indexed by SCOPUS – a bibliographic database of peer-reviewed literature. Image source: freedomforacademia.org

Freedom for Academia (website), a group (including myself) of “British and Turkish academics/researchers who are willing to lend a helping hand to our colleagues and bring the struggles that they face to the attention of the public and academic circles”, has just published its ‘Annual report 2017’ on the effects of the AKP government’s large-scale purges on the research output of Turkey-based academics, titled:

7,000 papers gone missing: the short-term effects of the large-scale purges carried out by the AKP government on the research output of Turkey-based academics

(click here to access the full article with photos, or the ‘print friendly’ version here)

I gave an interview to Santiago Moreno of Chemistry World regarding this report (Source: Turkish crackdown takes toll on academic output. Aug 2017. Chemistry World)

Firstly, as a Turkish citizen living in the UK who loves his country of origin (and also a proud British citizen), I am heartbroken, disappointed and terrified, all at the same time, by what has been going on in Turkey for some time now. Within the last 18 months or so, thousands of academics – as well as tens of thousands of other civil servants – have lost their jobs through decrees issued by the Turkish government. None of them have been told how they are linked to the “15th July 2016 coup attempt” or what their crime (by international standards) was.

[Figure 2 from the FfA Annual Report 2017]
The percentage change in research outputs of 12 Turkish universities in relation to the previous year

These large-scale sackings have undoubtedly had an impact on the state of Turkey-based research and academia. The report tries to quantify the relative decreases in the research output of Turkey-based academics in different academic fields, and speculates on the causal factors. It finds, on average, a ~12% decrease in the research output of Turkey-based academics in 2017. It also identifies substantial decreases in the research outputs of some of Turkey’s top universities, such as Bilkent (-9%), Hacettepe (-11%) and Gazi (-20%), in 2017 compared to 2016. Both Süleyman Demirel University and Pamukkale University, which lost nearly 200 academics each to governmental decrees issued by the AKP government, showed nearly a 30% decrease in 2017 compared to 2016.

I believe a decrease in the number of publications is just one of the ways academia in Turkey has been affected. Turkish academia wasn’t necessarily known for its work/scientific ethic, and whatever ethics was present before these large-scale dismissals has now definitely disappeared, as the posts left by the dismissed academics are being filled by cronies (as I stated in my Chemistry World interview in August 2017). These cronies are then going to hire individuals who are not necessarily good scientists but good bootlickers like themselves; and even if everything became relatively ‘normal’ (e.g. the state of emergency lifted, the imprisoned academics acquitted) today, it would still take decades to change academic circles that have been poisoned by nepotism/cronyism, governmental suppression and political factionalism.

In fact, academics in Turkey are so divided that not many cared when over eight thousand of their colleagues were dismissed as “members of a terrorist organisation”, as they did not belong to their ‘creed’ (e.g. to their ‘Kemalist’ or ‘Nationalist’ or ‘Islamist’ or ‘Pro-Kurdish’ groups). I try to follow many Turkey-based academics, and unfortunately I barely see them talk about anything other than political issues – rather than the scientific and/or social advancements that academics/intellectuals should be discussing. I tried to make my point in a short letter I wrote to Nature and in a (longer) blog post: Blame anyone but the government (Mar 2017).

Finally, I agree with the report’s conclusion that the sharp decrease of ~18%* in the research outputs of Turkey-based academics relative to the expected 2017 figures is likely to be due to a combination of factors, especially the psychological stresses endured by academics, and not just to the absolute number of purged academics (~6% of the total), as outlined in the discussion section of the report.

*≈18% = the ~6.5% average yearly increase observed between 2012 and 2016 (which would have been expected to continue) plus the 11.5% decrease in the 2017 figures compared to 2016.
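For readers who want the arithmetic spelled out, a back-of-the-envelope version (the 2016 baseline below is an assumed round number for illustration, not the report’s figure):

```python
# Expected-vs-observed 2017 output under the footnote's assumptions.
papers_2016 = 40_000                        # hypothetical 2016 baseline
expected_2017 = papers_2016 * 1.065         # ~6.5% yearly growth continues
observed_2017 = papers_2016 * (1 - 0.115)   # 11.5% drop actually observed

gap = expected_2017 - observed_2017         # = papers_2016 * 0.18, i.e. ~18%
print(f"expected {expected_2017:,.0f}, observed {observed_2017:,.0f}, "
      f"shortfall {gap:,.0f} papers")       # ~7,200 with this baseline
```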

References

1- FfA contributors. FfA Annual Report 2017. URL: http://www.freedomforacademia.org/ffa-annual-report-2017/. DOI: 10.13140/RG.2.2.16386.02244. Date accessed: 01/03/2018

2- Moreno, SS. Turkish crackdown takes toll on academic output. Chemistry World. 4 Aug 2017. URL: https://www.chemistryworld.com/news/turkish-crackdown-takes-toll-on-academic-output/3007804.article. Date accessed: 01/03/2018

3- Erzurumluoglu, A. Listen to accused Turkish scientists. Nature 543, 491 (2017). https://doi.org/10.1038/543491c

PS: To view a collection of my previous comments about the subject matter, please see my June 2017 post: Effects of the AKP government’s purges on the research output of Turkey-based academics (Jun 2017)

Read Full Post »

[Infographic: smoking as a risk factor for many cancers]

We now know, through studies carried out by many natural scientists over decades, that smoking is a (considerable) risk factor for many cancers and respiratory diseases; but the public ignore these findings and keep smoking – which is where social scientists can help get the message across. This is just one example of where the social sciences can have a massive (positive) impact on society. Image taken from stopcancer.support

“Scientists focus relentlessly on the future. Once a fact is firmly established, the circuitous path that led to its discovery is seen as a distraction.” – Eric Lander in Cell (Jan 2016)

 

As scientists in the ‘natural’ sciences (e.g. genetics, physics, chemistry, geology), we have to make observations in the real world and think of hypotheses and models to make sense of it all. To test our hypotheses, we then have to collect (sufficient amounts of) data and see if the data collected fit the results our proposed model predicted. Our hypotheses could be described as our ‘prejudice’ towards the data; we then have to try to counteract (and hopefully eliminate) our biases by performing well-designed experiments. If the results back up our predictions, we of course become (very!) happy and try to (replicate and then) publish our results. Even then (i.e. after a paper has been submitted to a journal), there is a lot left to do, as the publication process is a long-winded one with many rounds of ‘peer review’ (an important quality control mechanism), where we have to reply fully to all the questions, suggestions and concerns the reviewers throw at us about the importance of the results, the reliability of the data, the methods used, and the language of the manuscript submitted (e.g. are the results presented in an easy-to-understand way, are we over-sensationalising the results?). If all goes well, the published results can help us (as the research community) understand the mechanisms behind the phenomenon analysed (e.g. biological pathways relating to disease, the underlying mechanism of a new technology) and provide a solid foundation for other scientists to take the work forward.

If the results are not what we expected, a true scientist also feels fortunate and becomes more driven, as a new challenge has now been set, igniting the curious side of the scientist; and strives to understand whether anything went wrong with the analysis or whether the hypothesis itself was wrong. A (natural) scientist who is conscious and aware of the evolution and history of science knows that many discoveries have been made through ‘happy accidents’ (e.g. penicillin, the X-ray scan, the microwave oven, Post-it notes), since it is in the nature of science to be serendipitous; and that a wrong hypothesis and/or an unexpected result can also lead to a breakthrough. Hopefully without losing any of our excitement, we go back to square one and start off with a brand new hypothesis (NB: the research paradigm in some fields is also changing, with ‘hypothesis-free’ approaches having already been developed, and more on the way). This process (i.e. from generating the hypothesis to data collection to analysis to publication of results) usually takes years, even with some of the brightest people collaborating and working full-time on a research question.

 

“The first time you do something, it’s science. The second time, it’s engineering. A third time, it’s just being a technician. I’m a scientist. Once I do something, I do something else.” – Cliff Stoll in his TED talk (Feb 2006)

 

Natural scientists take great pride in exploring nature (living and non-living) and the laws that govern it in a creative, objective and transparent way. One of the most important characteristics of publications in the natural sciences is the repeatability of the methods and the replication of the results. I do not want to paint a picture where everything is perfect with regard to the natural sciences literature, as there have always been, and will always be, problems in the way some research questions are tackled (e.g. due to poor use of statistical methods, over-sensationalisation of results in the lay media, fraud, selective reporting, the sad truth of ‘publish or perish’, unnecessary numbers of co-authors on papers). However, science evolves through mistakes, being open-minded about accepting new ideas, and being transparent about the methods used. Natural scientists are especially blessed in that there are many respectable journals (with relatively high impact factors and 2 or more reviewers involved in the peer-review process) in virtually all fields within the natural sciences, where a large number of great scientific papers are published; and these have clearly (positively) affected the quality of life of our species (e.g. increasing crop yields, facilitating the understanding of diseases and preventive measures, curative drugs/therapies, the underlying principles of modern technology).

I wrote all the above to come to the main point of this post: I believe the abovementioned ‘experiment-centric’ (well-designed, statistically well-powered), efficient (has real implications) and reliable (replicable and repeatable) characteristics of the studies carried out within the natural sciences should be made more use of in (and probably become a benchmark for) the social sciences. There should be a more stringent process before a paper/book is published, similar to the natural sciences, and a social scientist must work harder (than they currently do) to alleviate their own prejudices before starting to write up for publication (and not get away with papers which are full of speculation and sentences containing “may be due/related to”). I am not even going to delve into the technicalities of some of the horrendously implemented statistical methods and the bold inferences/claims made as a result of them (e.g. correlations/associations still being reported as ‘causation’, P-values of <0.05 used as ‘proof’).

Of course there are great social scientists out there who publish policy-changing work and try to be as objective as a human being can possibly be; however, I have to say that (from my experience at least!) they seem to be a small minority in an ocean of bad sociologists. The social sciences seem (to me!) to be characterised by subjective, incoherent and inconsistent findings (e.g. due to diverse ideologies, region-specific effects, lack of collaboration, lack of replication); and a comprehensive quality control mechanism does not seem to be in place to prevent bad literature from being published. A sociologist friend once told me “you can find a reference for any idea in the social sciences”, which I think sums up the field’s current state in one sentence.

 

“The scientist is not a person who gives the right answers, he’s one who asks the right questions.” – Claude Lévi-Strauss, an anthropologist (I would humbly update it as: “The scientist is not necessarily a person who gives the right answers, but one who asks the right questions”)

 

The social sciences should not be the place where those who could not (get the grades and/or) succeed in the natural sciences go for a (relatively) easier ride, publishing tens of papers/books which go insufficiently peer-reviewed, unread and uncited for life, yet getting a lecturer post at a university much more quickly than a natural scientist would. Social scientists should not be any different from natural scientists with regard to the general aspects of research: they too should spend years (just like most natural scientists) developing their hypotheses and debunking their own prejudices; work in collaboration with other talented social scientists who will guide them in the right direction; and be held accountable to a stringent peer-review process before they can claim to have made a contribution (via books/papers) to their respective fields. Instead of publishing loads of bad papers, they should be encouraged to concentrate on publishing fewer but much better papers/books.

The social sciences have a lot to offer society (see the above figure about smoking for an example), but unfortunately (in my opinion) their representatives have let the field down. I believe universities, and maybe even governments around the world, should make it their objective to develop great sociologists by not only engaging them with the techniques used in the social sciences (and the accompanying literature), but also by funding them to travel to other laboratories/research institutions and get a flavour of the way natural scientists work.

 

Addition to post: For an academically better (and much harsher!) criticism of the social sciences than mine, see Roberto Unger’s interview at the Social Science Bites website (click on link).

[Image: astronaut on the Moon]

Moon landing – a momentous achievement of mankind, and the natural sciences (and engineering)

PS: I must state here that I have vastly generalised about the social sciences, and mostly cherry-picked and pointed out the negative sides. However, every sociologist knows within themselves whether they are really motivated to find out the truth about sociological phenomena, or are just in it for the respect that being an academic brings, or for the titles (e.g. Dr., Prof.). I personally have many respectable sociologist friends/colleagues (including my father) who are driven to understand and dissect sociological problems/issues and look for ways to solve real-life problems. They give me hope in that sense…

PPS: I am not an expert in either the natural sciences or the social sciences – I am just sharing my (maybe not so!) humble opinions on the subject, as I get increasingly frustrated with the lack of quality I observe throughout the social sciences. Many of my friends/colleagues in the social sciences would attest to some or all of the things I stated above (gathering from my personal communications). I value the social sciences a lot and want them to live up to their potential in making our communities better…

Read Full Post »