Monday, 31 December 2018

publications - What does the term "camera-ready" mean and why is it used?


I know that the term "camera-ready manuscript" is usually used to indicate a final version of the manuscript that will go to press. But what does the term actually mean and why is it used instead of some other more descriptive term?


Specifically, the term seems strange because in my opinion publishing has no connection to photography, at least in the modern age of digital publishing. Where does the term "camera-ready" come from?



Answer



From the Wikipedia article entitled "Camera-ready":




The term camera-ready was first used in the photo offset printing process, where the final layout of a document was attached to a "mechanical" or "paste up". Then, a stat camera was used to photograph the mechanical, and the final offset printing plates were created from the camera's negative.


In this system, a final paste-up that needed no further changes or additions was ready to be photographed by the process camera and subsequently printed. This final document was camera-ready.


In recent years, the use of paste-ups has been steadily replaced by desktop publishing software, which allows users to create entire document layouts on the computer. In the meantime, many printers now use technology to take these digital files and create printing plates from them without use of a camera and negative. Despite this, the term camera-ready continues to be used to signify that a document is ready to be made into a printing plate.



Sunday, 30 December 2018

research process - Is it appropriate to email an eminent researcher in your field?


Sometimes, I feel like contacting some of the eminent researchers in my field for any of the following reasons:




  1. Appreciating their recent research publications. They publish in top conferences, which are usually not hosted in my country or nearby.




  2. Requesting comments on some of my research hypotheses.





  3. Sometimes just because I am a die-hard fan of theirs. For example, probably the only reason I continued with research in Computer Science was Don Knuth.




  4. Sometimes, to know what they think about some specific research area that has propagated due to their work. (This probably looks like journalistic work.)




  5. To know how they tackled the pressures or certain situations during their PhD or research. (Yes, it's vague, and these questions probably should be answered by oneself or through personal interaction, but I'm adding it for the sake of completeness.)




Since most of them are located outside my country, I can't visit or phone them. So, how are such emails perceived? Is it appropriate to send such emails, given that they are expected to have very busy schedules and it would probably waste their time?



Though I have mentioned my field as Computer Science, the question should be applicable to all fields.



Answer



Here are some of my stray thoughts.


Think about return on investment, both yours and your idol's


If you ever perceived that your e-mail would be a "waste of their time," then why send it? I feel that most eminent researchers have a trait of "ignore everyone and head for their goal;" getting acknowledgement and acceptance is probably not their primary concern. A specific e-mail describing how their work has inspired your study/project is probably fine, but I wouldn't go so far as to expect that they would reply and give specific comments on your hypotheses.


From your point of view, instead of using the energy and bandwidth to send the e-mails, there is a lot more you can do:


There are many ways to show your appreciation


First, they would probably like to see their work being formally cited and, more importantly, applied to the field or crossed into other fields. Each idea germinated from their work is an appreciation in itself, and in the meantime you can also enhance your own publications and research paradigm. The plus is: if you have done enough of it, the big shot may actually contact you and give comments.


Second, you can help spread the researcher's ideas and agenda. You can write blogs, answer other people's questions, use their works in your journal clubs or lectures, etc., to subtly introduce the researcher's teachings to the public. Better yet, refine the researcher's ideas and incorporate them into yours. Become a spiritual successor with your own unique approach, and let your career be inspired by the researcher.


You can learn from someone without establishing communication



For some more senior researchers, look for their autobiography, biography, interviews, and documentaries that feature them. I will probably never be able to talk to Itzhak Perlman, but I have learned a lot about him through books, websites, documentaries, and the music he plays. (He actually has a YouTube channel as well, but I suffer from too much fanboy shyness to write any comment.)


For younger researchers, try looking for their blogs, YouTube channels, open courses, or even biographies of their mentors. All of these may help you become more familiar with them.


Another way is to get to know them indirectly. Most of these researchers have a lot of students or protégés, who may be closer to your rank and more likely to communicate with you. You may build a relationship with them and learn a thing or two about their interaction with their mentor.


Try technology


For their new publications... nowadays most online journals allow leaving comments online. You may try to say a nice thing or two there. If they write a blog, that's even easier. Some researchers maintain a LinkedIn page or a Twitter account; try connecting with them and following them. Hope for the best.


Use other famous people as leverage


If you really want to communicate with them, also try going through other organizations. For instance, you can write to an online radio station and suggest an interview topic and some guests, which of course will include your idolized researcher. You can also write to some prominent podcast hosts and give them a couple of reasons to invite so-and-so for an interview. Make good use of crowdsourcing, and invite your peers and friends to support your petition.


Some heroes/heroines are better left a bit mysterious


This is sad but occasionally painfully true. Some famous people are better left unknown at a personal level. They could be immensely arrogant, they may not have a nanogram of social skill, they may be a jerk... Unless I have reconstructed a pretty concrete and reliable image of the researcher from different sources, I would probably want to keep them as what they are in my mind, and as an inspiration for my work.


evolution - Examples of extant animals in a submature morphologically unstable evolutionary state?


I'm fascinated by evolutionary theory and its predictive aspect: the notion of an animal entering a strongly divergent state of evolution, whereby it is evolving into a new form yet remains suboptimal and is therefore undergoing rapid morphological change, and the idea that this allows us to project possible new variants of life.


To give a past example, the praying mantis is believed to have split from the cockroach around 250M years ago. During this period the mantis developed its upright stance, large range of vision and pronounced raptorial claws, and has seemingly stabilised into this optimal new form over the past 50M+ years.


The interesting thing is that this change could actually have been predicted by analysis of the animal's evolutionary history, niche and further available advantageous adaptive forms.


Can someone give a good example of a current animal in this volatile state that we could make accurate projections for?


And please, no trivial answers like "everything is constantly in a state of evolution"; I already know that, and that's not what I'm referring to.




online resource - Is it advisable to use a URL link shortening service when writing an academic article?



Is it OK to present a URL using a link-shortening service such as bit.ly? The reason I'm asking is that I think it's a lot easier to enter this URL (e.g., if you read it in a paper) as opposed to the full URL. Or is this a bad idea?



Answer



You should never offer a link shortener as the only option in an academic paper, for two reasons:




  1. It's adding another point of failure: if the shortening service is down, then the link cannot be followed. This is a particular worry over time, since the service may go out of business.




  2. One of the big reasons why link shorteners are so popular is that they keep track of usage statistics. I'd be offended if I thought an author was using this to monitor when the link was followed, where the people following it were located, etc.





So if you offer a shortened URL, it should only be in addition to the real URL, not in place of it. However, I'd tend to avoid even that. It doesn't look professional to me, and I don't think there's much savings for the reader. (Online papers should have clickable URLs, or at least ones that can be copied and pasted, so this only arises for someone who has a printed copy but no online copy. That can happen, but it's hardly a major issue.)


ornithology - How does taxonomy work? The case of the Avian Dinosaurs


I recently discovered that the class Aves (or Birds) has been renamed Avian Dinosaurs. My question is when this taxonomic designation achieved consensus in the scientific community and through which process this change was made.




evolution - Why, even if all requirements for natural selection are met, might it not happen?


In John Endler's book Natural Selection in the Wild (p. 4), it says that even if conditions a, b and c are met, evolution by natural selection might occur,



[...] , but not necessarily, [...]



See the last paragraph of the image. (Endler, J. A. 1986. Natural Selection in the Wild. Princeton University Press.)


I’m wondering in which cases it is possible that everything is in place for natural selection to occur, yet it has no effect.


It seems to me that if every condition of natural selection is met, there will definitely be a change in the phenotypic distribution. Why does the text say evolution is not guaranteed to occur?



Answer



What does the sentence below mean?




As a result of this process, but not necessarily, the trait distribution may change in a predictable way.



To my understanding, but not necessarily means that processes other than natural selection can affect the trait distribution in a predictable way.


Other than natural selection, what affects the trait distribution in a predictable way?


I am not trying to make an exhaustive list but I am just providing here two obvious examples.




  • An intense gene flow (migration) from a nearby population will also cause the trait distribution in a local population to change in a predictable way.





  • Genetic drift will reduce genetic variance and therefore reduce the variance of the trait distribution. While the change in the mean of the trait distribution is not predictable under genetic drift, the change in its variance is predictable (see the sketch below).
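

A rough quantitative sketch of why the change in variance is predictable (this is a standard population-genetics result, offered here only as an illustration rather than something taken from Endler's passage): under an idealized Wright-Fisher model with effective population size Ne, the expected heterozygosity - and with it the expected additive genetic variance of a neutral trait - shrinks by a factor of (1 - 1/(2Ne)) every generation, so that after t generations

H_t = H_0 * (1 - 1/(2Ne))^t

Drift does not say in which direction the trait mean will move, but it does predict, on average, how quickly variation is lost, and smaller populations lose it faster.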






Note that the book was published in 1986, and the vocabulary and science are probably slightly outdated. If you are learning about evolutionary biology, you can probably find better sources of knowledge. For example, Understanding Evolution is a free online introduction to evolutionary biology. There are also many textbooks that offer very good introductions to evolutionary biology.


publications - How shall I report an error in a given paper?


I am currently at my last MSc year in Chemical and Process Engineering (Italy), and our last course on Industrial Chemistry has a mandatory pre-exam exercise which reads:



Find an existing, published research paper in the field of Chemical Engineering and try to autonomously replicate its results (plots, data, equations et similia).



My chosen paper X was published in the Industrial & Engineering Chemistry Research journal, and I suppose it passed peer-review. X shows the application of a new algorithm to a chemical reactor with a given physical model.


However, after my review and replication attempt, I found that, using their same model, I do not obtain the paper's results. There is no replication error on my part, since I checked every step with the TA.


The same TA pointed out flaws of a fundamental nature in the model (for example, F = mv instead of F = ma), which, once corrected, allowed me to obtain the results shown in the paper.


I do not think it was an oversight, since the same reactor model was used in X and in two sources cited by X. I want to report this error, but I do not know what to do now.



Answer





The same TA pointed out flaws of a fundamental nature in the model (for example, F = mv instead of F = ma), which, once corrected, allowed me to obtain the results shown in the paper.



The fact that after correcting the model you can actually reproduce the reported results seems to suggest that the authors did implement their model correctly, but made a mistake when writing the paper.


I think that it is best to first contact the authors, politely explain what you found, and give them the chance to take appropriate action, e.g. publish an erratum in the journal where they published their article (or on their website, if the mistake was really minor).


If they do not react at all, you might contact the journal directly. The editor will typically take care that an erratum is published.


When nobody is responsive (I do not expect that in this case, but with a conference article or a fishy journal that might happen), and you consider it important that the scientific community is aware of the mistakes, you could decide to publish your correction somewhere (journal, conference, website, ...).


Saturday, 29 December 2018

physiology - Doesn't the sarcomere contract during isometric contraction?



During muscle contraction, the length of the sarcomere changes, the length of the myocyte changes, and so does the length of the muscle. However, if the muscle is not changing length, as in isometric contraction, does it mean that the sarcomeres are not contracting?



Answer



Yes, sarcomeres do contract during isometric contraction; they just don't change the overall length of the muscle. This is because the tension is transferred to elastic filaments within the muscle that maintain the same muscle length but increase the muscle tension. This is how sarcomeres can contract while the muscle does not show an overall change in length. (See this link for a paper discussing these elastic elements within the muscle itself; this is also discussed at this Wikipedia link.)


In an isometric contraction the sarcomere is increasing the tension in the muscle, but without an overall change in length. (Isometric Contraction = No change in length, but INCREASE in tension)


Once a muscle overcomes the tension needed to lift a weight/load, the tension ceases to increase and the sarcomeres begin to shorten, which moves/lifts the load. When the length of the sarcomeres changes and the tension remains the same, that is called isotonic contraction. (Isotonic Contraction = No change in tension, but DECREASE in length)




The picture below is from a website discussing these muscle physiology principles and is freely available on the web.


Muscle Contraction Figure from Web


data - Are there known cases of leaks in academia?



There are many situations in academia where an entity is entrusted with confidential data belonging to a researcher. A very obvious one is the publication process: the journals are required to keep submitted papers confidential until they are published. But there are also other situations. For example, biologists can submit newly discovered nucleotide sequences to the EMBL database, and they have the option to request the new sequences to be kept confidential until publication. I guess that other disciplines can have similar arrangements.


In these cases, the entity has the responsibility of keeping the data confidential, and a leak can have highly negative consequences for the scientist who entrusted them with their not-yet-published results.


My question is: are there known cases of such leaks? I don't mean just a reviewer mentioning to his colleagues "I have a very interesting paper by X, if the results are confirmed, we may be looking at a cure for [type of cancer]" without further details. I mean high-profile cases where an institution or its employee is more or less "officially" accused of either willful wrongdoing, or of insufficient protection of the information so that e.g. a database was hacked and the information read out.


If there are such cases, what happened? How were they discovered, and what were the consequences for the scientist whose data was made public, and for the institution which should have kept it secret?




bacteriology - How does heat shock transformation work?



What exactly happens when competent cells like DH5α are heat-shocked with DNA present? How does the DNA get inside the cells?


Specifically, why are all the steps necessary? What if you heat-shock right after adding DNA? What if you don't put the cells on ice after the heat shock? What does the calcium chloride do? What actually happens when cells are "competent"? What governs transformation efficiency (besides obvious things like the amount of DNA or cells)?


To make it clear what I'm talking about, I use a protocol like the following:



  1. Take cells out of -80C and thaw on ice for 5 min.

  2. Add 1 ul (~500 ng) plasmid DNA to 50 ul cells, mix gently with pipette tip.

  3. Leave on ice for 30 min.

  4. Put in 42C water bath for 45 sec.

  5. Put on ice for 10 min.

  6. Add 950 ul LB, put in 37C for 1 hour.


  7. Spread 300 ul of the culture on LB-agar plates with appropriate selection.


I usually get thousands of colonies from this (in fact, often 1:10 or 1:100 dilution is necessary so I can actually get isolated colonies). Even if I skip the outgrowth, I still get hundreds.


I don't have my competent cell protocol at hand, but basically I use that rubidium chloride protocol that everyone uses: DH5α cells are washed with some buffers, suspended in a solution with CaCl2 and RbCl, then frozen in liquid nitrogen. I never actually measured, but usually the concentration of cells in the frozen aliquots is about 10 times what you would get from an overnight liquid culture (they are spun down and resuspended in a small volume).



Answer



Heat shock transformation alters membrane fluidity, creating pores:



A sudden increase in temperature creates pores in the plasma membrane of the bacteria and allows for plasmid DNA to enter the bacterial cell.



Reference: Journal of Visualized Experiments. Bacterial Transformation: The Heat Shock Method. 2014. http://www.jove.com/science-education/5059/bacterial-transformation-the-heat-shock-method




The change in temperature alters the fluidity of the semi-crystalline membrane state achieved at 0 °C thus allowing the DNA molecule to enter the cell through the zone of adhesion.



Reference: Anh-Hue T. Tu. Transformation of Escherichia coli Made Competent by Calcium Chloride Protocol. 2008-2013. American Society for Microbiology



... heat-pulse (0 degrees C → 42 degrees C) step of the standard transformation procedure had lowered considerably outer membrane fluidity of cells. The decrease in fluidity was caused by release of lipids from cell surface to extra-cellular medium. A subsequent cold-shock (42 degrees C → 0 degrees C) to the cells raised the fluidity further to its original value and this was caused by release of membrane proteins to extra-cellular medium.



Reference: Panja S, Aich P, Jana B, Basu T. How does plasmid DNA penetrate cell membranes in artificial transformation process of Escherichia coli? Mol. Membr. Biol. 2008 Aug;25(5):411-22. doi: 10.1080/09687680802187765. PubMed PMID: 18651316.


application - PhD funding - other sources


I found a lab offering the kind of research that I really want to do. I approached them and they were very happy to take me in. However, there is no funding, and we all hoped I could get into a PhD programme at my university (their lab was on the list for the rotation year). Sadly, I was not selected for an interview; apparently the competition was very fierce this year. I contacted the lab again and now they are asking if I have other PhD funding. I wonder how I should reply to them, because I don't have any, and I have tried searching. From others I have heard that you can get accepted and then look for funding together, or that there is funding (like the PhD programme I was not selected for) and they just search for a suitable candidate. In my case they are kind of asking me for the funding; is that right?


Self-funding would be very hard, and I even tried looking into whether my country offers any government funding, but I found none. I was told I could try getting an assistant/technician job in their lab and hope that they might apply for grants. I'm very new to all the PhD issues, so I'm not sure how I should deal with this, but I would appreciate any help!




graduate admissions - How does funding at UK universities work?


The process for obtaining funding for graduate work in the UK seems very different from the one in the US. In particular, in the UK, funding decisions often come after offers of admission. What is the process by which a good, accepted graduate application at, say, Oxford, is given funding?



Answer



The funding depends a lot on your nationality/residence, as well as on the field, the university, and the supervisor.


UK or EEA residents can get funding from national research councils (such as EPSRC, STFC and many more), which (usually) cover the cost of tuition plus a small living stipend (when I was applying, the stipend was about £13,000).


Non-UK/EEA students generally have to self-fund, outside of a few special circumstances (competitive international scholarships such as the Rhodes, or university bursaries for international students), which are very rare. However, it's not terribly uncommon to see students from an above-average financial situation self-funding, but then they need to pay not only their living costs but also the cost of tuition (at my old university it was about £20k per year for international students).



I wrote an extensive response on all of these issues here and here and here. The other answers from other users on those posts are also excellent.


So what happens is that you apply, and then you will be (hopefully) asked for an interview. If you are able to compete for the 'normal' funding pot, you will be interviewed much the same way as everyone else. First a shortlist, then the interviews, and maybe follow-up interviews will be used to select the best candidates from the pool of applicants. If you're non-UK/EEA, you may or may not interview with everyone else; it's up to the university. If they don't have the money to fund you, many departments will make you an offer anyway (especially for foreign students) which you will usually have several months to accept or decline.


The research councils will grant the department a certain number of PhD studentship bursaries, and they will distribute them. From what I've seen, you don't have to do much self-application for scholarships unless you're foreign. Though you will always be told where the money is coming from, and there may be some requirements (a yearly seminar, for example) to get the money.


If you are given a stipend, it is tax-free, unlike in many other countries. Your funding is all-inclusive, not split up into RA or TA positions like in the US. You can normally make extra money marking coursework or demonstrating in labs, although the 'estimated number of hours' taken by these activities doesn't always match up with the actual number of hours you spend on them!


Friday, 28 December 2018

publications - Do unethical acts that are tangential to methodology taint research?


It's well-known that unethical acts can render research virtually unpublishable, but the cases we hear about tend to involve situations in which the methodology is tainted - for example, fabrication of data or unethical experimentation on humans.


Do unethical or unlawful acts that are tangential to actual research methodology likewise taint research?


For example:



  • A researcher is unlawfully present in the country in which they do the research, or lacks the legal status to do it there (e.g. overstaying visas, attending a university on a tourist visa, dodging a border checkpoint, etc.).

  • A researcher falsifies their academic credentials in order to get access to lab space, equipment, or grant money (e.g. the research itself is sound, but the researcher would not have been allowed unsupervised access to the university's microscopes had they known that he didn't actually have a PhD).


  • A researcher conducts research using stolen equipment or supplies.

  • A researcher commits a crime in order to hinder, delay, or incapacitate a rival researcher (e.g. sabotaging someone else's lab, murdering them, stealing their lab notes in order to deny them access to them, fabricating allegations to get them thrown in jail, deported from a country, expelled or fired from a university, etc.).

  • A researcher unlawfully uses performance-enhancing drugs without a prescription to maintain concentration on their work.

  • A researcher carries an unlawful weapon (e.g. concealed handgun without a permit, rifle unlawfully modified for full auto, etc.) for personal protection when gathering field data (e.g. where the researcher would have otherwise not gone there out of fear of violence).


Are there best practices or general principles on when such unethical acts taint research? Obviously, I'm not asking whether or not someone can, or should, get away with committing these acts, only whether or not they also would prevent publication. E.g. could someone say, "Yes, I blew up Dr. Smith's telescope and hit him with an axe so he couldn't beat me to publication, and I accept my 20-year prison sentence for that, but I was, in fact, the first to complete a spectrographic analysis of that planet and write it up so I should still be allowed to publish."?


This is not a request for personal advice! I'm just curious as to what actually happens or is supposed to happen in these cases.


My "guts" say that this would depend on the degree of non-academic misconduct - for example, that murdering a rival would prevent publication while parking unlawfully in a no-parking zone in order to make it to a meeting on time would not, but is this actually the case? Is every case evaluated independently, or is there a bright-line rule?




publications - How to acknowledge a deceased advisor’s contributions to a paper?



One of my advisors suddenly passed away while I was in graduate school. We had some discussions and ideas about future publications, but he passed away before any of the work was completed. When the work was finally completed and published, I and my co-authors were therefore presented with an ethical dilemma about how best to acknowledge his contributions to the ideas behind the paper. Should we list him as a co-author? Put him in the acknowledgements? Listing him as an author would give credit for the original idea, however, we would have no way of knowing if he actually approved of—and would want his name attached to—our methods and writing.


In the end my co-authors and I decided to list him as a co-author with a footnote stating that he passed away before publication.


I’m interested to hear from others who have been in similar situations and/or suggestions on what constitutes “co-authorship” when one of one’s collaborators passes away before the publication or work is complete.



Answer



I had a similar situation. In this case, we did exactly what you did: we indicated that the participant (not a team leader, but a team member in this case) was a co-author, but that he was deceased. I think this is the only fair way to recognize substantial contributions.


Of course, the difficulty comes if there is a challenge to the work of the deceased. In our case, however, we had a very substantial paper trail which was audited and reviewed, so the individual work could have been sorted out and dealt with appropriately.


So, I think the best defense is generally to keep good working notes and use version control.


Master's and PhD at different Japanese universities


I have just finished a BSc degree in maths at ETH Zurich and I am now planning to continue with an MSc and a PhD in Japan.


My question is this: is it possible to do an MSc at one university and then do the PhD at a different university?



In Europe this is not only socially acceptable but also nothing unusual. But as far as I know, Japan is different; how different I don't know. I would very much like to apply to one university for the MSc and then relocate somewhere else for my PhD. But I'm rather worried that this may mean that the professor supervising my MSc will lose face. And of course, I can't just ask them directly, because they might not tell me the truth out of politeness.




human biology - Why do people sing?



I was wondering why people sing, but from a biological point of view. Is it necessary for our body? If so, then why can't everyone sing well?


Is it directly related to neurotransmitter secretion, like adrenaline? Or is it only a way to express our emotions?




Thursday, 27 December 2018

publications - Whether to publish one big paper or many smaller papers for a given research project?


Let's say I invented a system to solve a problem. To run this system, I made my own algorithm. I also created some other things for that system. The main contribution was supposed to be the system. So is it a good idea to get as many research papers out of the project as I can, or to publish a single research paper?


I have seen a lot of researchers where they were targeting a single problem and they proposed a single solution. Now, what they would do is write a research paper for every component separately. Then they would write a single research paper showing how all the components would fit together.


So is it a good idea to try and increase the number of publications you can have out of a single research project?


My own personal opinion is that the quality of your research matters not the quantity. But I have also seen a number of institutions requiring a specific number of publications to even apply for their jobs.



Answer



The way I see it, there are a number of factors at play:





  1. Your goals: Do you want your paper to be published in a high-impact journal? If so, they will most likely be interested in the whole story rather than a small piece of it.




  2. Readability/General appeal: Can you make a coherent story with individual components? Will they all be interesting for a wider audience on their own? In other words, if you opt for multiple papers out of one project, can you make sure these will be able to stand on their own? I personally think going for multiple papers is only valid when combining them into a single big paper would push some of your interesting results (or methods) aside into a metaphorical corner.




  3. Limitations imposed by the target journal: Can you actually put all of that together in a single manuscript? In biomedical research you always get a limit on the number of words in the manuscript, and there is only so much you can put in the supplementary material.





With regard to quality vs. quantity, I have heard that early on in your career quantity is more important, while as you become more and more senior, quality becomes the main concern. I am often told that as a PhD student I can, and should, try to get involved in as many papers as I can. Around the time I do a postdoc, however, it will be time to pay a lot of attention to where I put my name and try to work on a good paper, preferably in a high-impact journal.


Authorship issues in numerical mathematics


I have been working on a draft in the area of numerical mathematics for several months.


My PhD advisor recently discussed this draft with me. He wants to be listed as an author for (i) some discussions and (ii) financing the research.


However, I am not sure about the authorship guidelines in my discipline. Numerical mathematics sits between pure mathematics, computer science, and the sciences, and it is comparatively young. So what is the appropriate norm on (i) choice and (ii) order of authors?




digestion - Which enzyme curdles milk in human infants?



Following these questions:


Do humans produce rennin? (which suggests that rennin does not exist) and What inactivates pepsin in infants? (which suggests that it does).


What I do know is:


Rennin is found in calves and acts on milk to curdle it.


Pepsin acts as rennin at pH of 6-6.5 and curdles milk.


Humans have pepsin, in infants as well as adults. Pepsin can perform both functions in humans.


So, in infants, is the curdling of milk done by rennin or by the rennin-like activity of pepsin?




pathophysiology - Parallel Autonomic regulation of Cough and Runny Nose



I describe here the serous inflammation of a runny nose. Cough is mediated by the cough centre.


I think a runny nose is also controlled by the autonomic nervous system, and probably by some reflex.


Assume you have a mild runny nose while on montelukast medication. You have an irritating cough (coughing so much you cannot sleep). You stop the cough with codeine before sleeping. You sleep at a 30-degree angle. Sometimes, you also feel that the runny nose stops, i.e. complete relief of the runny nose in both nostrils and the whole nose for about 30 seconds to a minute - a perfectly clear nose, no serous inflammation, and no symptoms of inflammation in the nose.


Then, it (the serous inflammation) starts again - not like mucus running back, but the feeling of uneasiness in the whole nose starts again - and then mucus can also continue to migrate from one nostril to the other. What is causing this kind of behaviour, i.e. complete relief of the runny nose for about 30 seconds? What is causing the serous inflammation in the nose to stop?


I think the runny nose could also be controlled by the medulla oblongata, because codeine blocks the cough centre there.


I tried to achieve this behaviour by stimulating the lymphatic system through abdominal breathing (a lymphatic drainage massage: breathing in with the abdomen, not the thorax, and exhaling deeply with the abdomen). At that time, I did not hear any creaky sounds in my throat. When trying to stop the serous inflammation again, I heard them, without managing to stop the serous inflammation in both nostrils at the same time.


How is the control of cough and runny nose regulated by the autonomic nervous system? There is probably some parallelism there.


Processes that may be involved in regulating cough and runny nose:



  • reflex from medulla oblongata


  • the sympathetic and parasympathetic systems (not sure about these)

  • possible parallelism of irritated cough and runny nose control




conference - Is Lecture Notes by Springer a journal?


Sorry for asking a maybe too-specific question. I've searched the internet and this site, but I still want a specific answer.


Is the Lecture Notes in (enter the name of field here) by Springer a journal?


Some universities require PhD students to publish in journals in order to graduate, so I want to know. I am in the Computer Science field.



Answer



I think the technical answer is no. Rather it is a series of research monographs.



Clearly there is a continuum here, and actually the SLN[X] series seems to have become more journal-like since the last time I checked. (The SLNM webpage lists an impact factor, for instance.) I think you can do no better than to consult the series homepages:


Springer Lecture Notes in Computer Science.


Springer Lecture Notes in Mathematics.


Springer Lecture Notes in Physics.


In CS the series seems to be organized into many subseries; however, each volume gets a "global number".


However in all three cases you can check the language used and see that they talk about "monographs", "titles" and "texts", never "journals". If my memory is accurate, librarians view them this way as well: sometimes journals can be checked out for a much shorter time than books, and in my experience the SLNM have always been treated like books.


For the math series, from the linked page you can click to get a four-page pdf file detailing the editorial policy for the production of the LNM monographs. The following passage seems rather enlightening:



Monograph manuscripts should be reasonably self-contained and rounded off. Thus they may, and often will, present not only results of the author but also related work by other people. They may be based on specialised lecture courses. Furthermore, the manuscripts should provide sufficient motivation, examples and applications. This clearly distinguishes Lecture Notes from journal articles or technical reports which normally are very concise. Articles intended for a journal but too long to be accepted by most journals, usually do not have this "lecture notes" character. For similar reasons it is unusual for doctoral theses to be accepted for the Lecture Notes series, though habilitation theses may be appropriate.




I could not find the analogous file for either the SLNCS or SLNP.


In terms of the specific question:



Some universities require PhD students to publish in journals in order to graduate, so I want to know. I am in the Computer Science field.



This is a question about academic culture, both general CS culture and the culture of your specific department and university. You certainly need to ask people in your own local culture. As you can see above, in mathematics graduate students rarely publish in SLNM: PhD theses are generally not appropriate, and it is hard to see what other book-length partially expository high level research document it would be worth the time of a graduate student to write and publish. But it looks like CS does things a bit differently...


Wednesday, 26 December 2018

blog - Pitfalls of Academic Blogging



I am a junior academic (just finishing a PhD in computer science) and I'm considering starting an academic blog. My reasons for starting a blog are:



  • Fun.

  • Improving my visibility.

  • The benefit of science.


… basically, the same reasons I publish papers. (In a sense, academic blogging is just a 21st-century form of academic communication to supplement journals and conferences.)


However, I also have concerns:



  • Maintaining an active blog takes a lot of time and effort. (i.e. it may stop being fun.)


  • Fear of failure - many blogs die. (Does an inactive blog look bad?)

  • Being a junior researcher, I worry that people will not care about my blog or, worse, disagree with what I say.


So I want to ask for advice from people who have (or had) an academic blog:


Is academic blogging a good idea?
Does it become too much effort?
Is it worthwhile?
How likely is blog-death?
In general, what are pitfalls to watch out for when starting an academic blog?




publications - What are the moral and legal consequences of "not thanking" government for not providing viable grants?


Recently, on various social platforms, an image containing an unusual acknowledgement in a paper has been circulating. It is about the following paper: RF fingerprint measurements for the identification of devices in wireless communication networks based on feature reduction and subspace transformation by J.L. Padilla, P. Padilla, J.F. Valenzuela-Valdés, J. Ramírez, J.M. Górriz, Measurement 58 (2014) 468–475, which has the following acknowledgement:



"This work has been carried out despite the economical difficulties of the authors’ country. The authors want to overall remark the clear contribution of the Spanish Government in destroying the R&D horizon of Spain and the future of a complete generation."



Another interesting example provided by @Federico Poloni is the paper Rumour spreading and graph conductance by Flavio Chierichetti, Silvio Lattanzi, Alessandro Panconesi, where the authors write:



"Unacknowledgements

This work is ostensibly supported by the Italian Ministry of University and Research under the FIRB program, project RBIN047MH9-000. The Ministry however has not paid its dues and it is not known whether it will ever do."



Another recently published paper offers a sad story in the acknowledgements (I will just put the reference here: "On functional representations of the conformal algebra", by Oliver J. Rosten, Eur. Phys. J. C (2017) 77:477, DOI 10.1140/epjc/s10052-017-5049-5).



In "A New Horned Dinosaur Reveals Convergent Evolution in Cranial Ornamentation in Ceratopsidae by Caleb M. Brown, Donald M. Henderson, Current Biology, Vol. 25, Issue 12, R494–R496 DOI http://dx.doi.org/10.1016/j.cub.2015.04.041, there is a proposal:



Funding for this research was provided by the Royal Tyrrell Museum of Palaeontology and the Royal Tyrrell Museum Cooperating Society. C.M.B. would specifically like to highlight the ongoing and unwavering support of Lorna O’Brien. Lorna, will you marry me?



In "Rotational splittings with CoRoT, expected number of detections and measurement accuracy" by M. Goupil, et al., Caleb M. Brown, Donald M. Henderson, Current Biology, Vol. 25, Issue 12, R494–R496 , where is the following text:



We do not gratefully thank T. Appourchaux for his useless and very mean comments



There are also more: Van Valen (1973). A new Evolutionary Law. Evolutionary Theory 1:1-30.




I thank the National Science Foundation for regularly rejecting my (honest) grant applications for work on real organisms (cf. Szent-Gyorgyi, 1972), thus forcing me into theoretical work



Although I fully empathize with the authors' view, I wonder about the legal and moral consequences of adding such an acknowledgement for one's future career.


P.S. If you have discovered a similar story in a different paper that it is not listed here, please feel free to add it. Thank you!



Answer



Academic publications are in general not the right place for politics. And especially not for personal vendettas. Doing the latter is deeply unprofessional (though I'd wager some great scientists did this as well).


However, government officials and institutions are public, not private, actors, and as such are, in democratic societies, subject to more scrutiny (in their official function) than private actors. Addressing StrongBad's concern about people starting to "not thank" their publisher, supervisor, etc.: this is definitely a difference I would see.


For contemporary major moral discussions - like "should we start a genocide or not" - I'd say the bigger moral question can trump the professionalism aspect by far, though.


Now that we have covered the general and extreme cases, back to the dirty middle ground. Criticizing a political official or institution who acts, in a researcher's opinion, in a way damaging to science (or to society through their stance toward science), or who likes to claim to support science while their acts paint a different picture, I'd consider a borderline case. It's a public body and it affects science, and thus it is to some degree relevant for the audience and the author. Especially when there is no official body properly representing the scientific community in that country, such a statement may be a reasonable approach to inform the public about the opinion of the broader scientific community. But it easily comes off as petty if it's just general criticism of a particular agenda that doesn't suit your own ideas, or if it seems you are just angry because you in particular didn't get funding while others did.


So legally, it's irrelevant in most countries, as long as you don't include libel or insults (depending on the country). It is, in general, unprofessional, but sometimes the bigger issue at stake may still justify it morally.



It's basically your individual choice to break with professionalism in order to further your political agenda. It may hinder your scientific career and cost you reputation, while helping your agenda. However, if you're unlucky it may also have the opposite effect and, along with your own loss of reputation, cost your political movement reputation with conservative people/voters/observers. In the end, it is very context dependent - who are you criticizing, for what reasons, who is your audience and how receptive may they be to your message? The same goes for your career. If your scientific peers agree with your opinion, they may ignore your breach of professionalism - but does that also hold for international colleagues who may have no idea what you are ranting about?


advisor - Should I leave out research in my NSF fellowship application if I'm not getting an LOR from the professor I worked with?


I'm pretty sure that my former advisor lied to me, manipulated me, and thinks I'm an idiot, so understandably I don't think it's a good idea to ask him for a letter of recommendation for the NSF graduate fellowship. However, my most recent academic research experience was with him, during the Spring quarter, and it's probably the most impressive research I've done, especially since he's a well known professor. I do have other research experience to talk about, but it's not quite at the same level.


But if I write about that project in my application, I imagine the committee is going to wonder why I don't have a letter of recommendation from that professor. Should I strive to minimize that research or even leave it out completely? Or is the benefit of talking about that research experience greater than the risk of being judged for not having a letter from him?


EDIT: This is why I think my advisor lied to me, manipulated me, and thinks I'm an idiot:


1) He said I couldn't work on a certain type of research project I wanted to work on because I hadn't taken a particular class. He then let two other students (undergraduates) who had not taken that class work on that type of project. One of those projects led to a publication.



2) I asked him why he let those other students work on those projects and he said it was because, while they were working on the projects, they learned stuff from that class on their own. I asked why I couldn't have also learned that stuff on my own, and he just said, "ehhhh" and refused to answer any further. [Response to comment below: those students, former classmates of mine, had not learned the material from that class on their own prior to being assigned the projects.]


3) There was only one project he said I was capable of working on. I asked if he thought it was publishable, and he said yes. I worked on it for months, and then met one of his collaborators, who said that it was not in fact publishable, but reassured me that they would post it on the internet somewhere. [Response to comment below: The collaborator didn't say this because something in the project was revealed to be less promising than they originally hoped. He said this because he viewed the piece I was working on as a small addendum to the larger project, which had already been published.]


UPDATE: In case anyone is curious, it turned out that my advisor did in fact lie to me and think I was an idiot. He did not, however, manipulate me -- that part was due to a misunderstanding.



Answer



As someone who has reviewed applications for the NSF Graduate Research Fellowship program, I can say it would definitely be weird to see someone talk extensively about a research project and then not have the advisor write a supporting letter. It would certainly raise questions among the panelists assigned to review the application, especially if no other explanation were provided.


Normally I would not make such a suggestion, but this might be an instance where you should list the experience (if you can), but not talk about it so effusively or in great depth.


Research proposal for postdoc applications


I am a final-year PhD student in Physics applying for postdocs. Institutes usually ask for a statement of research in postdoc applications, but one of them asks for a research proposal (less than two pages). I already have a four-page research statement. I was thinking of omitting the part about my past research (I will not be working on my thesis-related work in the future). Can someone suggest how to modify a statement of research to turn it into a research proposal?




Tuesday, 25 December 2018

reference request - Percentage of total scholarly literature available in open access repositories by year of publication broken down by discipline


I am looking for the percentage of total scholarly literature available in open access repositories by year of publication broken down by discipline.



I am aware of the following figure; however, it is old. Are there any more recent statistics?


Swan, Alma. Policy guidelines for the development and promotion of open access. UNESCO, 2012.






degree - What are the equivalencies between the higher education in France (Licence - Master - Doctorat) and the higher education in the United States?


In France the higher education is like this:



  • 3 years of "Licence"

  • 2 years of "Master"


  • 3 years of "Doctorat"


But what about in the United States?


I heard that they have things called "undergraduate", "graduate" and "postgraduate" studies. How many years does each of those take? And what are the equivalencies between these and the "Licence - Master - Doctorat"?


Note: If the answer to this question depends on the field of studies, then the field is physics.




Monday, 24 December 2018

software - Can a lecturer force you to learn a specific programming syntax / language?


My class is currently continuing to learn about relational databases / SQL as part of the course.


The lecturer we currently have is restricting us to learning specifically Oracle SQL syntax, rather than giving us the option of using PostgreSQL, MySQL, SQLite, etc.


When we brought up our preference for using alternative systems, based on having worked with those mentioned above in work placements or in academic study the year before, we were talked down to for even discussing the matter.



Is it permissible within the UK to restrict students from using a preferred database syntax / software environment?




Edit to add: thank you for your insights and feedback. The post was made on behalf of myself and half a dozen classmates who did not know the best route to address the matter, but who now have a clearer understanding thanks to this community. For what it's worth, I can confidently say that my fellow students and I are comfortable with Oracle, MySQL and other SQL basics from prior education, projects or employment, and don't have an issue learning something new to keep progressing.
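

As a small illustration of how much - or how little - the dialects actually diverge, here is the same "ten highest-paid rows" query written three ways, using a hypothetical employees table (with name and salary columns) purely for the sake of example:

  -- Standard SQL (SQL:2008), accepted by Oracle 12c+ and PostgreSQL:
  SELECT name, salary
  FROM employees
  ORDER BY salary DESC
  FETCH FIRST 10 ROWS ONLY;

  -- PostgreSQL / MySQL / SQLite:
  SELECT name, salary
  FROM employees
  ORDER BY salary DESC
  LIMIT 10;

  -- Older Oracle (before 12c), using ROWNUM on an ordered subquery:
  SELECT name, salary
  FROM (SELECT name, salary FROM employees ORDER BY salary DESC)
  WHERE ROWNUM <= 10;

The core SELECT / FROM / WHERE / ORDER BY skeleton is identical across these systems; the differences tend to sit at the edges (row limiting, date handling, procedural extensions), which is part of why skills learned in one dialect transfer readily to the others.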




united kingdom - Possible to return to a PhD years later (at the same or a different uni?)


Here's my challenge: I took a suspension of studies about a year and a half ago from my UK PhD program (in ancient history) and moved back to the US after my funding ran out, and now have to decide whether to pony up for part-time tuition or just drop out entirely.


I have no hope or interest in pursuing the PhD for a career; I've got a career in an unrelated field. But my subject and my thesis add something to my life that I don't want to give up. However, I'm not eager to dump $10,000 on just part-time tuition when, right now, that would be a doable but poor financial decision, especially since I doubt my rate of progress would improve. I've finished about 50% of my thesis.


Is it possible to just quit and then reapply in the future? It's never been stated outright, but the vibe from the university is that it's finish or quit forever. I don't really care if it's at the same university or not (as long as it's not in the US; I have no interest in a programme longer than 3 or 4 years).


Has anyone else left a program only to complete it or another one years later?





Sunday, 23 December 2018

biochemistry - Should there be separate Ramachandran plots for an amino acid in different contexts?



I understand the nomenclature of the phi and psi angles of the alpha-carbon atoms in protein structure, but I am confused by the Ramachandran plot. Each alpha-carbon atom (magenta) makes two peptide linkages and has two corresponding neighbouring alpha-carbons (cyan) and side-chains. I would expect the psi and phi values to depend on the interactions of these side-chains, so I would expect that for a single amino acid one would need a separate graph for every possible combination of neighbouring amino acids.


Dihedral Angles


I am not clear whether this is the case. The following graph (1), from Harper's Biochemistry, is for “many non-glycine residues from many proteins”. So, suppose a set of phi and psi values is allowed for a right-handed helix (a value or data set taken from the plot); does it mean that it is allowed for any amino acid next to any other amino acid? I would expect an alanine adjacent to an alanine to have a different interaction than an alanine adjacent to a bulky amino acid such as isoleucine.


Ramachandran plots


I have also seen plots for specific amino acids and their allowed angles, such as that for proline (2), above. This suggests that the neighbours are not being taken into account. If this is so, why not?



Answer



The phi and psi dihedrals describe the dihedral on both sides of the c-alpha of a single amino acid, and do not involve any angles of the neighboring amino acid.


The Ramachandran plot is something generated from a set of protein structures, an empirical data set. The top graph represents the dihedrals found for all non-glycine residues in a set of structures. You can filter this for proline only, and you'd get the bottom graph. The top cloud of dihedrals represents those found in beta-sheets, and the bottom cloud those for alpha-helices. Sequence (the amino acid before or after) doesn't really matter that much for what's allowed (although we cannot directly deduce this from the data in those graphs).


If you look a little bit more into the structure of helices and sheets you'll also find out why that's the case. In beta sheets the sidechain of the +1 residue is pointing completely the other way, and also in helices there's little interaction between the sidechains of subsequent residues. Secondary structures are built using the amides, not with the sidechains.


etiquette - Is it OK to call a professor by his first name when he/she signs emails by only first name?


If a professor in a North American country presents him- or herself by first name in email messages, does this mean that students can refer to him/her by first name? Or is this generally not a good idea unless the professor has explicitly mentioned that he/she can be referred to by first name? I've noticed that most professors who prefer to be addressed more formally do not sign their emails with just their first names, but usually with initials or first and last name.




How to concentrate during conference talks where the quality of the presentation is poor?



I have read a lot of tips on how to give a good conference talk: know your audience, give context, don't talk too fast, give your talk a clear structure or story, minimize text on slides, etc.


However, more often than not, presentations I've been to break most of these tips (quickly flipping through walls of text and equations while droning on in monotone), and I find it very difficult to concentrate on the talks.


Any tips on how to pay better attention and be able to learn something from these types of presentations?



Answer



For me the best approach depends on whether you can actively pay attention to the speaker’s voice or not. There are two very different methods to get the most out of those two scenarios:


Problems with the speaker


This is the case where the speaker is monotone, has nothing interesting to say, or at the very worst, just has absolutely no clue about public speaking (fills sentences with awkward "um"s, stutters, or loses focus and goes off on tangents).


In this case, it's probably best to get as much as you can from the presentation itself. Focus specifically on that and tune out the speaker, which, if they are monotone, shouldn't be all that difficult.


While doing this, take notes and work out some of the problems or exercises, if any, for yourself. Make up an exercise if none are given, or otherwise start trying to make sense of the material and put it to use in any way you can. This way you will have something to do and stay engaged, while still learning the material.


And there's always caffeine.



Problems with the presentation/content


Maybe you have a great speaker, but nothing being spoken goes with the slides, or (my personal pet peeve) the font on the slides is too tiny or poorly formatted to get anything from anyway. In this case, just ignore the slides altogether as they will only serve as a distraction.


Focus on the speaker, take good notes on what is being said, and really think about the concepts in your mind. Mull them over and write down questions you might have, even if it's not an open format where you can ask them during the lecture. This way your mind stays engaged. Take on thought experiments with the material – if this happened, what would the result be? Or: if I used this idea here, how might it help? Find ways to immediately apply what you are learning, and if that's not possible (if it's more theory-based stuff) just follow along the best you can.


If chewing gum helps you to think, do that. Grab an energy drink or whatever helps you stay focused. I find it's a lot harder to pay attention when the speaker is boring than when the presentation is boring.


Problems with both


In your worst case scenario, the speaker is lackluster and the content is dull and dry, with a presentation that is difficult to follow. You can try the following tips:




  • Use an actual pen and paper for notes. The act of writing can help you remember things better.





  • Try to remember earlier parts of the presentation, especially any bits that you found interesting or wanted to go back to later. The act of remembering solidifies concepts in your mind. If you go a week without using a password, that's when you forget it. The same applies to anything you learn.




  • If it's a Q&A format, ask questions. Don't make up nonsense if you can't think of any, but just trying to come up with some will help you concentrate.




  • Act as if you are the official meeting notes taker. Record any dialog that goes on if questions are asked. This can help you organize your notes, and will also serve to jog your memory when going over it later.





  • Put away your phone or anything that might provide a distraction. Turn your laptop or other device on airplane mode to keep it from buzzing or blinking unexpectedly.




  • Close your eyes. This is a known technique for helping your other senses get more information. Sure you might be lulled to sleep, but depending on the situation, it might help you concentrate on the speaker better. Listen to the speaker's exact tone and phrasing.




  • Breathe deeply, through your nose. If you are feeling sleepy, this can help you stay awake. Make sure that when you breathe, your stomach expands rather than your chest. Good, deep breaths through your nose from your diaphragm take in more air and keep your brain supplied with oxygen.




  • Other answers mentioned taking a break. Go grab a bottle of water or energy drink, or take a short walk. Read some of your notes aloud while on your break.





cycle - How common is it to be pregnant with periods?



As you may know, some women claim that they have their period while they are pregnant. In the Arab world they call it "deer pregnancy", since when a deer gets pregnant, it still gets its monthly cycle.


Their doctors do confirm this. It typically presents like this: the bleeding comes around the time the period is expected, but it is usually without pain or smell, lasts fewer days, and is a different color. But many women (and doctors) still call it a period.


(Although, as we all know, a real period during pregnancy is biologically impossible.)


How common is that?


This is, I think, different from implantation bleeding, since it often continues into the third month and beyond.



Answer




But many women (and doctors) still call it a period.



It is not a period:




Pregnant women can have some light irregular bleeding during pregnancy, but it should not be like a “normal” period. Some women can confuse this for their period because often it can come right around the time she was expecting her normal period. [...] It should not be enough bleeding to fill pads or tampons over a few days [1].



It is quite common in the first trimester:



Vaginal bleeding during pregnancy can occur frequently in the first trimester of pregnancy and may not be a sign of problems. But bleeding that occurs in the second and third trimester of pregnancy can often be a sign of a possible complication [2].



It happens to about 25 % of pregnant women:



Up to 1 in 4 women have vaginal bleeding at some time during their pregnancy. Bleeding is more common in the first 3 months (first trimester), especially with twins [3].




Repetitive bleeding is associated with preterm birth:



Bleeding of multiple episodes, on multiple days, and with more total blood loss was associated with an approximate twofold increased risk of earlier preterm birth, PPROM (preterm premature rupture of the membranes), and preterm labor. In contrast, bleeding in the second trimester only, of a single episode, on a single day, and with less total blood loss was not associated with any category of preterm birth [4].





References:



  1. American Pregnancy Association. Am I Pregnant: FAQs On Early Pregnancy. Available from http://americanpregnancy.org/gettingpregnant/pregnancyfaq.htm (accessed 29.07.2014)

  2. American Pregnancy Association. Bleeding During Pregnancy. Available from http://americanpregnancy.org/pregnancycomplications/bleedingduringpreg.html (accessed 29.07.2014)


  3. U.S. National Library of Medicine. A.D.A.M. Medical Encyclopedia. Vaginal bleeding in pregnancy. Available from http://www.ncbi.nlm.nih.gov/pubmedhealth/PMH0003748/ (accessed 29.07.2014)

  4. Yang J, Hartmann KE, Savitz DA, Herring AH, Dole N, Olshan AF, Thorp JM. Vaginal bleeding during pregnancy and preterm birth. Am. J. Epidemiol. 2004 Jul 15;160(2):118-25. doi: 10.1093/aje/kwh180. PubMed PMID: 15234932.


evolution - Why do squirrels have twitchy bushy tails?


Whenever I see a squirrel in the woods, it is always the big bushy tail flipping around that gets my attention first. A prey animal with a big bushy flag calling attention to itself seems counter to survival: a big flag waving "here I am, come eat me".


A Sciuridae in Taipei (image from Wikisource)


Video link of tail twitching


Why do squirrels have twitchy bushy tails?


Edit to clarify: Squirrels do not have a high reproduction rate. There are differences between species, but most seem to average about half a dozen young per year, while a pair of rabbits can (though it is unlikely) produce around 1,300 offspring in a year (Wild rabbit). Rabbits and squirrels have similar coloring and predators, though there are of course many other differences between the two. Given the difference in reproduction rates and a big wavy flag that says "come eat me", it seems that if a rabbit and a squirrel are sitting on the forest floor eating breakfast, the one waving the flag is the most likely to be seen and eaten.



Answer





The obvious answer is that having a counterbalance is incredibly handy for some of the death-defying jumps squirrels perform. The tail is needed for that.



You make the assumption that an inconspicuous tail would be bad for predation. I would challenge that assumption, but I am making an educated guess so feel free to debate this.


Tail flagging in ground squirrels is used to deter predators and communicate vigilance. Perhaps something similar occurs in tree-dwelling squirrels too.


Rabbits have a short white tail which isn't particularly camouflaged. It helps to confuse the predator, allowing the rabbit to make sharp turns that the predator cannot anticipate.


In the case of this big bushy "twitchy" tail (which like rabbits, some squirrel tails have white tips, like in your picture), a predator probably cannot see the body and importantly cannot see or is distracted from the footing and pacing of the squirrel. If you've ever seen a dog chase a squirrel, the squirrel is easily able to outmanoeuvre the dog long enough to make it safely to a tree, probably because the dog can't predict the squirrel's trajectory due to the distracting and writhing tail.


Let's say a predator did get its jaws or claws near the squirrel and tried to take a chunk. There is a good chance all the predator will grasp is a bunch of tail fur. The squirrel will probably be sore but will live to tell the tail. (I couldn't resist)


degree - Do I have any potential in the academic field?



I live in a third-world country. Although I have a second-class degree in IT, I work in the general banking section in a private banking organization. This is possible because, in our country, private banks recruit engineering students (among others) as Management Trainees. They are rotated among various sections in the bank and are trained up for two years. After two years, they are promoted as Executives.


After joining the organization I found that a huge amount of study is involved in getting confirmed in the post and in getting promoted. Moreover, I don't know accounting.


Now I am thinking: since I have to study anyway, why not take the GRE and TOEFL, apply for an MSc in IT or computer science, and then a PhD in the USA?



I have already been away from academia for almost four years. And, since I am working full time, I think it will take two more years to prepare myself to apply for higher education in the USA. At that point I will be 32 years old.


Is a PhD degree from the USA enough to find a job at that age?




graduate school - How many research interests should be included in a statement of purpose/objective


I am wondering how many research interests should be included in a statement of purpose. Is it better to include just one core interest, or up to three, as in my case?



Answer



Be as specific as possible. Do not bluff.



Remember that admissions committees are looking for strong evidence of research potential. One of the markers of that potential is a deep interest in your intended research area. For that reason, it's important to describe your potential research interests in specific and credible detail. Why are you interested in field X? What specific problems are you interested in working on? What projects have you done? What papers have you read (or written)?


It doesn't matter all that much what you write about. We know that your interests will change over time. Nobody is going to limit you to the specific research topics you describe in your statement. Your statement is at least as much a demonstration of intellectual maturity as it is a description of research interests.


Eykanal's observations are correct. Most graduate school applicants "barely know what's being researched". A list of buzzwords mined from faculty web pages is not credible. You can't effectively describe what your interests are when you aren't familiar with the field. But I disagree with his conclusion; just because it's hard doesn't mean you shouldn't do it. Don't be most applicants. Know what's being researched. Don't be vague, and don't just list buzzwords. Make yourself familiar with the field.


After you've done that, writing about your research interests is easy, because you actually have some.


teaching - Private Git repositories for students, that don't become public later


Short version of my question: Is there a good way to provide students with free, private, Github-like git repositories, that won't become public later?


Background: In computer science, it is considered best practice to use Git (a version control system) for software development, and many universities want to encourage students to follow this practice as part of their coursework. However, they don't want student work to be made public, as this creates a temptation for cheating.


Github in particular is popular, as it provides Git repositories, an easy-to-use web front-end to Git, and a convenient workflow for collaborating using Git. Github is free if you make your repository public; normally, you have to pay for a private repository. Github has a special program for students: as long as you are a student, you can have private repositories for free if you sign up for their student discount. However, this only lasts as long as you are a student. After two years, the student discount expires and the private repositories are locked; if ex-students want to retain access to their repositories, they have 30 days to either pay Github a monthly fee to keep them private, or tell Github to make them public and pay nothing.


One university I'm familiar with encourages CS students to sign up for a student Github account and put their projects and homework in private repositories. However, empirically, Github's policies seem to encourage a certain number of students to make all their repos public after they graduate and their student status expires -- and then solutions are available on the Internet. Because good software development projects are so labor-intensive to construct, many courses re-use projects for several years in a row. Current students have reported finding such solutions by searching, and are worried about how easy this makes it for others to cheat. Therefore, recommending that students use a private Github account seems to create a two-year time bomb with unfortunate consequences for courses that plan to use a project for more than two years running.


Is there any good solution to this? Is there a better alternative than Github that can be recommended to students?


In particular, a better alternative should meet the following requirements: it allows students to have Git repositories for software development and collaboration with project partners; it is free; it is private; it will remain private over time, without encouraging or requiring alumni to make their solutions public if they want to retain access to them without paying; and it doesn't create extra work for instructors, who would otherwise have to constantly search Github for inadvertently public solutions from past semesters.


I've seen How to deal with student putting their (home)work on github, but the solutions there aren't workable in this context: creating new projects every semester is not a good solution (it's been tried and leads to pedagogically inferior results).





Saturday, 22 December 2018

dna - Chromosome and chromatid numbers during cell cycle phases


A diploid cell in G1 has 6 chromosomes. How many chromosomes and how many chromatids are present in each of the following stages?


Here is what I am guessing




  • G1: 6 chromosomes ; 6 chromatids





  • G2: 6 chromosomes ; 12 chromatids




  • Prophase: 6 chromosomes; 12 chromatids




  • Metaphase: 6 chromosomes; 12 chromatids




  • Anaphase: 12 chromosomes; 12 chromatids





  • Telophase: 12 chromosomes; 12 chromatids
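
For what it's worth, these guesses follow the usual bookkeeping rule: chromosomes are counted by centromeres, chromatids double after S phase, and sister chromatids count as separate chromosomes once they split at anaphase. Here is a small Python sketch of that rule, counting per cell before cytokinesis, purely as an illustration of the bookkeeping and not an authoritative answer:

    def counts(stage, n_g1=6):
        """Chromosome and chromatid numbers per cell, for a diploid cell with
        n_g1 chromosomes in G1, assuming mitosis and counting before cytokinesis."""
        if stage == "G1":
            return n_g1, n_g1              # one chromatid per chromosome
        if stage in ("G2", "prophase", "metaphase"):
            return n_g1, 2 * n_g1          # after S phase: sister chromatids still joined
        if stage in ("anaphase", "telophase"):
            return 2 * n_g1, 2 * n_g1      # sisters separated: each counts as a chromosome
        raise ValueError(f"unknown stage: {stage}")

    for stage in ("G1", "G2", "prophase", "metaphase", "anaphase", "telophase"):
        print(stage, counts(stage))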






Friday, 21 December 2018

etiquette - Lab colleague uses cracked software. Should I report it?


I've just (accidentally) found out that one of our colleagues in the lab (a graduate student) uses a cracked piece of software on his personal laptop. (We were talking near his station and a pop-up appeared warning about the software's invalid license.)


The software is an expensive one whose student version is freely available on our shared server; however, the student version's features are often not sufficient for our tasks. So I realize that he may have felt he had to do this to get his work done. I don't know if the supervisor is aware of this.


All in all, since ignoring copyright is not acceptable, should I report this issue to the responsible office?




lab techniques - How sterile is sterile when working with nucleic acids to prevent contamination?



I am reading up on preparatory work for handling nucleic acids, and a lot of the instructions describe extensive procedures for cleaning work surfaces with high-percentage ethanol and making sure the equipment is nuclease-free and autoclaved.


Are these sterilization steps really necessary when doing research or running gels? Buying all of this equipment seems very excessive, and given that just one nuclease could compromise my results anyway, why bother?



Answer



Short answer: Yes


Long answer: It depends on what you are working with.


DNA: If you are working with DNA, it's pretty stable and you can usually get away with a 70% ethanol wash and autoclaving (mainly to prevent contamination and obtain consistent results). EDIT: Also read Chris's answer below.


RNA: If you are working with RNA, well... whatever you did for DNA doesn't apply anymore. You have to take it to the next level. You will need to replace everything: you'll need nuclease-free water, tubes, reagents and whatnot. Throw out the ethanol and bring in RNAZap, DEPC-treated water or something like that.



An RNase free environment is essential when working with RNA samples. There are two main reasons for RNA degradation during RNA analysis.


First, RNA by its very structure is inherently weaker than DNA. RNA is made up of ribose units, which have a highly reactive hydroxyl group on C2 that takes part in RNA-mediated enzymatic events. This makes RNA more chemically labile than DNA. RNA is also more prone to heat degradation than DNA.



Secondly, enzymes that degrade RNA, ribonucleases (RNases) are so ubiquitous and hardy; removing them often proves to be nearly impossible. For example, autoclaving a solution containing bacteria will destroy the bacterial cells but not the RNases released from the cells. Furthermore, even trace amounts of RNases are able to degrade RNA. Note that RNAses are present everywhere (skin, reagents, normal plastic etc).


Therefore, it is essential to avoid inadvertently introducing RNases into the RNA sample during or after the isolation procedure.



Remember that following microbiological aseptic technique is usually good practice (and a requirement) when working in a molecular biology lab.


There are excellent guides available online for working with RNA:


I took the above paragraph from here: http://www.bioline.com/us/rna-hints-and-tips


Life tech has a few more tips for you: https://www.lifetechnologies.com/uk/en/home/references/ambion-tech-support/nuclease-enzymes/general-articles/working-with-rna.html


Here is a good PDF to keep in your lab book as a reference sheet: http://genomics.no/oslo/uploads/PDFs/workingwithrna.pdf


If this scares you, that's good, because when working with RNA there are a million things that can go wrong the first few times you try it. You have to be very careful, and adopting a cavalier attitude such as not bothering is going to come back and haunt you at night...


Programming Interview for PhD Admission in Computer Science


A prospective supervisor is interested in me and has asked me to do a programming interview. As I have been told, his research group does a lot of systems programming, and he is seeking a good programmer.


I have no industry experience in programming, though I have programmed for assignments, projects, and a Master's thesis. I know that, as a computer scientist, it is essential to have good programming skills. I tried to search for some tips about programming interviews for prospective PhDs, but could not find anything.


Does it differ from an industry interview? Has anyone had a similar experience? Any references or tips?



Answer




I had the interview yesterday. I would like to share the experience, as it might be useful for anybody who goes through a similar kind of interview in the future.


The interview was via Skype with an outsourced software engineer (not the prospective supervisor), and we used a shared .doc file to solve two programming problems about strings. By the way, most of the programming interviews I have had (mostly industrial, plus this academic one) involved string manipulation and sometimes data structures.


The interview lasted an hour, in which I was given 20 minutes to solve each problem and 10 minutes to discuss it. The general impression was positive. The concepts were always familiar, yet I needed some practice to make my code work. I was free to choose the programming language I wanted to write in.


I would say a bachelor's level of programming is enough. You will just need to revise and practice a little. You might want to focus on the logic you follow more than on the small details that differ from one language to another.
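
To give a feel for the level, the problems were roughly of the flavour of the made-up warm-up below. This is not one of the actual interview questions, just plain string manipulation of the kind you can write in whatever language you are comfortable with:

    from collections import Counter

    def first_unique_char(s):
        """Return the first character of s that occurs exactly once, or None."""
        counts = Counter(s)
        return next((ch for ch in s if counts[ch] == 1), None)

    assert first_unique_char("interview") == "n"
    assert first_unique_char("aabb") is None

If you can write and explain something like this comfortably under a little time pressure, you are roughly at the level these interviews seem to expect.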


etiquette - Should I let potential employers know I have a job offer?


I've interviewed for a few positions, one of which has made me an offer. I'm very pleased with the offer, but I prefer some of the other positions. I have the luxury of being able to take a lot of time to decide whether or not I accept this offer, so I'm certain I will hear back (either positively or negatively) from the other positions before a decision is required for the offer I've received.



I'm trying to decide between two courses of action, and I want to choose the one that will best help my candidacy with the positions I've yet to hear from. Plan 1 is to do nothing and simply wait to hear back from the other positions for which I've already interviewed. Plan 2 is to be in touch with the other positions to let them know I have an offer, and ask where I stand with them.


Would Plan 2 make me a more desirable candidate with the other positions, as they might see my offer as further validation of my credentials? Or, might initiating with the interviewers in this way be detrimental to my candidacy, perhaps making me seem pushy or self-centered? If so, I would rather just wait for them to run their course of action (Plan 1). As I've said, I have plenty of time, so I don't want impatience to hurt me.


If there is a good chance that my meddling will be detrimental, then I'd rather be hands-off here. However, if it's more likely that informing the other interviewers of my offer will give me a boost, then I'd want to do so.


Thanks for any clarity you can provide!


Edit: For a bit more information and context, the position I've been offered is a non-academic research position that my advisor has described as "prestigious." The other positions for which I've interviewed (and which I prefer) are mostly academic teaching-centered positions (visiting assistant professors, tenure-track at smaller four year institutions).




Thursday, 20 December 2018

teaching assistant - What are some ways to increase grading speed?


I sometimes spend too much time grading students' homework for the class I am a TA for. Are there ways to improve grading speed? I hope I can learn some useful tips from experienced people here.


For example, which one do you think will be faster, grading student by student, or problem by problem?



Answer



I find it very helpful to write a grading rubric. It includes how many points I give for each part of each problem, as well as how many points of partial credit I give for various common errors. Typically, my key consists of a worked copy of the exam, and I make notes about each question on the actual exam. Generally, my rubric gets more detailed as I grade, since it's only then that I learn what the common mistakes are.


If I feel that I'm taking too long grading, I'll often start timing myself. Maybe I allow myself at most 30 seconds per question (often less). This isn't a hard rule, but it helps me know what to aim for. (Recently I graded a calculus exam, so most of the problems were pretty quick to grade.) Your mileage will vary from one subject to another.


networking - How can I get the most out of conferences?


I recently attended a very large conference in my field (SfN, ~30,000 attendees), and after I got back I thought about what I had gained from the trip and realized it was not that much. I listened to a few different talks and saw a whole bunch of relevant posters, but on reflection I don't think anything I did advanced my knowledge or career very much. What should I do in the future to ensure that I make the most out of conferences?



Answer



Read this and this.


My professor put forth 3 simple rules for networking:




  • Talk to the guy beside you

  • Talk to the top 3 presenters (sorted by relevance, or whatever you prefer)

  • Email them 5 days after the conference with some follow-up content (questions/comments/invitations for talks, etc.)


Just to make this post "dead-link" proof, I present a gist of the content in the above links.




  • Start Early. You should begin preparing before the conference starts: read up on who will be there, email people you want to meet, and decide which events you will attend. You may want to contact the speakers whose talks you will be attending before the conference; try to set up a meeting, or if they are too busy, at least meet them and give them your business card.





  • Bring Business Cards. Make sure they're up to date and detail your preferred mode of communication.




  • Research people and get involved in their networks. If a certain professor is giving a talk, read their previous research papers, frame interesting questions and get an excuse to meet them. If you do meet them, exploit the opportunity to interact with their peers and try to enter their network. Sometimes, this is the only way of getting to network with someone. I know of professors who refuse to take students for PhDs, internships or postdocs without a recommendation from someone in their network. A good impression might just get you that recommendation.




  • Note people with similar interests to yours. These people will be attending the same presentations as you, talking to the same people, and discussing similar topics. They are prime opportunities for networking.




  • Prepare the elevator speech. A common question will be "So, what is your research about?" Make sure you have an answer for every audience. For example, if you are in computational science, the answer may vary depending on who you are talking with. Plus, make it interesting and digestible.





  • Organize an event of your own. This is especially useful for forming "lower" networks, i.e. networks of people who are younger or less experienced, such as graduate students. If nothing more, they could notify you of openings or interesting papers. They could be useful. (Plus it helps us :P )




  • Read "Never Eat Alone".




  • Follow Up. Prepare for this even before you leave for the conference. Have different modes of follow-up ready. Will you have anything to say that is worth writing an email for? If not, think of something that will be. If nothing works, make sure you take a photo of yourself with them and send it to them a few days after the conference.





teaching - Why do educators use curve to adjust the performance?


It bothers me when I try to remember how many high schools or universities use a curve to adjust their students' final grades based on the class average.


I know failing the entire class would be unacceptable for the faculty member who taught it, but is watering down the material or curving the grades to fit a standard really necessary? What if there are a few outstanding students who are ahead of the curve? Wouldn't it be unfair to them if the curve were adjusted to the class average?


Sometimes I start to question the educational standards for grading. What are the standards?


Knowing the material and being able to apply it?


Putting all the relevant material on one test paper?


Giving homework that counts towards the grade?


Giving quizzes to remind students how well they are doing in the subject?



If all of the above are taken into consideration, why do instructors still apply a curve to adjust the class average?



Answer



Instructors often use some kind of curve to adjust for the varying difficulty of exams from one year to the next.


The general idea is that if (for example) the class average on an exam is lower than it has been for the same course in previous years but the quality of the students' work is the same, then the exam given this year might have been more difficult than the exam in previous years. It would be unfair to students (and also reduces the signaling power of grades) if their grade is strongly dependent on the year in which they happened to take the course. An instructor might choose to adjust the students' grades to account for this.
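
As a purely hypothetical illustration of the simplest form such an adjustment can take, an instructor might shift raw scores so that this year's class mean matches the mean on comparable exams from previous years. The numbers and the choice of an additive shift below are my own; real instructors use many variations.

    def shift_to_reference_mean(scores, reference_mean, max_score=100):
        """Add a constant offset so the class mean matches a reference mean,
        capping individual scores at max_score."""
        offset = reference_mean - sum(scores) / len(scores)
        return [min(max_score, s + offset) for s in scores]

    raw = [48, 62, 71, 55, 80]                               # hypothetical raw scores (mean 63.2)
    print(shift_to_reference_mean(raw, reference_mean=70.0)) # approximately [54.8, 68.8, 77.8, 61.8, 86.8]

A multiplicative rescaling or a remapping of score ranges to letter grades works the same way in spirit: the point is to compensate for exam difficulty, not to reward or punish the cohort.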


It's generally unwise to assign grades in a way that is strictly norm-referenced, without taking into account the students' demonstrated mastery of the course material, for reasons described in this answer. For example, if the class average on an exam is lower than it has been in previous years, but the average quality of work submitted by students is also worse than usual, then the average grade of the class should be lower (to preserve "fairness" and also the signaling power of grades.)


Some instructors might use a curve for other reasons, e.g. if their department has a policy about the maximum number of "A" grades an instructor can give out. (Like this example.)


Wednesday, 19 December 2018

evolution - Is there a name for the evolutionary loss of vestigial structures?


Consider a biological structure which no longer benefits an organism, such as the eyes of an organism whose population now lives in total darkness. I can think of three reasons why such a structure might disappear:



0) Random changes to the structure over time wouldn't be corrected by selection favoring the functional version of the structure, leading to wider variation in which most versions of the structure no longer function effectively.


1) The resources the structure demands could be better spent on structures which are actually being used; e.g. human eyes require a lot of blood that could be used elsewhere.


2) Perhaps the existence of a very complex structure leads to biological problems which would no longer be an issue if the structure were not present; e.g. human breasts plus breast hormones frequently leads to cancer.


Are these three examples reasonable means by which a feature would disappear? Are there any other possible reasons?


Is there a general name for the phenomenon of evolutionary removal of vestigial features due to those features no longer being useful to a population?



Answer



This phenomenon can be (and has been) described as regressive evolution (the loss of a phenotypic trait). There are several reasons why this occurs:



The eye degeneration example you chose is a good one because it is well studied in cavefish (which evolved from sighted surface fish and have degenerate eyes). Normal eye development is under the control of the Pax6 transcription factor. Expression of another gene, Shh (a secreted signalling molecule), reduces Pax6 expression. Shh expression along the embryonic midline is responsible for splitting the eye field bilaterally. Overexpression of Shh in surface fish leads to eye degeneration and, indeed, it was found that cavefish have an expanded Shh expression pattern.


Cavefish have also undergone a behavioural shift to bottom feeding and have become less aggressive to focus more on finding food. As it happens, the expanded Shh expression also causes a widening of the jaw and amplification of taste buds, both of which aid in scooping and sampling the river bottom. Furthermore, increased Shh expression during brain development influences a decrease in aggressiveness and a shift to foraging behaviour.



This is an example of pleiotropic antagonism: positive selection for jaw enhancements and behavioural changes via expanded Shh expression, which increase fitness in cave environments, can explain why the eye degenerated.


evolution - Are there any multicellular forms of life which exist without consuming other forms of life in some manner?

The title is the question. If additional specificity is needed I will add clarification here. Are there any multicellular forms of life whic...