Wednesday, 31 May 2017

teaching - How to help reduce students' anxiety in an oral exam?


Oral exams can have various pedagogical benefits in certain circumstances, but can also make some students feel anxious. After watching several students nervous-sweat their way through such an exam, I'm looking for ways to help them.


How can an instructor help students relax and feel less nervous during an oral exam?



Answer



The following assorted suggestions stem from standard procedure at my department, which has long experience with oral exams, and from positive experiences reported by fellow students who suffer from anxiety.




  • The exam begins with the student giving a brief elaboration of a topic of their choice (within the subject of the course). They can talk uninterrupted for a while, and then the exam gradually shifts to question-and-answer mode on this subtopic, probing whether the student actually understands what they are talking about. This has several advantages:




    • The student knows how the exam will begin and they can prepare the first few minutes. This avoids anxiety due to uncertainty.

    • The student is not dropped into interactive mode instantly, but gradually.

    • The student knows which topic will be examined first and can prepare for it, which boosts confidence.

    • Though not related to the question, it also allows the examiner to estimate how well the student can structure the chosen topic (which usually goes hand in hand with understanding) and put it into the context of the course’s subject.




  • Do not ask questions that require long and elaborate answers; instead, go step by step, guiding the student if necessary, but only if necessary (so they have room to shine). For example, do not ask:




    Please tell me about [central result of course].



    Instead, ask a chain of questions, allowing the student to answer each of them, and adapt them depending on any mistakes the student makes. For example, the questions could be as follows:



    What does [central result of course] state?


    Why is it so important?


    Under which conditions does [central result] hold?


    Why is [some necessary condition] necessary?


    Why does [some sufficient condition] suffice for [central result]?




    In a typical case, these questions rise in difficulty, with the first being something that every serious candidate should be able to answer without much thinking. This allows the student to gain some confidence and get attuned to the topic before the questions become more difficult. It also avoids the impression of time pressure, and the feeling of having to rush to the one bit they are not confident about, since they can score by explaining a lot of basics before that.




  • If you ask questions where the student must apply their knowledge and which require some thinking time, do this after they have succeeded with some simpler tasks, and announce that this is an advanced task and that they have some time to think. This avoids fear of failure and gives an extra boost of confidence if they succeed.




  • Give positive feedback whenever you can. This does not mean that you should be drowning the student in praise, but rather avoid fuelling their anxieties by forgetting to let them know that they are correct when they are.




  • If the student makes a mistake in their elaborations, let the student finish first, affirm what was correct, and then remark or possibly ask about the mistake. For example, if the student is elaborating on some equation and made a sign flip, do not remark upon it right away, but let the student finish. Then praise what was correct and ask them to explain their reasoning for that sign again. This has the following advantages:





    • It doesn’t break the student’s flow by interrupting their elaboration, nor lower their confidence mid-answer because of the mistake.




    • It gives the student an opportunity to detect the mistake themselves, e.g., when it leads to a problem further on.







  • Announce the examination format beforehand as much as is reasonable, to reduce uncertainty about an unfamiliar situation as far as possible.




publications - How do I convert my PhD dissertation so that it can be published as a book?


I have just completed a PhD by research, which I think merits being made available as a book (this is, of course, my personal opinion). The research cuts across several fields (e.g. sociology, public policy).


I am wondering what I need to do to take the next step.


In particular:





  1. How do you turn an academic piece like a dissertation (which is written for the examiners) into a book (which is for the general public)?




  2. How do I find publishers who specialise in my field?






entomology - Insect identification, Chennai, India


Could someone tell me what species this beautiful insect is? Location: Chennai, India.


Thank you!



Answer



To me, it looks like a lepidopteran of the family Sphingidae (hawk moths). It is similar to species in the genus Hemaris and might belong to it.


One similar species from this genus (found in India) is Hemaris tityus, but the moth in your picture doesn't look exactly like this particular species. Another similar species is Hemaris saundersii, which is found in and around India.


[Image: Hemaris tityus, from Wikipedia]


Tuesday, 30 May 2017

zoology - Is life expectancy linked to intelligence in animals?


For example, animals that live only a few days or a few years are often not very intelligent. In contrast, the most intelligent animals seem to live longer.


Is this true? Are there any studies to prove or disprove this statement?




Monday, 29 May 2017

Should I start my CV by telling about myself?


Currently, in my CV, the first section is the About me section. It goes like this (I'll preserve the formatting of the text):



Broad knowledge, is why my friends are proud of me.
Never stops asking questions, is what my advisor values in me.



I wonder if writing like this makes me look bad. Will recruiters see me as a confident person and get a better picture of me (which is the impression I want to convey), or will they see me as just arrogant, lacking self-esteem, and paranoid?



The next sections are Education, Research Experience and Activities. Together they are about a page and a half long.


What do you think? Please be frank. Thank you so much.




Thanks to the many people who answered my question, I get that I should save this for the SOP. However, on some occasions I'm asked to send only my CV, with no cover letter. Should I still keep the "About me" section as a mini SOP in such cases? If it sounds like "platitudes, clichés, and self-compliments" (thanks for being frank, I do need it), how about this idea I just came up with?



I chose science because I want to know everything. I chose physics because I think it is the buttress of other disciplines.



I can make it better later.



Answer



1) This is nonstandard, so people are likely to view you as odd, or at the very least unfamiliar with academic norms.



2) On a CV, you should prioritize specific, tangible achievements over things that literally anyone could say about themselves. You say you have broad knowledge, but will anyone believe you? The claim does nothing to differentiate you, since anyone could make it. Save that for your letters of reference.


workplace - Reading material on working conditions of women in academia


Like many professions, academia is a challenging environment for women. In some disciplines (e.g. computer science), the number of women remains low despite efforts to increase it. Have there been any academic studies on the ways of improving the working conditions for women, specifically focussing on women in academia? As an academic working in the hard sciences (i.e. not gender studies), what book or review could I read on the topic, to help me get a better understanding of these issues (and possibly improve my own behavior)?


I'm not interested in “advice” (in part because I am not a woman), but in studies of how effective various possible ways of improving the working conditions for women (in academia) are. Something like: “we studied universities implementing policies X and Y, and show that they increase gender diversity by xx%”.




The question “Women in academia” is related, but I'm asking for material with a totally different perspective.



Answer



The most recent paper to make a big splash on this subject was "Science faculty's subtle gender biases favor male students", by Moss-Racusin et al. You can start there, and dig backwards through the references - you'll hit most of the major reports on this topic.


A few notes on the topic of this paper itself:



The same gender biases that academics show towards their students are also demonstrated against their peers, so don't narrow your research too much. And if your question is "why are there so few academic women in the sciences?", you need to look at the problem from top to bottom. Women aren't going to want to become professors if they are already noticing the bias as undergraduates.


Sunday, 28 May 2017

citations - Should I keep all my .bib files in a single folder, or one with each .tex file?


Self-explanatory. I would be most interested to hear from researchers who have published a fair amount, decided one way or the other wasn't working for them, so switched — and why.


Related advice / observations welcome: e.g. "why the hell would you need more than one .bib file?"



Answer




  • Several .bib files in a single folder:


You might want to do this if you write about quite disjoint topics, or if you want to keep several sets of inherently different references in separate files (e.g. scientific publications in one file, technical standard documents in another, etc.). Overall, however, I see little reason to choose this approach.



  • One .bib file (obviously in one folder):



This allows you to build up a database with your personal literature collection. Given that BibTeX by default only includes cited references, this is one of the ways the system is supposed to be used. For someone working entirely alone, this might be a viable way to go.



  • Several .bib files in several folders:


This is the approach I follow (using JabRef), with the further restriction that the .bib files usually reside precisely in the folders where they are used by .tex documents; a minimal example layout is sketched after the lists below.


Advantages:



  • The .bib file is a part of the source. When using a VCS, everything required to build the document should be in the VCS, and with one repository per paper/project, the appropriate .bib file needs to be stored in each repository.


    • This might be solvable by including repositories in repositories (such as with SVN externals), but that still assumes a central repository location that is accessible to all co-authors, which is not a given when collaborating with different groups.



  • As also remarked by Federico Poloni, when several authors work together, they need to use the same references. It wouldn't make any sense if each author had his or her personal large .bib file, rather than having one common .bib file for the paper/project.

  • Even when working as a single author, the .bib file sometimes needs to be submitted for the camera-ready version of a paper, to allow editors to build the document themselves. While I don't think it's usually explicitly forbidden, I'm quite sure it's not a very good idea to submit your complete multi-MB literature database every time you submit a CR version.

  • Different papers/projects need to be formatted according to different style guides. While the actual layout of the bibliography is imposed by the BibTeX style that usually comes with the paper template, in my experience some paper-specific tweaking is required more often than not:

    • Some styles show URLs; for other styles, the URL needs to be inserted into the howpublished or the comment field.

    • In some papers, you want to (or have the space to) show some redundant information such as publication months, publisher locations, or DOIs; in others, you don't.

    • In some papers, you may want to use full journal or conference proceedings names; in others, you may want to abbreviate them as far as possible.

    • In some papers, you can use special packages (e.g. for correctly rendering a Latvian name with a comma accent); in others, this might not be allowed.

    • In some papers, the layout of the bibliography is such that you need to repair some ugly block formatting with additional \hskip commands, custom hyphenation, and the like.



  • When starting a new paper or project on a topic I have written about before, and I want to grab some related work for the introduction, I find it most convenient to open the .bib file of the previous document to get an overview of the ~30 references that I used there. Of course, I could also look in the compiled PDF file, but I cannot directly copy the entries that seem suitable from there.

  • As mentioned by darij grinberg, it might not be desirable to have later changes to bibliography items retroactively show up in old documents. It would mean that the old sources compile to something other than what they did at the time of writing, and it may even ruin a carefully adjusted layout.


Disadvantages:



  • I regularly need to copy some references from one file to another when I want to reuse them.
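
For illustration, a minimal per-project layout along these lines might look as follows (a sketch with hypothetical file names and a made-up entry, not a prescription):

    % paper-sdn/main.tex -- the document and its .bib file live side by side
    % in the paper's own folder (and repository).
    \documentclass{article}

    \begin{document}
    Our approach extends earlier work~\cite{smith2016sdn}.

    \bibliographystyle{plain}
    \bibliography{references} % resolves to references.bib in this folder
    \end{document}

    % paper-sdn/references.bib -- only the entries this paper actually cites
    @article{smith2016sdn,
      author  = {Smith, Jane},
      title   = {A Publish-Subscribe System Based on {SDN}},
      journal = {Example Journal of Networking},
      year    = {2016},
    }

Reusing a reference in a new project then simply means copying its entry into that project's .bib file, which is the disadvantage noted above.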



molecular genetics - Can both the overlapping genes (in opposite strands) produce proteins?


I have noticed that both the forward and reverse transcripts from a genomic location code for protein products. Both occur/are expressed in the tissue of interest. To rule out chance, I have screened my entire list of genes and found 6 different examples. I would appreciate it if anyone with knowledge of such exceptions could guide me further.


Is this phenomenon of protein being coded from both directions (of a given genomic location) possible? Are there any known examples?



Answer



While overlapping antisense RNAs are quite well known, there are very few examples in which both the RNAs from the pair can code for proteins. However, it is not impossible. I can cite one validated example: RU2S (DCDC2) and RU2AS (KAAG1) pair.


While RU2S is constitutively and ubiquitously expressed, RU2AS is specifically expressed in kidney, bladder, liver, and testis. It is also expressed in certain tumours (Peltz and Dougherty, 1999).







Peltz SW, Dougherty JP. Antisense Translates into Sense. The Journal of Experimental Medicine. 1999;190(12):1729-1732.

graduate admissions - When applying for a PhD, does the admissions committee care about wins in programming/data-mining contests?



When applying to a computer science PhD position in the US, does it make sense to mention significant achievements (finishing in the top 5%) in programming/data-mining contests (TopCoder, Kaggle, HackerRank)?


These competitions carry weight with tech giants, but does a PhD admissions committee take them into consideration? If so, it would be nice to know how much weight they have and the best way to present them.




titles - What is a Pro-Chancellor?


I was looking at a document describing seating arrangements for a ceremony and ran into the phrase:



The president of the Student Guild shall sit in the front row, with the Vice-chancellor, the Chancellor, and the Pro-Chancellor



What is a Pro-chancellor? I've never heard of one before.


In case this is a regional term: I am at an Australian university.



Answer



A "pro-chancellor" (or similarly a "pro-rector," which we have in Germany), is a deputy to a chancellor (or rector), who handles certain duties on behalf of the chancellor (or rector). In the particular case of Commonwealth countries from which you hail, the pro-chancellor is the person who heads the university's executive council on behalf of the chancellor.



cardiology - How quickly can the human heart rate rise and fall?


How quickly can the human heart rate rise and fall?


For example, let's say a person's heart is at a resting rate of 60 BPM, and that person is suddenly scared, triggering their fight-or-flight response. Let's say their heart rate doubles (to 120 BPM).


In the above example, the rate has gone from 1000 ms between beats to 500 ms between beats. Can the human heart go from 1000 ms to 500 ms between beats instantly, in one heartbeat, or does it need to ramp up? If the latter, how quickly can the heart rate ramp up?
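
(For reference, the inter-beat interval in milliseconds is 60000 divided by the rate in beats per minute; this is the arithmetic behind the figures above:)

    interval (ms) = 60000 / rate (BPM)
    60000 / 60  = 1000 ms
    60000 / 120 =  500 ms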


I understand that each human heart is different, and that the speed increase and decrease will be different from person to person. What I'm looking for is a value that I can safely say the human heart won't exceed.



Similarly, the same question applies to the heart rate falling.



Answer



According to the Wikipedia page on Supraventricular tachycardia the heart can go to a new faster rate in the space of a single beat, and then come down again just as quickly, as shown in this image taken from the Wikipedia page.


[Image: ECG trace of supraventricular tachycardia, from Wikipedia]


citations - Recreating images from a source


In the field I work in, it is customary to use a graphical representation of some mathematical concepts. For my thesis, I have been creating a batch of figures containing these representations. These figures are very similar to those in one of my sources. I am wondering if I should cite that source in some way.


Some of my considerations:



  • The general form of the images is widely used in the field

  • I am making all figures myself, so no direct copying



Answer



According to SIAM:




Note: Figures or tables created by someone other than the author or borrowed from a previously published source, even those created by the author him- or herself, must carry an appropriate credit line at the end of the caption. (See section 5.2.5 for additional information.)



Elsevier and IEEE have similar statements, which explicitly include the recreation of the figures.


If you have a source in mind already, it is probably safest just to cite it in the caption. However, if the form of the figure is indisputably widespread, then there is no need to cite it. I would generally follow the same fair use guidelines as does Wikipedia in such cases.
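
As a concrete illustration, in LaTeX such a credit line can simply be appended to the caption (a sketch: the file name, label, and citation key are made up, and graphicx is assumed to be loaded):

    % In the preamble: \usepackage{graphicx}
    \begin{figure}
      \centering
      \includegraphics[width=0.6\linewidth]{hasse-diagram} % hypothetical image file
      \caption{All possible bit combinations as a Hasse diagram.
               Adapted from~\cite{author2015}.} % credit line at the end of the caption
      \label{fig:hasse}
    \end{figure}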


Examples I would not cite (because they are indisputably widespread):



  • Using circles and lines connecting them to represent a graph, G

  • A Hasse Diagram to represent all possible bit combinations

  • The format of a UML diagram (although I would refer to it as such)

  • A figure where I cannot identify an appropriate source with a reasonable amount of effort



Examples I would cite:



  • Any figures that I created for a previous paper ("adapted from [2]")

  • A figure from another paper that demonstrates a specific point, where demonstrating that same point is the reason for recreating the figure

  • A figure reporting data when I have not added to/modified the underlying data

  • Any time that I have a source in mind and I am unclear under which category it falls


Saturday, 27 May 2017

publications - Is it plagiarism for my thesis advisor to publish a paper using content from my thesis without citation?



I did my master's thesis last year, and recently I found out that a group of four faculty members in my department, including my thesis advisor, have published an ACM paper based on it (a publish-subscribe system based on SDN).


I will be honest: the problem statement was put forward by the faculty. The implementation (design of algorithms and coding) was completely done by me in my thesis. They then further extended it to a distributed SDN controller environment.


In the paper, an entire section is devoted to the algorithms and implementation, and it is almost entirely lifted from my thesis. The sentence structures have been changed and the algorithms polished to look more concise.


However, I have not been given any acknowledgement or citation. Anyone who reads that paper will get the impression that the authors were the only brains behind the project.


The university holds the copyright to my thesis, so I am not sure whether this qualifies as plagiarism. But I certainly feel it is not fair to sweep someone's contribution under the carpet.


What can I do about it? Or am I mistaken, and they have every right to do as they please, since I am no longer a student there and the copyright is with them?



Answer



Whether or not the department holds the copyright to your thesis is irrelevant. Using someone else's ideas without appropriate attribution is plagiarism, period.


So, if your advisor used your original, non-trivial scientific ideas (or your non-trivial description of those ideas) in his paper without attributing them to you, then he has committed misconduct.


The only thing that may be questionable is whether or not your original intellectual ideas were actually used in the paper. What you describe definitely sounds pretty damning, and the more information you add, the worse it sounds; but as strangers on the Internet, we don't have the whole story.



For example: Given that the idea for the thesis was the advisor's, and the paper describes a non-trivial extension, it's possible (though perhaps not likely, depending on the scope of the work) that your advisor was working on the extended version himself independently of your thesis.


It's also possible that he considers your work to be a straightforward implementation of his idea, and not an intellectual contribution - that is, he believes you were doing the work of a staff programmer, not a scientist or engineer. In that case, an acknowledgement would probably have been appropriate, but it's not necessarily plagiarism to omit it.


The degree to which your work constitutes an intellectual contribution to your advisor's paper is impossible for strangers on the Internet to judge.


I suggest you email your former thesis advisor, tell him you've seen the paper, and ask (in a non-confrontational way) how it relates to your thesis work. Then decide how to proceed from there.


Note that pursuing the matter beyond that (i.e. formally accusing him of plagiarism) may involve some serious negative consequences for you, so consider this carefully before proceeding. The morally just course of action may or may not actually be in your best interests.


genetics - How can the number of genes increase through evolution?


I am aware of the basics of evolutionary theory; however, I don't understand how mutations can add genes over time.


Am I correct in thinking that creatures within the same species who mutate to have an additional gene in their genome would normally be infertile? Or am I misunderstanding that?


Can someone explain the process of the creation of genetic material through evolution? Citation to a decent academic paper or book on the matter would be appreciated too.



Answer




Mutations can have all sorts of impacts on genetic architecture. A mutation can have a small impact on the genetic architecture, such as:



  • Substitution

  • Insertion

  • Deletion


or can have a much bigger impact, such as:



  • Amplification (incl. gene duplication)

  • Deletions


  • Chromosomal translocations

  • Interstitial deletions

  • Chromosomal inversions

  • Chromosome duplication

  • Change in ploidy number


Some of these mutations (gene duplication, chromosome duplication, change in ploidy number) typically duplicate DNA segments. After duplication, the two copies of a gene can diverge through neofunctionalization or subfunctionalization. Have a look at Wikipedia > gene duplication to understand what biochemical processes can cause such mutations (e.g. ectopic recombination, retrotransposition events, replication slippage).



Am I correct in thinking that creatures within the same species who mutate to have an additional gene in their genome would normally be infertile?




You are mistaken. I understand the naive intuition that a change in copy number of a given gene would be extremely deleterious, but in reality living organisms are much more resilient to such copy-number variation (CNV) than you would think. Of course, some CNVs are associated with diseases, but by no means all of them (McCarroll and Altshuler 2007).


Gene duplications are actually quite common, whether in C. elegans (Lipinski et al. 2011) or in humans (Cotton and Page 2005). Chromosomal duplications are also common (Bowers et al. 2002). Even whole-genome duplications (see a classic example in Wolfe 2015) have played major roles in the evolution of many lineages (Whitton and Otto 2010), including vertebrates (Dehal and Boore 2005).


Below is a phylogenetic tree of sequenced green plant genomes highlighting some of the main events of whole-genome duplication.


[Image: phylogenetic tree of sequenced green plant genomes, with major whole-genome duplication events marked]


For the record, some species of strawberries are decaploid (10 copies) (Hummer 2012). Then there are the extremes. In Entamoeba populations, there can be variation in ploidy level among individuals ranging from diploid (2 copies) to tetracontaploid (40 copies) (Mukherjee et al. 2008)!


teaching - Do I need to define all forms of cheating in the syllabus?


Every year, students seem to find new ways to "cheat" on the work, and every year my course policies section grows longer (a full page now) to match the newfound methods. I list all forms of plagiarism, exam rules, and penalties. I additionally post similar rules on assignment instructions, particularly covering areas where I found students "cutting corners" while still literally following the instructions. I teach many freshmen and foreign students who are not familiar with college expectations.


Do I need to define all forms of cheating in the syllabus? Is there some way to apply and enforce a blanket "no other cheating permitted" rule?


Some examples include:



  1. Copying and pasting text from Web sites.


  2. Submitting a classmate's returned assignment as a late assignment under one's own name.

  3. Foreign students using machine translation exclusively to write essays for writing courses.

  4. Using text-to-speech (TTS) for speeches.

  5. Peering at others' papers during exams.

  6. Copying from phone during exams.

  7. Submitting work they made in other courses.

  8. Adding names of extra non-contributors to group work.

  9. Doing work for a classmate.

  10. Pretending to be another student during an exam.

  11. Attending different sections during exams to preview the exam or try different versions.





Friday, 26 May 2017

human biology - What causes Paresthesia (Pins and Needles) at a cellular level?


I've looked it up in plenty of places, like the Wikipedia page, and it is clear that the most common cause of paresthesia is either sustained pressure on a specific patch of skin causing a lack of blood flow to the nerve endings in that limb (not to be confused with a stop in blood flow to the limb altogether), or much stronger pressure on that patch of skin for a shorter amount of time. Although this is the most common cause, it is not nearly the only one; causes range from simply sleeping in the wrong position to a lethal injection. I'm wondering what happens to the affected nerve cells at a cellular level, and what causes the sensation at that level: the level at which pounds per square inch are no longer what is being noticed.



Answer




Underneath the superficial layers of your skin there are receptors which sense pressure, temperature and pain. These receptors are part of the peripheral nervous system, which senses stimuli, and they carry messages conveying details about the stimulus to the somatosensory cortex of the brain. This is where the perception of pain, burning, pressure etc. is ultimately made. To take the simplest example, if you stop blood flow for a short amount of time in a limb, these receptors are activated and will send signals to the brain that are interpreted as tingling or numbness. With more severe pain, different receptors are activated which, again, project to the same brain area, but a different message is read out. If the pressure on the limb is removed, the receptors go back to normal function as blood flow is restored.


https://en.wikipedia.org/wiki/Nociceptor


http://www.scholarpedia.org/article/Mammalian_mechanoreception


How to avoid thinking about research in free time?


I work in research, trying to get grants, publish papers and the like. I really like my job and could not think about doing anything else. The problem is that when the weekend, or just free time, comes, I cannot stop thinking about research, no matter what. Sometimes I feel so relaxed during the weekend that new ideas come to my mind, and then I cannot help writing them down or thinking a bit more about them. My wife obviously does not like this, but I try my best.


Do you experience the same thing, and do you know how to avoid it?



Answer



I agree with @PeterJanson's answer, but I'd like to add my 2 cents:


I often find myself "constantly" thinking about a problem when I'm stuck or unsure how to approach it. Usually, it's because I've got tunnel vision on the problem: I'm only thinking about it from a limited number of perspectives and can't think of other possible approaches. I used to spend long hours thinking about a problem (even outside of normal work hours) without ever really getting anywhere on it.


When that happens, I find it's good to pursue other activities that stimulate my brain to think outside of the box. That could be anything from reading semi-related research papers to playing video games to engaging in stand-up comedy! When I challenge my brain to approach other problems from new perspectives, I find those same skills help loosen my brain to engage in research in new ways as well.


While I never really stop thinking about research, I find these other activities help me engage in research with fresh eyes and renewed energy. Ultimately, that has led me to better research results and liberates me from "overthinking" the problem during my leisure time.



Thursday, 25 May 2017

zoology - How can we know or measure pain in animals?


Is there any standard way to know how much pain an animal feels when it gets hurt, for example when a bird loses its wing or when a hen is killed? And are all its pain sensation points known?


Hey I'm new to biology. :)



Answer



Pain is subjective


Pain is a subjective experience; you cannot even tell with certainty how much pain your fellow human is experiencing, which is why we ask people; they then can tell us. Pain relief (both physical and emotional) is a significant part of medicine, yet we still have "pain scales" for self-reported pain, one of the more common ones being the Wong-Baker Faces Pain Rating Scale:



[Image: Wong-Baker Faces Pain Rating Scale]


Now, note well that even this scale must be interpreted for each patient. For instance, a patient might look like a 6 but report a 10, in which case a nurse must try to ascertain their actual level of pain.


My point is that evaluating pain is contentious even in humans, who can express themselves.


Definition of pain


The International Association for the Study of Pain (IASP; www.iasp-pain.org) defines pain in humans as



an unpleasant sensory and emotional experience associated with actual or potential tissue damage, or described in terms of such damage.



Pain usually involves a noxious stimulus that activates nociceptors in the body that carry signals to the CNS where these signals are processed (and generate responses) including the “unpleasant sensory and emotional experience.”


The following are examples of common types of noxious stimuli for different tissues:





  • Skin: thermal (hot or cold), mechanical (cutting, pinching, crushing), and chemical (inflammatory and other mediators released from or synthesized by damaged skin, and exogenous chemical stimuli such as formalin, carrageenan, bee venom, capsaicin)

  • Joints: mechanical (rotation/torque beyond the joint’s normal range of motion) and chemical (inflammatory and other mediators released into or injected into the joint capsule)

  • Muscle: mechanical (blunt force, stretching, crushing, overuse) and chemical (inflammatory and other mediators released from or injected into muscle)

  • Viscera: mechanical (distension, traction on the mesentery) and chemical (inflammatory and other mediators released from inflamed or ischemic organs, inhaled irritants).



Animals need us to translate their pain into language. For political and emotional reasons, humans are ill-disposed to do so (how could we then justify eating, exploiting, and experimenting on animals, etc.?). Even into the late '80s, veterinarians were taught that animals didn't perceive pain per se.


How we know animals understand pain



To determine whether animals can experience actual pain (not simply nociception), it is necessary to show that they



can discriminate painful from nonpainful states; make decisions based on this discrimination in a way that cannot arise from evolved nonconscious nociceptive responses; demonstrate motivations to avoid pain; and display affective states of fear or anxiety if threatened with noxious stimuli. In addition, animals experiencing pain might be expected to exhibit spontaneous behavioral changes including sustained signals of distress and impairments in normal behaviors such as sleep.



Animals show all these behaviors. Additionally, there is ample evidence that emotional pain in animals is real. The mere threat of foot shock (electric) induces signs of stress in rats and mice that can be alleviated with anxiolytics (drugs that reduce anxiety).


Even fish show responses to a painful event: guarding behaviors, unresponsiveness to external stimuli and increased respiration, all of which improve with morphine. This is called analogous evidence: evidence that they are, mechanistically at least, directly analogous to pain responses in more complex animals.


Humans are animals


The nervous system of other mammals is essentially identical to our own (we are animals). A chimpanzee who is afraid of a pin after being stuck with one 20 times in a row clearly feels physical pain and fear. That a chimp can mourn the death of a parent by not eating and not moving for days clearly indicates that they can experience emotional pain.


In summary:


Pain is subjective. I cannot really tell exactly what you would feel if your arm were ripped off (your bird-wing example) or (don't read on if you're the queasy type)




if you were given an electric shock, your throat were slit and, while still bleeding, you were dipped in boiling water to remove your body hair in a whole-body plucker (which is how most poultry - usually males, not hens, by the way - meet their maker).



By definition, only you can tell us.


But common sense and experience tells us it would be painful. And, yes, animals feel pain pretty much exactly like we do. Because we are exceedingly "more alike" than we are "slightly different".


Can animals feel pain?
Pain in Research Animals: General Principles and Considerations
Which Responses Indicate Pain and Which Nonhuman Vertebrates Display Them? <- read this if you really want to know
Evolution of pain


job search - What's the net income of a W1/W2 German professor?


I've seen a job announcement where the salary was said to be comparable to the assistant/associate professor level W1/W2 in the German system, and I was wondering exactly how much that would be in terms of net income (i.e. after ALL taxes). I found this link stating that, for the western states, it was between 3400 and 3900 euros gross per month in 2007. This other link states that in Hessen (which is where the position is based), the gross monthly salary is 5386 euros for W2 and 3901 euros for W1. Then, according to this calculator, a 3900 euro gross salary means 2167 euros net, after all taxes. So my questions are:



  1. Do these numbers make sense?

  2. Are there intermediary ranks between W1 and W2?

  3. How negotiable is the starting rank in general?

  4. Are there bonuses to take into account if you're single with no children?


  5. Is the tax system identical for Germans and non-Germans?



Answer



The rules for a German professor are quite different from those for a typical German employee. The first thing to note is that you will be a Beamter, a member of a very special class of government employees. In particular, you are automatically exempted from the public healthcare system, as well as from making direct payments into the social security system out of your paycheck. Instead, the pension payments are covered by your employer (the government of the Land in which you work), while the Beihilfe system helps to defray part of your insurance costs (the rest of which you pay for through a private insurance contract). In other words: you get billed for your health care; your private insurance reimburses you for 50 percent of your costs, and 30 percent of the costs for other family members; you submit the remainder of the bill to the university, which reimburses the remaining 50 percent (or 70 percent).


The net result of this is that you get a much higher percentage of your income as take-home pay relative to a traditional government employee. Your tax status depends on your marital status, so as a W1, you can expect to take home somewhere between 2800 and 3200 euros per month, as reported by the Öffentlicher Dienst website. There are some differences due to cost of living in different states. However, expect to pay about 200 € per month for health care, or more if you have a family.


However, I should also note that just this week came down a new ruling from the German Federal Constitutional Court stating that the salaries for professors hired since 2005 are too low, and need to be adjusted. This has the potential to adjust salaries upwards somewhat—although it is not yet clear by how much. (The state of Hesse, against whom the ruling was made, has until January to adjust its salaries upwards.)


To answer your other questions:



  • There are no ranks between W1 and W2.

  • The base salary is nonnegotiable, as it's set by the state government for which you will work; however, you can negotiate some terms of your "package" (support for students and other workers), and you may be able to get some "performance bonuses" negotiated into the contract.


  • Teaching duties are set by federal law, and are similarly nonnegotiable (although you can negotiate the ability, for instance, to teach in English rather than German)

  • Bonuses are not available for single people with no children; instead, according to German law, they're actually taxed at a higher rate.

  • There is no difference in salary based on nationality.


publications - What license should I use when putting papers on the arXiv?



I am new to this and not sure what to make of all the options available. I am interested in posting a machine-learning-related preprint, which will potentially be submitted to venues like NIPS/ICLR/ICML in the future.



  • arXiv.org perpetual, non-exclusive license to distribute this article (Minimal rights required by arXiv.org)

  • Creative Commons Attribution license (CC BY 4.0)

  • Creative Commons Attribution-ShareAlike license (CC BY-SA 4.0)

  • Creative Commons Attribution-Noncommercial-ShareAlike license (CC BY-NC-SA 4.0)

  • Creative Commons Public Domain Declaration (CC0 1.0)



Answer



This is the license that you can always safely pick (assuming you can submit to arXiv at all):




arXiv.org perpetual, non-exclusive license to distribute this article (Minimal rights required by arXiv.org)



Wednesday, 24 May 2017

genetics - Why are hybrids infertile?


Let's take a quote from Wikipedia about zebroids.



Donkeys are closely related to zebras and both animals belong to the horse family. These zebra donkey hybrids are very rare. In South Africa, they occur where zebras and donkeys are found in proximity to each other. Like mules, however, they are generally genetically unable to breed, due to an odd number of chromosomes disrupting meiosis.



First, if I understand meiosis, the resulting cells don't actually end up with half the number of chromosomes, but closer to a full set of halves of chromosomes. How is the meiotic process disrupted?


Then,




A donkey has 62 chromosomes; the zebra has between 32 and 46 chromosomes.



Apparently this difference doesn't prevent producing (infertile) offspring. How come the process of recombination with such vastly different numbers of chromosomes in the gametes is viable? What happens to chromosomes that don't find their 'pair'?


And then,



Horses have 64 chromosomes, while most zebroids end up with 54 chromosomes.



54 is an even number. How come zebroids can't just normally produce fertile offspring with other zebroids with the same number of chromosomes?



Answer




A critical step in meiosis is the formation of tetrads. Diploid organisms like donkeys have a paternal version and a maternal version of each chromosome - two chromosomes 2, for instance. During prophase I, these two "matching" or homologous chromosomes form a tetrad and cross over, swapping alleles. There is no recombination and no tetrad formation in mitosis, which is used for growth and daily living.



How is the meiotic process disrupted?



In a hybrid, there are no matching maternal and paternal chromosomes. In the zebroid testes or ovary, a lonely donkey chromosome 2 wanders around looking for another donkey chromosome 2, and there is no homologous chromosome. This mucks up prophase I, the first division pulls odd numbers of chromosomes into each daughter cell, and the gametes from this division are likely to be full of extra chromosomes and missing some critical ones.



Apparently this difference doesn't obstruct producing (infertile) offspring. How comes the process of recombination of such vastly different number of chromosomes in gametes is viable?



An embryo needs a solid set of genes to grow up, but chromosomes don't have to be homologous for mitosis, and organisms use mitosis, not meiosis, for growth and living. As long as the mismatched chromosomes in a zebroid carry everything needed for a healthy organism, the hybrid will be healthy. A problem only occurs once the zebroid starts to make gametes via meiosis, as the homologues ONLY form tetrads in the making of eggs and sperm. The inability to recombine only matters in the making of gametes.




54 is an even number. How comes zebroids can't just normally produce fertile offspring with other zebroids of the same number of chromosomes?



Because an even chromosome number doesn't help if the chromosomes are not each homologous with another chromosome.


dna - What is the justification of Chargaff's second parity rule?



First parity rule


The first rule holds that a double-stranded DNA molecule globally has percentage base pair equality: %A = %T and %G = %C. The rigorous validation of the rule constitutes the basis of Watson-Crick pairs in the DNA double helix model.


Second parity rule


The second rule holds that both %A ~ %T and %G ~ %C are valid for each of the two DNA strands. This describes only a global feature of the base composition in a single DNA strand.



[source]


It makes sense that in the context of dsDNA, %A = %T and %G = %C, but I don't see an obvious reason why, in a single strand of DNA, %A ~ %T and %G ~ %C.





undergraduate - Should I be concerned about the validity of my degree if my peers in other educational systems seem to have more difficult exams?


I'm an Italian student of mathematical physics at the University of Edinburgh. I recently caught up with one of my friends and noticed that their exams in linear algebra and calculus (Analisi I and II, Geometria 1) are harder than those I have taken here, which made me question the validity of my degree. I am aware that at Edinburgh the toughest year is supposed to be the third (unlike in Italy, where the first year is when most people drop out).


Does this mean that an Italian degree in physics provides a better preparation to become a theoretical physicist, or are the two qualifications equivalent?




Answer



Comparing different educational systems is frequently a futile exercise. Learning is like hiking to the top of a mountain via different trails: all of them lead to the same summit, but one can be very steep at the beginning, a second can have its steepest segment halfway to the top, and a third can have a final wall that needs an expert climber. Different educational systems can choose different paths according to different intermediate objectives.


Though it's true that some people prefer a certain type of trail over another, and that along certain trails one can find more people with whom to share the joys and sorrows of hiking (learning), arriving at the top depends only on your efforts.


Now you are at the beginning of your trail. It's too early to decide whether it's a good trail or not: start hiking and enjoy the landscape, and if you think the trail is not steep enough for your training, try to jog or run uphill. In other words, challenge yourself: you will learn much, much more.




An anecdote about the appropriateness of certain steep starts. When I was a sophomore (2nd year) studying electronic engineering, we had a mandatory class called Rational Mechanics, about Lagrangian and Hamiltonian mechanics. The exam consisted of a set of problems on mechanical systems to be solved with those two formalisms, a home project in which we had to develop a numerical solver using different techniques (in Pascal), and a viva where we had to prove various theorems. It was a tough exam at the sophomore level, mostly because no one could really get a grasp of the subject. I somehow managed to tunnel through the exam, even with a decent grade.


Two years later I attended a class on quantum mechanics. In the first lecture the professor said something like: "Quantum mechanics is based on Hamiltonian mechanics, and since you already know it we can proceed quickly". I then timidly raised my hand and said "Er... no one in this room has the faintest idea of what Hamiltonian mechanics is. Yes, we passed the exam two years ago, but really... could you please give us a refresher?". She was astonished, but then agreed to spend a few lectures on Lagrangian and Hamiltonian mechanics. I supplemented her lectures with a classic book, and since then those topics have been among my favourites.


The above example is to say that sometimes a tough exam at the beginning is just a misplaced exam, because some topics require a certain level of scientific maturity to be properly understood.


evolution - What is the difference between these terms: clade, monophyletic group and taxon?


Wikipedia definitions for these terms are pretty similar.



They sound like the terms are synonyms. Why are there separate articles for them, then?


Is it true that every person is a reptilomorph and a eupelycosaur, just as he/she is a mammal?


Bonus question: what is the difference between a cladogram and a phylogenetic tree?


PS Please don't just give definitions and/or quote Wikipedia. Point out the differences, if any. Even better, provide examples that can be considered one but not the other.




Answer



A taxon (plural: taxa) is any group of species, monophyletic or not. For example, the group of yellow flowers is a taxon. The group of primates is a taxon. The group of aquatic animals is a taxon.


A clade is a monophyletic taxon, or monophyletic group if you prefer. A monophyletic taxon (or clade) is defined as a taxon that contains a common ancestor and all of its descendants, and nothing else. In the following picture, only Taxon 1 is a clade.


[Image: a phylogenetic tree with several taxa marked; only Taxon 1 is a clade]


To cite a few examples of clades: Primates, Eukaryotes, Rosacea, Reptilomorpha, Rodentia. (The links lead to tolweb.org; see below for more info about this site.)


A cladogram is a kind of reduced evolutionary tree that shows only the branching pattern; the lengths of the branches carry no information. By contrast, on an evolutionary tree the lengths of branches usually indicate the inferred time since divergence.


I think the best tree you can find online is at tolweb.org. You can find here the clade of Reptilomorpha and here the clade of Eupelycosaur. As you can see from these links, the mammals form a clade within the Eupelycosaur, which is itself a clade within the Reptilomorpha. So yes, every human being is a Reptiliomorpha/Eupelycosaur/Mammalia. You can have fun going through this evolutionary tree and seeing what the common ancestors are between humans and other mammals, humans and turtles, humans and jellyfish, humans and plants, etc., to cite just a few relationships you might be interested in.


Tuesday, 23 May 2017

teaching - Resources for English-speaking students with a poor command of grammar and writing style



I sometimes read student assignments where the grammar and writing style are poor. Paragraphs don't really flow. Sentences go on forever. The ordering of words is awkward. Instead of concisely introducing the subject of the sentence, they ramble.


I'd like to be able to recommend resources to such students. Ideally, such a resource would be available online, so I could just give them a link or two to some writing style and grammar materials with exercises.


What is a good resource (preferably online) for teaching English-speaking university students how to improve their writing style?




teaching - How to improve myself as a recitation (discussion, lab, tutorial) teacher?


This question is related to How to improve myself as a lecturer?. However, being a PhD student, my main teaching obligation is leading recitations, i.e. sessions for groups of ~20 students that take place each week after a lecture, in which the content of the lecture is revised mostly via exercises.


If you do not use the term "recitation session", please see a related question: What is the equivalent of European "seminar" in US universities?


At my university (Charles University, Faculty of Mathematics and Physics, Czech Republic), in the mathematics/theoretical computer science classes, usually you have a 90 minute class where the recitation teacher reminds students of what was said in the lecture, hands out or writes exercises, and then, in shorter blocks, students go through the exercises and one student shows the solutions on a blackboard.



I'm not very fond of this structure and I would like to improve my classes with ideas that I cannot find at my university (where we usually do things the aforementioned way).


Some ideas that I've tried in previous years, whose effectiveness you can judge:




  • handing out "cheat sheets" containing the entire course notes (with compact proofs) beforehand: usually useful, as lecturers rarely have such compact notes beforehand, but very time consuming.




  • trying to ask each student how they are faring and offering personal advice: when I was a student, I preferred this model, but it takes a lot of time to go around 20 people and briefly talk to each one; my students (in an exit questionnaire) argued that they want more exercises done per class, so time is of the essence.





  • allowing group work during class: this seems natural to me (science is mostly done in groups) but often results in people not performing as well in final exams. Plus, I am not yet sure how to organize group work so that groups don't delegate everything to the one enthusiastic student in the group.




  • using text questionnaires during and after a session to find out what students would like: I am very happy with the information in those and will use them in the future, but students of one university tend to suggest improvements which they have noticed at that same university (especially where it is usual to stay at one university for the 3-year Bachelor's and the 2-year Master's).




  • filming yourself (as was suggested in the lecturer's question) is definitely a valid option, but I feel it won't help me as much with recitations, especially since (I believe) there are not many great recitation sessions publicly available on the internet.






What is this "mixed" citation style that includes both numbered references and author-date formats?


Once or twice I have encountered papers which used a quite convenient citation style, combining numbered and parenthetical references. The list of references at the end of the paper is sorted alphabetically, as in Harvard referencing, but all references are also numbered, so it looks like this:


[1] AuthorA (2015) ...



[2] AuthorB (2000) ...


[3] AuthorC (2010) ...


In the text, one might use numbers to save space ("as shown in [2], ..."), or a parenthetical reference: "as shown by AuthorB (2000), ...". It's really convenient, since depending on the situation you can choose how to cite. I failed to find the name of such a citation style - is it standardized at all, or is it just an invention of the papers I encountered (as far as I remember, they were preprints from arXiv)?



Answer



Speaking of major citation styles that I've seen, I think it could be the IEEE citation style, as your example matches the IEEE style for electronic references (for non-electronic references, the in-text citation is the same; the only difference, related to the reference list, is that the year is located at the end of the reference entry).


Alternatively, if it is not the IEEE Style, it might be either a publication-specific style, adopted by a particular journal or other publication outlet, or a hybrid style, manually developed by some authors.


Naturally, speaking about using LaTeX for bibliographies, there are a couple of aspects that I'd like to mention. Firstly, according to Mori (2009), the citation style that you are curious about resembles the default reference formatting style of LaTeX. Secondly, if you use (or plan to use) LaTeX for producing your publications, the following sources, in addition to the paper by Mori, might be quite helpful for customizing bibliographic features to the required or desired style: this guide by Patrick Daly (note that it describes a quite old version of the natbib package - try to find a more up-to-date version or a similar detailed guide), this brief guide by Ki-Joo Kim and this excellent answer by Alan Munn.
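
As an illustration of how such a mixed style can be produced, here is a minimal natbib sketch (my own example, not taken from the sources above; the bibliography file references.bib and the key authorb2000 are hypothetical):

    \documentclass{article}
    % The "numbers" option produces a numbered reference list, while
    % \citet still prints the author name together with the number.
    \usepackage[numbers,sort]{natbib}

    \begin{document}
    As shown in \citep{authorb2000}, ...  % -> "As shown in [2], ..."
    As shown by \citet{authorb2000}, ...  % -> "As shown by AuthorB [2], ..."

    \bibliographystyle{plainnat} % alphabetically sorted, numbered entries
    \bibliography{references}
    \end{document}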


References


Mori, L. F. (2009). Managing bibliographies with LaTeX. TUGboat, 30(1). Retrieved from https://www.tug.org/TUGboat/tb30-1/tb94mori.pdf


Does extracted DNA degrade after a certain time period?


This is for direct use as template in PCR runs, using a Chelex 100 (5-10% w/v) extraction. Without listing the whole protocol: at the end, the supernatant is decanted off and then stored at 4°C. I was under the impression that this could be stored and later used almost indefinitely, but two of four samples extracted several months back failed to produce a product (when it was known they should have).


Assuming no mistakes were made and the reactions were the same, is there a technical reason the template would degrade to an unusable point?


Is there a rule of thumb about how long it can reasonably be expected to last?



Answer



My rule of thumb for DNA samples (in TE or water): if I have plans to use a sample that week, I store it at 4°C. If I plan on using it within a month, I store it at -20°C. If I'm not sure when I will use it, I keep multiple aliquots at -80°C. DNA can degrade by acid hydrolysis in water, because of contaminating nucleases in the sample, and through multiple freeze/thaw cycles.


publications - Is it mandatory to have published papers when applying for a PhD?



I've heard people say that when applying for a PhD, you need to have past research experience and international publications. I was more focused on practical experience and haven't had any publications. If I aspire to do a PhD and want to apply, it would be impossible if such a criterion exists!




Monday, 22 May 2017

research process - What makes a Bachelor's thesis different from Master's and PhD theses?



All three types of research revolve around an argument, a thesis. They of course differ in terms of student level, that is, complexity.


But what makes a bachelor's thesis different from master's and PhD theses in terms of research procedure, given that all of them may follow the same process: research questions or hypotheses, literature review, methodology, results and discussion?




Answer



The PhD thesis should be on a much higher level than the Honours/Masters thesis, offering a contribution to human knowledge that is of a sufficient level of "significance" to warrant publication in a respected journal.


Significance is highly subjective, and you also do not necessarily have to publish to be awarded the PhD (sometimes the peer-review delay means that publications come out afterwards, or there may be intellectual property issues that make it beneficial to refrain from publication). The degree is awarded based on your supervisor's consent and a review by academics in your field. So the "significance" would probably be judged by them in terms of how much original work they see as a reasonable expectation at that stage of your development (the first 3 years of serious, committed research). Unfortunately, this also means that some people who probably do not deserve PhDs are awarded them anyway for fulfilling grunt work for their easy-going supervisors.


It is possible that some Honours/Masters theses might even be more significant or of higher quality than a PhD thesis. Unfortunately, this does not mean that submitting such a thesis will earn the degree it deserves. The university may have a policy of upgrading the student's enrolment if the supervisor senses that such progress is being made. However, it is generally impossible to upgrade to a PhD without completing Honours, and I believe nearly every university has a policy of a minimum period of enrolment before submission is allowed. A subsequent question you may have is how to gain a PhD without enrolling in one, which is another level of achievement entirely.


As for the difference between Honours/Bachelor and Masters, it would depend on your university, but neither has a requirement for publication-quality research; they are usually small tasks/ideas that are not worth the supervisor's time to pursue alone, or that involve a lot of labor. In fact, in my school, many Honours theses are of a higher level than the Masters ones, because the smart Honours students will either graduate into the workforce or go straight into a PhD; the Masters students are often those who cannot find a job and are not suited to research. However, I believe some other universities may require a Masters degree before starting the PhD.


You may get a better idea by looking at some titles/abstracts of completed theses. A PhD thesis will typically present a new method/observation/application, whereas a Masters/Honours thesis will be an application-specific set of measurements/simulations, or even simply a literature review to gauge the needs of future work. The word limits are also typically different (although note that quality is NOT proportional to the number of words): at my university, PhD at 100K, Masters at 50K and Honours at 30K.


evolution - Can any species be bred selectively/engineered to become as diverse looking as dogs?



I've done some research and it appears that dogs are the most diverse-looking single species of mammal. The question that interests me is: are dogs special with respect to genes/gene-activation mechanisms related to appearance? Or does this dramatic difference in appearance have something to do with dog anatomy and how they give birth?


If dogs are not special, I am interested in whether other species of mammals can also be bred selectively (or genetically engineered) to produce such dramatic variation.



Answer



Dogs have a genomic structure that allows breeding with high variation in size, shape, coat quality, color and other qualities particular to each breed as well.


Other domesticated animals can be bred for as many qualities, but dogs in particular show a wide range of morphological traits - varying in size from just over a pound to the size of a wolf, the animal from which dogs are derived and with which they are still genetically compatible. But more interesting than size, coat color/texture, or even intelligence and personality are the proportions of their bodies, and of their skull length and breadth, which are remarkable.


There are over 160 registered breeds of dog, but this is only a measure of how much time people have put into them. I think it's possible to get nearly anything you want with animals if you are patient enough - it's not clear what is and is not possible with enough genetic manipulation. For instance, horses can be bred over nearly as great a size range (from the miniature horse, the size of a large dog, to the Shire at some 3,300 pounds), but it would not be as easy to get both the size and the muscularity and shape of a bulldog in a horse. Mice can be bred for various colors, and for interesting behaviors, but body shape seems to be harder: a Weimaraner-like mouse could take a tremendous amount of time and animals.


graduate school - Would going to Teacher's College be useful, if my ultimate goal is to become a university math professor?


I am close to entering the fifth year of a five-year Concurrent Education program. What the program involves is four years spent working on a bachelor’s degree, while “on the side” taking education courses and doing placements contributing to a bachelor of education and teaching certificate. The final year of the program is regular Teacher’s College, which is a combination of education/curriculum courses and, in total, about 14 weeks of placement.



I’ve thought about dropping the education part of the program a few times since it started, but stuck with it because it left teaching open as an option, while only requiring one education course and a 3-week placement each year (which is good experience anyway). About 2 years ago, I started considering going to grad school for math, but by the end of last summer, still wasn’t 100% sure, so I didn’t write the GREs. This year I took a graduate-level algebra course, and about halfway through, realized that grad school for math is definitely something I want to pursue.


As the deadlines for applying to grad schools had passed / were soon approaching, my plan then became to finish the program and get my teaching degree, and apply for math graduate programs this December/January for entrance in Fall 2017. Spending the next year in Teacher’s College, I’ve been thinking, might actually be a good thing, as a way to set me apart on applications for grad school and for jobs in the future, as well as a way to develop good teaching and other related skills. On top of that, it would leave teaching as a back-up just in case.


I would sincerely appreciate hearing the opinions of those who are in, or have been through, graduate school in math. Do you think it will be worth it, in terms of a future as a mathematician (ideally as a math professor), to finish the teaching program? I fear that the final year of this program will be painful for me, as it will require a lot of placements, which I already mostly dread. Alternatively, if I didn't do Teacher's College, I could spend the next year taking more graduate courses at my university, learning more math independently, focusing on the GREs and honing my applications, and trying to see whether any professors at my university are willing to supervise me in some research experience before I enter grad school for math.


In case it’s relevant, I come from a smallish university in Canada, which is probably not very well-known in the US, and I am graduating top of my class in math.


I’m sorry if this question is not general enough.



Answer



From the body of your question, the genuine question is about getting into a good-enough graduate program in mathematics, and whether the "education" program would be more helpful than taking higher-level math courses. Especially if the two are essentially mutually exclusive, taking more math courses (and interacting with math faculty, as you suggest) is vastly more relevant to the admissions criteria of math grad programs. "Education" work itself is not. If the education work displaces math work, that will hurt your application.


Nevertheless, experiences beyond the typical can distinguish applicants. When I am on our grad admissions committee, often the interest, commitment, drive, and maturity of slightly older people (as opposed to fresh baccalaureate degrees) is visible in their applications. Being able to think about grad school in comparison to other things (e.g., real-world jobs) can be clarifying, and this clarity can sometimes be seen in applications.


It is true that most funding of grad students in math in the U.S. (and, I think, Canada) comes through Teaching Assistantships, and it is certainly true that part of the job of a math professor is teaching. But teaching experience is rarely decisive in post-doc or tenure-track hiring (after grad school). A really awful teaching record can have a bad marginal effect, but it would still be unlikely to dominate in comparison to "research". Further, "math education" or "education" coursework and/or training is not at all necessarily the same thing as experience in teaching; it is an academic subject of its own, aimed mostly at K-12, licensure, etc.


To my perception, the significant point in your situation is the either/or. If pursuing the "education" stuff excludes some math, that would have an adverse effect on your applications to math grad school.



changing fields - "Catching up" after two years in the wrong school


I'm a student in a mid-ranked computer engineering school in France. This is unsatisfying to me for several reasons:



  • I recently discovered that I strongly prefer the mathematical aspects of computer science to the programming side of it, and I would like to continue my career working on machine learning/statistics, which is not an option in my current school. I did pure mathematics and physics in my two years of preparatory classes, which ended a year and a half ago, but I haven't had a proper math course since then, except for some probability and Boolean logic.

  • The school's ranking and course offering are disappointing to me. I am there as a result of a bad personal situation during my first undergrad years, and I would have aimed for a completely different path if I could.


I'm in my first year of Master's degree (4th year after high school), and considering my options. I wonder if, by working hard enough, it's still possible to get a more theoretical degree from a good school, or if it's already too late to try again and my academic path is set.


Additionally, I would like to finish my studies abroad at some point, and I already have a few cities and universities in mind. Ultimately, I would like a researcher's or research engineer's position in the private sector.



I am considering the following options:



  • Finishing my Master's degree while catching up on maths on the side, then trying to get into a PhD programme abroad.

  • Finishing my Master's, finding work in a machine learning- or statistics-related field for one year, then applying to a PhD programme. The year off would give me more time to get the required level in maths, and the professional experience could be useful when applying.

  • Trying right away to get into another school's Master's programme that would be better adapted to my plans.


For all I know, all of these may be unrealistic (starting a PhD after two years without math courses would at least require a lot of work on the side, and I don't see how my application could have a chance without a math-related degree to prove I'm competent; getting into a foreign school's master's programme is probably a very selective and/or expensive transition, especially if I'm coming from a lower-ranked institution).


The ideal way would be to go back to high school and do the preparatory classes again, this time without external problems to discourage me. Of course, this isn't possible.


I realize that what I want to do is hard. I kind of regret the low motivation I had and the choices I made a few years ago. I am mostly trying to get an overview of what I can aim for and what is off the table. What would be a good way to proceed from here?



Answer




First of all, nothing is off the table. You are still at the very beginning of your academic path and nothing is closed off irrevocably. You are correct in thinking that a CS degree will require you to catch up on fundamentals; the question is how much time and effort you would like to invest in this endeavor. I don't think that dropping out of the engineering program is necessary. A lot of people transition from engineering to CS; you are by no means the first to want to make the switch.


So what are the options for a holder of a master's in engineering to get into a CS PhD?



  1. Doing nothing: You can probably get into some CS PhD program right now if your research statement/references are good, though you are not likely to get into one of the top international programs. School rank does not matter in a PhD as much as your ability to find a good advisor with whom you are able to publish! This is a function of your mutual fit and joint capability, not the school's ranking.

  2. Getting a master's in CS: if you want to get some foundational knowledge and have time to familiarize yourself with the field, you can take two years to complete another master's degree. The downside is that it takes time and money; the upside is that you add another degree to your name and gain time to acquire the necessary experience.

  3. Getting a CS undergraduate degree: this is really resetting your academic clock. It will take you a while (though you may have the option of getting credit for your existing classes, depending on where you take the degree). There's nothing wrong with going back to basics, but it is a bit extreme.


To conclude, what you want to do is not easy, but it is certainly within the realm of plausibility. I would say that the current hype around AI/ML is not playing in your favor: the competition for these programs can be quite fierce, especially at top schools, so if your dream is joining them, it may require you to get some formal training, via a master's at least.


citations - Citing into the future on ArXiv – good or bad idea?


I have coauthored two papers, which are strongly related. The younger paper cites the first, and the older paper announces the second along the lines of "a study of aspect X will be published elsewhere". As arXiv enables you to update your papers, it would be possible to add a citation to the new paper in the old paper after the aforementioned sentence. This might save a reader of the old paper some time in finding the new paper.


However, this breaks some paradigms that were inherently fulfilled by any pre-internet citation, i.e., that you could not cite future work¹ and that there are no loops in citation graphs (i.e., there can be no papers A₁, …, such that A₁ cites A₂, which cites A₃, which cites …, which cites A₁). Thus I find it conceivable that such a citation into the future may cause some problems, for example some weird software behavior (ignoring for the example’s sake that this would arguably be the software’s fault).


Is there any such issue, which would make the aforementioned citation into the future a problem?




¹ Of course, "will be published elsewhere" existed before, but it could not be accompanied by a regular citation.




Answer



Occasionally two related articles are published simultaneously and cite each other, so loops in the citation graph are OK. See for example this article in The Scientist which describes two papers which do so. I was able to verify that they both cite each other through my university's library.
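As for the worry about weird software behavior: the main thing a citation loop breaks is any processing that assumes the citation graph is acyclic (e.g. that it can be topologically ordered). A minimal sketch of how a robust tool would notice and handle this, using a toy two-paper graph rather than any real citation database:

    from graphlib import TopologicalSorter, CycleError  # Python 3.9+

    # Toy citation graph: each paper maps to the set of papers it cites.
    citations = {
        "paper_A": {"paper_B"},  # A cites B
        "paper_B": {"paper_A"},  # B cites A - the mutual citation
    }

    try:
        order = list(TopologicalSorter(citations).static_order())
        print("acyclic; a valid processing order is:", order)
    except CycleError as err:
        # err.args[1] holds the offending cycle,
        # e.g. ['paper_A', 'paper_B', 'paper_A']
        print("citation loop detected:", err.args[1])

A tool that instead recursed blindly through references would loop forever on such a graph, which is presumably the kind of "weird behavior" you have in mind; but since mutual citations have existed for a long time, well-written bibliometric software already has to cope with them.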


Recommendation Letter from a non-PhD referee


I am applying for a PhD program and I am in the process of choosing people for recommendation letters. Being in the final year of a master's program (and not having worked with many people in research), one of my potential referees is my internship supervisor. I worked with him closely for 3 months and a paper came out of the work.


He does not hold a PhD and is not involved with research in a big way. He holds a Senior Manager post and has 20 years of experience. Although he can write a good recommendation letter, will it carry weight considering his background?




molecular biology - How to find miRNA binding sites on a specific gene?


I am trying to find miRNAs that bind to the 3'UTR of a specific gene. What is the best way of doing that (that is, with a good scoring analysis that is most commonly used by researchers in this area)? I would also like to know the other possible methods if there are multiple ways of doing this.



Answer



There are some tools for predicting the binding:



  1. TargetScan (based on seed match [primary], plus extra pairing and sequence context¹ - nucleotide composition around the site, etc. [secondary]; see the toy seed-match sketch after the footnote below)

  2. miRanda (based on hybridization stability and seed match [primary], and sequence context [secondary])

  3. PicTar (adds a layer of evolutionary-conservation criteria)



¹ Context means the position of the target site in the 3'UTR and the surrounding nucleotide composition (which is also considered an indirect metric of secondary structure)
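To make the "seed match" criterion concrete, here is a toy sketch of the core idea these tools start from: the reverse complement of miRNA nucleotides 2-7 (the seed) is searched for in the 3'UTR. The UTR sequence below is made up for illustration; real tools layer the secondary criteria described above on top of this.

    def revcomp(seq):
        """Reverse complement of an RNA sequence."""
        comp = {"A": "U", "U": "A", "G": "C", "C": "G"}
        return "".join(comp[b] for b in reversed(seq))

    def seed_sites(mirna, utr):
        """Return 0-based UTR positions matching the miRNA seed (6mer, nt 2-7)."""
        site = revcomp(mirna[1:7])  # seed = miRNA nucleotides 2-7
        return [i for i in range(len(utr) - 5) if utr[i:i + 6] == site]

    mirna = "UGAGGUAGUAGGUUGUAUAGUU"   # let-7a
    utr = "AAACUACCUCAGGGAAGCUACCUCA"  # hypothetical 3'UTR
    print(seed_sites(mirna, utr))      # -> [4, 18]

A plain 6mer match like this has a high false-positive rate, which is exactly why the tools add pairing energy, context and conservation scoring on top.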


miRanda and TargetScan also distinguish classes of conserved and non-conserved targets. miRanda reports miRSVR scores and TargetScan reports context scores, but these measures are based on few experimental results and may not always be meaningful. miRSVR is based on the change in target expression upon miRNA overexpression/knockdown, and it also uses metrics for target-site context. The context+ score of TargetScan includes many metrics, among them target abundance and conservation, along with the regular context scoring. Some of these metrics may be more useful than others, but scores based solely on miRNA OE/KD experiments can be misleading, especially if target expression is quantified only at the RNA level; target abundance also varies between cell types. For more details on these metrics, refer to the papers corresponding to these tools.


There are also experimental procedures for target determination, which mainly involve protein-RNA crosslinking followed by immunoprecipitation of Ago and subsequent quantification by RNA sequencing; see HITS-CLIP and PAR-CLIP. These techniques do not really find the targets: all they do is correlate the levels of miRNAs and their predicted targets attached to Ago (you won't know for sure whether the ternary mRNA-miRNA-Ago complex really formed).


CLASH is a more recent technique that tries to address this issue by ligating the mRNA to the miRNA, so that the miRNA-mRNA interaction is captured directly. However, I am a little skeptical of CLASH; I was working on this principle myself some time back and ran into one particular challenge. CLASH roughly involves the following steps:



  • Crosslink protein-RNA

  • Immunoprecipitate Ago

  • Use RNase to digest unbound regions of mRNA (and make it small enough for a short-read sequencing experiment)

  • Ligate the miRNA and mRNA bound to Ago using RNA ligase

  • Degrade Ago using a protease

  • Sequence the miRNA-mRNA ligated pair


My doubt was how the miRNA and mRNA could be ligated when the footprint of Ago is bigger than a miRNA (reported as ~50-60 nt in HITS-CLIP and PAR-CLIP). If the RNA is nuclease-protected, how can it be accessible to the ligase? My thinking at the time was that a partial protein degradation would be necessary (after the RNase step and before the ligase step) to give the ligase room to act. Eventually I did not pursue it further. Three years later the CLASH paper was published and I was happy that someone had made it work, but the ligase issue was not addressed (it seems to have worked, but I don't know how!).


To test the predictions you can use a reporter assay (such as GFP or luciferase) with the 3'UTR cloned downstream of the reporter. These can have artifacts too: overexpressing the miRNA should be avoided, and seed mutations should be made to ascertain targetability.


Another technique to determine a target mRNA is to mask its predicted miRNA binding site and observe the effect on expression. This has been done in zebrafish using morpholinos complementary to target sites, but not in other models AFAIK. This seems to me an elegant assay: no overexpression, and you can precisely determine the target site.


human biology - Why Does Salt Water Help Sore Throats?


I am having some trouble understanding how salt water, a simple solution, could so effectively relieve the pain of a sore throat.


I do believe that the answer is closely related to hypo/hyper-tonic solutions, but why is this so, and how does this work?



Answer



Salt water may have antiseptic properties due to its effect on water potential. Pure water has a water potential (Ψ) of zero; a concentrated salt solution has a lower (more negative) water potential. The water potential of the salt solution is likely to be more negative than that of a pathogen's cytoplasm, so the salt solution is referred to as hypertonic. Water therefore osmoses out of the cell (osmosis being the net movement of water from a higher to a lower water potential across a semi-permeable membrane). The loss of water from the pathogenic cells causes osmotic crenation: the cell shrivels and dies.
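As a rough back-of-the-envelope illustration of how negative such a gargle's solute potential is, one can use the textbook van 't Hoff relation; the quantities below (roughly half a teaspoon, about 3 g, of NaCl in 250 mL of warm water) are assumed for the sake of the example, not taken from any particular recipe:

    \Psi_s = -iCRT

    C \approx \frac{3\,\mathrm{g}/58.44\,\mathrm{g\,mol^{-1}}}{0.25\,\mathrm{L}} \approx 0.21\,\mathrm{mol\,L^{-1}}

    \Psi_s \approx -(2)(0.21\,\mathrm{mol\,L^{-1}})(0.00831\,\mathrm{L\,MPa\,mol^{-1}\,K^{-1}})(310\,\mathrm{K}) \approx -1.1\,\mathrm{MPa}

(with i = 2 for fully dissociated NaCl and T ≈ 310 K, i.e. body temperature). A solute potential of around -1 MPa is typically more negative than that of cytoplasm, which is consistent with the gargle being hypertonic to surface microbes.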


A hypotonic solution (for example cells placed into pure water) would cause the opposite effect - osmotic lysis. This is the bursting of the cell due to the movement of water into the cell. The bacterial cell wall would first have to be damaged (e.g. by penicillin). This would not be the process by which a salt solution has effect, however.


The fact that the salt water is warmed (to improve solubility) may also have the side effect of causing vasodilation around the infection, increasing the rate at which white blood cells can arrive at the infection site.


It has been more difficult to find a theory as to why a salt solution would have analgesic properties; see the comments below and previous versions of this answer.


Sunday, 21 May 2017

Good book on English for academic writing and speaking for non-native speakers


I'm not a native speaker, so I always have some problems with academic writing and speaking. Are there any good books on English in the academic context generally, or for Computer Science in particular (covering writing or speaking)?




etiquette - When asking research-based questions, what are some good practices to maximize the rates at which people reply to emails?


Matthew Might has an excellent article on his website.


But what are some other good suggestions? One important factor is identifying whom to ask (some professors are simply happier to reply to emails than others). Surprisingly enough, I find that famous scientists in my field tend, for whatever reason, to be more responsive to emails than non-famous scientists. I don't know why this is the case; maybe they simply know more, so it doesn't take them as much effort to give information or advice?



Sometimes, I even go as far as to look at Rate my Professors site ratings, since the ones with exceptional ratings often tend to be happy replying to emails.


Another thing: maybe taking a class with the professor (in which case they may feel a bit more obligated to reply)? Or maybe simply spacing out one's emails so that no professor is emailed more than once every few weeks?




Saturday, 20 May 2017

job - Accepted post-doc and have subsequently received offers for full time faculty position - quandary


Since finishing my PhD (mechanical engineering) in 2013, I have been employed as a "visiting assistant professor (term)" for 2013-2014 at the same university (in the USA). During this time I have been applying for several post-doc and faculty positions all around the world. The following fortuitous situation has now developed:



  • I was interviewed for a post-doc position at a famed lab in Europe and received a job offer. This offer is contingent on my getting security clearance for the lab and a long-term visa. One of the ground rules laid out was that I would not accept other post-doc positions.

  • Prior to this post-doc interview, I had interviewed for faculty positions at US universities. Fortuitously, a few weeks after the post-doc job offer, I was offered full-time faculty positions at two other universities in the USA.

  • The reason I applied for post-doc jobs is that they would help me build my network, publish more, and work towards an eventual faculty position!

  • I am in some moral quandary now: I know that I have given the post-doc PI my word and I will not renege on it. However, the faculty positions are definitely more lucrative and long term.

  • I accepted the post-doc job because I was asked to make a decision soon, and since I am a foreigner, timing is everything for me and "a job in hand is worth two in the bush" (groans at the quotation).


The options (likely and unlikely) that present themselves to me are:




  • Unlikely: Postpone the faculty positions to Fall of 2015. I don't think these universities would want to do that.

  • Likely: Angle for better pay or a better title at the post-doc job.


I am hoping that, given the experience in this forum, people could shed some light on this situation.


Edit: Advantages and disadvantages of these positions

Postdoc:

  • (+) Great change of work, reputable lab, exposure to a different work culture, deadlines, work pressure, an expanded professional network on both sides of the Atlantic

  • (-) 1-2 years only, relatively poor pay

Faculty position:

  • (+) Faculty position, 'nuff said. Much better pay, "long" term

  • (-) Will miss out on a once-in-a-lifetime post-doc at a great lab





Answer



A job offer that is contingent on a visa and (especially) on a security clearance is not a present job offer but the promise of a future job offer if certain conditions are met. I had a PhD student who wanted to accept such a job, but his security clearance didn't come through in time for him to start (note: I'm not saying that he failed his security clearance; it just wasn't resolved in time, even though he started the procedure months in advance). Thank goodness my student was also pursuing other job offers: he is now in a one-year temporary position with the intent to start the aforementioned postdoc next year...still assuming his security clearance comes through.


I am a little confused about the "no other postdoc offers" clause. Surely it cannot be that just by applying for that job you promised not to apply for other postdocs? (Why would anyone apply for a job under those conditions??) And as I understand what you've written, you haven't signed any forms or officially accepted anything, but only given your word to someone that you intend to take the job. (If you did intend to take this job, then, as Ben Webster writes, you certainly should have written back to the other jobs that interviewed you and informed them that you are off the market. That was a mistake. I wouldn't beat yourself up about it too much, though: none of us gets much experience in these matters from the point of view of the job applicant. Later we spend the rest of our career looking at things from the other side, and "the right thing to do" becomes increasingly clear.)


If you haven't formally accepted the postdoc - and you can't do so before a security clearance comes through, in my understanding - and the tenure-track job is much more desirable to you, then I think you are legally 100% in the clear in taking the tenure-track job. Ethically speaking: well, you haven't acted in the best possible way, as mentioned above, and I would not lightly go back on my word to a senior academic who did me a great service...so it shouldn't be a light decision, but in my opinion it would still be understandable and ultimately acceptable to take the tenure-track job under these circumstances.


It would indeed be a classier move to explore the option of deferring the tenure-track job and taking the postdoc for one academic year, or even one semester. Deferring a tenure-track offer is quite common in the contemporary academic world: in my department (mathematics, University of Georgia) about half of our recent hires have completed a postdoc and arrived one year later, and recently we had someone start a one-year postdoc at UGA with a tenure-track job waiting for her afterwards (which she did then go on to take). You should understand, though, that this simply may not be possible for reasons having little or nothing to do with their desire to have you: the decision will probably rest on their ability to find personnel to cover your academic responsibilities.


Finally, it may also be a good idea to communicate your thoughts to your putative supervisor. Maybe she will be totally okay with it, and with her blessing your conscience should be pretty clear. Or maybe changing your mind will cause trouble for her in a way that you don't see. Either way, it seems respectful to keep her informed.



evolution - Are there any multicellular forms of life which exist without consuming other forms of life in some manner?

The title is the question. If additional specificity is needed I will add clarification here. Are there any multicellular forms of life whic...