Wednesday, 31 October 2018

teaching - How should I handle students who ask questions that are "beyond the scope of the course" as a TA


One of the most common replies I have gotten as a student in engineering is the phrase "what you are asking is beyond the scope of this course". But I always found it a bit funny coming from the prof, since he or she most likely has a thing or two to say about the subject. Furthermore, whether something is considered "beyond" sometimes depends on a prof's temperament: on a good day a topic that is "beyond" will be addressed; on a busy day it is put off indefinitely.


Now I am a TA for an introductory calculus class. Often a handful of students come into the class with years' worth of experience in calculus. Sometimes they ask a question that is addressed in an upper-year course; sometimes the question requires complex variables; sometimes it relates to physics.


How should I handle students who are interested but ask questions beyond the course, in the sense that it would take an additional course or two to truly appreciate their importance, or at least to see how the actual calculations are performed?


I could tell them the answer, but sometimes it can lead a student down a rabbit hole, which can be devastating given how busy first-year students are.



Further, I don't want to disrupt their "natural course" by saying something that may prevent independent self-discovery.


Lastly, I don't want to say something which could be misconstrued as a test topic.


At the end of the day, how should I address questions that are deemed beyond the scope while not withholding information?



Answer



"This is beyond the scope of the course" is not a great answer without further clarification or commentary. As you say, the ethos of the university is that your instructor is someone whose qualifications and expertise lie far beyond the scope of any undergraduate course. The answer is justified for a student who asks a certain kind of question in class, because class time is limited and one must exercise judgment about what to say and cover in that limited time. Going off on a lengthy digression that is likely to be of interest to only one student and perhaps not even well understood by her is not a good use of class time. So I would expect an instructor to say "Come talk to me after class if you are interested in that."


If a student comes to talk to you in your office hours or your spare time, I think that she deserves some kind of answer. The answer may in fact be that the question lies beyond your expertise (and there is nothing inherently wrong with that; there is a lot of stuff out there...), but in that case you should still spend at least a little while trying to direct the student elsewhere, either to the appropriate reading materials or to some other faculty member who can better help them out.


If you do feel that you know the answer to the question -- or at least, enough of the answer to the question -- then, sure, take a crack at answering it. It takes a lot of expertise -- subject expertise, pedagogical expertise, and practice -- to be able to give answers to such questions which occupy a reasonable amount of time and are at least somewhat meaningful to the student. This may involve for instance asking some quick questions of your own, trying to understand the student's background and the true direction and depth of their interest. One mistake that even seasoned pros make is to open up the gates and flood the student with information of a quantity, density and sophistication that is beyond what they can be expected to process in the moment. If someone asks you about the example of a conservative vector field on the punctured plane which is not a gradient field that you discussed in class, you should probably not respond by giving them a half hour lecture on de Rham cohomology. (At least not at first. One of the amazing things about teaching is that the chance that a multivariable calculus student really is looking for a lecture on de Rham cohomology in answer to their question is very, very small...but it is positive!)



How should I handle students who are interested but ask questions beyond the course, in the sense that it would take an additional course or two to truly appreciate their importance, or at least to see how the actual calculations are performed?




You take a shot at it. Make your first shot very brief: just drop some terminology and try to give a sentence or two expressing one of the main ideas in broadest terms. In the above case, you might say "Whether every conservative field is a gradient field depends on the domain. We saw that this is the case when the vector field is defined on the entire plane [or all of three-dimensional space...]. It is also true if the domain is something like an open disk or ball. It turns out though that 'holes in the domain' lead to conservative vector fields which are not gradient fields." (By the way, I first wrote more and then deleted some of it! Restraint is truly hard.)
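For concreteness, here is a minimal worked version of the example referred to above, written in the answer's terminology (where "conservative" means curl-free). This is the standard textbook calculation, sketched briefly rather than anything specific to the original post:

```latex
% The classic curl-free ("conservative") field on the punctured plane
% that is not a gradient field.
\[
  \mathbf{F}(x,y) = \left( \frac{-y}{x^{2}+y^{2}},\; \frac{x}{x^{2}+y^{2}} \right),
  \qquad (x,y) \neq (0,0).
\]
% Its mixed partials agree everywhere on the domain:
\[
  \frac{\partial}{\partial x}\!\left(\frac{x}{x^{2}+y^{2}}\right)
  = \frac{\partial}{\partial y}\!\left(\frac{-y}{x^{2}+y^{2}}\right)
  = \frac{y^{2}-x^{2}}{(x^{2}+y^{2})^{2}},
\]
% yet its circulation around the unit circle is nonzero,
\[
  \oint_{x^{2}+y^{2}=1} \mathbf{F}\cdot d\mathbf{r} = 2\pi \neq 0,
\]
% so F cannot be the gradient of any function on the punctured plane.
```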



I could tell them the answer, but sometimes it can lead a student down a rabbit hole, which can be devastating given how busy first-year students are.



I'm not really sure what you mean by this. The type of personality that is going to be "devastated" by learning that things go deeper than they currently know does not seem well suited to higher education. If anything I feel exactly the opposite way: as an educator at any level, showing students the rabbit hole is one of the most important things that you can do. Especially, getting an undergraduate degree is all about learning just enough to get an awareness of the true depth of knowledge and acquiring a sound foundation upon which more knowledge can be built.



Further, I don't want to disrupt their "natural course" by saying something that may prevent independent self-discovery.



Again, I don't really buy into this. Students who want to be sufficiently well insulated from being taught things by other people do not belong in a university. Self discovery is a wonderful and important thing, but it is enriched and reinforced by prior knowledge and coursework, not ruined by it. There are plenty of things to discover for oneself, and anyway you are only telling them a little. It seems very likely that such a conversation would if anything trigger the student's independent learning and self discovery, not inhibit it.




Lastly, I don't want to say something which could be misconstrued as a test topic.



This is why it's best to address such questions outside of the classroom, or at least outside of the class session, and ideally with only the students who are explicitly interested. It should then be much clearer that what you are telling them is not a test topic. If there is any ambiguity about that then you should resolve it. For instance, maybe the question actually is closely related to a test topic but comes from a direction in which the student does not see that. In that case you should point out the connection to them.


How can I best edit a paper to help get it published?


My friend wrote a fantastic paper in their scientific field. I believe it is truly ground-breaking but it calls a lot of existing theory into question. If he's correct it will force many accepted articles to have to be rewritten.


Perhaps because this paper is controversial, then, my friend has faced an uphill battle to get it published. It deals with some quite difficult-to-grasp mathematical models and concepts. It's in a field of science, and an area of that field, where experiments to prove things are simply not feasible; instead, hypotheses rely on models to explain observations of large-scale real-world processes.


I'm deliberately avoiding mentioning what area of science this paper is in because I do not want them to know it's about them, in case they see this. Because while I feel that their paper is great, on the other hand the writing needs some love. If the writing is improved, this paper could make this person's career. I'm a published writer and I have been paid to edit many things, but not scientific writing. I want to help them.


I would like to know what the best approach is for preparing myself to be able to edit papers for submission to any given scientific journal in any given field. I would think one great approach would be to read lots of articles in such journals. Do you know of any good guides? Are there any online sites where people can publish papers prior to submitting them to journals in order to get public comments and feedback to hone their work? What are some novel steps that could be taken? I just want to help.



Answer



You've already talked about reading up on journal articles in the field, so I'll skip that. On top of that, there are a few ways.


Follow the journal's format guide


Ask your friend which journal is the next target. Visit the journal's website and look for the "instructions for authors." You can find format-related instructions there. A format-compliant article is less likely to trigger an instant rejection/return.



Read about scientific writing


There are a few guides that I consider pretty useful:




  1. The Craft of Scientific Writing by Alley is perhaps a classic for engineering-type writing. It also provides a good collection of tips and gems for different sections.




  2. Essentials of Writing Biomedical Research Papers by Zeiger is a wonderful desktop reference for biomedical-type writing. It also provides a lot of good vs. bad examples.





  3. Writing Science: How to Write Papers That Get Cited and Proposals That Get Funded by Schimel is a bit of a black sheep. It does not teach you how to write, but it gives an excellent account of how to chain and arrange ideas for maximal impact, at the level of the whole paper, the section, the paragraph, the sentence, and the syntax. It also draws heavily from techniques used in fiction writing, which is quite intuitive.




  4. The Craft of Research by Booth et al. does not focus purely on writing, but also discusses how to set up arguments and present concepts. It may be a bit too hands-on for you, and probably more suitable for your friend who is doing the writing.




  5. A Manual for Writers of Research Papers, Theses, and Dissertations, Eighth Edition: Chicago Style for Students and Researchers (Chicago Guides to Writing, Editing, and Publishing) by Turabian is an overall very useful desktop reference. It complements The Craft of Research.




Talk to the specialists



If the paper is really that controversial, I think you should talk to some people who have a good command of that particular field and get a sense of how to present or package the ideas with the maximal chance of being considered.


Hire a professional editor


It's also prudent to know your limits. If you feel this is too much, then you should ask your friend to get help from the institute's English language support or hire a professional scientific editor. Editors come with different specialties: some are experienced in medical writing, some in science. Check their portfolios and try to match the article type as best as you can.


I say this because there is a tension in your question: if you feel that you're not capable of editing a scientific paper, how can you feel confident evaluating the work with certainty such as: "My friend wrote a fantastic paper in their scientific field. I believe it is truly ground-breaking but it calls a lot of existing theory into question. If he's correct it will force many accepted articles to have to be rewritten"?


I don't mean to be insulting; I just wish to point out that professional work is sometimes best left to professionals, especially when we don't have the time to become one.


Best of luck, and I wish your friend a successful publishing process.




Disclaimer for everyone:


When reading/evaluating my answer, please be mindful that in no way am I agreeing that the paper is ground-breaking or fantastic. I merely provide resources to help the questioner improve his/her ability to comprehend and edit a scientific paper.


Whether someone with limited experience or capability can do ground-breaking work is not within the scope of this answer, and I have no comment either way. I just want to point out that I have not read the paper, so I can't comment.



If you want to do a PhD in mathematics, how important is it to start immediately after finishing undergraduate studies?


I'm about to finish my undergraduate studies; I majored in mathematics and minored in physics, and I always intended to go to grad school to pursue a PhD in mathematics, but I've been having doubts recently. I did well in all my courses (3.92 GPA), but I'm trying to seriously consider whether my background is strong enough and whether I'd truly have the motivation to stick it out. I've also been thinking that even if I decided to give it a shot, it might be nice to take some time off to rest and to improve on some of my weaker areas. But I've been told by a few people that if you want to do a PhD in mathematics, you have to go pretty much right after undergrad, mainly because recent letters of recommendation are so important and professors forget you after a time. So I wanted to know if this is true, and I also thought I'd ask for advice if anyone has been in a similar situation.




Answer



The other answers don't really address the issue of recommendations, so let me, at least briefly. I've been on our math PhD admissions committee several times, and we get many applications from people who finished their undergraduate degrees some time ago.


First, yes, there is some truth to it being easier to get in right after your undergrad degree. The letters of recommendation are important. If your professors know you quite well and the department is relatively small, they should still be able to write you decent letters after a one- or two-year hiatus, but if it gets to 5-10 years, they may not, and with that kind of time lapse their letters won't count for as much anyway.


My advice would be to consider what else you want to do. Is there something else you really want to do for a while (Peace Corps, travel, an interesting job opportunity)? If so, it won't kill your chances for grad school, but you may have to apply to more backup schools. If you're out for longer, it might be best to do a master's first before getting into a PhD program.


If you don't have any definite ideas, why don't you try applying to a few master's programs (Vladhagan's suggestion of trying a master's first is a good idea, to give you a sense of what you want to do and give yourself a better background) and a few PhD programs that seem interesting to you? At the same time, maybe go to a career fair and send out a few job applications in the spring? The PhD programs that accept you should (at least if you're in the US) give you an opportunity to visit, so even if you're undecided about a PhD in the spring, visiting these schools (and, similarly, any job interview impressions) may help you make a decision.


biochemistry - What role does a protein's size have on protein-protein interactions?


Protein-protein interactions occur when two or more proteins bind together, possibly for some important biological function. Recently, I've started to look more into proteins, and in particular networks related to proteins, such as protein-protein interaction networks. I found myself thinking about the following question:




Question: Does the relative size of two proteins significantly affect how likely they are to interact? Does it affect how they interact?



It would be plausible for me to download a protein-protein interaction network and calculate the proteins' relative sizes from their PDB files. However, I predict that the results of such a computation would not offer much insight into why (or why not) protein size affects binding.
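For illustration, a minimal sketch of the size calculation mentioned above, assuming Biopython is installed and the PDB files have already been downloaded locally; the file names and the choice of residue count as a size proxy are assumptions, not anything prescribed by the question:

```python
# A minimal sketch, assuming Biopython and locally downloaded PDB files.
from Bio.PDB import PDBParser

def residue_count(pdb_path, structure_id="protein"):
    """Crude proxy for protein size: the number of standard amino-acid residues."""
    parser = PDBParser(QUIET=True)
    structure = parser.get_structure(structure_id, pdb_path)
    return sum(
        1
        for model in structure
        for chain in model
        for residue in chain
        if residue.id[0] == " "   # hetero-flag " " excludes waters and ligands
    )

# Hypothetical usage: relative size of two interacting partners.
# size_a = residue_count("protein_a.pdb")
# size_b = residue_count("protein_b.pdb")
# print(size_a / size_b)
```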




My new advisor didn't reply to my email, is it a sign that she doesn't care?


I am a 2nd year graduate student in physics, at one of the top 20 universities in the US. This semester, I am supposed to start a research project with a professor who agreed already to be my thesis advisor.


So, I emailed her at the beginning of the semester (10 days ago), but I haven't gotten a response yet. Nothing. I emailed her again yesterday and hope for a response. I tried to go to her office, but I couldn't find her. She comes to the university only to teach and then she disappears.


I am really worried. Is this normal, or is this a sign that the advisor doesn't care at all? Should I try to change advisors? She is a top scientist, but I am very disappointed...



Answer



Flippant version: It is normal AND it is a sign that the advisor doesn't care at all.


Less flippant version: I would wait a couple more days. Then, if still no response, I would ask around to see if the professor is sick/having a major life crisis/etc. If not, then I would conclude that they do not meet a reasonable standard of availability, and I would choose to work with someone else. There are few things more depressing in academia than colleagues who drop the ball and ignore you, especially when they are senior to you and you rely on them to make progress on your work. Life is too short to be working with someone who is not interested in working with you.


species identification - Please help to identify this insect


It is approx. 1.5 inches (3 cm to 4.5 cm) long.


[Photos: the insect, its back and wings, and views from the front.]


Just a few hours ago (around 10 to 10:30 pm local time), this insect entered our home, flying at high speed and hitting the walls, objects, and us. To keep it calm we turned the room light off. It then settled on my father's shirt, even after the light was turned on again, and we could see it was a beautiful moth-like insect. After taking photos, we gently released it from the room.


Here in India, it is the rainy season (after summer).




P.S. I have very little experience in practical zoology; however, after a few hours of Google searching, it seems to match the moths tagged as "hawk-moths". Another notable feature is that when it was flying it looked like a hummingbird or a small bird, which matches the descriptions of some "hawk-moths" on different sites. Initially we even took some time to decide whether it was an insect or a bird.



Answer



As MattDMo suggested, this is a hawk moth. Given your location and the season, I initially thought it was either the impatiens hawkmoth (Theretra oldenlandiae) or the white-edged hunter hawkmoth (Theretra lycetus); I now think it is the brown-banded hunter (Theretra silhetensis).




Theretra silhetensis exhibits a solid white line along the upper side of its abdomen, and more of a faded banding pattern on the forewing, which corresponds with your picture. These moths are also common in India.



It differs from T. oldenlandiae in being very much paler in color and in having a white line down the center of the abdomen.



[Photo: Theretra silhetensis.]


When it comes down to it, that solid single white line is the biggest indicator it is a T. silhetensis over a T. oldenlandiae or T. lycetus.




While my final answer is the brown-banded hunter hawkmoth, below are the original two species I suspected it was and why. I've included them below simply for reference / alternative comparisons.



Theretra oldenlandiae was my original instinct, since it is more common in your area and the banding/stripe pattern matches to a T.



[Photo: Theretra oldenlandiae.]


But, upon closer inspection, there are two things that don't quite align:



  • Your specimen exhibits pink suffusion, especially on the back edge of the forewing.

  • Your specimen has a single dorsal line, not two.


For these two reasons I decided it was not the impatiens hawkmoth and thought it was more likely the...



Theretra lycetus is not as common as T. oldenlandiae in your location, but the pink suffusion supports this idea. See below for a picture:


[Photo: Theretra lycetus.]



This still doesn't explain the double vs. single white stripe along the upper abdomen, which is why I now believe it is the brown-banded hunter described above.




As an aside, this could be an example of polymorphism vs. identifying characters. Welcome to the world of taxonomic lumpers and splitters...


human biology - How does a fetus retain a blood group different from its mother?


It's a well-established fact that blood group is decided by genotype. But when a new child starts its journey in the womb, the mother's blood (along with its agglutinins and agglutinogens) flows into the baby's heart. So how does a baby (having a blood group different from that of the mother) retain its own blood group (with its own agglutinins and agglutinogens) instead of the mother's blood group, which moves in and out of the baby through the umbilical cord?


Do both kinds of blood flow in the baby's body constantly in such a case?


Apart from that, I know of rhesus incompatibility. But there are also other incompatible blood group types. How does a baby survive agglutination in those cases?



Answer



The maternal and fetal blood circulation systems are completely separate. The embryo's blood cells start developing at around week 5 gestational age (3 weeks after conception), the same way any other tissue is developed by the fetus itself. By around week 7 gestational age, a circulatory system has developed and the heart has started beating.



All nutrition goes through the placenta, which keeps the two circulatory systems separate by the so-called placental blood barrier.


Rhesus incompatibility is only a concern after the first birth (or miscarriage), because only then have the maternal and fetal blood had the opportunity to mix and lead to the mother developing antibodies.


Also, as AMR pointed out in the comments, blood group incompatibility is mediated through IgG antibodies: those don't start to be produced at all until birth, and not at higher levels until six months to a year after birth. See also At what age do babies begin to synthesize their own antibodies?


masters - How common is it for women to drop out of graduate school because they have children?


I am a 29-year-old female. I want to go back to school. I have been told that I have a very good chance of getting into a very good graduate computer science program (machine learning, AI, robotics, theoretical CS, so many algorithms... it's the real deal here, you guys). Due to cost, the fact that I need to work full-time, and giving myself the best chance of success, I will be doing this at a very slow, part-time pace.


As an undergraduate, I saw women drop out for family reasons, so my question is:


How common is it for women to drop out of graduate school because they have children?


I'm preferentially looking for answers that draw on personal experience or statistics.





Some additional background you might find relevant:


My boyfriend/fiance/partner of 10+ years died about a year ago. I don't know what I'm doing about all that yet.


I have been out of school for a while. You don't really need a degree in IT, and I hadn't really considered going back because it didn't seem manageable or practical with the rest of my life. But... now that that life is gone, it's cool because it has to be, and I would never even have dared to dream about being able to enroll in this graduate program before.


I don't necessarily care if I have a family and/or kids, but I'm fairly positive it will not happen by accident. To me, being with someone or married does not automatically mean having kids, either. For the sake of the question, when I am ready, I feel it will be logistically very easy for me to date again.


Articles like these motivate my question:



Helpful Articles:





Tuesday, 30 October 2018

lipids - Why don't McDonald's fries decompose?


So I was cleaning out my car and found a McDonald's French fry. As I don't eat anything in my car, I know exactly who and when this fry is a result of. The "when" is 10+ months ago, and it could pass for one just prepared. How could this be?




Answer



This controlled experiment on burger decomposition explains in detail why fast food burgers do not decompose easily. The same can be applied to fries, which are smaller and come out of the frying dehydrated.


The main take-aways from this experiment are:


1: Dehydration is the main reason why fast food fries/burgers do not decompose easily. Placing the burgers in a ziplock bag, preventing dehydration, causes the burgers to decompose.


2: Since an unsalted, home-made burger did not decompose either, preservatives, chemicals, saturated fats, and other components are unlikely to be the cause of the fast food burgers not rotting.


publications - Should I do anything if I am cited for something that wasn't in my paper?


I’ve just read a paper in which a previous paper of mine has been cited. The line in which the citation happens is something like:



It has been shown that technique X is successful in this problem [citation for my paper].



However, in my paper, I never mention technique X.


Should I do anything? The paper that cites mine is otherwise fine and really doesn’t need a reference for their use of technique X, since they spend a lot of time developing it anyway.



Answer



There really isn't any action worth pursuing here. You could write the editors and ask them to issue a corrigendum stating that the reference was incorrect, but you'll probably waste a lot of time and effort for what is likely a very minor issue.



publications - What do I do when a co-author takes too long to give feedback during the peer review process?


Two years ago I did a piece of research for a journal's special edition. I got the reviewers' comments, did all the corrections, and sent it to my co-authors. One of them took so long to return his comments that the paper wasn't included in the special edition. I then tried to submit it to another journal, but again the same co-author took a long time to provide his comments. Finally my boss suggested that I submit it to another journal (a good one) without waiting for my co-author's opinion. I got the reviewers' comments back, did all the corrections (I have 45 days), sent them to my co-authors, and gave them a week to send me their comments. The same co-author is now telling me he's not happy I didn't tell him I submitted to that journal and that he won't be able to make comments in a week. I'm again in a catch-22. What do you do?




economics - Is it possible to become a professor in a field where you don't have a PhD degree?


I am an economics major who will be applying for PhD programmes pretty soon. I am quite interested in economics, but I find myself even more interested in things related to probability theory, such as stochastic processes. And if I ever get into an econ PhD programme, I'm pretty sure that I should be looking for co-advising from the math/statistics department if allowed.


So I am curious about what happens in academia if you ever find yourself more interested in a topic that you don't have a degree in. If you can write some good papers during your PhD, can you find good career opportunities in that field in academia even if you did not start out as a "professional" scholar in that field?


Any idea would be very much appreciated!!



Answer



Yes, it is possible to cross over between different fields following the PhD-level studies. However, in general, this tends to be more applicable to "interdisciplinary" fields that can fall into multiple disciplines. For instance, the engineering department I studied at hired people with PhD's in applied mathematics and physics, because their research fields—in fluid mechanics and interfacial science, respectively—meshed well with the research interests of the department.



To give a counterexample, however, it will be much harder to make the case for a high-energy physics person to move into another discipline, just because that field is so strongly identified with physics.


Thus, if you have a PhD in a field such as mathematical economics or econometrics, it will be a lot easier to make the lateral shift. However, if you're in a more "traditional" subfield, that move becomes much more challenging.


Monday, 29 October 2018

dna sequencing - Why does one combine PCR and cloning as ways for amplification of sequences?


Why does one combine PCR and cloning as ways for amplification of sequences? Don't they produce the same result? I was reading the paper https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1864885/ and got confused by the following passage:



Cloning and Bisulfite Genomic Sequencing


Each bisulfite-converted DNA sample was subjected to PCR by primers that did not discriminate between methylated and unmethylated sequences, using a GeneAmp PCR core reagent kit (Applied Biosystems). The primer sequences are listed in Table 1, and reaction conditions are listed in Supplemental Tables 1C and 1D (available online at http://ajp.amjpathol.org). Each subsequent PCR product was TA-cloned into pGEM-Teasy vector25 (Promega) for transformation into Escherichia coli strain JM109, according to the manufacturer’s instructions. Clones were picked randomly and colony PCR was then performed using vector primers T7 and SP6 to amplify the cloned inserts. Cycle sequencing was performed using BigDye version 1.1 (Applied Biosystems) and an automated capillary DNA sequencer Genetic Analyzer 3100 (Applied Biosystems). The sequences obtained were aligned and compared using SeqScape software (Applied Biosystems). The completeness of bisulfite conversion was first confirmed before scoring. The CpG sites sequenced as cytosine or thymine residues were scored as methylated or unmethylated, respectively. The methylated site frequency was calculated for each sample by dividing the total number of methylated sites over all cloned CpG sites.



Can someone comment on what purpose the cloning serves when the amplification is already done by PCR?



Answer



There are several reasons to clone a PCR product but the main reasons include expressing the product and maintaining the library inexpensively.


In the study you mentioned, the cloning and transformation is apparently done for obtaining single clones (from single colonies).




Clones were picked randomly and colony PCR was then performed using vector primers T7 and SP6 to amplify the cloned inserts. Cycle sequencing was performed using BigDye version 1.1 (Applied Biosystems) and an automated capillary DNA sequencer Genetic Analyzer 3100 (Applied Biosystems).



Since the original PCR would have a mix of different sequences, cloning is done to separate them out for the ease of studying them by sequencing. If they don't separate the sequences, they would have to use NGS (or more expensive methods like PacBio for longer reads). They have used Sanger sequencing. This article was published in 2007. At that time NGS was not as cheap as it is today and was hence not used by everyone. Also, the NGS machines of that time had even smaller read lengths (~40 for Solexa/Illumina GA and ~75 for GA-II). Considering these, Sanger sequencing may have been preferable to the authors of this paper. However, as I mentioned before, Sanger's method cannot be used for sequencing pooled samples; you need pure templates or else there will be mixed signals and base calling would be inaccurate.


(See the example chromatogram below: look at the position with the Y which has mixed T and C signals. It is not from a pooled sample, though).
[Example Sanger chromatogram with a mixed T/C peak called as Y.]


Therefore cloning, transformation and colony picking was necessary.


Moreover, in many cases picking individual clones would be necessary for performing functional assays to associate phenotype with genotype.
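As an aside, once clean per-clone Sanger reads are aligned, the scoring step described in the quoted methods (a CpG site read as C is methylated, read as T is unmethylated) amounts to a simple count. A toy sketch, with CpG positions and reads invented purely for illustration:

```python
# Toy illustration of the CpG scoring step; positions and reads are made up.

cpg_positions = [3, 10, 17]           # hypothetical CpG sites in the alignment

cloned_reads = [                      # hypothetical bisulfite-converted, aligned reads
    "ATTCGTTAGTCGTTATTCG",
    "ATTTGTTAGTTGTTATTCG",
    "ATTCGTTAGTCGTTATTTG",
]

methylated = 0
scored = 0
for read in cloned_reads:
    for pos in cpg_positions:
        base = read[pos]
        if base not in ("C", "T"):    # skip ambiguous base calls
            continue
        scored += 1
        if base == "C":               # retained cytosine: methylated
            methylated += 1

print(f"Methylated site frequency: {methylated / scored:.2f}")  # 6/9 = 0.67 here
```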


publications - Writing homework essay anonymously to avoid controversy


I am attending a course at a neighboring institute purely out of interest. Neither credits, nor grading of any kind are involved.


As part of the course, we are required to work in pairs (assigned, not chosen) and write an essay on a controversial/contentious topic. My partner and I find ourselves on opposite sides of the debate.


On account of the controversial nature of the topic, its connection with my academic work and my own contributions to the field, I am not comfortable diluting my stand. However, compromise is inevitable in pairs/groups, and I realise the point of this exercise might be to reinforce exactly that.


Nevertheless, I don't wish to sign off on something that I don't believe in 100%. I also don't wish to opt out of the assignment as my partner is being graded on it and it seems unethical to leave him hanging.


So I propose submitting the essay titled: (Title), (Partner name and affiliation), (Anonymous, independent).



Is there any precedent to do this, or is there a strong reason not to? Do I use a pseudonym instead?


Any advice on alternatives or other points of view would be appreciated.



Answer



I do not know what was in the minds of those who set the exercise that is now causing trouble for you, but I find it hard to imagine any future career for you that will not involve collaborating with other people with whom you do not entirely agree. It certainly makes sense for an educational institution to set exercises to give you experience of that.


I have experienced another version of such an exercise in which you and the other person are given, say, 5 minutes, to state your case. A bell is then rung, and you are both required to present, fairly, for a further 5 minutes what the other side had previously argued.


It is an important life skill to listen to the other person's point of view, to understand it so fully that you could explain it to somebody else, and then, if possible, and it usually is, agree a statement of what you both agree on with clear indication of where you differ.


If I had set the task I would regard separate submissions from you and your colleague as failure to engage properly with the task.


Sunday, 28 October 2018

Evolution of the (phenotypic) facial features of the indigenous people of (west/central) Africa



I am not even remotely an expert in this field; I just got curious, so help me.


First of all, let us remove all human sentimental attributes from this question (such as connotations of scientific racism or emotional reactions thereof) and ask it from a purely scientific point of view. The typical facial features of a West/Central African person (the so-called broad African type) include a wide nose, a lack of nasal bridge projection (similar to many Asian noses), and a prognathous skull shape that more closely resembles our ancestor Homo heidelbergensis than many other indigenous phenotypes from other parts of the world do.


I agree that this claim sounds anecdotal, and it is certainly not true of all Africans (for example, Ethiopians), but one cannot dismiss it simply on grounds of ethical policing. I cannot help but notice that many of my African friends remind me of our shared ancestry (with no disrespect meant whatsoever), as is clear from the photographs below, one of a supposed Homo heidelbergensis and one of a random African Homo sapiens of today.


[Photos: a reconstruction of Homo heidelbergensis and a modern Homo sapiens.]


My question is the following. What is the reason behind this seeming resemblance? Is this appearance an illusion and possibly a construct of a prejudiced mind, or does this have a valid scientific answer? For example, are most Africans closer to our ancestors than the rest of humanity is? How does evolution answer this question?


Thank you for your help.



Answer



Note that I had never heard of the expression "broad African type" and could not find much reference to this term when googling. But anyway, I'm happy to use it to refer to a particular set of typical facial features.



What is the reason behind this seeming resemblance?




Note that the Homo heidelbergensis image is not a photograph (obviously), and much of what is being depicted was open to artistic interpretation. It is possible that some of the features you are thinking of are not real. Have a look at the post In reconstructions, how are various shapes of facial features determined from skull only?.


You list several phenotypic traits, and it would be too broad a question if you were expecting an explanation of the evolution of each of these traits. I will just pick one classic trait: the nose.


One of the roles of the nose is to even up the temperature of the air that we're inhaling. As such, the shape of the nose is under differential selection pressures depending on the environment. So if the nose of Homo heidelbergensis looked like (assuming it really looked like the facial reconstruction) today's "broad African type", it is probably because of a shared environment. Note however that nose shape varies a lot among African lineages and also among non-African lineages. You might want to have a look at this popular article.


A big point of resemblance that you don't talk about is skin color. Have a look at the posts Why do we assume that the first humans were dark-skinned? and How did some humans evolve to be white? to learn more about skin color.



Is this appearance an illusion and possibly a construct of a prejudiced mind, or does this have a valid scientific answer?



First, see my above point about facial reconstruction.


H. heidelbergensis lived in Africa, and in this respect shared an environment that is more like the one modern Africans encounter than the one modern Han Chinese encounter. In this respect, we could expect some resemblance.




For example, are most Africans closer to our ancestors than the rest of humanity is?



No, they're not and they cannot be. The most recent common ancestor (MRCA) of all humans today is more recent than the MRCA of any human and H. heidelbergensis. You should have a look at this intro to phylogeny to understand a little bit better how different lineages relate, evolutionarily speaking.


immunology - How do plasma cells switch to secreting different Ig classes?


In Type 1 hypersensitivity, how do B lymphocytes switch Ig classes from synthesizing IgG to IgE? What is the mechanism? I have studied multiple pathology books, and they describe the same pathway as for IgG secretion. So how does the body know how to change the Ig secretions in response to allergens?



Answer



In T-dependent antigen recognition, the B cell can switch the isotype by first altering the heavy chain gene. This is driven by Tfh cells. The specific cytokine stimulating the class switch determines the isotype of the heavy chain to be synthesized. For your example, interleukin 4 (IL-4) drives the class switch to IgE. You also need a costimulatory signal from the CD40/CD40L interaction between the B cell and the Tfh cell. This induces activation-induced deaminase (AID), see bottom paragraph on this.


There are switch regions (denoted S) in the introns between the J and C segments at the 5' end of each CH locus, and upstream of that is an I exon (region denoted I) with its own promoter. Induction by CD40 and whatever cytokine is present starts transcription of the I exon, the switch region, and the CH chain of the H-chain the B cell is meant to switch to. This germline transcript facilitates dsDNA breaks at the switch region Sµ for IgM (switch region mu) and at the switch region for the class to be switched to, Sε in our case for IgE. The intervening DNA is deleted and the coding region Cµ is replaced by Cε, coding for the IgE heavy chain. BUT, it ends up with the same V domain as the B cell originally had!



Some enzymes involved are AID, which deaminates cytosines and allows the germline transcript to hybridize in the switch region; uracil N-glycosylase (UNG), which removes uracils from the DNA; and the APE1 endonuclease, which generates nicks at the resulting abasic sites. These nicks lead to dsDNA breakages in the switch regions. The text shows that the switch region DNA is G-C rich and can form stable RNA:DNA hybrids between the coding strand of the switch region and the germline transcript. This is what frees up the noncoding strand for alteration by UNG and cutting by APE1. The result is that the V region becomes associated with a new constant region.


I'd also be careful with the term "plasma cell" as used in the title, because we aren't yet at the high-affinity secreting stage when we're talking about class switching. This is mediated at the activation of the actual B cell by Tfh cells.




All the information above is summarized from Cellular and Molecular Immunology, 8th ed.


biophysics - Why doesn't the cell membrane just...break apart?


Forgive me if this is a silly question. I can't understand the basics. Why doesn't the cell membrane just break apart? What's keeping the layers in the phospholipid bilayer together? I know that the membrane is embedded with proteins and lipids, but I still can't wrap my head around the "why". Are the hydrophobic interactions in the middle "stronger" than the hydrophilic interactions on the outside? What's keeping the individual phosphate heads together instead of, say, one of them just drifting away due to a nearby water molecule?



Answer



The membrane bilayer is held together by hydrophobic forces. This is an entropy-driven process. When a greasy or hydrophobic molecule is suspended in water, the water molecules form an organized "cage" around the hydrophobic molecule. When two hydrophobic molecules come into contact, they force the water between them out. This increases the entropy, because the freed water molecules no longer need to be organized into the cage. Lipid bilayers have many, many hydrophobic lipids that squeeze out a lot of water and greatly increase entropy. The polar phosphates allow the water to interact with the surface of the membrane; without a polar head group, the lipids would form a spherical blob instead of a membrane.


Read this section on wikipedia for more.


Saturday, 27 October 2018

How do you get a bad transcript past Ph.D. admissions?


I have a master's degree in International Studies, and a double major with Computer Science from undergrad. My transcripts suck. There's no other way to dress it up. I have pretty good teaching experience, and my GRE scores are awesome, and I suspect my recommendations are as bland as everyone else's. Basically, to an admissions committee, I suspect I'm the model of a student who is probably smart enough but didn't work hard enough.


I want to do a Ph.D. in Political Science, but the response from my applications is looking pretty grim. Am I permanently out of the running, or is there something I can do for the next few years which will help to counterbalance my unfavorable GPA?




carbohydrates - Why do animals use glycogen for their polysaccharide storage whereas plants use starch?


The polysaccharide storage form of glucose in animals is glycogen, whereas in plants it is starch. Both of these are polymers of α-glucose with α-1,4 glycosidic linkages and α-1,6 glycosidic branch points (Wikipedia article on polysaccharides). The only difference that most sources mention (e.g. Berg et al.) is that glycogen contains more branches than starch.


It is not clear to me from this information what effect the different branching would have on the structures of the polysaccharides, nor why one rather than the other would be preferred in animals and plants.



Answer



Summary


The key difference between glycogen and amylopectin (the main constituent of starch) is not the number of α-1,6-glycosidic branches, but their arrangement.


In glycogen branches are successively subdivided, producing a relatively small globular structure that is unable to grow further. It is soluble in an aqueous environment and, with its numerous exposed ends, can be metabolized rapidly — appropriate for animal cells in which energy reserves must be mobilized in response to immediate demands, e.g. for muscle contraction.


In amylopectin there is a long central polysaccharide chain from which branches of limited size extend at intervals. This produces much larger semi-crystalline particles (starch grains), a form especially suited to long-term bulk storage in seeds and tubers.



The Chemistry


[Figure: the chemistry of glycogen and starch.]


This is the common feature of glycogen and the amylopectin portion of starch. (The amylose portion is unbranched.) In glycogen there is approx. one branch point per 10 glucose units, whereas in amylopectin the figure is 1 per 24–30 (source: Wikipedia).


The Topography


The contrasting branching topography of the two polysaccharides, mentioned above, is illustrated diagrammatically below:


[Figure: the topography of branching in glycogen and starch.]


This is a two-dimensional representation. In three dimensions the glycogen spreads out in all directions from a central point — actually the primer enzyme, glycogenin. In three dimensions the amylopectin strands mainly lie side by side.


The Macro-structure


The illustration below, modified from Bell et al., shows the different shapes and sizes of the macromolecular structures. It should be mentioned that the semi-crystalline nature of amylopectin is aided by the helical conformation of the chains.


[Figure: the macrostructure of glycogen and starch.]



Rather than providing a précis of the review of Bell et al. (Journal of Experimental Botany, Vol. 62, pp. 1775–1801, 2011) I shall quote from them directly (omitting their citations).


As regards glycogen they write:



Each chain, with the exception of the outer unbranched chains, supports two branches. This branching pattern allows for spherical growth of the particle generating tiers (a tier corresponds to the spherical space separating two consecutive branches from all chains located at similar distance from the center of the particle). This type of growth leads to an increase in the density of chains in each tier leading to a progressively more crowded structure towards the periphery.


Mathematical modelling predicts a maximal value for the particle size above which further growth is impossible as there would not be sufficient space for interaction of the chains with the catalytic sites of glycogen metabolism enzymes. This generates a particle consisting of 12 tiers corresponding to a 42 nm maximal diameter including 55,000 glucose residues. 36% of this total number rests in the outer (unbranched) shell and is thus readily accessible to glycogen catabolism without debranching. In vivo, glycogen particles are thus present in the form of these limit size granules (macroglycogen) and also smaller granules representing intermediate states of glycogen biosynthesis and degradation (proglycogen). Glycogen particles are entirely hydrosoluble and, therefore, define a state where the glucose is rendered less active osmotically yet readily accessible to rapid mobilization through the enzymes of glycogen catabolism as if it were in the soluble phase.



Regarding amylopectin they write:



Amylopectin defines one of, if not the largest, biological polymer known and contains from 105–106 glucose residues. There is no theoretical upper limit to the size reached by individual amylopectin molecules. This is not due to the slightly lesser degree of overall branching of the molecule when compared to glycogen. Rather it is due to the way the branches distribute within the structure. The branches are concentrated in sections of the amylopectin molecule leading to clusters of chains that allow for indefinite growth of the polysaccharide. Another major feature of the amylopectin cluster structure consists of the dense packing of chains generated at the root of the clusters where the density of branches locally reaches or exceeds that of glycogen. This dense packing of branches generates tightly packed glucan chains that are close enough to align and form parallel double helical structures. The helices within a single cluster and neighbouring clusters align and form sections of crystalline structures separated by sections of amorphous material (containing the branches) thereby generating the semi-crystalline nature of amylopectin and of the ensuing starch granule. Indeed the crystallized chains become insoluble and typically collapse into a macrogranular solid. This osmotically inert starch granule allows for the storage of unlimited amounts of glucose that become metabolically unavailable. Indeed the enzymes of starch synthesis and mobilization are unable to interact directly with the solid structure with the noticeable exception of granule-bound starch synthase the sole enzyme required for amylose synthesis.




Coda


The paucity of information on plant starch metabolism would seem to reflect a combination of less being known about plant biochemistry and less general interest, because of the prevailing focus on medical and animal biochemistry. Although an animal biochemist myself (and, thus, previously ignorant of the information in this answer), I feel that it is time to redress this imbalance.


life - Are mitochondria dead?


In the video "What is Life? Is Death Real?", the subject of mitochondria is raised at 2:58. At 3:12, the narrator says "[mitochondria] are not alive any more: they are dead."


What currents of thought lead to this affirmation?


When I search for "mitochondria are dead" on Google, I get many links about the role of mitochondria in cell death, but I don't see anywhere that the assertion in this video is discussed.





publications - Can I patent work after I have published a paper about it?




I published my scientific results one year ago, but I didn't apply for a patent at that time. Can I still patent the work?




evolution - Could cancer be in itself an evolutionary process?


Could cancer be in itself an evolutionary process? Maybe in some way could it be a process of variation? Or would this idea be completely without support? If so, why?



I don't mean that each case would lead to evolution, but that within an entire group of organisms it could lead to cases of individuals developing ways to resist what caused the cancer, whether in individuals beating the cancer or in successive generations of offspring from individuals with cancer.


Could it be possible that in rare cases it could lead to the development of new organs or specialized cells?



Answer



Interesting question. I believe it definitely is an evolutionary process: unicellularity breaking away from a multicellular life.


There are two examples that I can think of, which can support this argument:



  1. HeLa cells: HeLa cells have been classified as a different organism because they have the ability to grow outside the host indefinitely and their genome is different from that of Homo sapiens. HeLa cells have even been given a new scientific name: Helacyton gartleri.

  2. Transmissible tumors: two tumors are known to infect other hosts and therefore can be called obligate pathogenic lifeforms. One is Devil Facial Tumor Disease, which infects Tasmanian devils and is transmitted by biting. Another is Canine Transmissible Venereal Tumor, which infects dogs and spreads via the sexual route.




Could it be possible that in rare cases it could lead to the development of new organs or specialized cells?



Cancer, as of now, has a pathogenic identity. It is a simplistic process which, from a thermodynamic point of view, can be described as maximization of entropy (disorder), thereby increasing stability [like the way solutes diffuse]. If a cancer has to give rise to a new functional organ then it has to be supportive of the ordered lifestyle which it is trying to break away from in the first place. It may adopt a pathogenic lifestyle initially and then may redevelop order. I don't know of any such evolutionary event, but it may nonetheless be possible.


Friday, 26 October 2018

job search - Accepting European math postdoc offers and leaving after a year


I am a finishing Ph.D. student in pure mathematics in the US. In November, I received a 3-year postdoc offer in Europe on a PI's grant, with a December reply deadline, and accepted, starting in 2015. Recently, I was offered an NSF MSPRF at an American school, starting in 2014. My plan is to use the NSF in 2014-2015 and 2016-2018 and go to Europe for 2015-2016; the NSF and my US host institution are OK with it. However, when I accepted the European offer I did not specify that I would be staying for only a year (I had no other offers at that point). At what point am I obligated to tell the European PI that I will stay for only a year? Now, before accepting the NSF? After starting the position in 2015? More generally, in Europe is it considered normal or unethical/a breach of contract to leave multi-year (mathematics) postdoc positions after a year? I know it's considered normal in the US, but the postdoc hiring here is done at a departmental rather than individual level...




data - How do I get a DOI for a dataset?


A DOI is a commonly used digital object identifier to refer to an electronic document. How do I secure a DOI for a research dataset, to help share and identify it?



Answer



There are a number of ways in which a DOI could be linked to a dataset:



  • Figshare will provide a DOI for any deposited work, which includes data.

  • Zenodo also provides DOIs for any kind of research output, including datasets

  • Dryad provides DOIs for data submissions linked to papers (for a fee, which includes storage, curation, archival, & checks for best-practice [HT @DaisieHuang for pointing out my lazy description of fee previously])



You might also look to see if a domain-specific repository provides a DOI for data. The journal Scientific Data maintains a good list of repositories you can look at.


Choosing a DOI provider will depend on particular circumstances. Figshare and Zenodo are both free-to-use services, for example, whereas Dryad charges for the service it offers, to cover ongoing storage and archival costs.


Some points of comparison between Figshare, Dryad, Zenodo, etc.:


Dryad accepts data relating to publications; if it isn't associated with a paper then they won't accept it. Figshare and Zenodo will accept any research output, whether linked to publication or not. In that sense Figshare and Zenodo are more broadly applicable to any research outputs.


Data may be more discoverable in Dryad or domain-specific repositories (DSRs) than in general-purpose ones like Figshare & Zenodo. Data is likely to be formatted in standard ways and more easily searched by online or other tools if it is archived in Dryad or DSRs. This is likely to encourage reuse.


Figshare is a for-profit commercial entity; Zenodo is run by CERN and was supported by the EU OpenAIRE project at one point; whilst Dryad is a not-for-profit entity supported by research grants and membership fees from organisations.
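To make the Zenodo route concrete, here is a rough sketch of minting a DOI for a dataset via Zenodo's REST deposition API. Treat this as an assumption-laden outline: the endpoints, token parameter, and metadata fields are taken from Zenodo's public API documentation and should be verified there, and the token, file name, and metadata values are placeholders.

```python
# Rough sketch only: endpoints and fields assume Zenodo's REST deposition API
# (see developers.zenodo.org); verify against the current documentation.
import requests

BASE = "https://zenodo.org/api"
TOKEN = {"access_token": "YOUR-ZENODO-TOKEN"}  # placeholder personal access token

# 1. Create an empty deposition.
r = requests.post(f"{BASE}/deposit/depositions", params=TOKEN, json={})
r.raise_for_status()
dep = r.json()

# 2. Upload the dataset file (file name is a placeholder).
with open("my_dataset.csv", "rb") as fh:
    requests.post(
        f"{BASE}/deposit/depositions/{dep['id']}/files",
        params=TOKEN,
        data={"name": "my_dataset.csv"},
        files={"file": fh},
    ).raise_for_status()

# 3. Add minimal metadata, then publish to mint the DOI.
metadata = {
    "metadata": {
        "title": "My dataset",
        "upload_type": "dataset",
        "description": "Example dataset deposit.",
        "creators": [{"name": "Doe, Jane"}],
    }
}
requests.put(
    f"{BASE}/deposit/depositions/{dep['id']}", params=TOKEN, json=metadata
).raise_for_status()

pub = requests.post(
    f"{BASE}/deposit/depositions/{dep['id']}/actions/publish", params=TOKEN
)
pub.raise_for_status()
print(pub.json()["doi"])  # the freshly minted DOI
```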


evolution - Were we able to create vitamin B12 in past?


All herbivores produce vitamin B12 de novo. Gorillas, for example, are "vegans", so I suppose some human ancestor was also a herbivore.


Have we ever been B12 self-producers? If so, why have we lost that ability, so that we have to obtain B12 from our diet? Have we just gotten "used to" being omnivores (i.e., we don't need self-production anymore)?




formatting - Is it advisable to have many clickable hyperlinks in an academic CV?


I have recently been updating my academic CV. I realise that some items are not so internationally recognised that every reader will be aware of their importance without googling them. Hence, I am thinking about inserting hyperlinks to those items in case the reader wishes to learn more.


Now, many items on my CV are clickable, and a reader particularly interested in one item may just click on it to bring up a webpage with more details.


However, I rarely see a CV with many hyperlinks behind its words/phrases. Is this a taboo, or am I OK to do so?



Answer



I agree: most academic CVs do not have a festival of hyperlinks. I don't see a problem with it, unless -- as @Taladris says -- the large amount of hyperlinking creates clutter in the document. For something like a CV, where the spacing on the page is highly adjustable, I would think that you could probably have even a highly hyperlinked CV and take a little care to make sure that it does not look too busy to the eye.


I can tell you though why I don't feel the need to hyperlink my CV (and I imagine the reason holds more generally). It's simple: I also have a webpage, and anything which appears on my CV which could get linked to also appears on my webpage. Further, the translation between the two is straightforward: I have a section of my CV listing papers, and I also have an immediately visible link to a subpage containing papers from my main webpage (which, as you can see, is no frills to say the least, but it seems to get the job done).



Although I am certainly no expert on the visual display of information, webpage design (you'll know that immediately if you clicked on the above link) or anything like that, it is my opinion that a webpage is a more natural medium to have clickable content than a CV.


I think every young academic should have a professional webpage. If they do have one, I'm not sure that a heavily hyperlinked CV is necessary, although again I see no harm in it.


Thursday, 25 October 2018

genetics - Why has grey hair evolved?


A vast majority of humans get at least some grey hair as they age. As far as I know this applies to both genders and all races. Presumably this means that at least some grey-haired humans have a noticeable reproductive advantage, or maybe they had one in the recent past.


Theoretically, because this feature is so prevalent, there must be a strong evolutionary pressure to keep it. Am I right? If so, what is it?



Answer





Presumably this means that at least some grey-haired humans have a noticeable reproductive advantage, or maybe they had one in the recent past.



No, it doesn't. Natural selection is not that strong; it doesn't optimize every single physical trait for maximum reproduction.


And as others have mentioned, having lots of grey hair usually happens after reproduction is over. Historically, most women had done most of their reproducing before they had any grey hair.


ethics - What was offensive about the "ladies lingerie department" joke, and how can I avoid offending people in a similar way?





Moderator note: A follow-on question about how to avoid the behavior described here has been posted.




In recent news, two academics are at odds over this incident (full article here):



The fuss started when [Prof. X] and [Prof. Y] ended up in the same crowded elevator during a conference at a Hilton in San Francisco last month. [Prof. Y] said she offered to press the floor buttons for people in the elevator, whom she described as mostly conference attendees and all, except one other woman, white middle-aged men. Instead of saying a floor, [Prof. X] smiled and asked for the women’s lingerie department "and all his buddies laughed," [Prof. Y] wrote in a complaint, the details of which he disputed, to the association later that day.




This incident has escalated to the point that the academic organization that organized the conference has decided to sanction Prof. X.


I don't understand why the joke was funny, but that's not really important. I would like to understand why it was offensive. Specifically, I'm wondering



  • In what way was this comment offensive?


The question above is not rhetorical or sarcastic; I am completely sincere. I am worried because I don't understand precisely what was offensive, so I fear that I might do something similar. I have wondered whether the remark was offensive because:



  • It referred to underwear

  • It referred to women (in any way) and was cause for laughter


  • There is some unstated assumption about his reason for supposedly going to a lingerie department


But I really have no idea, and I want to understand. I could not find an answer in any of the news pieces on this incident.


I realize that this question might get closed as off-topic. However, I think it is wrong to assume that no part of this is specific to academic culture (and if that is the case, that's part of the answer). It certainly occurred in a uniquely academic environment, and it is a dispute between academics and an academic society that seems to jeopardize at least one academic career.


Please refrain from using this as a place to express your opinion on who is right in this dispute. That's not what I'm asking.




Advertising one's publication



After posting a preprint on arXiv, or after an accepted paper appears online, how does one bring others' attention to it?


(Unless one is already a big name in one's field, or one proves a long-standing and well-known open problem, I doubt that just waiting for things to happen is going to suffice.)


For sure one can give talks or present posters at relevant conferences. But are there any other methods of bringing the work to the attention of others who might be interested and for whom it may be beneficial?


The question is about both "classical" methods and any relevant Internet tools.




Wednesday, 24 October 2018

metabolism - Does Glycolysis produce lactate, or pyruvate?


EDIT: Somebody suggested that this is the same question as this one; it isn't. This one is asking about the definition of glycolysis; that one was asking about the definition of fermentation.


Does Glycolysis produce lactate, or pyruvate?


I'm aware that ultimately in the human body, after sugar is converted into pyruvate, the pyruvate is converted into lactate if fermentation happens, and is not if aerobic respiration happens instead.


My question is about the term "glycolysis".


I notice that most sources seem to say glycolysis ends with pyruvate


e.g.



Glycolysis.. is the metabolic pathway that converts glucose.. into pyruvate


The only time lactate comes into it is after glycolysis. (So the Wikipedia page on glycolysis mentions this in the "Post-glycolysis processes" section, https://en.wikipedia.org/wiki/Glycolysis#Post-glycolysis_processes: "pyruvate is converted to lactate".)


However, on the other hand, I see some sources, even another Wikipedia article titled "Anaerobic glycolysis" (which the page itself says isn't well sourced), that have glycolysis ending in lactate.


https://en.wikipedia.org/wiki/Anaerobic_glycolysis "Anaerobic glycolysis is the transformation of glucose to lactate when limited amounts of oxygen (O2) are available"


Also here https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4343186/ "we contend that La− is always the end product of glycolysis"
(Putting aside their controversial claim that it is always the end product, I'm interested here in their idea that it is ever an end product, that is, in their use of the term glycolysis.)


So it seems that there are these two positions.


For the purposes of this question I'll call one the lactate position and the other the pyruvate position.


One position, the lactate position, is held by the sources that place lactate as the end product of glycolysis, thus counting not just sugar->pyruvate but sugar->pyruvate plus the whole fermentation process as glycolysis.


The other, the pyruvate position, is held by the sources that count purely sugar->pyruvate, and just that, as glycolysis.



I'm wondering whether both definitions are correct, i.e. both usages are valid, or whether one of them (e.g. the lactate one) is an odd one out, such that most academic texts wouldn't use that definition and would use the pyruvate position for their definition of glycolysis.


Note: I had written that glycolysis ends with pyruvate {or pyruvic acid, which dissociates into pyruvate}, and that lactic acid fermentation ends in lactate {or lactic acid, which dissociates into lactate}. What is in the curly braces is wrong, and I have been corrected on it: glycolysis produces pyruvate, and lactic acid fermentation produces lactate. The reason (answerer David explains) why lactic acid fermentation bears that name is that it is named after the "lactic acid bacteria" that carry out this type of fermentation: "Lactic acid bacteria are named for their effect on the medium in which such bacteria grow, not for the ionization state of lactic acid in the cell and when bound to enzymes (about which the namers could have had no knowledge)." There are two types of "lactic acid fermentation" (https://en.wikipedia.org/wiki/Lactic_acid_fermentation): homolactic fermentation and heterolactic fermentation. Humans do homolactic fermentation, which produces lactate and no ethanol, as opposed to heterolactic fermentation, which produces lactate and ethanol (https://www.onlinebiologynotes.com/different-fermentation-pathway-bacteria/).


Also, note that muscle cells do what is called "lactic acid fermentation", but the idea that they produce lactic acid is a myth that has been commonly propagated in sports science (I guess perhaps partly as a result of what I think is biology's poor nomenclature, namely that the process is called lactic acid fermentation). Muscles don't produce lactic acid; what they work with is lactate, not lactic acid. This is verifiable by googling "humans produce lactate not lactic acid", e.g. the first line here mentions that myth and calls it out as a myth. The body of the following question and its answers here are related and very interesting.



Answer



I think you will find all textbooks (e.g. Berg et al., Ch. 16) describe glycolysis as the conversion of glucose to pyruvate, as this is how it has been defined and considered in countless biochemical papers. The subsequent reactions of pyruvate are regarded as separate metabolic steps or pathways.


The title of the short review article you cite ("Lactate is always the end product of glycolysis") has misled you; it was obviously meant to be controversial. It is the ambiguous term "end product" that is the (deliberate?) cause of the problem. What the article suggests is that the product of glycolysis, pyruvate, is always, at least partially, converted to lactate in animal cells. It would have been better entitled "Lactate is always produced from the pyruvate generated in glycolysis". Whether or not that is true (and that is not your question as I understand it), the conversion of pyruvate to lactate is not considered to be part of glycolysis any more than its conversion to acetate.
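For reference, the standard textbook stoichiometry (as I recall it; check Berg et al. for the exact conventions) makes the boundary explicit: glycolysis proper stops at pyruvate, and the lactate step is a separate reaction catalysed by lactate dehydrogenase.

```latex
% Overall glycolysis, glucose to pyruvate (standard textbook form):
\text{Glucose} + 2\,\text{NAD}^{+} + 2\,\text{ADP} + 2\,\text{P}_{i}
  \longrightarrow 2\,\text{Pyruvate} + 2\,\text{NADH} + 2\,\text{H}^{+} + 2\,\text{ATP} + 2\,\text{H}_{2}\text{O}

% Separate, subsequent step (lactate dehydrogenase), not part of glycolysis:
\text{Pyruvate} + \text{NADH} + \text{H}^{+} \longrightarrow \text{Lactate} + \text{NAD}^{+}
```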


There may be ambiguity in the use of the ancient term ‘fermentation’, but not with glycolysis and other metabolic pathways established in the twentieth century.


evolution - Why and how does complexity usually tend to increase through time?



The question of complexity is classic in the very first lectures of evolutionary biology where the teacher usually tries to tell the students that complexity does not necessarily increase and that humans are not more complex than other organisms.


My questions are:



  • Why does complexity tend to increase through evolutionary time?

  • What are the different hypotheses to explain this pattern?


When writing "Mass extinction" on google image, we find many graphs displaying the number of families (or other taxa) through evolutionary times with the five mass extinctions. What would it look like to draw such graph replacing the family richness in the y-axis by :



  • Mean complexity among all living things?

  • Complexity of the most complex taxon?





I suppose that anyone who wishes to answer this post will necessarily need to define the word "complexity". He or she might define it in terms of number of genes, number of metabolic pathways, length of DNA sequence, number of cell types, or some kind of index taken from information theory. When asking my questions I had in mind a definition close to "number of genes" or "number of metabolic pathways".



Answer



This is actually a very interesting yet difficult question to give a single precise answer to. I will try and summarize for you a "meta answer":


Complexity Science: Some consider complexity not to be a biological topic as such, since it is a property that accumulates in non-biological systems too, e.g. economics, technology, music, language, in fact anything that "evolves" through time. This new field of science is called "Complexity Science" or "Complex Systems" and is primarily a field of mathematics or information theory: Complex systems


Complexity in Biology: What I can say is that these kinds of questions have started an almost new field in biology, called "diversity evolution"; here is a very interesting paper: Diversity Evolution.


Defining Complexity: You were right to mention that complexity first needs to be defined, and as this is an early field of science, this is where much of the focus has been recently. There are quite a few definitions, but it is perhaps too controversial to list any particulars... HERE is a whole UCL lecture dedicated to defining complexity!


Topics of Complexity Science: I have written enough, so perhaps this lovely diagram of the topics of Complexity Science will be of some help:


[Diagram: a map of the topics of Complexity Science]



publications - Publishing while changing institutions



I'm submitting a mathematics article concerning research that was done entirely while I was an undergraduate at University A. This August I will be a graduate student at University B. The only funding I received came from the NSF through a ten-week program at University C at the very beginning of the project, after which I finished the research on my own while attending University A. On the other hand, while University B has nothing to do with the publication, it is the most up-to-date institution as far as contact information goes.



Should I put University A or University B as my affiliation (or both)?



Answer



The standard solution for your question is to indicate your current address in a footnote:



Alexander Gruber (a,†), Another O. Tor (a)


(a) University A, Department of Criminology, 221B Baker Street, London (UK)
(†) Currently at University B, Logic Department, Whitehaven Mansions, Sandhurst Sq, London (UK)



If you didn't use any of the resources of University B, that is the way to go. Otherwise, University B should be listed as an affiliation.


The situation regarding University C is less clear to me: if you were affiliated with them, you need to list them. If they only "handled" the money given out by the NSF, then simply mention them in the funding or acknowledgments section:




Acknowledgments
A. G. is grateful to NSF program #132-1237 for funding, administered by University C.



graduate school - How should I deal with discouragement looking at others success?


I'm a master's CS student. I have had many troubles in my life. I didn't have proper schooling and was in difficult situations, which has left me not so good compared to my peers now in graduate school. For example, I had problems with math and so on, and when I got into my new graduate school almost everyone was far better than me. Then I started working day and night to improve my skills, and after a year of really hard work I have now almost matched their skills. However, in that time my peers were also developing their skills, doing fancy projects, and so on. Sometimes when I look at their success and projects I feel stressed and discouraged that I still have a long way to go to achieve that, or that I'm putting so much work into myself because of the problems I went through and yet still find myself far behind, whereas my peers are enjoying their lives and at the same time achieving something. I know life is not fair, and I'm not jealous, but sometimes I just feel sorry for myself that I work really hard without much to show for it :(. This is also leading to some self-confidence issues: whenever I see one of my peers I get stressed, and I am sometimes afraid to discuss a topic with them because I don't want to look bad for not knowing stuff that is easy for them.


This always leads to a voice in my head saying: oh, if only a professor at MIT or Stanford saw how hard-working you are, you might be there now. But of course I won't be, because I will always be far behind the students there because of what I went through.



What to do to overcome this?



Answer



I will apologize in advance, because this answer won't give you what you are probably looking for; but it might give some perspective, so I will reply anyway, hoping that it helps somewhat.


First off, know this: you are not alone! It's actually pretty common to look at your peers (at the office and elsewhere worldwide) and feel shitty about the "insignificance" of your accomplishments compared to those of others. To further strengthen the point, I can say that I am battling with this every single day: despite what I hear from others about the quality or importance of my work, when I look around and see what others achieve, I feel depressed...


Secondly, it's also good to try and remember that life isn't a competition. Well, some aspects of life are competitive, for sure, but you cannot go about living your day competing with others in every single aspect of your life. This is a simple but very powerful insight, and also very hard to digest properly and take to heart.


Think of all the aspects of your life, from research to parking your car, from buying groceries to whatever sport you enjoy the most... I can guarantee you that there will be several (if not more) people in your immediate surroundings who are "better" than you in each and every single aspect, if you isolate them one at a time. But I can also assure you that they won't be the same people if you consider different aspects. Overall, you are the person you are, and constantly comparing yourself to others in single aspects (and focusing on your shortcomings) will only drive you towards unhappiness.




So does that mean you should just relax and go with the flow? Absolutely not! You have to play catch-up if you can identify shortcomings of yours in particular fields (like maths or programming experience). It'll be frustrating, it'll be long hours, it'll be effort... Try to focus on setting goals for yourself when you are in the catch-up phase.


I strongly recommend checking out the S.M.A.R.T. goals concept, which helps in getting things done and bagging that sweet feeling of accomplishment, little by little.


Hope this answer helps to some extent and it all works out in the end!



job search - Is it commonplace for a recruiter to ask to see an unpublished PhD thesis?


I would have thought the document and overall findings are to be a closely guarded secret until defense or publication, so you can imagine my horror that a hiring professor would ask if he can have a pdf of my dissertation. This is in the context of a job application, whilst he decides whether or not to invite me for an interview. I have already sent them the other standard documentation that was requested in the advert. Is his request as unorthodox as it seems to me?



Answer



In a word, yes. It is very common for academic employers to want to know about a candidate's research in progress, and they often ask for research plans, unpublished manuscripts, reports on ongoing projects, etc. From the employer's point of view, they want to know as much as possible about what the candidate is doing, so as to evaluate the promise of their research program and their productivity. This is especially true for junior candidates who do not already have a large body of published work. So a request for a draft of a PhD thesis would not be out of line.


When a candidate shares such material as part of their application, the hiring professor or committee has an ethical obligation to hold it in confidence. They should not circulate it beyond those people within the department who are involved in the hiring decision. Also, it would be ethically inappropriate for anyone with access to this material to exploit it for the gain of their own research program (e.g. by trying to solve the candidate's thesis problem before they do, or giving it to one of their own students). As the candidate, you have the right to expect that this will not happen.


Of course, as a matter of practice, if you want the job, you don't have much choice but to give them what they ask. But I wouldn't see such a request as unusual or unreasonable, and I don't think you need worry about them using it unfairly. If you are still worried, you could send them the thesis along with a note saying "since this is work in progress, I would ask that you keep it in confidence".


Also, I would say it's an exaggeration to say a thesis should be a "closely guarded secret" or to react with "horror" to a request to share it. It's generally prudent not to share unpublished work indiscriminately, but it's not as if it were missile launch codes. If there is something to be gained by sharing it with someone (e.g. useful input from an expert, a potential collaborator, a job) then often that's a good idea. It seems to be pretty common for people starting out in academia to overestimate the risks of people stealing their work: yes, there are horror stories, but in the long run, you usually have more to lose from excessive secrecy than from reasonable openness. Paranoia is generally not a helpful trait for an academic.


genetics - What makes a gene dominant or recessive



We all carry two copies of each gene (outside of male sex chromosomes). If the two differ from each other often one is dominant and one recessive. How does this mechanism work on a molecular level? What mechanism determines which gene gets expressed?




Tuesday, 23 October 2018

productivity - What productive academic work can you do with minimal attention in a small (<30 minute) block of time?



OK, here is a major issue I have with much of my work (which is experimental/computational research): I am a graduate student, and I spend much of my day doing 5-, 10-, 20- and 30-minute experiments (both biological and computational), which require setup and monitoring but leave small to medium-sized chunks of time when I'm really not doing much (essentially just checking every 1-2 minutes to make sure everything is still working). During these periods I either a) attempt to do other work or b) procrastinate. Neither is great, since either a) I'm constantly shifting my focus from the other project and end up making mistakes in it, or b) I procrastinate (usually by reading articles, Twitter, etc.).


Does anyone have advice for how to deal with these small, awkward periods of time (< 30 minutes)? Is there something you find useful to do that also tolerates having your focus pulled away from it incessantly?


As an example: http://xkcd.com/303/



Answer



ff524's answer is awesome as usual, but the core problem for you may be that most of these suggestions are not, or at least not directly, useful to your research. If, as you say, most of your day is spent in this way, even "productive procrastination" may be too much procrastination and too little actual progress.


In that case, you have two options:



  1. Learn how to get actual work done in those short chunks. Being able to context-switch without getting thrown off completely is definitely a skill that can be learned. You will probably never get as efficient as when you can devote your full attention to the task, and you will likely need to double-check what you did while multi-tasking, but getting things done slower than usual is much better than not getting anything done at all.

  2. Automate better (and, hence, increase the time between needing to check up on your results). In my experience, if you need to check every other minute or so what your experiments are doing, then your tooling is not good enough. Many things can be scripted so that they basically run from beginning to end on their own. Further, you can configure a system monitoring tool so that it (for instance) sends you an email when something abnormal happens; a minimal sketch of such a watchdog appears after this list. Of course this requires non-trivial IT skills, but in my experience most students working in the experimental sciences are able to grok these things quite quickly if they devote a few days to it (believe me, the time necessary to learn how to automate pays off manifold in the long run).



In practice, you probably want to go for a combination of both of these options. Try to increase the time your experiments are chugging along on their own, and in parallel, train yourself to make the best use of this time.
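To make option 2 a bit more concrete, here is a minimal sketch of the kind of monitoring script I have in mind. It assumes your experiment appends to a log file and that you have an SMTP server you are allowed to send mail through; the file path, mail host, and addresses are placeholders, not recommendations of any particular setup.

```python
# Minimal watchdog sketch (assumptions: the experiment writes to LOG_FILE,
# and MAIL_HOST is an SMTP server you are allowed to send through).
import os
import time
import smtplib
from email.message import EmailMessage

LOG_FILE = "/path/to/experiment.log"   # placeholder
MAIL_HOST = "smtp.example.edu"         # placeholder
ALERT_TO = "you@example.edu"           # placeholder
STALE_AFTER = 10 * 60                  # alert if no log activity for 10 minutes
CHECK_EVERY = 60                       # poll once per minute instead of by hand

def send_alert(subject: str, body: str) -> None:
    """Send a plain-text alert email via the configured SMTP server."""
    msg = EmailMessage()
    msg["Subject"] = subject
    msg["From"] = ALERT_TO
    msg["To"] = ALERT_TO
    msg.set_content(body)
    with smtplib.SMTP(MAIL_HOST) as smtp:
        smtp.send_message(msg)

while True:
    # If the log file has gone quiet for too long, assume the run stalled.
    age = time.time() - os.path.getmtime(LOG_FILE)
    if age > STALE_AFTER:
        send_alert(
            "Experiment may have stalled",
            f"{LOG_FILE} has not been updated for {age / 60:.0f} minutes.",
        )
        break  # stop nagging; restart the watchdog together with the experiment
    time.sleep(CHECK_EVERY)
```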


Edit: This question has been added by the OP in a comment. I think it is interesting, hence I added it to my answer:



what can you do in the case of active code development? For example, when I am actively developing a piece of code to analyze data, that code may take a few minutes (up to 10) to run, after which I assess if it is working. How do you automate that process?



This actually has more to do with standard software engineering practices than with the sciences, but I think it is a helpful concept nonetheless. When you are trying out different implementations, with a large possibility of error, make sure that your application fails fast. That is, make it so that your application does not take ten minutes to fail, but does the complex, error-prone stuff right at the beginning. Two simple examples from my own research:


Example 1: Say you do research in machine learning. Your application first trains an artificial neural network (ANN) on your data (easy, as you are using an external library, but it takes ~15 minutes due to algorithmic complexity), after which you do some postprocessing (trivial, executes fast) and a statistical analysis of the results (also fast to execute, but relatively complex, error-prone code). If you always run the entire application and have the code fail during the statistical analysis, you lose 15 minutes on every run for a step that you already know works. A better solution would be to train the model once and store it to disk, then write code that only loads the ANN from disk and continues from there, so any failure shows up almost immediately. Almost no dead time anymore. When the statistical analysis is working, you can revert to doing everything in the expected order.
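A rough sketch of that caching idea in Python; `train_network`, `run_statistics`, and `data` are hypothetical placeholders for whatever your own library and dataset provide:

```python
# Fail-fast sketch: cache the slow training step so that the fast but
# error-prone analysis code can be rerun on its own.
# train_network(), run_statistics(), and data are hypothetical placeholders.
import os
import pickle

MODEL_CACHE = "trained_ann.pkl"

def get_model(data):
    """Load the trained ANN from disk if available; otherwise train and cache it."""
    if os.path.exists(MODEL_CACHE):
        with open(MODEL_CACHE, "rb") as fh:
            return pickle.load(fh)
    model = train_network(data)          # the ~15 minute step, done only once
    with open(MODEL_CACHE, "wb") as fh:
        pickle.dump(model, fh)
    return model

model = get_model(data)                  # near-instant after the first run
results = run_statistics(model)          # the code you are actually iterating on
```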


Example 2: You have written a complex, multi-threaded testbed, which runs distributed over multiple physical servers. You know you have a synchronization issue somewhere, as your application non-deterministically dies every couple of minutes, but you have no idea where exactly. Hence, you repeatedly execute the entire application, wait for the error to happen, and then debug from there in different directions. Given that the error only happens every few minutes, you spend most of your time waiting. A better way is to take a page out of good software engineering practice: unit test all components in isolation before throwing everything together, specifically trying to cover the exceptional cases, and learn how to write good mock objects. Some amount of debugging of the integrated system will still be necessary, but you will not spend hours debugging components that are fundamentally broken.
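And a hedged illustration of what testing one component in isolation with a mock can look like in Python; the `pipeline` module and its `fetch_measurement`/`process` functions are made-up placeholders for your own code:

```python
# Sketch: unit-testing one component of a distributed testbed in isolation,
# replacing the network-facing dependency with a mock. The module and
# function names (pipeline.fetch_measurement, pipeline.process) are
# hypothetical placeholders for your own code.
import unittest
from unittest.mock import patch

import pipeline  # hypothetical module under test

class ProcessTests(unittest.TestCase):
    @patch("pipeline.fetch_measurement")
    def test_process_handles_missing_value(self, mock_fetch):
        # Simulate the exceptional case that is hard to trigger on the
        # real servers: the remote node returns no data.
        mock_fetch.return_value = None
        self.assertEqual(pipeline.process(node_id=3), "no-data")

    @patch("pipeline.fetch_measurement")
    def test_process_normal_value(self, mock_fetch):
        # The ordinary case: a measurement comes back and is passed through.
        mock_fetch.return_value = 42.0
        self.assertAlmostEqual(pipeline.process(node_id=3), 42.0)

if __name__ == "__main__":
    unittest.main()
```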


Are there any journals or conferences that take into account the availability of the source code when selecting the papers to publish?


All too many times I have read a research article claiming that the source code will be made available, and, when I look for it, it turns out that the source code still hasn't been released.



Are there any journals or conferences that take into account the availability of the source code when selecting papers to publish? By availability I mean present availability, not some vague promise of a code release sometime in the future, somewhere on the Internet.


Now, code availability is one thing and clarity is another. I have seen a lot of emphasis on a paper's clarity in the selection criteria; do some publication venues pay attention to code clarity during the paper selection process?


Obviously, I have the same issue with datasets, so I am wondering the same for them, i.e. are there any journals or conferences that take into account the availability of the dataset(s) when selecting the papers to publish?




human biology - Why do people and animals stretch out their bodies and what is causing this behaviour?


I noticed that my cat, which is only 6 months old, has started stretching its body from time to time. Then I thought that this motion doesn't look very natural from another cat's viewpoint, so my cat probably didn't suddenly imitate some other cat and learn it that way. This action must have been caused by something else, then. Or at least that's what I think.


Both animals and people stretch their muscles from time to time, but it doesn't seem to be caused by memory of others doing it. So how does this action originate in them?



Answer



Humans and other animals have lots of innate behaviors that are not learned from observation, i.e. behaviors that are hard-wired into our nervous system, and this is one of them. Suckling reflexes in mammals and the Moro reflex in human babies (which we grow out of) are other simple examples.


The stretching behaviours you are referring to are usually labelled pandiculation in humans (defined as involuntary stretching of the soft tissues), and yawning is often considered a special case of this. These kinds of behaviours are also normally related to transition periods between high-low activity in animals (Walusinskie, 2006). In practice, stretching functions as a way to reverse the muscular atonia during REM sleep, and is in this sense a way to restore homeostatic functions (Fraser, 1989; Walusinskie, 2006).


A paper by Rial et al (2010) deals with the evolution of sleep and wakefulness in mammals from our reptile predecessors, and indicates that stretching behaviours might have originated/evolved from post-basking activities, more specifically risk-assessment behaviours, such as:




...risk assessment behaviour (RAB) and consists of the suspension of current behaviour, to be replaced by head dipping movements, eye scanning, rearing and adopting stretch attending postures...



There is most likely much more to be said about this, and the paper cited above contains many references that can provide further clarification and evidence. A short comparative discussion of pandiculation is found in Fraser (1989).


evolution - Are there any multicellular forms of life which exist without consuming other forms of life in some manner?

The title is the question. If additional specificity is needed I will add clarification here. Are there any multicellular forms of life whic...