Tuesday 23 October 2018

productivity - What productive academic work can you do with minimal attention in a small (< 30 minute) chunk of time?



Ok, here is a major issue I have with much of my work (which is experimental/computational research): I am a graduate student, and I spend much of my day running 5, 10, 20 and 30 minute experiments (both biological and computational). These require setup and monitoring, but leave small to medium-sized chunks of time when I'm really not doing much (essentially just checking every 1-2 minutes to make sure everything is still working). During these periods I either a) attempt to do other work or b) procrastinate. Neither is great, since either a) I'm constantly shifting my focus away from the other project and end up making mistakes in it, or b) I procrastinate (usually by reading articles, Twitter, etc.)


Does anyone have advice on how to deal with these small, awkward periods of time (< 30 minutes)? Is there something you find useful to do that you can also shift your focus away from repeatedly without it falling apart?


As an example: http://xkcd.com/303/



Answer



ff524's answer is awesome as usual, but the core problem for you may be that most of these suggestions are not, or at least not directly, useful to your research. If, as you say, most of your day is spent in this way, even "productive procrastination" may be too much procrastination and too little actual progress.


In that case, you have two options:



  1. Learn how to get actual work done in those short chunks. Being able to context-switch without getting thrown off completely is definitely a skill that can be learned. You will probably never be as efficient as when you can devote your full attention to the task, and you will likely need to double-check what you did while multi-tasking, but getting things done more slowly than usual is much better than not getting anything done at all.

  2. Automate better (and, hence, increase the time between needing to check up on your results). In my experience, if you actually need to check every other minute or so what your experiments are doing, then your tooling is not good enough. Many things can be scripted so that they basically run from beginning to end on their own. Further, you can configure a system monitoring tool (or write a small watcher script) so that it, for instance, sends you an email when something abnormal happens; a minimal sketch of such a watcher follows below. Of course this requires non-trivial IT skills, but in my experience most students working in the experimental sciences can grok these things quite quickly if they devote a few days to it (believe me, the time spent learning how to automate pays off manifold in the long run).
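To make that concrete, here is a minimal sketch of such a watcher in Python. It assumes your experiment appends to a log file and that a local SMTP server is reachable; the file name, keyword, and email addresses are hypothetical placeholders you would replace with your own.

    # Hypothetical watcher: polls an experiment's log file and emails you
    # when an error keyword shows up, instead of you checking by hand.
    import smtplib
    import time
    from email.message import EmailMessage

    LOG_FILE = "experiment.log"      # placeholder: wherever your run writes its log
    ALERT_KEYWORD = "ERROR"          # placeholder: whatever signals trouble in your setup
    CHECK_INTERVAL = 60              # seconds between automatic checks

    def send_alert(body):
        msg = EmailMessage()
        msg["Subject"] = "Experiment needs attention"
        msg["From"] = "lab-monitor@example.org"    # placeholder addresses
        msg["To"] = "you@example.org"
        msg.set_content(body)
        with smtplib.SMTP("localhost") as server:  # assumes a reachable SMTP server
            server.send_message(msg)

    while True:
        with open(LOG_FILE) as f:
            log = f.read()
        if ALERT_KEYWORD in log:
            send_alert("The log contains an error entry; please check the run.")
            break
        time.sleep(CHECK_INTERVAL)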



In practice, you probably want to go for a combination of both of these options. Try to increase the time your experiments chug along on their own, and in parallel practice making the best use of the gaps that remain.


Edit: The OP added this question in a comment. I think it is interesting, so I have added it to my answer:



what can you do in the case of active code development? For example, when I am actively developing a piece of code to analyze data, that code may take a few minutes (up to 10) to run, after which I assess if it is working. How do you automate that process?



This actually has more to do with standard software engineering practice than with the sciences, but I think it is a helpful concept nonetheless. When you are trying out different implementations with a large possibility of error, make sure that your application fails fast. That is, arrange things so that your application does not take ten minutes to fail, but does the complex, error-prone work right at the beginning. Two simple examples from my own research:


Example 1: Say you do research in Machine Learning. Your application first trains an artificial neural network (ANN) on your data (easy, since you are using an external library, but it takes ~15 minutes due to algorithmic complexity), after which you do some postprocessing (trivial, executes fast) and a statistical analysis of the results (also executes fast, but relatively complex, error-prone code). If you always run the entire application and the code fails during the statistical analysis, you lose 15 minutes on every run for a step that you already know works. A better solution is to train the model once and store it to disk. Then write code that only loads the ANN from disk and runs the analysis directly after that, so a failure shows up almost immediately, with almost no dead time. Once the statistical analysis works, you can revert to doing everything in the expected order.
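Here is a minimal sketch of that train-once, cache-to-disk pattern; train_network and analyse are hypothetical stand-ins for your own training and analysis code.

    # Sketch: cache the slow, already-working step so the error-prone
    # step fails within seconds instead of after 15 minutes of training.
    import os
    import pickle

    MODEL_CACHE = "trained_ann.pkl"   # assumed cache location

    def train_network():
        # Hypothetical placeholder for the expensive (~15 minute) training step.
        return {"weights": [0.1, 0.2, 0.3]}

    def analyse(model):
        # Hypothetical placeholder for the fast but error-prone statistical analysis.
        return sum(model["weights"]) / len(model["weights"])

    def get_model():
        # Train only if no cached model exists; otherwise load it from disk.
        if os.path.exists(MODEL_CACHE):
            with open(MODEL_CACHE, "rb") as f:
                return pickle.load(f)
        model = train_network()
        with open(MODEL_CACHE, "wb") as f:
            pickle.dump(model, f)
        return model

    if __name__ == "__main__":
        model = get_model()      # fast after the first run
        print(analyse(model))    # any bug here now surfaces almost immediately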


Example 2: You have written a complex, multi-threaded testbed, which runs distributed over multiple physical servers. You know you have a synchronization issue somewhere, since your application non-deterministically dies every couple of minutes, but you have no idea where exactly. Hence, you repeatedly execute the entire application, wait for the error to happen, and then debug from there in different directions. Given that the error only happens every few minutes, you spend most of your time waiting. A better way is to take a page from good software engineering practice: unit test all components in isolation before throwing everything together, specifically trying to cover the exceptional cases, and learn how to write good mock objects. Some debugging of the integrated system will still be necessary, but you will not spend hours debugging components that are fundamentally broken.
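As an illustration, here is a minimal sketch of testing one component in isolation with a mocked-out dependency, using Python's unittest and unittest.mock; the Worker class and its peer are hypothetical stand-ins for one piece of such a distributed testbed.

    # Sketch: exercise a single component with a mocked remote dependency,
    # including the exceptional case, without starting any servers.
    import unittest
    from unittest.mock import Mock

    class Worker:
        """Toy component: fetches a value from a remote peer and doubles it."""
        def __init__(self, peer):
            self.peer = peer

        def step(self):
            value = self.peer.fetch()   # a network call in the real system
            if value is None:
                raise ValueError("peer returned no data")
            return 2 * value

    class WorkerTest(unittest.TestCase):
        def test_doubles_peer_value(self):
            peer = Mock()
            peer.fetch.return_value = 21      # no real server needed
            self.assertEqual(Worker(peer).step(), 42)

        def test_rejects_missing_data(self):
            peer = Mock()
            peer.fetch.return_value = None    # cover the exceptional case explicitly
            with self.assertRaises(ValueError):
                Worker(peer).step()

    if __name__ == "__main__":
        unittest.main()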


