I have been a professional software developer for a number of years; I'm also an academic researcher, and my research has involved a lot of software development.
I sometimes feel as though my industrial experience has been a hindrance in my research, as the goals of writing software in a research context feel contradictory to those in industry.
In industry, code should ideally be maintainable, bug-free, well-refactored, well-documented, and rigorously tested: good quality, in short. Best practice says these things are worth the time (I agree).
In academia, the goal is to write as many quality research papers as possible in the shortest possible time. In this context, code is written to run the experiment, and might never be looked at again (we are judged on our papers, not our code). There seems to be no motivation to write tested, maintainable, documented code; I just need to run it and get the result into my paper ASAP. Consequently, the "academic" code I've written is poor quality from a software engineering perspective.
The problem is that I either spend too long (unnecessarily) getting my "research" code to industry quality, or I publish work based on "bad quality" code and feel like a fraud.
My career progression is dependent on me writing "bad" code!?
The "craft" of software development is a huge subject - but where is the best practice for academic research? Nobody writes unit tests for conference paper code!
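That said, there is a middle ground between a full industrial test suite and nothing at all. A minimal sketch of what I mean by a lightweight sanity check (`run_experiment` here is a made-up stand-in for whatever function actually produces the numbers in the paper): a handful of assertions on tiny inputs you can verify by hand, run once before the real experiment.

```python
def run_experiment(n_samples):
    """Hypothetical experiment: mean of the first n_samples squares."""
    results = [x * x for x in range(n_samples)]
    return sum(results) / n_samples

def sanity_check():
    """Not a test suite, just enough to catch bugs that would
    invalidate the paper's numbers."""
    assert run_experiment(1) == 0.0                  # smallest case, by hand
    assert run_experiment(3) == (0 + 1 + 4) / 3      # tiny case, by hand
    assert run_experiment(100) >= 0                  # basic invariant

if __name__ == "__main__":
    sanity_check()
    print("sanity checks passed")
```

A few minutes writing checks like these costs far less than a retraction, without anywhere near the effort of industry-grade testing.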
Does anyone find themselves in a similar situation? Does anyone know of formal methodologies for "research" code?