date | nb_tokens | text_size | content
---|---|---|---
2014/02/26 | 740 | 2,985 | <issue_start>username_0: I plan to apply to graduate school next year in Statistics or Biostatistics, and due to some financial constraints, I will not be able to attend unless the tuition is waived at the minimum. I am hesitant to commit to a 5-year PhD program for a funded education since I may run into some financial issues a few years in and may not be able to complete it. I know that most schools use their Master's program as a cash cow, and thus such programs are usually not funded. I was wondering if anyone here knew of any statistics programs that go against this and actually do fund their Master's students?<issue_comment>username_1: If there is an article related to the presentation of the tool, framework, or library, then a proper citation should be used.
If you are looking for a good way to acknowledge tools, frameworks, or libraries not associated with any article (such as Python), then you can do this in a footnote.
Upvotes: 0 <issue_comment>username_2: I'd put emphasis on the **literature review** section of your article and/or thesis. From this perspective, there are two possible citation styles. First, instead of referencing a programming language, reference the concept that you are writing about. For instance, instead of saying
```
C++ (Stroustrup, 1986) is a programming language.
```
say
```
Stroustrup (1986) extends C to develop object-oriented programming by doing so and so.
```
In this way, you enrich your literature review rather than simply accumulating references.
On the other hand, if the tool is quite novel and not yet used anywhere in the literature, then cite who developed it and where. For instance, suppose SuperComp has developed SuperLang, which you want to cite. It could look like this:
```
SuperComp (2014) develops SuperLang for this and that, so on and so forth.
```
The reference for it could be an online resource, book, manual, etc., and will simply follow your referencing style, e.g., APA, Harvard.
So, you can simply cite OpenCV or VLFeat as either a website, an online resource, a related paper, or the patenting or licensing author(s).
Upvotes: 1 <issue_comment>username_3: A good citation has the following properties:
* Gives credit where it is due for an idea, tool, dataset, etc., that is not your own.
* Directs the reader of your paper where to look if he/she wants to verify that your claims about the idea, tool, dataset, etc., are correct.
Any of the following can be used to cite a tool, as long as the above properties are satisfied:
* If the authors of a tool explain how they would like it to be cited, follow those recommendations.
* If there is a paper or tech report about the tool, cite that, because that is what the authors would probably want (if they didn't specify).
* If there is no paper or TR, cite the website of the tool.
Of course, in most cases, you're not the first person to cite the tool - go search Google Scholar for the name of the tool, and find out how others cited it.
Upvotes: 3 |
2014/02/26 | 577 | 2,243 | <issue_start>username_0: What is the proper way to address a person who is an officer in the military (USA) and who at the same time has a Ph.D.? Would it be Dr. <NAME> or General Dr. <NAME>?<issue_comment>username_1: Everything I've seen suggests that "<NAME>" without any "Dr." is correct (you've never heard of "General Doctor David Petraeus", right?); for example, military doctors are usually addressed using their rank, not as "Doctor." In part it just sounds clunky to try to use both titles. Similarly, at the Virginia Military Institute (bizarrely, in my opinion) all of the faculty are officers in the Virginia state militia, and are listed on the website with military titles (<http://www.vmi.edu/Content.aspx?id=4294974313>), not with the title "Professor" or "Doctor." I think "<NAME>, Ph.D." is more common, though discouraged in some sources I read. I think it's hard to go wrong just addressing someone in the military by their rank.
**EDIT:** I should probably say that isn't to say that you never combine Doctor with another title: "<NAME>" is standard in Germany, (though I'm not allowed to call myself that, since I have a doctorate from the US) and "Reverend Doctor" (or even "Most/Right Reverend Doctor") are established titles, though more common in Britain than the US. Just in the specific context of military titles in the US, it's not standard to mix them with other titles.
Upvotes: 5 [selected_answer]<issue_comment>username_2: CPT <NAME>, Ph.D.; MAJ <NAME>, M.D.; or LTC <NAME>, J.D. are more correct when addressing doctorate-holding officers in writing.
Although it is true that military rank usually comes before academic titles in most cases, there are some exceptions.
Doctors in the Medical Corps are often addressed as "doctor." Many medical officers prefer to be called doctor, as this title reflects a professional-client relationship rather than a subordinate-superior one. In addition, JAG officers are sometimes addressed using the title of "counselor."
Upvotes: 2 <issue_comment>username_3: In general, the only time a rank and title are used together is with chaplains. Formally it is Chaplain (Major) <NAME>, and informally it is Chaplain Doe.
Upvotes: 0 |
2014/02/26 | 1,919 | 8,437 | <issue_start>username_0: Journals like *Nature* and *Science* have impressive impact factors. How and why did these top journals become top journals? Why are they able to sustain their statuses?<issue_comment>username_1: Stringent review standards, leading to articles of high integrity, could account, in part, for these journals' rise to 'top' status.
Upvotes: 1 <issue_comment>username_2: I think this is analogous to "why is Harvard a good university, and able to maintain its standing as such?" A partial answer is that (1) it was founded a long time ago, and (2) it was founded by serious people. Given that, further serious people will tend to gravitate to the same institution, creating an inertia in the rankings.
A quote from *The Crucible* (set in the year 1692):
>
> I am not some preaching farmer with a book under my arm; I am a graduate of Harvard College.
>
>
>
Upvotes: 4 <issue_comment>username_3: All journals that have a high standing owe it to the support of the community. If the community loses interest, the journal will drop in the ranking. The top journals have therefore attracted authors for one reason or another. The editorial staff of journals try to maintain this status by making sure the work published there is of good quality and will be cited. It is thus not impossible for new journals to attain high status as long as authors provide the necessary papers. To this mix, we now also add the impact factor and other bibliometric factors. They matter now but have not been the driving factor in making the older journals what they are today.
I work with a more modest journal and I can definitely state that improving your impact factor is far more difficult than dropping in ranking. But if your ranking becomes high enough (no specific number will be relevant since it varies between fields), a journal becomes self-fuelling, since many want to publish their material there and competition stiffens, leading to strong selection.
So the standing of Science and Nature rests in part on their long history, in part on the hard work of the journals themselves, and in part on the present-day need for authors to publish in journals ranked as highly as possible, since that is what forms the basis for most evaluations in academia.
Upvotes: 6 [selected_answer]<issue_comment>username_4: Journals gain their status mainly by being the first to offer publications in a new field and secondly by recruiting influential people in the field as editors.
Once a journal has a high impact factor people will want to publish there because authors themselves are judged on the impact factors of the journals they publish in. This means the editors can set a higher standard for acceptance. Since the impact factor is based on citation rates it then increases further. Most academic journals publish only papers and have very little editorial content, so this positive feedback mechanism is the main thing that maintains their top ranking. The mechanism operated even before impact factors were formally measured because people still knew roughly what the impact and standing of a journal was.
It is very hard for a new journal to get a good reputation because it takes two years for it to even be given an impact factor, and this will remain low because the journal will initially fail to attract the best papers. A new journal needs to offer something different to succeed. It may specialize in a new field that does not already have a top journal, or it may offer open access for low charges in order to get going, but the established journals are always very hard to displace.
The other factor that keeps a journal in the top ranking is its editorial board, but this is not because the editorial job requires their skills and knowledge. What the journal needs is a good supply of peers willing to review articles well and it is not easy to persuade academics to dedicate their valuable time to this chore when they don't get paid for it. The main reason they do agree to review articles is to impress the editors because the editors are influential people in the field who may help them get their next job.
Editors themselves take on the role because of the prestige of being an editor for a top journal and because they get an opportunity to identify reviewers who understand the field so that they can recruit them. This establishes another positive feedback that helps maintain the journal's top spot. One of the few things that can destabilize a top journal's position is the resignation of its most influential editors.
Whether this amounts to a good system for academia is very much open to debate. Most top journals are in the hands of big commercial publishers who understand how the system works and who have cleverly developed and promoted the journal impact system to their advantage. They make huge profits taking money from scarce scientific funds, while most of the hard work in publishing is done by unpaid authors, reviewers and editors. Efforts by academics to change this usually fail because they don't understand how the system works, or because they don't have the time or funding to realize their ambitions. Another reason seems to be that governments and funding agencies like the big profitable corporate publishers, so they tailor legislation to suit the publishers rather than the academics. Also, the academic societies (APS, AMS, etc.) who supposedly oversee the interests of the fields are themselves funded mostly through their journals, so they have a massive self-interest in perpetuating the system.
Upvotes: 2 <issue_comment>username_5: Suppose you start with a collection of journals and people who want to publish quality papers in them, who arrive over time. Suppose that each new quality paper is sent to a journal which is chosen at random, but where the probability of choosing journal X is an increasing function of the number of quality papers which have already appeared in X. Then you are dealing with a [preferential attachment process](http://en.wikipedia.org/wiki/Preferential_attachment) and you will find that after a long time, most of the quality papers will be appearing in a few top journals and there will be lots and lots of mediocre journals with very few quality papers.
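As a rough illustration, here is a minimal simulation sketch of such a preferential attachment process (the numbers of journals and papers are arbitrary assumptions, not real data):
```python
import random

random.seed(0)
n_journals = 100           # assumed number of journals
papers = [1] * n_journals  # each journal starts with one quality paper

for _ in range(10000):  # quality papers arriving over time
    # choose a journal with probability proportional to the number
    # of quality papers it has already published
    j = random.choices(range(n_journals), weights=papers)[0]
    papers[j] += 1

papers.sort(reverse=True)
print(papers[:5])   # a handful of journals accumulate most papers...
print(papers[-5:])  # ...while the rest stay small
```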
Naturally it's a very simplified model, but the same argument can be used for the sizes of cities, views of Youtube videos, distribution of wealth, etc. See [Chapter 18 of Easley and Kleinberg's](https://www.cs.cornell.edu/home/kleinber/networks-book/networks-book-ch18.pdf) textbook for more.
Upvotes: 4 <issue_comment>username_6: One related comment-
There are some comments here equating high impact factor with high prestige. I think this generally holds true, especially for people working firmly within the boundaries of a single discipline. For these people, of course, reputable journals are going to be more widely read and thus cited more frequently, and have higher impact factors.
For people whose work is more interdisciplinary, the relationship between prestige and impact factor is not so straightforward, because the size of the disciplinary audience can be very different.
For example, my work is interdisciplinary and lies at the boundaries of sociology, economic geography, management/organizations, and Asian studies. Journals in each of these fields have different audiences and numbers of scholars, and thus different impact factors. For example, Asian studies has a number of high-quality, high-prestige journals that publish excellent papers, but because of the size of the core audience, even the top journals rarely have impact factors that exceed 1. For management, however, because the field is very large, even journals that publish not-so-rigorous studies tend to have high impact factors, easily exceeding those of top journals in the social sciences/humanities. There are a number of journals that have impact factors over 5 or 6, and even mid-range journals have impact factors around 3. Sociology and geography lie somewhere in between.
It might be a natural tendency for people to try to publish in journals with high impact factors. However, I would say, at least in my field, there is definitely higher prestige attached to work in, say, a top-tier Asian studies journal (say, impact factor of 0.5-0.6) or a top sociology journal (impact factor around 2-3) than a mid-tier management journal (impact factor of 3-4).
Upvotes: 2 |
2014/02/26 | 658 | 2,819 | <issue_start>username_0: The [discussion on this infogram](https://academia.stackexchange.com/q/17431/1033) made me wonder about the number of PhD students that a full research professor successfully graduates in their entire career. By professor, I mean a full professor, not an associate or exclusively teaching professor or other positions referred to as professor depending on field and location. Of course, the answer is not a single number, but rather a probability density function that is a function of field, place, time, university, and probably other factors. To narrow the scope, I formulate the question as:
*For selected fields and countries, what are recent figures on the mean and standard deviation (alternatively median and median absolute deviation, in case the distribution is non-Gaussian) for the number of PhD students successfully graduated per professor throughout their entire career?*<issue_comment>username_1: My guess is that it probably varies hugely, by field, by department and then again by professor.
The fields might vary because of the different expectations about having co-advisors, the size of dissertation committees and so on.
Departments might vary based on their teaching needs. A department (Department A) that the university relies on to cover lots of intro classes might be funded primarily through teaching, and so such a department is going to have a lot of graduate students. On the other hand, Department B, which brings in tons of grant money, might have more labs but fewer teaching responsibilities, and therefore have a higher proportion of post-docs and lab assistants than grad students. Hence, profs at Dept B might have fewer students than profs at Dept A, but that won't speak to the relative quality of the faculty at either institution, obviously.
Finally, it might also vary just from faculty member to faculty member. Some people are jerks and nobody will want to work with them.
Upvotes: 0 <issue_comment>username_2: It depends on the size and staffing needs of their lab. For example, theoretical computer science and mathematics professors may need no lab support at all. Thus, they are under no pressure to take grad students or post-docs and can choose just the ones that they want.
However, if you are doing work on stem cells, you may need a great deal of lab support. You would want a team of doctoral students and a couple of post-docs at any one time. In order to maintain continuity, you would want to accept at least one doctoral student each year. So if you had a 20 year career, you would have at least 20 students (or 20 - 7 = 13 given that it takes students 7 years to graduate and you don't want to leave students hanging at the end).
You'll need to narrow down what you mean by a "STEM" field in order to get a more precise answer.
Upvotes: 2 |
2014/02/27 | 2,423 | 9,647 | <issue_start>username_0: In mathematics (and other sciences) there are thousands of concepts, theorems, lemmas, etc., which are named after mathematicians (scientists). However, these namings are not always very straightforward, especially when we are assigning a name to a new concept. For example, I can imagine the following scenarios and I would like to know what general protocol we should apply in each case:
1. A concept may have several origins in different fields and be due to several individuals.
2. A concept is built on another concept which already has a name, and this can happen several times. For example, "Hecke pairs" is a concept in mathematics; Bost and Connes then made a particular Hecke pair famous, so we have "the Bost-Connes Hecke pairs". Should we name all influential people at each stage of a concept's advancement?
3. If an author invents a concept, is it appropriate to name it after him/herself, or should he/she wait until others call it after his/her name?
4. A concept was invented by some author "X" a long time ago, and it has since evolved into something very modern and somewhat different. Should we still call it by the name "X"?
Please do not hesitate to add new items if you can imagine other scenarios too.
Finally, I would like to ask another related question:
Can we use acronyms instead of the full names, especially if the names of several people are involved?<issue_comment>username_1: Good questions. I will only tackle the last two:
3) In mathematics, it is (virtually?) universally bad form to name something after yourself. This is high on the list of things that amateurs/newbies do that make the professionals/veterans roll their eyes.
Some people have joked that the best strategy to get something named after yourself is to give your nice new concept such a terrible name (or lack of a name) that the rest of the community converges on naming it after you.
Even after something has been named after you, it is not necessarily completely kosher to speak your own name when referring to the concept. <NAME> famously speaks of "the calculus of framed links" (or, I think ironically, "the calculus") where others speak of "the Kirby calculus". At one point <NAME> writes of "the subgroup whose name I have the honor to bear".
It gets a bit ridiculous: when you give a talk and state one of your own theorems, it is most common to write out the names of your coauthors and not write out your own name. In my student days I saw a lot of first letter then dashes. Nowadays I mostly see the name dashed out entirely. Come to think of it, this reminds me of the Jewish practice of leaving letters out of the name "Jehovah", although the theological implications of treating *your own name this way* are much more profound.
4) I don't know whether we "should", but we often do. In general, it seems to me that mathematics has gotten used to naming things after certain people, and we often name things after people who would never have understood the things that are named after them. The example **Hilbert space** (coined by von Neumann in its present generality) is a famous one. The example **Euler system** has always struck me as being especially ridiculous (I asked my advisor about this, and he told me that the name comes from Euler products: that's quite a stretch).
Some people in mathematics are somehow especially good at getting things named after them. In my field, perhaps the outstanding example is <NAME>: he has curves, algebras, half of the Shafarevich-Tate group, half of Hodge-Tate weights, half of Lubin-Tate formal groups, a pairing...As a graduate student, I was struck by the fact that I was giving a talk on Galois cohomology of products of Tate curves, analyzed via Tate local duality. The title of the talk, "Tate-Tate-Tate Stuff" was a riff on the title of the previous speaker's talk ("Hodge-Tate Stuff") and this Tate-ish ubiquity. When Tate showed up for the talk, I got very nervous...but he was cool with it.
Needless to say, <NAME> is a true luminary. The fact that so many things bear his name is only possible because of the immense amount of fundamental work that he did. But the converse does not hold: e.g. <NAME> is a mathematician with a similar impact on the field, but he has...what? A manifold and a swindle? (Both of these come from his work in topology at the beginning of his career.) Instead we have the **Eisenstein ideal**. These things are strange.
Upvotes: 5 [selected_answer]<issue_comment>username_2: In order to disabuse you of the idea that there's a reliable system to name things after people, I present to you: [Stigler's law of eponymy](http://en.wikipedia.org/wiki/Stigler%27s_law_of_eponymy).
And if that's not enough, you'll occasionally have item A invented by author X but named after Y, and item B invented by author Y and named after X.
My usual joke about this is that something is usually named after the last person to invent it, because they're the one to popularize it enough that no one else can reinvent it.
Upvotes: 4 <issue_comment>username_3: The following has been my impression as a biologist:
>
> What is the general protocol to name something after somebody (some people)?
>
>
>
There is none. For certain things, such as names of genes or species, there is a protocol for submitting a name to the relevant databases, which is a right reserved for authors of the publication. This name can be anything you want, although certain standards are encouraged.
People often name species after themselves. For plasmids, the convention is to acronymize the name(s) of the people who created the plasmid and use that as the name. For genes, this would be considered tacky (the fashion seems to be to name them after "clever" puns instead), but I'm sure you could get it to happen with enough perseverance. But there is nothing special about it being *your name*, because the name of the thing is arbitrary. You are allowed to give it any sort of name; your own name is just one of the (less interesting) options.
>
> A concept may have several origins in different fields and due to several individuals.
>
>
> A concept is built on another concept which already have a name and this can happen several times.
>
>
>
Concepts are not formally named after people. When originally published, the authors may or may not invent a term for the concept they discovered, to facilitate its discussion. It gets named after them when the research turns out to be so seminal that everyone cites and recites it, and the authors begin using "the Smith protocol" as shorthand for "the protocol described in a recent high-profile publication by Smith and colleagues (Smith et al. Nature 2012)". If it further stands the test of time, it may become a de facto tradition to call this the "Smith protocol", especially once textbook authors start electing to use "Smith protocol" as the canonical name in their own texts.
>
> If an author invents a concept, is it appropriate to name it after him/herself, or he/she should wait others call it after his/her name?
>
>
>
The exception I named earlier notwithstanding, absolutely not. A scientist would get laughed out of the room if he tried to present something blatantly named after himself (some subtle reference to his name, like an anagram of his first name, might be begrudgingly accepted), unless he was perhaps a famous Nobel laureate.
If he was a famous Nobel laureate, people would still laugh, they would just wait for him to leave the room first.
>
> A concept was invented by some author "X" in long time ago, and then it has evolved to something very modern and somehow different. Should we still call it by the name "X"?
>
>
>
Since, as I assert, the naming of concepts happens not through formal procedure but as a consequence of frequent references to the original publication, an improved "Smith protocol" may be named the "Doe-Smith protocol" or "Doe protocol (based on the Smith protocol)" or even just "Doe protocol" if Doe manages to publish a paper which provides a useful reference for the improved version, and the improvements are substantial enough that people feel the need to cite and refer to Doe's paper at least as much as Smith's paper.
If you were trying to get something named after you, the realistic strategies in biology are:
1. Discover and name a new species, plasmid, gene, etc. And hope the nomenclature committee doesn't think you're being too arrogant.
2. Describe a new experimental or mathematical/computational method, and fail to give it a nice name yourself.
3. Write a definitive reference which synthesizes several existing ad-hoc variants of a concept into one unified theory, and fail to give it a nice name yourself.
For 1, formal procedures exist and are detailed by the agency you submit your name proposal to. For 2 and 3, you basically write the paper, and wait for everyone and their brother to start citing your landmark publication - hopefully they will talk about your research so much that the name you used will prove too cumbersome.
Some examples:
* The famous Southern blot is described as only "a method of transferring fragments of DNA from agarose gels to cellulose nitrate filters" in Southern, 1975. Although extensions of this method, like the Western, were important discoveries, their popularizers got a bit less glory since it turned out that geographical puns were more fun.
* Eagle's minimal essential medium is described as "a fluid medium" in Eagle, 1955.
* Okazaki fragments were not referred to as such in Sakabe, Okazaki, 1966.
Upvotes: 2 |
2014/02/27 | 1,279 | 5,308 | <issue_start>username_0: An undergrad who's worked with me for just over a year (for course credit or for pay, depending on his preference in any given semester) presented a poster on his work with me at an event hosted by my lab. Afterwards, the student told me that he spoke to Professor B at the poster session, and Professor B suggested a project that he'd like the student to work on with him.
He had a meeting with Professor B to talk about the specifics, after which Professor B emailed me and asked whether I would recommend the student. I indicated in my response that the student is very capable and I would rather **not** lose him, to which Professor B responded, "He's planning on working with me for course credit next semester."
**Was Professor B's behavior in this case OK**?
More generally,
**Under what conditions is it OK to hire another lab's student?**
By "OK," I mean "not considered inappropriate behavior by the professor."
Hiring another lab's student is of course a continuum:
* On the one hand: Student from Professor A's lab appears in Professor B's office, says "I've heard about your research and would really like to work with you." Professor B says, "Sure, I'd like that a lot."
* On the other hand: Professor B attends event (open house, workshop, etc.) hosted by Professor A, where Professor A's student gives a talk about his ongoing research with Professor A. Professor B chats with the student after the talk, then says "You should work with me next semester."
Is either or both of these considered OK/not OK?
Does the type of student (PhD, MS, undergrad, high school student doing summer research) make a difference to your answer?
Do the terms of the student's position in either lab (earning course credit, getting paid, just getting supervision) make a difference?
Does it matter how long the student has been working with Professor A?
Should Professor B ask how Professor A feels about it before offering Professor A's student a job?
---
This is not an active, ongoing situation - I am not looking for advice on how to respond to Professor B, or whether I should say something to the student. (The student chose to continue his work with my lab and not to work with Professor B.) I just want to know whether Professor B's behavior was appropriate.<issue_comment>username_1: In this particular case, there are two factors to consider.
1. Prior to Prof. B's offer, had Student expressed an interest in (or committed to) remaining in your lab next semester?
2. Prior to Prof. B's offer, had you (or anybody in your team) already invested time and effort in the specific research that Student would be doing?
If the answer to both of these questions is "no", then Prof. B is totally entitled to try and get Student into his team. We'd be talking about a student whose connection to your lab finishes, as far as anybody is concerned, at the end of this semester. So why shouldn't other people try to hire him? On the other hand, if you had already prepared this semester with this student in mind, and/or he had already committed to staying with you, things are different. Prof. B would effectively be disrupting part of your lab's work, and you should let him know that he would be. You should also tell Student that it is not good behavior to suddenly abandon a project after committing to it. If he really wants to leave you under these circumstances, it should be because Prof. B can offer him something you can't (for example, if my brightest MSc student told me "I have been accepted to this super prestigious PhD program, under the supervision of Prof. Superstar, so I'm leaving at the end of the semester", I'd be annoyed, but I'd let him go for his own benefit).
Upvotes: 2 <issue_comment>username_2: Consider the following scenario:
>
> I've been working with a professor for a year now, and at one of his
> events, I presented a poster. Another professor came up to me and
> started talking, and it turned out that this professor had a very
> interesting project related to my interests. I'm applying to grad
> schools in a year, and if I can get two recommendations from faculty
> it will really help my application. Should I work with this professor
> or not ?
>
>
>
Students have agency too. There's a lot of context and background missing in your description that username_1 alludes to. But in general students should be free to make their own decisions about their research activities and honor existing commitments that they've agreed to.
Personally, if I were Professor B, I might suggest that the student talk to you first before deciding, but it's also possible that B did that, and the student indicated that no continuing commitment existed. If I were advising the student, I'd also suggest they clear things up with you first. I might also suggest that depending on the level of interest in the project they have with you, they give you the right of first refusal.
But this exact situation has happened to me with students (twice). They worked with me for a while, and then found a topic that made more sense to them with another professor. I wished I could have convinced them to stay, but they did well with their advisors and I was on both their committees, with no hard feelings at all.
Upvotes: 6 [selected_answer] |
2014/02/27 | 708 | 2,973 | <issue_start>username_0: I've just had an abstract accepted to one of the top conferences in my field, but I'm cancelling my participation for personal reasons (short story: my wife is going to give birth to our first child a couple of weeks before the conference, and at that point I'd rather stay at home than spend several days in a different continent; this happened because I neglected to check the actual conference dates when I submitted the abstract; let that be a lesson for all of us). When I informed the organizers, they suggested that I still should add the conference to my CV, with an indication that I didn't actually present, i.e., under "Peer-reviewed conferences", I would write something like:
* "A genius solution to an insanely difficult problem". Conference Everybody Wants To Attend XXIV, May 2014, Prestigious American University (unable to present).
What are your thoughts about this? If the organizers hadn't said anything, I would have left it out of my CV; but then, the particular organizer I corresponded with is a big name in the field and way more senior than I am, and she didn't seem to have any problems with it.<issue_comment>username_1: As it is a peer-reviewed conference, I think it is OK to mention it in the CV.
I am not sure about your particular conference, but in such conferences the biggest step is to get accepted (extended abstract/paper), and you did. So you did the job, they liked your idea, and in a normal situation you would have presented there.
It can happen that you are not able to present and you cannot find anyone to do it for you. However, you met all the requirements to be there, so it has its place in your CV.
If you want to be more precise, you can mention the abstract as well, because not everybody knows whether an abstract or a whole paper was required.
>
> "A genius solution to an insanely difficult problem". Conference
> Everybody Wants To Attend XXIV, May 2014, Prestigious American
> University (accepted abstract, unable to present)
>
>
>
Upvotes: 5 [selected_answer]<issue_comment>username_2: I am not sure that it is a good idea. Firstly, it draws attention to your lack of preparation (however understandable). Secondly, it was never presented (however acceptable the organisers found it), and thirdly, you did not even attend. It is commendable that you succeeded in getting accepted (and you should redirect the paper to another similar conference), but your CV shows 'where you have been and what you have done', and you did not complete the journey... do you want to draw attention to this? You should treat being accepted as a personal success, but adding a caveat to something that was never presented is (IMO) not recommended. Your CV mentions actual activities and formal achievements, and although being accepted is an achievement in itself, it is taking part and delivering the information to others that is of note. I would leave it off and submit the good work you have done elsewhere.
Upvotes: 0 |
2014/02/27 | 824 | 3,335 | <issue_start>username_0: Suppose we perform experiments with input parameters (temperature, humidity, processing time...) and collect resulting data (thickness, structure, mech. properties...).
Is there a tool (or set of tools) to organize, process and export data from such experiments?
Key features are:
* Structured files decomposition (raw text files).
* Basic math operations.
* Filter and sort by given parameters (show/export data from samples treated at given temperature and humidity for various times).
* Generating tables with given parameters and list of "constants" (table of times, mech. properties and thicknesses and list containing temperature, humidity...).
* Vector graphics output and/or output suitable for MATLAB (graph of thickness as function of time).
* Automated (or easy-to-create) LaTeX output (report sheet).
If not, any idea, hint, or recommendation on how to create it is appreciated. Right now I'm thinking of a spreadsheet (Excel) as the core database and MATLAB as the processor (filters, sorting, graphics).<issue_comment>username_1: I would store data in [CSV](https://en.wikipedia.org/wiki/Comma-separated_values) (i.e. a text file with a table, with values separated by commas) rather than XLS files (the former is easier to import from and export to anything). Otherwise many tools will do the job (if you are familiar with MATLAB, why not use it?).
For general data processing and manipulation, **Python** (with [SciPy](http://www.scipy.org/) stack) is capable of everything you mentioned. In particular [IPython Notebook](http://ipython.org/notebook.html) is great for data exploration and presentation (you can use code, comments and LaTeX in such notebook - also for reports). For tabular data, use [Pandas](http://pandas.pydata.org/) (R-like DataFrames).
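To give a taste of this, here is a minimal Pandas sketch of the filter/sort/export steps ("experiments.csv" and the column names are illustrative assumptions about how your data might be organized):
```python
import pandas as pd
import matplotlib.pyplot as plt

# Minimal sketch; the file and column names are assumptions, not a fixed API.
df = pd.read_csv("experiments.csv")

# Filter: samples treated at a given temperature and humidity
subset = df[(df["temperature"] == 150) & (df["humidity"] == 40)].copy()

# Sort by processing time and add a derived quantity
subset = subset.sort_values("time")
subset["growth_rate"] = subset["thickness"] / subset["time"]

# LaTeX table for the report sheet
print(subset[["time", "thickness", "growth_rate"]].to_latex(index=False))

# Vector-graphics output: thickness as a function of time
subset.plot(x="time", y="thickness", marker="o")
plt.savefig("thickness_vs_time.pdf")
```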
For reports also you can create files in Markdown (with LaTeX), and then convert them to pdf with [Pandoc](http://johnmacfarlane.net/pandoc/) - may be much easier than generation of LaTeX code. (To get you some taste what is Markdown - look at [StackEdit](https://stackedit.io/).)
And alternative to Python is **[R](http://www.r-project.org/)**, with [knitr](http://yihui.name/knitr/) for report generation. If you are not sure, if to choose R or Python, then for your task R seems to be an easier and better way to start (especially with [RStudio](https://www.rstudio.com/) as an interface).
For a bigger list and links to tutorials, take a look at a list of [software for scientists](https://gist.github.com/stared/9130888).
Upvotes: 3 <issue_comment>username_2: I find Google Sheets easier and more powerful in many ways than Excel. I have done a couple of projects with a sheet of raw data coming in via CSV, then other sheets to process it. If you're clever, it can be done so that when the raw data are updated, everything else falls into place. Google Charts is basic but has some neat features for looking at data. The Transpose, Filter, Sort, and even QUERY (SQL-like) functions are very cool if you have lots of data.
You can collaborate in teams, commenting on interesting findings, etc. Graphs output to PNG or PDF look great in LaTeX. Data are available in the cloud, not just on some file server in a lab. Tables are a special kind of chart that can be shared on web pages and have user-selectable options for sorting, etc.
Upvotes: 1 |
2014/02/27 | 557 | 2,546 | <issue_start>username_0: I was recently asked to review a paper, and ended up recommending a rejection of the paper. The journal has, apparently, asked the authors to revise their paper, and the journal has come back to me asking me to review the revised paper. There is, however, no option offered for me to decline. I will likely recommend rejection again as I see that the main problem with the paper is still not addressed. What should I do in this case? Should I write to the journal that I am not willing to review this paper again, or should I go ahead with the review and recommend rejection for the second time? Since I need to provide my comments to the authors again, what should I write while still being constructive? This was my first time recommending a rejection.<issue_comment>username_1: That the paper was not rejected probably depends on the second (or more) reviewers' comments. I would consider it normal to ask if reviewers wish to review the paper again. That you were still asked may be a mistake; most electronic systems would require you to make a decision on that point. There may also be a flaw, since a rejected paper would not need a second review, and the paper was not rejected based on your suggestion. One can only speculate. I would consider it only fair to write to the editors and state that you are not interested in re-reviewing the manuscript. You could state that your impression is that a similar result would be likely were you to do the job. But, honestly, why you decline the review is no-one's business, and you should have been asked before being faced with the task.
Upvotes: 3 <issue_comment>username_2: This has happened to me a couple times. As username_1 has pointed out, what probably happened is that, while you recommended rejection, reviewers 2 and 3 said "it's actually publishable if the authors solve such-and-such problems". This is the kind of situation where a sympathetic editor will make a "revise and resubmit" decision.
In this particular case, you have it easy. Just write a very short review along the lines of "in my first review, I recommended rejection of this article because of [problem that made you recommend rejection]. As the authors have not addressed this problem, I'm regrettably forced to maintain my previous evaluation".
[FWIW, one of the times I reviewed a paper like this, the journal actually ended up publishing the paper in question, with the problematic section still exactly as it was when I reviewed and rejected it. Go figure]
Upvotes: 6 [selected_answer] |
2014/02/27 | 1,473 | 6,125 | <issue_start>username_0: I am a biologist and very recently there has been a movement to increase the use of preprints in publishing biological research. This has generated a lot of discussion about preprints and their merits and has spawned a few servers (e.g., [bioRxiv](http://www.biorxiv.org/)), but I have not gotten a good sense of how I should incorporate the preprint server into my normal publishing workflow.<issue_comment>username_1: The role that pre-prints have in the collective workflow of people in a field depends half on each person's preferences and half on how they get established as a means of communication. There are no 'right' or 'wrong' workflows for using a pre-print server (though there *are* wrong ways to use one), and you should use the one that fits you best personally and lets you communicate best with your colleagues.
There is a broad spectrum of reasons you might want to upload a preprint, which are explained in detail in [this question](https://academia.stackexchange.com/questions/16832/why-upload-to-academic-preprint-sites-like-arxiv). To give a brief summary, you might upload a preprint or postprint
* to provide free access to your content to researchers and students in institutions without subscription to the journals you publish in, which is also a way
* to help increase the use of your papers by the community, and hence the number of citations;
* to establish priority of a result, and particularly as a way to get more widespread credit for having introduced an idea at an early stage;
* to open a manuscript up for public comment from your colleagues after you feel it's mostly ready but before you're prepared to set it in (published) stone;
* to make it visible to people who browse it often as a way to see new results;
* to cite as-yet-unpublished work in some other paper, in a way that referees of the second one can see it;
* to fulfil open-access conditions on a grant;
or for many other reasons. Whether these (or others) apply to you will determine how you use the repository. Some of these are personal choices, and may come down to how much you feel you stand to gain from non-institutional readers having access to your work. Some of these are field-dependent, and hinge on there being a significant fraction of the workers in your field that regularly check the repository.
The appropriate time to upload will typically vary on a case-by-case basis. You might upload at an almost-finished stage, at the time of submission to a journal, at time of acceptance, at the time the paper is published, or even six to twelve months after that. Each of these corresponds to some or other of the motivations above.
One thing that's important to keep in mind is that you must have a good idea of prospective journals you'd like to publish in, and of what their preprint policies are, *before* you upload, as it can rule out certain publication venues if you're not careful. This is again field-dependent; many physics journals take that as standard but biology ones might not.
Upvotes: 3 <issue_comment>username_2: Workflow may be different, but the one I am familiar with is:
* put preprint on arXiv along with sending it to a journal
* after the final version is confirmed, update the arXiv with the newest version of text (with your formatting)
* (in case there are serious mistakes or omissions, update arXiv at any time)
Sometimes version on arXiv is put before the submission to a journal, for example:
* there is some work that needs to be done before the submission, but we want to have it before a conference we are attending, or a talk at another department (so we can point the reader to the preprint),
* we haven't decided yet where or if we want to send it.
And in some cases, arXiv is used *instead of* a journal, especially if:
* the work is not suitable for publication in a journal (e.g. a PhD Thesis, textbook), but we want to disseminate it, preserve it and make it easily citable,
* the author prefers it that way (e.g. it is a short note, or the topic is unconventional and the author prefers to avoid struggling with editors).
Upvotes: 5 [selected_answer]<issue_comment>username_3: I'm compelled to add an answer to this question from the biologist's point of view. arXiv and bioRxiv are extremely important in the field (both wet-lab biology and dry-lab bioinformatics/computational biology) for 3 reasons:
1. Getting your work out there ASAP (since peer-review can, and often will, take over a year). As such, if another paper gets published that's very similar to yours in study design, methods, and research, then you have the benefit of the timestamp of arXiv/bioRxiv submission and can claim precedence. Which brings me to the next point:
2. From personal experience, I've been in situations where my work was in the middle of peer-review but someone published a related paper (in the same subspecialty as me) in an Advanced Access issue of a peer-reviewed journal (e.g., Bioinformatics, Nucleic Acids Research, etc.). Their paper did not include a citation to my bioRxiv work, so I emailed the editor of the respective journal drawing attention to my preprint and its timestamp. The editor sent my query to the authors, and all agreed to cite my bioRxiv work in the next edition of the paper (which came out the next month). If I had not posted my preprint to bioRxiv 6 months beforehand, the situation would have been very different. Once my paper got accepted, the bioRxiv citation transferred automatically to the journal article. Hence, I did not lose a citation needlessly.
3. arXiv/bioRxiv is indispensable if you feel that giving away your work to peer-review might open you up to the "non-public domain", which is slang in our community meaning "opening yourself up to getting scooped." This is important if you're publishing in a very hot, fast-moving field and/or there is academic funding/grants at stake (think study section at major organizations). There are many reasons (not all of them noble) for why people volunteer their time to journal editorial boards and/or grant review panels. I'll leave it at that.
Upvotes: 2 |
2014/02/27 | 1,202 | 5,167 | <issue_start>username_0: **Background:**
I have done quite a lot of research work for a particular project.
I am working in the field of operations research (i.e. applied math/physics),
so this work primarily takes the form of propositions, proofs and numerical experiments.
In the process of my research,
each day I write up my daily progress in my [lab notebook](https://en.wikipedia.org/wiki/Lab_notebook),
which in my case takes the form of a very very long LaTeX file.
I am now trying to write up my work as an article for submission to a journal.
**Question:**
What is an efficient process which I can use to write the journal article?
**Related links:**
* The [Mumford method](https://sites.google.com/site/stephendmumford/the-mumford-method) sounds interesting.
However, as Mumford is a philosopher,
his writing process seems to me to be less relevant to writing in the sciences.<issue_comment>username_1: I should think an annotated outline (indented list of titles) would be effective. Do a basic outline of the article with however many titles, and under each title put one or more of:
* link(s) to the text to use from your journal
* a short phrase to use in conveying the point of the titled section
* an idea of how much (word, line or paragraph count) to use to fill this out.
* links to other outline titles.
Once the structure looks good (or even make a few structures to pick from), go back over
and fill in the body under each title. The annotations will help you keep track of
certain goals (word length, number of ideas, enough persuasive sentences, logical
coherence), and it is easier to manage a high-level version of the article this way.
It also makes editing easier if you prioritize which titles to cut.
Upvotes: 2 <issue_comment>username_2: I think the process of writing papers is pretty individual to each researcher. What works for me may not work for you. That being said, the following are a few rules that I usually give my mentees and students when we start writing a paper together:
* As "username_1" states, do start with an outline. Look through related work for papers with a similar scope / methodology / idea and start by imitating their outline. However, do not do so blindly. Instead, focus on **understanding** why the authors of your related work chose to structure their paper the way they did, and check whether their (assumed) reasoning is also useful for your paper.
* Keep in mind that your paper needs a "story" - you know your material and reasoning, your readers do not. Start at the beginning, and end with the conclusions. Avoid statements that are not understandable at the point in the paper where they appear (rule of thumb: when you feel like writing something along the lines of "as will be explained later on", you likely have a bug in your structure).
* Plan the length of your paper. Fill each section with some [Lorem Ipsum filler text](http://www.loremipsum.de) of roughly the same length as you plan for this section. This allows you to see how much space you actually have for each part of your paper. During writing, when I start a new section I remove the filler text and replace it with what I actually plan to say at this point. I sometimes even go as far as drafting where in the paper I will have which figures, and put placeholder figures there instead when doing the outline.
* Write the paper **in order** (i.e., in the same order as it will appear in the final paper). This is a bit controversial - I have seen many experienced paper writers suggest various other orders ("Start with Related Work" - "Start with Conclusions" - etc.). To me, the problem with writing in a different sequence is that it is very easy to lose track of what a reader actually knows at this point in the paper (hence destroying the story of your paper). For instance, you would end up using concepts and ideas that you actually only discuss at a later point. This makes papers unnecessarily difficult to understand. I feel it is also much easier to produce a convincing line of argumentation when you produce the material in the same order as it will be read.
* In any case, later rearrangements will be necessary. After reading the paper, you may decide that you need to switch some subsections around, or that you do not need Section 3 at all. Stay flexible and don't be too much in love with your current outline just because it is how you initially wanted to do it. One additional side note: since you write your paper in LaTeX, later changes in the outline are trivial.
* Write the text the way you suppose it should appear in the paper. Do not draft too much - there is no point writing throwaway text unless you really do not know how to write this section / part properly at this point. Only go on to the next section when the last one is pretty much done.
* As soon as a section is pretty much done, get some feedback on it. Remember that your paper should already be coherent and complete up to this section, so there is no harm in sending it to colleagues or your supervisor and asking them to tell you whether the paper makes sense up to and including Section X.
Upvotes: 3 [selected_answer] |
2014/02/27 | 1,096 | 4,717 | <issue_start>username_0: I've recently obtained my PhD in mathematics and started a post-doc this year. I have 5 published papers, across a wide spectrum of journals (in terms of quality, from very good to mediocre). However, I never received any off-prints from the journals, and it seems that to receive those one has to pay. On the other hand, all the professors that I know of always have a lot of off-prints for most of their journal publications. I always wanted to have these neat-looking off-prints, but it seems that the winds have changed and journals are becoming "cheaper" (behavior-wise) than ever.
This leads me to the following question:
* Is this a recent change? Is it considered the norm now to not send off-prints free of charge?
* Are these professors perhaps ordering the off-prints through some departmental fund?
Is there anything that can be done about this situation? Can I pressure the journal into sending me off-prints free of charge (e.g., would refusing to sign the publishing agreement unless they provide the off-prints for free work)? Have people tried boycotting journals that do not offer off-prints? This kind of cheap behavior really strikes me as pushing the boundary of what is acceptable. Not only do we do most of the work for the journal (refereeing, writing, etc.), but on top of that journals are expensive and do not even offer off-prints anymore.<issue_comment>username_1: Offprints were a key part of the publication process before the digital era, since digital versions did not exist; a small number of them was commonly included in the page charges. Since journals are now digital and are also moving away from printing as a whole, reprints are things of the past. That your professors get them is most likely because they are used to having it this way, but I am sure there will be journals from which they would not be able to get them other than the now-standard PDF. I am not sure they get them for free anymore either. A PDF is easy to distribute and carries virtually no cost to the publisher (journal) or the environment. I am sure the publishers were happy to see them go, but the move was not primarily a financial one; it was a lack of demand. Some publishers still provide reprints, but since they are no longer part of the standard service, they may charge for them. After all, you get a PDF for free to distribute in a similar manner as the reprint. I have been publishing long enough to have a shelf full of useless reprints that also exist as PDFs. I am also an editor for a journal, and for us it is also a question of when, not if, we move away from printing altogether. In that case the publisher has no part in the decision, since we are a society-owned journal with no page charges. So I am not sure why you believe the reprint is so important. There is little demand for posting reprints to others when a PDF exists that can be sent over e-mail. I can understand that sending a paper copy can be more personal than e-mailing a PDF, but I still think the demand for printed copies is very low indeed.
Upvotes: 3 <issue_comment>username_2: Off-prints are a remnant from the days when photocopying hadn't been invented and, if you wanted your own copy of a paper, the only reasonable way to get one was to write to the author and ask for an off-print.
Providing off-prints to authors certainly seems to be becoming less common. Some journals still provide them for free, some only for a fee, and some not at all. I don't think most people care, and among those that do care, many prefer not to receive the off-prints. It's been years since I received a request for an off-print, so when I do get them they just end up sitting in piles in my office while I offer them as party favors to anyone who enters the office. Some decline and probably many of the rest recycle them, since electronic copies are far more convenient.
>
> Can I pressure the journal into sending me off-prints free of charge (e.g would trying to refuse signing the publishing agreement, unless they provide the off-prints for free work?).
>
>
>
I wouldn't try pressuring them, which could come across as both eccentric and rude. Instead, you could try begging, by explaining that you are a postdoc with strictly limited funds but would really love off-prints and hope they could provide them at a reduced cost. I have no idea whether this could work, but the worst that can happen is that they'll say no.
>
> Have people tried boycotting journals not offering off-prints?
>
>
>
You are welcome to investigate which journals provide off-prints for free and submit your papers there, but I doubt many people will join you in this.
Upvotes: 5 [selected_answer] |
2014/02/27 | 831 | 3,359 | <issue_start>username_0: Several friends and acquaintances have recently died from cancer. The chemo treatments are crude and destroy quality of life. Furthermore, chemo treatments depend on the efficacy of antibiotics for protection while one's immune system is compromised.
We have to do better.
Who is developing the "personalized medicine" processes? It seems like there should be a way to look at the DNA of a cancer and reprogram it to settle down or go away.
My expertise is in systems engineering and software development - not in the biological sciences. I am just retired, so I don't need reimbursement. How can I contribute to improving this situation?
I suspect the answers depend on more fundamental research, which is why my question is: How can I contribute on a volunteer basis to cancer research?
Update: I plan to upgrade from a 2006 MacBook Pro to something that can run BOINC problems a bit quicker. Rosetta@home seems like a reasonable target.
I have found Rosalind - a site for exploring bioinformatics.
The next step for me is to ask friends who might have contacts in academia.
Thanks for looking!<issue_comment>username_1: I greatly admire your interest in contributing to an area of medicine, and I'm sorry to hear that so many of your friends and relatives have succumbed to this disease. You're absolutely right that the treatments are barbaric. The subject of personalized medicine with application to cancer treatment is a hot scientific topic right now, and we're probably on the verge of a revolution in this area.
Two specific groups come to mind as leaders in this, though I'm sure there are more. [Levi Garraway's lab](http://garrawaylab.dfci.harvard.edu/?q=node/2) at Harvard is developing "PHIAL", which stands for "Precision Heuristics for Interpreting the Alteration Landscape" [in cancer genomes]. The name alludes to Galadriel's phial, although it remains to be seen whether this sort of computational analysis will live up to being 'a light to you in dark places, when all other lights go out.' It is exactly what you imagined in your post, that is, given a DNA sequence, predicting candidate causal mutations. There's also [Tim Ley's group at Washington University in St. Louis](https://together.wustl.edu/Pages/News/Tim-Ley.aspx), which was one of the first to sequence a patient (one of their own oncologists!), identify the specific mutation, and treat his cancer. The story is pretty compelling, but it's worth noting that this was a fortuitous situation where the underlying mechanism just happened to be one treatable by a drug already on the market.
Laboratories are always underfunded -- your services would be a gift. It's just a matter of finding the right one. If you live in a major city, consider contacting some people in cancer research to identify someone in computational biology or bioinformatics. If you post what city you're in, I (or others, I'm sure) could help you find a lab doing this sort of work already. Good luck!
Upvotes: 3 <issue_comment>username_2: You may want to consider keeping an eye on Solvers.io - it's a new site that's trying to link up coders with scientists in need of help for bite-sized software development tasks. I haven't looked to see if there are any cancer-specific calls for help, but it may be a place to find people who would value your time and expertise.
Upvotes: 3 |
2014/02/28 | 824 | 3,431 | <issue_start>username_0: I was reading about what makes reasonable grounds for rejecting a paper and came across the following statement:
>
> [... two major revisions are not allowed.](http://www.cs.gmu.edu/~offutt/classes/phd/Hints-Review.html)
>
>
>
Does it mean that only two rounds of review at the maximum are allowed? Is this always the case? I do remember recommending major revision twice, and indeed the editor stopped the review process and decided to accept the paper without the authors having to revise their paper a second time.<issue_comment>username_1: Long story short: it depends on the editorial policy of the specific journal or the current editor.
In my field, 4 or more rounds of revisions is not completely unheard of. Despite people's frustration with such a policy of unlimited revisions (imagine being the person who gets a rejection after 4 major revisions, with the manuscript under review for 3 years; this is not uncommon in the social sciences), some journals do retain it. The top 2 journals in sociology are notorious for this.
I currently serve on the editorial board of a reputable journal in the field (one of the top 5), where the current editor has changed the journal policy to not extend second R&Rs. In this journal, you only get one chance to revise, and the result of the revision can be an outright accept, a conditional accept (which will come back with very minor change suggestions -- for example, cite an overlooked source, change the title of the paper, rewrite the conclusion, etc.), or a reject. I think this is a positive step forward, especially since many editors in the field tend to be very nitpicky about minor/aesthetic issues.
Upvotes: 4 [selected_answer]<issue_comment>username_2: Is this always the case? **No.**
However, I often see reviewing software that does not have the "major revision" category for the second revision. On the other hand, a colleague told me that he had a paper in the **seventh** revision at a very respectable journal (actually it was a debate about proof details in math…).
Upvotes: 3 <issue_comment>username_3: Q: Does it mean that only two rounds of review at the maximum are allowed?
A: No
As long as the authors keep failing to address the concerns of the reviewers, the paper can potentially end up in a long loop of required revisions.
It is important to fully address the concerns of the reviewers.
Upvotes: 1 <issue_comment>username_4: >
> Does it mean that only two rounds of review at the maximum are
> allowed?
>
>
>
In this particular case, it appears so.
>
> Is this always the case?
>
>
>
No. Different journals will have different policies. For example, there's a journal in my field that has a reputation for indulging many rounds of review, and I've seen papers there go over two rounds - indeed, I just finished my third review for a paper.
However, as a general rule, two rounds of review that both came back "Major Revisions" is likely a bad sign, as it means that there *remain* substantial problems with a paper even after substantial work should have been done. There are a number of reasons that might be true - the revisions introduced new errors, one of the reviewers or the authors is digging in their heels, etc. After two rounds of being nowhere near acceptable, it's possible the editor will start looking to cut their losses.
Upvotes: 0 |
2014/02/28 | 572 | 2,534 | <issue_start>username_0: I've never been in a University in my life, but I enjoy reading research papers to expand my knowledge and go deeper in the understanding of details which are not covered by books.
A recent paper shows that substance X causes reaction Y. Many others have shown that reaction Y can cause a dangerous consequence Z. I'd like to ask the author of the paper if he thinks substance X can trigger consequence Z. No one seems to have answered this question and, if they did, they're off my radar.
Many papers cover details and small aspects of how reaction Y causes consequence Z; however, I've found that a Wikipedia article does a good job of summing up all of these papers:
* Can I send the researcher a link to the Wikipedia article?
* Or should I link the 20+ papers?
* Assuming consequence Z is well known in the field, should I take it for granted that he knows of it (and thus not link anything)? Could linking to something that is well known make me sound like a pretentious prick?<issue_comment>username_1: About your questions:
1) If you want, you can link the Wikipedia article or write a small summary explaining your thoughts. Professors and researchers usually do not have much time to read long emails, so try to keep it simple.
2) See point 1 above.
3) There is nothing wrong with pointing to the other papers on consequence Z; if he already knows about them, he will say so straightforwardly.
In conclusion, do not feel bad that you have had no formal education. Just address the researcher in a respectful manner and tell him/her that you are interested in his/her work. You do not need to send your CV or a motivation letter just to ask something, but be aware that you may or may not get an answer (it usually depends on how interested the researcher is and how much free time he/she has).
Good luck!
Upvotes: 3 [selected_answer]<issue_comment>username_2: You can contact any professor to ask about their research projects, assuming you are polite and really interested in the topic, not just in talking with somebody from "real science".
The kind of response (if any) largely depends on the personality of the researcher, how busy he is at the moment, how frequently he receives contact requests of this kind, and how good your question is (a really good question contains the majority of the answer, asking only which alternative is true or for clarification of a particular point).
There are no particular rules that would force a scientist to either ignore or respond to your message.
Upvotes: 2 |
2014/02/28 | 1,368 | 5,707 | <issue_start>username_0: I got a BA in English at a pretty good school, spent the next 10 years in a fruitless pursuit of screenwriting with a day job in a legal department, and now that I'm sick of all that, I'd like to do something mathy. And it seems pretty clear that that's going to require going back to school.
Since this is a HUGE redirection -- a ridiculous one, really -- I assume I should get a second BA/BS before even considering a masters, right? I did well in math in school, but I left off at single variable calc. (I'm currently studying linear algebra on my own and loving it.) But what, really, are my chances for even getting into a decent second bachelor's program?? I'm thinking of taking college extension classes to get more experience and recommendation letters for that purpose. If I get some online bachelors degree (EDIT: or a post-bac), are any (reputable) masters programs even going to consider me??
You may well ask what my eventual goal is, but I'm at such a basic level that I'm not sure that that question is all that relevant. If I were to pick a goal just for the sake of aiming for something, getting a job in statistics sounds interesting, but who knows what I'd want to do after getting a second bachelors. I'd like to try my hand at research, but that sounds way too pie-in-the-sky given my background.
Thanks for any advice or feedback.<issue_comment>username_1: You are not prepared for a Master's program right now, which I think you recognize. That said, it may not take too long to prepare yourself. I would recommend looking at requirements for continuing education and graduate programs at nearby regional universities. Many of them have programs designed to accommodate a student with your needs. For example, [here](http://www20.csueastbay.edu/csci/files/docs/comp%20sci%20pdf/msmath.pdf) are the requirements at CSU East Bay, a regional school near Oakland. Note in particular the Post-baccalaureate unclassified status. You might not expect it, but many such programs are quite strong and have a solid record in placing students in PhD programs. Don't discount them.
Upvotes: 3 [selected_answer]<issue_comment>username_2: **If you want to go into math, should you get a BSc?**
I'd say "yes". It often is quite doable to pick up a new subject on your own if you have academic experience, but English and Mathematics are so far apart that I'd doubt there would be a lot of synergy. Mathematics has its own way of thinking, which is probably picked up best by going through an undergrad degree.
**Should you go into math?**
Enrolling in a degree is of course a rather big decision. In my experience, quite a few of those who start anew later in their lives drop out rather early. Taking an online course first could both be good preparation if you go through with it, and helpful for figuring out whether you really want to do it.
**Can you get in somewhere?**
I don't know about your country, but both the UK and Germany often have places reserved for mature students, which have fewer formal requirements. So in those countries, getting into a decent program would be quite doable.
**A final comment**
For me at least mathematics is great fun. If you believe you'd like it, and you are willing to put in the effort, give it a try.
Upvotes: 3 <issue_comment>username_3: Well, I commend the OP on having the heart to contemplate such a step. I'd like to mention a factor that has not yet been mentioned in the other answers - your age, and more specifically, the responsibilities that you have to shoulder while attempting such a career switch. If you do decide to apply for an undergrad program in maths, and happen to be accepted into a reasonably good one, you'd be starting from scratch in a field governed by abstract concepts that take a good deal of focused work to wrap one's head around! This is a lot easier if all you have to worry about is yourself, and you don't have the weight of other responsibilities (family obligations, relationships, etc.) to bog you down. Even if you don't have such responsibilities, you'd have the inescapable feeling of being a generation behind your peers, and unless you have a very determined and strong force of mind, you would have to fend off doubts regarding your decisions/capabilities at regular intervals - which could hinder your focus significantly, and make your mental faculties less acute than they ought to be!
(This is a personal opinion of mine - I've seen many later-career grad students struggle with these issues, and hence I thought it was wise to know about the possibility of such a train of events before committing to such a momentous decision!)
Upvotes: 2 <issue_comment>username_4: I'm answering from an American point of view, on the supposition that your background is American as well. I'm not sure how applicable this advice is to a non-American.
You already have a bachelor's degree, so you shouldn't need to take a second full BS. There are two ways that I can see you going. Either way, you should be taking the equivalent of a major, or at least a minor in math before you pursue graduate studies.
1. Enroll as a full-time second- or third-year "transfer" student in a math/science-oriented BS program, using the BA credits surrounding your English major for your non-math credits. You should take something like two to three math courses (and one or two courses in physics and/or computer science) a semester until you have completed a math major.
2. Enroll as a "special student" somewhere part time, taking one to two math classes a semester, until you have 10-12 math courses that constitute the equivalent of a math major, or at least 6-8 courses for a math minor.
Upvotes: 1 |
2014/02/28 | 1,051 | 4,503 | <issue_start>username_0: I'm told there are conventions in scientific papers around graphs. I'm publishing material for a general audience based on the findings of a scientific paper (unpublished), and I am having a disagreement with the author of the paper about how graphs must be presented.
I'm specifically asking here about the conventions for scientific publishing. I'm very aware that conventions for graphs outside scientific papers are much more open; I take my visual data cues mostly from Edward Tufte's books.
I'm being told that displaying horizontal grid lines implies greater accuracy in the modelled data, and that they should therefore be absent in the case of this carbon sequestration modelling, since the figures show model output rather than measurements. (I would have thought significant figures on the axes, axis spacing, and fundamentally the caption explaining the data source and assumptions were more relevant to that.)
I'm told that titles are a no-go: captions only. (I've found a university spec online for science papers saying titles are mandatory.) I'm told titles are rare in journals.
Is there any right or wrong in these matters of convention, or is it just opinion?<issue_comment>username_1: There is no right or wrong when it comes to grid lines. There may be conventions that vary between disciplines. The basic question of whether to use such lines or not is whether they add something that helps the reader better understand the data displayed. A good figure should communicate as many thoughts as possible to the reader without too much effort. If you want some ideas on how to think about graphics, try to locate the book [The visual display of quantitative information](http://www.edwardtufte.com/tufte/books_vdqi) by Edward Tufte. There are many constructive thoughts about displaying information there worth considering.
In the end you need to look at how others publish similar data and figure out whether a "standard" has developed. It may not be the best way to display data, but since many readers are familiar with the format, it becomes an efficient means of communication. Otherwise, you should try to display the data as clearly as possible, lines or not.
Upvotes: 2 <issue_comment>username_2: **On grid lines**
It depends on the points you would like to make with the graph. If you're just going to show an upward or downward trend, then the grid lines are probably redundant. If you need to refer back to a certain point on a curve, and knowing the vertical position of that point is crucial, then grid lines can help. It's not about the graphs (or, I might go so far as to say, even about publication culture); it's about the points you are trying to get across. If the grid lines will get people there with less puzzling or work, then yes to grid lines. In all other cases, no.
>
> I'm being told that displaying horizontal grid lines implies a greater
> accuracy in modelling data and therefore should be absent in the case
> of this carbon sequestration modelling since it's not the results of
> measurements
>
>
>
This is perhaps the oddest graphical rule I have heard in the last 12 months. For most readers, grid lines are just an extension of the tick marks on the axes. As long as you provide the tick marks on the y-axis, anyone can draw horizontal grid lines.
It is, however, not advisable to provide tick marks or grid lines finer than what your instrument or model can discern. For example, if your measurement or prediction is precise to the meter, then at most I'd put grid lines at 0.5 m increments. I wouldn't go so far as to put them at 0.01 or even 0.1 m increments; that would imply a precision I never had. I believe your co-author's concern may be more related to this problem. In that case, you two need to talk and make sure at least the tick marks make sense.
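To make this concrete: most plotting tools let you set the tick (and hence grid) increment explicitly. Below is a minimal matplotlib sketch of the 0.5 m case described above; the data are made up purely for illustration.

```
import matplotlib.pyplot as plt
from matplotlib.ticker import MultipleLocator

fig, ax = plt.subplots()
ax.plot([0, 1, 2, 3], [0.4, 1.1, 1.9, 2.6])  # made-up heights in metres

# Tick every 0.5 m -- no finer than the measurement can discern --
# and extend those ticks into light horizontal grid lines.
ax.yaxis.set_major_locator(MultipleLocator(0.5))
ax.grid(axis="y", linewidth=0.5, alpha=0.5)
ax.set_ylabel("Height (m)")

fig.savefig("figure1.pdf")
```

The same idea applies in any plotting package: pick the tick increment to match the precision of your data, and let the grid follow the ticks.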
**On caption vs. title**
>
> I'm told that titles are a no go, captions only. (I've found a Uni
> spec online for science papers saying titles are mandatory). I'm told
> titles are rare in journals.
>
>
>
Yes, they are rare in my field (biomedical). We use captions (located *below* the graph) most of the time. The caption usually starts with something like this:
**Figure 1. The [would have been title]**
[text explaining the graph.]
If you have a title in the illustration, it only serves to duplicate information, making it redundant ink.
Though, depending on the field and journal, the rules may differ. Check the journal's guidelines and other published work in that journal for clues.
Upvotes: 3 |
2014/02/28 | 309 | 1,392 | <issue_start>username_0: There is this investment bank specializing in the mining sector. I had access to one of their presentations, which contains data regarding the production and end use of some chemical elements. They do not cite any sources for the data.
Do you think it is good or bad practice, and is the source trustworthy enough, to cite these data in my PhD thesis?<issue_comment>username_1: If the source has been used, it generally must be cited; otherwise you assign yourself results or conclusions obtained by other researchers.
Citing the source is not a question of its reputability. If you think the source is not reputable enough, do not use it in your work at all. It is not very common, but I have even seen "personal communication" as a type of reference.
Upvotes: 3 [selected_answer]<issue_comment>username_2: It depends on your research topic.
If you are conducting research on the marketing value of a chemical element, the trustworthiness of a presentation that cites no sources is unknown at best. All you know is that they made the presentation for their own purposes.
However, if you are conducting research on the investment bank's marketing strategies, this presentation can be a research subject, and its trustworthiness can be a research topic by itself. However, you will have a citation issue if the presentation is not publicly available, as @xLeitix pointed out in the comment above.
Upvotes: 1 |
2014/02/28 | 477 | 2,127 | <issue_start>username_0: I have done a short summer research internship at a department of a famous university in Europe. Unfortunately, this was last summer, and I have since reminded my professor several times (three times by e-mail, and once in person when I left) to give me some sort of written confirmation that I actually stayed at his department. He always said that he would do it, but he always pointed out that he is currently very busy.
Do you think this sounds true? I mean, it has been half a year now, and apparently there is not much to do about it other than just waiting - or is there? I also asked only for a few lines, not a formal confirmation letter or anything similar, and I told him that I would need this for my home university (which was true), but he did not really react to that.
I do not want to pressure him by being more "rude" in my mails; this is not the way I deal with such situations. But I find his behaviour very annoying, and I want this piece of paper now.
How would you deal with that?<issue_comment>username_1: I would address this by contacting the professor's *administrative assistant* or *secretary*. Usually, such "form letters" do not need to be actively written by the professor in question—just signed by the professor. The assistant can prepare the letter, and get the faculty member's signature; in some cases, the assistant may even have a digital signature available, so the professor's direct involvement isn't even necessary.
Upvotes: 5 [selected_answer]<issue_comment>username_2: Hah! It took me 4 years to get a confirmation for my exchange year in order to get my degree. After numerous emails and even official letters from my university, which all got ignored, the only thing that finally worked was a flight back and running around there for a week, where everyone told me they had no trace of my stay whatsoever and put the responsibility on someone else. I hope you don't need to resort to such extreme measures, but stay prepared: first go to your department and have them write a formal letter. Also, phone calls may work better than emails. Good luck!
Upvotes: 2 |
2014/02/28 | 1,689 | 6,713 | <issue_start>username_0: I am in the process of updating my CV. Since I often get labeled as "the bioinformatician", I get to play with many different languages and technologies, and similarly, what people expect from a bioinformatician varies from person to person. So I figured it would be a good idea to indicate how much I *feel* I know in the respective fields/languages.
Inspired by [this question](https://academia.stackexchange.com/questions/14080/mention-impact-factor-or-conference-acceptance-rate-in-cv), I came to wonder whether or not it's acceptable to have self-assessed ratings of your technical skills, such as proficiency in programming languages, familiarity with relevant software, etc.
My own feeling is that such ratings are useful to indicate what you feel most confident or comfortable with. They would also show any potential future employer the level of competence you have in different fields. If you think about it a bit, it is common to have some type of rating for the languages one speaks, so I think an analogous rating for programming language proficiency should not be that alienating.
On the other hand there is the risk of rendering your CV like, as a friend of mine put it, a role-playing game character sheet.
Is it common to have such ratings on skills? Are there any potential problems with it?
**Edit:** What I was thinking of is a small listing, something like: [image omitted: a table of skills such as LaTeX and Microsoft Office, each rated with stars out of five]
<issue_comment>username_1: What scale do you intend to rate yourself on? Maybe that sounds like a silly riposte, but that's a serious issue. If you say you're proficient in Java, how does the person reading the CV know what on earth you mean (assuming they're willing to take your word for it). I would be much more inclined to focus on what experience you have with a language (I have X many years of Java programming experience, I've done such and such projects), since that's actually something which people understand the meaning of. You also don't necessarily need to cover this in a lot of detail in your CV, since if you're applying for a job where these skills are relevant, you can mention it in your cover letter.
**EDIT**: In response to the proposal of using stars or a 0-5 rating: **DON'T DO IT!** If you want to write "I'm proficient in Java and have some experience in C", that's harmless, but without some more concrete information it won't make much of a positive impression either. The stars will make you look eccentric at best, and lunatic at worst. I know that sometimes the usual conventions about how to do things seem constraining and silly, but if you've never seen something on a CV before (and I've never seen people give themselves numerical ratings on an unknown scale), there's probably a good reason.
Upvotes: 5 [selected_answer]<issue_comment>username_2: Let's get some terminology clear. A self-assessment is something like this:
>
> I am proficient in Java and Python, and have a good working knowledge of C++.
>
>
>
You don't want to self-assess, if only because, in the absence of an external standard, self-assessments are difficult for others to evaluate. What does "good working knowledge of C++" mean, for example? If my work depends on a program that involves many thousands of C++ lines across dozens of files, can I count on you to maintain, debug, and expand it?
What you want to do is accomplishment-listing, which looks like this.
>
> I took CS304 "Advanced C++" (grade: A) and CS407 "C++ Applications in the Life Sciences" (grade: A+) at Alma Mater State University (2010-2011). At BioInfo Inc. (2011-2013), I helped develop the C++ backend of the following programs...".
>
>
>
This is much more helpful for prospective employers.
Upvotes: 4 <issue_comment>username_3: It seems like you need a second opinion so:
No way should you put the graphic you added to your question in your CV. It looks very strange and does not help you. When I see it:
(i) My eye immediately notices that there are a lot of missing stars. Altogether you are giving yourself 63.3% of the maximum possible programming proficiency [whatever that means!]. That sounds really mediocre. Most other candidates' CVs will contain only 100% positive information about them.
(ii) While my eye notices that you haven't rated yourself so highly, my brain is very frustrated that it doesn't know what any of the ratings mean, high or low. You give yourself 3.5 out of 5 stars on LaTeX. If I want to take your LaTeX skills into account in my decision on whether or not to hire you...then what on earth am I supposed to do with 3.5 out of 5 stars?!?
Upvotes: 2 <issue_comment>username_4: I do think there *can* be some value to listing skills and your confidence in them, especially if they're not immediately obvious from your accomplishments.
But, I agree with the others that the stars are not useful and do not work in your favor.
For example, if your CV lists: "Project X: Did A, B, C (implemented in Ruby)" and "Project Y: Did D, E, F (used HTML, CSS, Javascript)", that doesn't really tell me much about how much you've really done with each of these languages. In Project Y, did you really design your CSS, or did you find some nice templates and modify them to suit your needs?
It's not always appropriate to describe in such detail what each project entailed. If I'm looking for your expertise in a particular skill that isn't obvious from your experience, then a listing of skills and confidence levels *is* helpful. But, there's a better way to do it than with star ratings.
Google's self-rating scale ([reportedly](https://softwareengineering.stackexchange.com/questions/15004/at-which-point-do-you-know-a-technology-enough-to-list-it-on-a-resume)) goes like this:
* 0 – You have no experience
* 1 to 3 – You are familiar with this area but would not be comfortable implementing anything in it.
* 4 to 6 – You are confident in this area and use it daily.
* 7 to 9 – You are extremely proficient to expert, have deep technical expertise in the subject, and feel comfortable designing any project in it.
* 10 – Reserved for those who are recognized industry experts, either you wrote a book in it or invented it.
On your CV, a textual description ("**Ruby**: I am confident in Ruby and use it daily") is more useful and also makes you sound better than saying "**Ruby**: 6/10"
(Of course, the rest of your CV should go on to present your experience in Ruby, so the reader becomes confident that your self-rating is reasonable.)
You didn't ask about this, but I would also strongly advise **against** listing "Microsoft Office" as a software skill if you are looking for a technical job in a technical field.
Upvotes: 3 |
2014/02/28 | 2,229 | 9,015 | <issue_start>username_0: (Breakdown of a larger issue - [full story here](https://academia.stackexchange.com/q/17516/12468))
Following the [deliberate delay of my thesis paperwork by my professor](https://academia.stackexchange.com/q/17551/12468), and after [my department ignored my thesis revision requests](https://academia.stackexchange.com/q/17553/12468) for over 6 months, I finally got my graduation paperwork completed (with a $200 late fee) just after the start of the 3rd semester after my thesis defense. I therefore officially graduated *a full year after my defense*. Due to the concurrent program I was in, this meant my undergraduate degree also did not post until that date - *2.5 years after I finished my undergraduate classes*. As my profession requires 4 years of *post-degree* training for licensing, this has significantly delayed my professional career. If not for delays beyond my control caused by school personnel, I would have had time to complete the graduation paperwork and finish my degree a full year earlier.
I went to the director of my program and was kicked out before I could do more than state my problem - "I don't believe the department could have held you up; students just don't know how to be responsible." I finally got the issue escalated to the dean of the college, and he agreed that the department had grossly mishandled things. He contacted the dean of the graduate school, who refused to even hear the case - all that he or I ever got back from the graduate school were blanket statements about 'policy', which I presume meant their own and not some oversight body's. Even when I (repeatedly) went to the graduate office in person, the (brand new) graduate dean refused to meet with me. I suspect that the program director (a very aggressive personality) had preemptively contacted him to ask him to ignore me; the dean of my college was retiring that semester and presumably carried very little weight in department politics. However, that is purely conjecture.
1. Why would the graduate dean refuse to talk to me? Liability concerns? Or just pure pomp and disregard for the woes of a lowly student?
2. Is there a valid reason the graduate school would be unable to back-date my degree to the semester I had completed all my coursework and successfully defended my thesis?<issue_comment>username_1: I sympathize with your plight. I'm lucky to have not had such a long-lasting and impactful academic misfortune, but have been in some similar situations.
1. It's more likely ignorance/miscommunication, but it is very probable that the graduate dean doesn't know the truth. Observation bias alone would make it easy to assume you shared the burden somewhat, and you can be sure your issue was trivialized and your persistence cast negatively if they spoke at all.
2. Accreditation may play a role in the college's ability to back-date a degree. I can see how that could be a very slippery slope. That said, "a matter of policy" is a rather dismissive reason and sounds more like he simply was trying to avoid the problem. You should investigate both graduate college and institution-wide policies on changing degree dates to rule out a high-level ban.
I've offered some suggestions on actions you can still take in your question [here](https://academia.stackexchange.com/questions/17553/can-a-student-seek-redress-for-the-administration-neglecting-their-paperwork). If it's impossible to change the degree dates, then his reason for not talking to you is moot at this point. However, it could still be relevant if you go down the legal path.
Upvotes: 2 <issue_comment>username_2: <NAME>' extensive comments in response to [your related question](https://academia.stackexchange.com/questions/17553/can-a-student-seek-redress-for-the-administration-neglecting-their-paperwork) apply here as well. However, there is a direct question that needs a direct answer and an emotional core to your posts that deserves more direct attention as well:
>
> **[Could] the graduate school administration change [my] graduation dates?**
>
>
> Is there a valid reason the graduate school would be unable to back-date my degree?
>
>
>
Your graduate school's administration cannot and will not change your graduation date if your request does not come with the support of your advisor and your department. The graduate school and the university may be "higher ups" in terms of a traditional administrative hierarchy, but in academic and curricular matters your department has absolute primacy. Any effort by the administration to "force" a graduation date on your department would be seen as an encroachment on their academic freedom, and *that* would get the attention of outside faculty and peer institutions in a tremendously negative way.
This may be unrelated to the harm that has occurred to you, but it is a valid reason for the graduate school to deny your specific request. If you still have any credibility within your university's administration, you could get more traction seeking other forms of redress.
>
> **Subtext: Does the department get away scot free?**
>
>
>
You will not get what you decided to ask for. This does not mean that your efforts have had no impact. They have cost your department in at least two ways: First, to the extent that you still have the sympathy of anyone in the administration the dispute has cost your department credibility and administrative reputation. As you press the matter in increasingly intrusive ways, this cost will decline and eventually flip into sympathy. No program wants to be pitied, but it's better than being disliked. Second, if you've reached the point where a dean refuses to meet with you then you've earned the "problem child" achievement regardless of the merit of your complaint. Your graduate program admitted you, and will therefore be seen as bringing a "problem" (that would be you) to the university's doorstep. The administration—even the sympathetic administration—will call your department's judgement into question because they vouched for you.
Your department has paid and will continue to pay for what you went through.
*None* of this will be visible to you. That is what professionalism looks like from the outside: calm seas and a gentle wind, nothing happening here.
>
> **More subtext: Why should the department's error cost *me*?**
>
>
>
This is the most important part of your question. As a student and teacher (can't and won't speak to this as an administrator), I have seen graduate students throw away their professional futures over pride and pocket change. It may not *look* like pocket change from the perspective of a graduate student's meager income, but you need to review your relative costs in terms of a tenured professor's *considerable* income.
Based on what you have written here, it looks like you are burning your professional future to ash over four years of lost pay. That "problem child" tag you've earned is sticky, and it will follow you. No faculty wants to hire a colleague that brings trouble, and *no* faculty wants to hire a colleague that *escalates* trouble.
Here are two ways to overcome this tag that I am aware of:
1. Be at the very top of your field. If your scholarly stature outweighs your immaturity, some good university will accept the latter's cost. This is emotionally easy—everyone would happily be the best at what they do—but it is intellectually difficult. Are you capable of being that good? What would achieving this cost you elsewhere? (It will cost you a lot. For example: Ever looked at the divorce rate for professors?)
2. Demonstrate a "newfound" maturity. If it *looks like* you learned from the experience, this will slowly offset the negative reputation you've acquired and allow faculty search committees to review your applications on scholarly merit. This is easier intellectually: you just need to be an employably strong scholar. It is very difficult emotionally, because it requires an extraordinary humility. That's uncommon in academia; it can work against the strength of will and self-confidence needed to succeed as a professional scholar.
Deans, directors, and other professors may sometimes look like colonoscopy bags full of "pomp and disregard," but the attitudes that give you this impression are the same ones that got them through a dissertation, tenure probation, and other indignities of the profession. You seem to have that attitude in larval form, and it seems to be acting out. These "dismissive" deans and directors have learned to channel their attitudes into socially acceptable forms, and search committees will be looking to hire only those professional scholars who have learned to do the same.
<NAME>' comments elsewhere are going to be more valuable overall. The answer to your question about what the graduate school can do is little more than a footnote to his discussion of what you can do, and of what you *need* to do to recover personally.
Upvotes: 4 [selected_answer] |
2014/02/28 | 1,932 | 7,881 | <issue_start>username_0: If you are interviewing for faculty positions, how can you find out whether a particular work environment would likely be toxic? (Either generally toxic, or particularly bad for you as a {woman, early career researcher, researcher in a particular subfield, etc.})
Can such environments be avoided?
Can you ask about this during a visit or interview? **Who** should you ask (faculty, deans, students) and **what** should you ask that might elicit the relevant information?
Are there other ways to detect a toxic environment, besides asking people who know to be on their best behavior around you?
This has been discussed [here](https://academia.stackexchange.com/questions/8507/what-questions-should-one-ask-to-the-former-current-students-of-a-professor-befo), [here](https://academia.stackexchange.com/questions/17192/what-are-the-right-questions-to-ask-professors-at-a-visit-day-for-prospective), and [here](https://academia.stackexchange.com/questions/158/how-to-evaluate-potential-advisers-on-grounds-other-than-their-research-publicat) for prospective PhD students, but not for faculty candidates (as far as I know). I believe the answers will be different for faculty candidates - for one thing, PhD students are likely to be honest when telling a prospective student about their advisor; faculty members talking to a candidate about their colleagues, not so much. Also, the interview/visit procedure is different for faculty candidates, as are some of the relevant indicators of toxicity.
[Source: I read this question on [FemaleScienceProfessor](http://science-professor.blogspot.com/2014/02/toxic-avoidance.html)]<issue_comment>username_1: What I found useful was to be very watchful of how the interviewers act towards each other. Typically some sort of meal is part of an on-campus interview and you will be eating with several of the faculty members. If they can't make it through the meal without doing something objectionable you probably have a toxic environment. The funny thing is that they know to act properly towards you but will still forget to do so to their colleagues even though you are right there.
As an example there was one such dinner where I was pressured into drinking alcohol the night before the real part of the interview and the junior (and female) faculty member who was present was the target of most of the jokes from the senior male faculty members. Both of these details did not help their chances of getting me to accept their offer. Fortunately I had another offer to take instead.
This is by no means going to catch every situation you want to get away from but the general idea is to watch their behavior. In larger departments where the jerks are kept away from the candidates you may have to be more active in searching for these issues. I was mostly interviewing in small departments where I was able to meet everyone.
Upvotes: 6 [selected_answer]<issue_comment>username_2: "for one thing, PhD students are likely to be honest when telling a prospective student about their advisor"
I disagree. In fact, I have seen the opposite. A PhD student's future is *completely* in the hands of their advisor in a way that not even TT faculty depend on their chair, etc.
I think the answer to this question is the same for both prospective students and faculty. You cannot ask directly but must read between the lines. I find that staying quieter than normal during a conversation will sometimes inspire the other person to fill the silence in some very ... revealing ways.
Upvotes: 2 <issue_comment>username_3: A comment on the [FSP](http://science-professor.blogspot.com/2014/02/toxic-avoidance.html) post offers the following answer:
>
> One tell tale sign at the last university where I worked, was that almost all the research collaborations in the department had fallen apart, many due to personal conflicts. If a department doesn't/can't collaborate I'd call that a bad sign. Lots of collaborations, especially interdisciplinary ones, suggest some rudimentary ability to interact with other humans :)
>
>
>
Upvotes: 4 <issue_comment>username_4: A couple of suggestions ...
1. There should be no "invisible" people. Do faculty greet students and administrative staff with a smile and words like "please" and "thank you?" Do they greet janitorial staff? A culture that values people for being people has a certain energy to it. Likewise, how do you interact with the secretary making your arrangements, people you see in the hallway during your interview?
2. Consider the structure of the interview. Will you visit with everyone in the department on an individual basis? Does everyone in the department have the opportunity to meet you, even if it's not one-on-one? Departments thrive on discourse; it's up to you to find out if it's civil or disruptive. One way to check this is to be in a position before the "job talk" to observe the dynamics of the room as various members enter. If the room goes quiet when someone walks in, try to observe why. Do eyes roll when someone asks a question designed to demonstrate their knowledge as opposed to find out about yours?
3. When hiring (from the perspective of chair, dean, & provost), I want the candidate to know the unit's story and dynamic. That means a commitment on my part to allow the candidate to experience some of the discourse mentioned above. If all you see is harmony, then it's either groupthink or a group that's been cautioned to hide the unpleasantness. You might ask your direct manager what people skills you could bring to the department that would build a stronger team.
4. Get specifics. You are interviewing the institution at the same time they are interviewing you. For example, if you ask what people skills you can bring to the team and the response is just "be nice", think twice. Productive working groups should be built upon mutual respect for each other's strengths and a willingness to overlook some of the weaknesses. With that said, a healthy department will have a sense of what it needs to get stronger as a team.
5. Read "Blink: The Power of Thinking Without Thinking" by Malcolm Gladwell and trust your gut. If you're sensing something is wrong, it probably is. For this to be effective, you need time to reflect. If you're being pressured for an answer immediately, you should be concerned. Not that you need 2 weeks to think about an offer, but you should know how much time you need to make a reflective decision.
6. It is a very small world in the academy, with LOTS of information available, especially in the public setting. Want to know about the larger faculty culture? Go look up the minutes from the last 12-24 months of the faculty senate and look at the faculty in the department/college you'll be joining. Go to the website of the local newspaper and search for articles regarding the institution. Have you looked at the faculty satisfaction survey on the Chronicle of Higher Ed? Looked up articles on the institution on Inside Higher Ed?
One last point - healthy people are attracted to healthy environments. Think about who you are, take a look at what you want, make sure it's consistent with the institution, and don't be fearful of asking difficult questions. You're worth it!
Upvotes: 4 <issue_comment>username_5: I know it's difficult to get straight answers out of people, but sometimes just asking multiple faculty in your one-on-ones, "Are there any politics within the department to look out for?", will provide some insight. People who make a confused face and say, "No, no" or vehemently say, "Absolutely not!" are likely not lying about it. Those who sigh, or those who get wooden, or those who decline to speak about it might indicate some problems.
In my experience most faculty members are rather honest and have a hard time denying problems when asked straight up about them.
Upvotes: 3 |
2014/02/28 | 945 | 3,498 | <issue_start>username_0: Whenever I am creating figures for publication, I often wonder if I should be using a serif or sans-serif font. I browse the journals in my field and notice that there is no standard, just chaos.
I have typically chosen a serif font to match the typography of the body text; however, I have read that sans-serif stands out among serif body text. My rationale for choosing serif text is that it allows me to reproduce the symbols from the text exactly in my legends (and/or annotations). To me, this seems clearer.
Does anyone have a source or standard that recommends one or the other?<issue_comment>username_1: When it comes to typographic design, it can be dangerous to adhere to rules of thumb. Sometimes (actually, most of the time) sans-serif fonts work; sometimes they don't. It depends on tradition, trend, and the overall feeling that the fonts project.
If the journal does not specify, I would usually favor sans-serif. The reason is that, unlike with my text, I am never sure how much the editorial team may size down my illustration. Sans-serif fonts have the nice property of being quite resistant to shrinking, and they can remain legible at relatively small sizes.
In addition, if the publisher uses any software to smooth out the edges of the fonts after resizing (e.g. through [anti-aliasing](http://en.wikipedia.org/wiki/Aliasing)), serif fonts can sometimes appear broken at their thinner strokes.
There are, however, some illustrations that just don't look right with sans-serif. For instance, line labels and angle labels in trigonometry problem sets, and formulas like [this one](http://mathworld.wolfram.com/Trigonometry.html), are much nicer with bold and/or italicized serif fonts; monotone ink-drawn anatomical charts (like [this one](http://img0.etsystatic.com/023/0/6607786/il_570xN.494030876_fmxm.jpg)) will just look very odd with sans-serif labels. This [timeline](http://www.edisondrama.com/graphics/LifeofShakespeare.jpg) describing Shakespeare's life may look ridiculous if sans-serif fonts are used.
In those difficult situations, look for serif fonts that are beefier or have a more uniform stroke width, as they can better withstand shrinking and anti-aliasing. In addition, look for fonts that are slightly wider and have a good "x-height" (literally, the height of the lowercase "x"). Some possible candidates are [Caslon](http://en.wikipedia.org/wiki/Caslon), [Baskerville](http://en.wikipedia.org/wiki/Baskerville), [Garamond](http://en.wikipedia.org/wiki/Garamond), and [Palatino](http://en.wikipedia.org/wiki/Palatino). Avoid cursive fonts, or fonts with some very thin lines like [Times New Roman](http://en.wikipedia.org/wiki/Times_New_Roman). A more in-depth discussion of squint-free fonts can be found [in this blog page](http://layersmagazine.com/art-of-type-squint-free-small-type.html) and in this [thread on SE UX](https://ux.stackexchange.com/questions/3330/what-is-the-best-font-for-extremely-limited-space-i-e-will-fit-the-most-readab).
Upvotes: 5 [selected_answer]<issue_comment>username_2: I use Helvetica/Arial on all my figures, as it is a neutral font that doesn't detract from the point of the figure - to present data. It lacks the flourishes of most serif fonts and the stylistic features of other sans-serif fonts. As others have commented, sans-serif fonts are more readable at small sizes, hence their overwhelming use in road signage. Since most figures are small when reproduced, readability is paramount.
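For what it's worth, if you build figures in a plotting library such as matplotlib, a sans-serif default is a one-time configuration. The sketch below is only an illustration of that idea; it assumes Helvetica is installed on your system, and matplotlib falls back to the next available font in the list otherwise.

```
import matplotlib.pyplot as plt

# Prefer Helvetica, then Arial, then the bundled DejaVu Sans.
plt.rcParams["font.family"] = "sans-serif"
plt.rcParams["font.sans-serif"] = ["Helvetica", "Arial", "DejaVu Sans"]

fig, ax = plt.subplots()
ax.plot([1, 2, 3], [2, 4, 8], label="sample data")  # made-up data
ax.set_xlabel("Concentration (mM)")
ax.set_ylabel("Response (a.u.)")
ax.legend()
fig.savefig("figure1.pdf")
```

Setting this once in a shared style file keeps every figure in a paper consistent without per-figure fiddling.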
Upvotes: 3 |
2014/03/01 | 2,806 | 11,699 | <issue_start>username_0: I live in a 3rd world country, and at one of the top universities in my country a professor offered me a PhD position. I will pursue a part-time PhD while working full time. I have to decide whether or not to accept his offer.
My chances of getting a PhD position in the U.S. or other western countries, where an established academic community exists, are infinitesimal. This is because my undergraduate degree is from a school ranked nationally lower than the top-ranked university I mentioned. Professors at my undergraduate school have no connections with western researchers and do not care about them or about the international academic community.
Though the institution where I was offered the PhD is nationally reputed, and a PhD degree from there carries a nationwide reputation, I believe that my chances of getting a postdoc at a decent-to-good school in a western country are very low. My professor holds the title of Professor, but his h-index is extremely low (< 10), while his western colleagues usually have an index greater than 30, and renowned ones usually greater than 50. Also, his students do not seem to secure good postdocs.
I have started to dislike my professor, too. I could work with him for a few years and apply for a PhD elsewhere after obtaining some publications, but even then I would need his connections.
This might be my only chance for a PhD and I am not sure what to do. What are my chances in the western academic system after this PhD? Should one do a PhD with a professor whose work does not receive many citations and who publishes rarely?<issue_comment>username_1: Picking a PhD supervisor based on his h-index is like picking a car based on its horsepower; you ignore a huge number of factors that are probably equally important, if not more so. Is it drivable (can you work with this person?), is it expensive to run (does the guy need to be pampered and treated like royalty?), are other owners happy with their purchase (are his other PhD students happy with his supervision?), etc. Getting a really fast car only to crash it because you can't drive it doesn't mean much, and getting a supervisor who after a year makes you want to quit your PhD doesn't mean much either. Most probably, in both cases, people are going to think less of you.
I think the most important thing is that you say "I have started to dislike my professor too." That is a major problem, and you should not pick a supervisor that you dislike. I do not mean you have to be "homies" with your supervisor; I mean mutual respect and the ability to work efficiently and with understanding of each other's small quirks. (E.g., my supervisor avoided setting up morning meetings with me because I am a night person; it was fine, and he even joked about it at times: "Next week I have X going on, so we probably need to meet at 11:00. I know you'll just be out of bed, but that is my only available time." That did not mean, though, that I was not expected to always be punctual for our meetings or to have worked seriously on the projects at hand.)
To recap: as you present things, I would say "do not work with this professor", but not because of his low h-index; rather because you say you do not like him and because his PhD students seem not to land good positions (low after-sales value :) ).
You mention that US institutions are effectively out of the equation; fine. Have you thought of PhD programmes in Europe? Some small, not too famous but reputable universities in the EU can be stepping stones to a postdoc in the US (given you do excellent work during your PhD, obviously).
Upvotes: 5 <issue_comment>username_2: Having a low h-index doesn't mean that your professor is a poor scientist, in the same way that having a high h-index doesn't guarantee he/she is a good one. The primary reason is that the h-index is bounded from above by the total number of publications, so people who have entered the field recently have a lower h-index than those that have been working there for decades, simply because the former haven't had time to publish enough papers. Additionally, the h-index only cares about a minimum number of citations per publication; it doesn't take into account the total number of citations per publication or the importance of those citations. For example, if I publish two papers in *Science* and then retire from academia, my h-index will never be higher than 2, even if those two papers are completely revolutionary and get cited a kazillion times by the biggest guns in the field. In contrast, if I publish 20 papers reporting trivial and mundane results in the *North Dakota College Engineering Bulletin* that only get cited by a bunch of my colleagues in a seventh-rate journal, I can potentially get my h-index up to 20.
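For readers unfamiliar with the metric, the h-index is the largest h such that h of a person's papers have at least h citations each. Here is a minimal sketch of the computation; it is illustrative only, not any particular database's implementation.

```
def h_index(citations):
    """Return the largest h such that h papers have >= h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank  # this paper still has at least `rank` citations
        else:
            break
    return h

print(h_index([10_000, 9_000]))  # two revolutionary papers -> h = 2
print(h_index([25] * 20))        # twenty modestly cited papers -> h = 20
```

This makes the two scenarios above concrete: the number of publications caps the index, no matter how influential each paper is.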
A better way of deciding whether you want to work with this person is to spend an afternoon reading through some of his recent work, and then to ask yourself: *Does this person's work look interesting enough that I want to spend the next several years talking to him every day?* Or: *If I were already a professor, would I advise my own students to go get a PhD under this guy?* Or, if in doubt, ask these questions of your current mentors, who probably have a more informed opinion than you do.
Upvotes: 3 <issue_comment>username_3: I think the question of how good your advisor will be is of secondary importance here, because the real question is: will you do a PhD or not? PhD positions are not easy to come by (depending on the field of study). This may be your only chance.
The role of the advisor is of course important, especially when it comes to getting postdoc positions. You need to ask whether you are confident enough in your own abilities to write notable papers that will compensate for the shortcomings of the advisor. Have you discussed with him the projects that he will want you to work on? The biggest danger is that he will want you to do something that you are not inspired by. If you like the projects he proposes and feel confident that you can do well even if your advisor's help is limited, then you should go for it.
At least you will still be working part time so you have a backup. Why not give it a try and be prepared to drop out after one year if it does not look promising (but don't tell the prof that obviously).
One more thing: if you do go for it, try to have a more positive attitude. No advisor is perfect, but they are usually on your side.
Upvotes: 2 <issue_comment>username_4: I had a similar dilemma when I decided to pursue the PhD. The rank of the program is important but the intersection of your advisor's work and your interests is the critical factor. If your research is not related to that of your advisor, he will not be able to offer insights to guide you along. You can get in-depth guidance about literature searches, literature reviews, and selecting and arguing a thesis from many outstanding reference books. Furthermore, your advisor cannot cover the breadth and depth of these reference books in the few short meetings that will be allotted to you. What you need is concise, trenchant insight that is relevant to the research.
Upvotes: 2 <issue_comment>username_5: To address your last question:
>
> Should a PhD be done with a professor whose work does not receive many
> citations and who publishes rarely?
>
>
>
There are many considerations, as pointed out by other answers. The H-index is one measure that might help you understand, at a glance, things about a scholar, but given the seriousness of your situation (wanting to do a PhD) you should dig deeper. For instance, does the professor have a low H-index because he is new to the field (as mentioned by username_2)? Or is the H-index low because his area is highly specialized and quite small? These might be reasons to give this metric less weight in your decision.
If, on the other hand, his H-index is low because he does not publish often (e.g. he does not value publishing as a scholarly work), or because he publishes in venues with low impact, these might be good reasons for concern. Similarly, you note that others in his field who would be experts from the west would have an H-index > 50; if this professor isn't an expert in the field then I would consider that cause for legitimate concern too.
You asked a second question:
>
> This might be my only chance for a PhD and I am not sure what to do.
> What are my chances in the Western academic system after this PhD?
>
>
>
Your insight that his previous students don't tend to get good postdocs is something to consider, especially if other students in the university are able to secure quality postdoc positions. My personal experience has been that if you want to secure a position in the western academic system you need to do something there first (a degree, a postdoc, etc.), so making sure your PhD puts you on the path to achieving this sounds like it is important for your goals. I would encourage you to ask the adviser directly who he collaborates with and how you can get experience through the PhD working with scholars worldwide. Be explicit about your goals. If he decides he doesn't want to work with you because of this, then he probably isn't the right supervisor for you.
And a final note of advice, your social networks and institutional affiliations are more important when you are looking for that first academic job than your h-index. H-index is used more regularly for judging things like tenure, promotion, etc. In the western system, from my experience, you want people to know at a glance that you have credentials that are rigorous and prestigious. If doing a PhD with this professor won't put you on this track then you should seriously consider your other options. But dig deeper than the H-index to investigate this.
Upvotes: 1 <issue_comment>username_6: I have been working as a graduate advisor for many years and I am giving you suggestions based on that experience. I hope I don't sound overly critical; these are just a couple of things I usually tell incoming PhD students about their expectations of graduate school. This may be because I work mostly with undergraduates transitioning directly into graduate school, so I often have to play the antagonist in these discussions to challenge my students to think about their own plans for their own future. And to be realistic. So here goes:
Most importantly, I think that you should also consider the amount of effort you are willing to put into the PhD. Typically, PhD students are asked to commit full time to it, and though this varies with the discipline, my experience working with PhD students is that the more time they spend developing themselves as academics and masters of their field, the better they do professionally.
I am concerned that you do not like your mentor/professor much. Are there others on your committee (or academics you are considering for your committee) that you do prefer? It's not unusual to not see eye-to-eye with your mentor - what is unusual is wanting nothing more to do with him after you graduate except his connections. Typically, his word to his connections is what begins your immersion in his network - so you will have to keep that disdain in check, or work will not be fun and challenging (as it should be) and will instead become a chore, leaving you more frustrated - and isolated.
Finally, about rising in the ranks of academia. Being a part of the Western academic society is not the ultimate social status. Being a highly valued academic in your chosen field of work is.
Good luck!
Upvotes: 1 |
2014/03/01 | 1,107 | 4,748 <issue_start>username_0: In a year, I will be taking a difficult entrance exam at a renowned postgraduate school. There are many competitors, most of whom are smart. Despite that, I'm aiming to come first in the exam, and that's why I have started preparing now.
I have interviewed a few top candidates of the past years and already learned which books I should read and how much I should study every day.
I think that learning more about planning would help me a great deal. But those books on planning and goal-setting mostly focus on reaching financial goals. I have not managed to find any books befitting my situation.
Another thing is that, because the duration is quite long, I'm afraid that my enthusiasm may start to weaken a few months in, and that I will no longer be performing at my best.
So, here is my question: How can I make a one-year study plan for myself, and remain motivated in the long run?
Also, any suggestions about books, software, etc. that will help me reach my goal will be appreciated.<issue_comment>username_1: I would recommend that you not rely upon external entities for motivation. Rather than relying upon just some books and software, practice **"Self Motivation"**.
Each day, think about the **happiness and satisfaction** that you will achieve once you gain admission to the postgraduate school you are targeting.
Furthermore, browse through the **list of notable alumni** from the same school and set yourself the target of featuring on that list some day.
It will encourage you to put in 100% effort each day without losing enthusiasm.
Upvotes: 3 <issue_comment>username_2: Making a plan is essential for the exam, but even more important is executing it daily and always remembering your goal.
If you are capable of passing the exam, then you are also capable of making a strategy or plan yourself.
So believe in yourself, in hard work, smart work, systematic habits, the right mindset, your god, and positive thinking.
Never give up.
Always keep your goal in mind - goal, goal, goal - and so on.
Upvotes: -1 <issue_comment>username_3: I am going to take a different tack rather than suggesting you need to stay motivated and positive. You can be super motivated and positive, but still not reach this goal. Focus on principles of effective and efficient learning. Create a study plan and use study strategies that maximize every bit of your study time. Here are a few possible strategies to consider.
1) Establish a fixed study schedule that is realistic. An overly ambitious plan will likely lead to early failure. What is a realistic plan that you can reasonably adhere to for the course of a year?
2) Avoid binge-study periods. Breaking your study sessions into shorter but more frequent times is more effective than marathon / binge study sessions. You can take advantage of the 'recency' and 'primacy' effects in learning.
3) Be certain that you are monitoring your study sessions. Make sure you are giving yourself credit only for productive studying.
4) Measure / monitor your progress. This is important to ensure that you are moving forward in your study plan. You can do this by specifying measurable objectives, perhaps on a weekly basis. For example, "By Friday, I want to have accomplished ... "
5) Try to obtain practice tests that are similar in structure or content to the one you will be taking.
Upvotes: 2 <issue_comment>username_4: When I went through this, the thing that kept me most motivated was not doing it alone.
To get through this, I joined a group of 4 of my peers who were studying for the same tests. We met twice a week for several hours and planned out before each meeting which chapters we would discuss. When someone felt they had a particular understanding of the subject matter for the chapter, they would lead the discussion on that chapter.
There were several advantages to this approach.
1. I was motivated to attend because not doing so would affect the group
2. I was motivated to honestly read the chapter and not just skim it if I thought I knew the content already
3. I found that there were things that I thought I understood that I did not. Explaining your understanding of something to someone else is a great way to find all of the holes in your understanding of it
4. In our case, the exams were based on courses we had taken already so we all had notes and previous exams from those courses and were able to share those resources.
In our case, several of us had family. A structured time to meet and discuss provided us a way to work with our already overloaded schedules. We chose to meet on campus in the evenings as our department is open 24/7 to students by keycard. This allowed other students studying for the same thing to drop in and out of our meetings when they were interested in particular topics.
Upvotes: 2 |
2014/03/01 | 875 | 3,931 | <issue_start>username_0: While attending a course sometime back, I recall an instructor saying, "I need to warn (other instructor's name)...their course syllabus is available to anyone!"
I do not have access to an LMS for my classroom-based course, so I just post my course details to a regular Web hosting service. On the Web site, students, or anyone else, can easily locate:
* the syllabus
* exam study guide
* homework instructions
I cannot think of any reason why this would be a problem, but I recall the comment, so I wonder if there might be some issue I have overlooked. Is there any reason why any of this information should not be open to the public?<issue_comment>username_1: Basically, I would answer no! There are, however, several issues that may prevent people from publishing material openly. One is if it contains copyrighted material; another is if the material contains hints that can help students gain an unfair advantage. In your list, the only possible issue could be with the third item, if the homework instructions could in any way lead to an unfair advantage (not that I can think of how). That said, many publish homework questions, lecture materials, etc. on websites that can be found by a search. I have benefited from finding such materials when developing my own courses and I am very grateful for that. Returning to your three points, I think they provide a good basis for students to decide what they can expect from the course and hopefully will attract the right students to it.
Upvotes: 4 <issue_comment>username_2: As @PeterJansson explains, putting material online has many advantages for the students. But I believe that there are two small conditions for this.
First, that the author keeps control of the material. By this I mean that it is not just posted somewhere on the internet where the author cannot modify it. This is because material created and typeset by a single person has usually not gone through a publishing process and contains errors and typos that the author should be able to correct at any time. This is why I believe any uploaded material should always explain how to contact the author (at least an email address). We have all found notes on the web plagued with errors that the author either can't correct or doesn't even know exist.
Second, that the existence of the material, and the way to access the latest version of it, is explained to all the students attending the course. The problem is that we cannot control what happens with the files once we upload them, but at least we can tell students where to get the right version of them.
This prevents the only form of unfair advantage I can think of: some students having a more recent version of the notes, or some students not knowing that the notes exist.
Upvotes: 2 <issue_comment>username_3: Let me put it this way: if I know of a published book that explains the topic of the course very nicely, do I have an advantage compared to my fellow classmates? One could think that. Is it unfair? Not really - anyone could have done the same research as me and found the same book.
Some universities actually encourage professors to publish their class notes. It is a good way of gaining prestige, as other professors can base their course plans on yours, or students may find the notes useful. In both cases, it is very good publicity for the university, and very cheap.
Copyright issues are probably the only possible limitation, but they depend greatly on the subject: for modern English literature, you will need to comment on extracts of copyrighted books, and perhaps you want to avoid any legal fuss regarding whether you are under fair use or not; but in mathematics there is hardly any copyright in theorems.
Another reason not to have things public is if you are going to publish them as a book. But that is another ethics debate for another time.
Upvotes: 2 |
2014/03/01 | 971 | 3,803 | <issue_start>username_0: As a grad student I was, for the most part, shielded from issues like high-level bureaucracy, departmental duties and politics, and long-term career advancement. Obviously these things become more important when you're looking for a faculty job.
**Question:** As a faculty member in the U.S., what are the most tangible differences between being at a public vs. private university?
I.e., how does it affect your day-to-day life, or alternatively, key events like promotion, student recruiting, etc.? Obviously the question depends a great deal on the particular department and perhaps its ranking; I am interested mostly in departments "near the top" *[ed: of some fairly arbitrary ranking systems...]*, but broad answers are also useful.
Thanks!<issue_comment>username_1: In the US, the private universities apparently tend to pay more to their faculty:
<http://www.aaup.org/file/2012-13Economic-Status-Report.pdf>
<http://www.huffingtonpost.com/2013/04/08/faculty-pay-survey_n_3038924.html>
On the other hand, the private universities should, for obvious reasons, be less affected by the state budget cuts.
Upvotes: 1 <issue_comment>username_2: A few anecdotal observations. It *must* be said that these are trends, and exceptions to everything I say are plentiful.
* As username_1 said, private universities tend to pay a little better. Also, they seem to have less faculty turnover (perhaps for this reason).
* Public universities often have more BS committee work. For example, my university periodically mandates a lengthy process of "post-tenure review". Negative or positive reviews have no consequences, and therefore the process is a complete waste of time, but we have no choice.
* At least among top-notch research universities, public schools are usually large, and many private schools are small. Large schools typically have advantages (big seminars, lots of courses that can be offered to students, etc.) and disadvantages (lots of grading, people can feel lost in the crowd, etc.)
* A corollary to the above: public universities often influence their towns more, simply because there are more people working and studying at the university. For example UNC-Chapel Hill and Duke are excellent universities (public and private respectively) which are ten miles apart, and it is Chapel Hill that has the really outstanding K-12 school system.
* Especially at public universities, there might be many clubs which are largely student organizations, but in which faculty members and others also participate. Especially if you are not yet inclined to "settle down", this could be a huge positive.
* The campuses tend to be different. Many private universities are surrounded by lush greenery, whereas at public universities it is more common that you can walk across the street and get something good to eat.
* Sports culture is more prevalent at public universities (although it is also very big at many private universities). I am a bit of a curmudgeon, annoyed at the overwhelming football culture at my school. But those more laid back than me simply enjoy the games.
* Student attitudes tend to be different. At wealthy private universities many of the students will be more optimistic, and more ambitious goals and dreams seem to be more common. The downside (from what I have heard) is that entitlement and grade-grubbing are also more common.
* Public universities serve the public. My university enrolls a number of students from disadvantaged backgrounds, many of whom are the first in their families to attend college. I attended a graduation ceremony, and when many names were called there was a palpable and obvious joy on the part of large crowds in the audience supporting them. Witnessing this was a deeply moving experience.
Upvotes: 5 [selected_answer] |
2014/03/01 | 1,616 | 6,603 <issue_start>username_0: Imagine that I write a paper about a controversial topic like global warming denial, the link between vaccines and autism, or why different races have different IQs. After publication, the paper gets the attention of mass media, and as a countermeasure, serious experts start explaining why the paper is completely wrong.
Would that count as citations towards a higher h-index?<issue_comment>username_1: h-index counts citations regardless of the content of those citations, so citations by people criticizing the paper, disagreeing with it, or pointing out that it's nonsense do still count as citations.
(As a strategy for improving one's h-index, this seems like a bad plan for a number of reasons. As a concern about the meaning of the h-index, it's legitimate, though there's room to argue about whether this sort of situation is common enough to matter.)
Upvotes: 4 <issue_comment>username_2: *Yes, you can!*
**But the fact that it is possible by no means implies that it is ethical, practical, wise, or otherwise commendable.** I would be especially concerned about becoming known as the 'person with a kooky idea' rather than as a serious academic researcher.
The question becomes, "Is it possible to write on a very controversial topic, create a media firestorm, attract a lot of attention, increase your h-index, and still keep your credibility intact?" The answer will be highly variable, depending on the validity of your research, your previous reputation, and the sheer capriciousness of luck.
First, assuming that your work is valid, even if you have proven the viability of a very controversial position, your work is likely to attract some negative attention as well as attempts to disprove your research (or you-- *ad hominem* attacks are unfortunately common). However, if your work can and does stand up to scrutiny, all the brouhaha may actually work in your favor-- you have proven a controversial theory to be true, your h-index will increase and your credibility is not only intact but also bolstered by your success.
On the other hand, if your work does not stand up to scrutiny (which seems to be the scenario you are picturing), you will have made a public fool out of yourself and the slight increase in your h-index will be more than offset by the decrease in your credibility. Neither the counter-moves of serious researchers nor the attacks of fanatics are likely to help your academic career, especially if your work cannot stand up under scrutiny.
**So, write a really bad, but controversial, paper *only* if you are willing to sacrifice your credibility for the slight increase in your h-index.**
Upvotes: 4 <issue_comment>username_3: Not really: note that by [definition](http://en.wikipedia.org/wiki/H-index) of the h-index this paper can increase your h-index *at most by one* (moving from h to h+1 requires h+1 papers with at least h+1 citations each, and a single new paper supplies at most one of them), unless you are lucky enough to get citations of the type described by username_5 in the comment below ("X, despite making significant contributions to the subjects A [1-3] and B [4,5,7-13], has some unorthodox opinions on the subject C [6]").
However, attracting this kind of citation is very field-dependent, and I doubt that even if this strategy works out, be it with a single paper or in username_5's way, it would really pay off, especially given the losses in reputation.
Upvotes: 3 <issue_comment>username_4: No, you probably can't, because for it to gain a lot of attention it needs to be intriguing in some way. Simple rubbish isn't; there's plenty of that already, and the peer review process screens out most of it. You could try some huge publicity campaign, but if you're that good at publicity, maybe you're in the wrong field?
Finding just the right balance of plausibility, tension, incorrectness, and publicity is very hard. One indication that this may be the case is that the number of highly-cited bad/controversial papers is much smaller than the number of highly-cited good papers.
Just write a good paper. It's easier (not easy!) and more useful.
Upvotes: 2 <issue_comment>username_5: Most bad papers are ignored not cited for being wrong. You would have to get it into a good journal and get people to praise it etc. before other researchers will think it is worth critcizing it.
Upvotes: 1 <issue_comment>username_6: I just want to add that your examples are not likely to succeed. If you attract media attention, the debunking papers will be published in newspapers, not in peer reviewed journals, and it will not count towards bibliometric indexes. For it to work you would need to attract attention in your field, and therefore, fool experts.
This said, there are some exceptions, one "good" and one "bad":
* Very groundshaking papers, like some of the ones published by Nature. They explore a very new frontier, and are likely to make mistakes. They do get attention because the ideas are refreshing. Even if they are not correct, the mental process is useful.
* In some multidisciplinary fields, most experts tend to be in one of the sides. For example, some branches of biomedical research are dominated by doctors and biologists, but there are not many physicists or statisticians. In these fields, people may incur in mistakes outside of their area of expertise (for example, a doctor may not have understood the electronics involved in his machine, and why his results are flawed). In this case, one could write a honest paper that has a fundamental mistake, and the reviewers are from the same area of expertise as the author and don't catch it; or someone could take advantage of his rare expertise and introduce wrong procedures on purpose. The first case is not unheard of, the second I haven't seen.
Upvotes: 0 <issue_comment>username_7: In my experience, most papers of this nature tend to be written by senior academics near the end of their careers, and so it has little impact on their h-indices as they already have a sufficient number of papers with more citations than the controversial paper is ever likely to attract. Less senior academics at the start of their careers (where it might have an impact on their h-index) tend to be more circumspect and careful (as their lack of experience tends to make them more self-sceptical). As a scientist, self-scepticism is a vitally important quality to be carefully nurtured.
Upvotes: 1 <issue_comment>username_8: This is a clever idea, never thought of it.
You are famous for the wrong reason. Of course, what's being worse than being talked about is NOT being talked about. Unlike Hollywood, publicity does not equal fame, nor does it generate grants.
Upvotes: -1 |
2014/03/01 | 539 | 2,365 <issue_start>username_0: I bet most of the users here have had one of the following bad experiences: your idea and someone else's idea happen to be very similar, manuscript topics get scooped, etc. Among these bad experiences, the worst is perhaps finding out that a very similar paper was already published after the experimentation, simulation, writing, or even submission was done. This is very time-consuming and stressful.
I realize that this question is very general and field-dependent. However, I would love to learn from your experience on **how to efficiently look for related works in one's own field.** For instance:
1. Where (websites, publishers, pre-prints... )
2. When should we find or update these related works, and
3. What kind of tricks have you used to perform such searches efficiently?
To help you orient yourself, my background is electrical and computer engineering, communication, and image/video processing.<issue_comment>username_1: 1. Scholar.Google.com, IEEE, and ACM
2. After every major conference in your field.
3. See 1: find a paper, then follow its references and the papers that cite it. Repeat (a minimal automation sketch follows below).
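As a side note, part of step 1 can be automated for preprints. The following is a minimal sketch using the public arXiv API; the search phrase and result count are illustrative assumptions, not recommendations:

```python
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

# Build a query against the public arXiv API; the search phrase below
# is a placeholder - substitute the keywords of your own subfield.
params = urllib.parse.urlencode({
    "search_query": 'all:"image denoising"',
    "start": 0,
    "max_results": 10,
    "sortBy": "submittedDate",
    "sortOrder": "descending",
})
url = f"http://export.arxiv.org/api/query?{params}"

# The API returns an Atom feed; parse out the title and link of each entry.
with urllib.request.urlopen(url) as response:
    feed = ET.parse(response)

ns = {"atom": "http://www.w3.org/2005/Atom"}
for entry in feed.getroot().findall("atom:entry", ns):
    title = " ".join(entry.find("atom:title", ns).text.split())
    link = entry.find("atom:id", ns).text.strip()
    print(f"{title}\n  {link}")
```

Running something like this periodically gives a quick view of the newest preprints matching your keywords.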
Upvotes: 4 [selected_answer]<issue_comment>username_2: In addition to the sources listed in username_1's answer, try also other databases like SCI (<http://www.webofknowledge.com>) and Scopus (<http://scopus.com>) to get a more complete picture.
Upvotes: 2 <issue_comment>username_3: In addition to searching the published literature (as the other answers suggest), if your subfield is a very active one you'll also want to know about **almost-published** literature. There are two ways to do this:
1. Identify the important conferences in your field, and look through the list of accepted papers as soon as it becomes available.
2. By following username_1's suggestions you will notice that some names come up especially often as authors of related work. These are researchers doing work similar to yours, and therefore there is a good chance that their next paper will also be related. Find their homepages and monitor their publication lists on a regular basis. Many researchers list their "to appear" papers, which may not yet be available from the publisher.
If you suspect significant overlap based on a paper title you see in the program of an upcoming conference or the author's homepage, you can request a preprint from the author.
Upvotes: 3 |
2014/03/02 | 1,507 | 6,468 | <issue_start>username_0: In July 2012 I submitted a bachelor's thesis on a machine learning topic. I designed an algorithm that I developed in Java.
Now I have found a publication (September 2012) by my supervisor with all the results of my thesis, including images, with only a thanks at the end for the "Java implementations", but all of the results and the designed algorithm were taken from my thesis. The supervisor has not added anything to what I had already written in my thesis.
In my view, she should have added my name as a co-author of the publication. Of course the supervisor helped me in the writing of the thesis, but since she only reused my work in the publication, I was expecting my name as a co-author, because she has not added anything new.
What recourse do I have?
**Edit1:**
One problem is that my thesis, about 140 pages, was written in Italian and the publication was in English. For this reason I suppose it's hard to write to the journal and show them my thesis. In addition, my thesis is not published online on any official channel.
The project and the thesis were done by me and one other student, with the constant help of the supervisor. But the publication lists the supervisor, the co-supervisor, and one more person (I suppose this person translated 140 Italian pages into 5-6 English pages) - so why not me?
In addition, I'm now in another city, at another university, and I no longer have any bridge to my supervisor.
If I have no more contact with the supervisor (only an email address), is there any way to write to the people who published this? And how can I prove that the content is that of my thesis?<issue_comment>username_1: If this paper was published in a journal, you can write to the editor and state your case (or better, if you can, have a senior colleague do that: (s)he would have more clout with the editor) and ask what options are available (retraction, publishing a comment stating your authorship, etc.). It would be helpful if your thesis was available online for all that time at some respectable (and easily accessible) place like the University online repository, and the thesis has a reliable date stamp preceding the date of submission of the paper.
**Caution**: if you follow the above advice, **be prepared to burn all bridges with the person who plagiarized your work**.
**EDIT:** Following a suggestion given in the comments below: you may wish to consider discussing the situation with the supervisor first and, if you can agree on that, writing to the journal together to request the correction (adding your name as an author) rather than pointing out the plagiarism case. However, be warned that the odds of reaching such a compromise with the supervisor are, in my opinion, rather slim, and you should still be prepared to burn all bridges with the supervisor if the conversation doesn't produce any reasonable outcome.
In any case, you would be extremely unlikely to obtain (supportive) letters of recommendation from this supervisor.
Upvotes: 2 <issue_comment>username_2: Unfortunately, your story does not seem implausible to me at all. Here in central Europe, some computer science departments seem to have a *very* lax mentality when it comes to acknowledging research contributions coming from undergrads or non-research master students. In some places, this thinking seems to be so ingrained that even otherwise honest and fair researchers do not even consider putting the name of undergrads on papers despite their work making up a significant part of the paper's research contribution (something that the same faculty would never do and, in fact, consider highly unethical, if the student was a PhD or a master student on a research track). I guess part of the problem is that around here, the majority of students heading for an industry career (which is almost everybody at many large universities) does not care one way or another, so nobody really complains about this practice (which, of course, does not make it ok).
*(the following is written under the assumption that what you wrote is actually correct - clearly, my advice is terrible if you vastly overstated your contributions)*
>
> If I have no more contact with the supervisor (only an email address), is there any way to write to the people who published this? And how can I prove that the content is that of my thesis?
>
>
>
You should **definitely** get in touch with your supervisor. Keep the mail friendly, but do make clear that you are not ok with how this went down. If (s)he is one of those that simply did not consider whether you should actually also be a co-author of this paper, there is a good chance that (s)he is in fact pretty embarrassed by the incident. Presumably, the first thing that the faculty will explain is that this "is just the way it works around here". Don't accept this excuse (even though it might be factually true). Be aware that you are in the right here, and that you raising your valid concerns to the conference organisers will **at least** be **really** embarrassing for the faculty (and, as stated on this website once, reputation is the currency of science), so you do have some leverage.
Essentially, I think the onus is on the faculty to come up with a solution here. It is not as though **you** need to think of a way this can be resolved. *Maybe* your supervisor will think of a solution for your issue that is acceptable to you. As a last resort, you can contact the organisers of the conference, as stated by username_1, and give them the information that you also gave us above (now deleted). Be aware that you will likely gain little by this move, though - presumably, either nothing will happen or the paper will be removed from the proceedings. In any case, the reputation of the authors will likely be tarnished quite a bit by this incident, and, as stated in another answer, you will have certainly burned all bridges with this group of people.
Upvotes: 5 [selected_answer]<issue_comment>username_3: The primary question here is what you would like to see happen.
It sounds like you are angry that your work was plagiarized, but are scared that it will impact your current work.
Typically universities have an ethics group or commission. Look into the one at your former supervisor's university. These groups often keep their work confidential and use mediation to resolve such issues. That would be a good place to start to learn the options available to you.
Good luck!
Upvotes: 3 |
2014/03/02 | 851 | 3,535 | <issue_start>username_0: Your undergraduate CGPA is quite low, but you were somehow accepted into a respected Masters and PhD program, and completed your postgraduate work quite successfully. Your undergraduate major was the same general field as your postgraduate research.
You are now beginning your academic job search. How would the low CGPA affect your chances of getting a job at a top-tier academic institution in the US or UK?
If it would hurt them, what else can you do in the meantime to counteract the negative effects of that low CGPA?
(btw, I am asking this for a friend, not for me as I am not in academia)<issue_comment>username_1: For applications for faculty positions people would usually state only their undergrad degrees with date, subject and university. For postdoc applications, one could include more detail, but I don't think anyone would get suspicious if no grades are listed.
Thus, having low grades from your undergrad studies would have no direct impact on job chances after the PhD, because people simply wouldn't know about them.
Upvotes: 3 <issue_comment>username_2: As you have already done a **Master's** and a **PhD**, your undergraduate CGPA is the least of your worries. If you are in academia, the quality of your research matters more than the grades you obtained long ago in your undergraduate studies.
As your credentials grow, your resume will be filled with far more valuable content than just grades.
Upvotes: 4 <issue_comment>username_3: What you have done since your undergraduate days is far more significant than how focused/motivated you were at that time. Everyone understands that students are still figuring out their priorities, and that adolescents are insane by definition.
If you're worried about it anyway, you may want to have an answer ready in case someone asks you about them. Mine would be a combination of:
-- I was spending too much time on student activities, mostly on volunteer projects though I admit D&D ate a great deal of my spare time as a freshman.
-- I was still figuring out what I wanted my actual career path to be. (In fact, my degree says EE but I've wound up returning to CS ... my grades would have been better if I'd stuck with my first love, but I felt I needed to balance my knowledge of software with more hardware insight.)
-- I was more concerned with learning the material than with proving I had learned the material. As a result, I tended to work hardest on homework in the classes where I was struggling, and sometimes blew off homework in classes where I felt I didn't need the practice. If you could look at my records in greater detail, you'd see a fair number of courses where my final grade was a B because my homework grade was C but I blew away the final. Obviously, I've gotten smarter about time management since then.
Note that every one of those, while true and admitting a failure, also acts as an opportunity to discuss what I learned from that failure, what *strengths* it demonstrates to offset the failure, and why I'm a good candidate now. Use it as an opportunity for storytelling and marketing; make lemonade out of the lemons.
Upvotes: 2 <issue_comment>username_4: My grandfather was famously (within the family) asked just this question (concerning poor undergraduate marks) during his interview for an M.Sc. program. His answer was roughly "As an undergraduate, I enjoyed being an undergraduate; now I am ready to concentrate on my studies." He was accepted eagerly, and the subject never came up again.
Upvotes: 1 |
2014/03/02 | 1,203 | 5,100 | <issue_start>username_0: Every year we organize a competitive international call for PhD students (in the area of biology). What measurable criteria should we use to predict their academic success and award them research fellowships?
I realize that part of the question is ill-defined because it is not clear how to define “success” for a PhD student. But since I imagine that many of us have this problem and have potentially thought of a solution, I would love to read your thoughts on this question.<issue_comment>username_1: The major quantifiable predictor of success in research is... success in research. People who have done research successfully in the past are more likely than not to continue to do so.
For students that have not done research in the past, the best predictor I have seen for success (whatever that may be) is expressed in a quote from *The Unwritten Rules of PhD Research*, by Gordon Rugg and Marian Petre:
>
> *A willingness to learn for themselves and good judgement about when to stop and ask for feedback.*\*
>
>
>
It's not exactly quantifiable, but you can get a sense for it in an interview.
(Of course, this quality can be learned, so a lack of it doesn't necessarily predict an *inability* to succeed at research.)
\* *I took this quote completely out of context; the authors there are actually discussing the role of the PhD advisor, and they mention this quality in reference to a student "who can be pretty much left to get on with it, with supervisory meetings being something that both parties enjoy, and where each party learns from the other."*
Upvotes: 5 [selected_answer]<issue_comment>username_2: **Short answer:** You should not be looking at what people do or who they are, but at what their record looks like.
That is, look at their qualifications, especially the consistency between them: it's not just about the average, but about the average combined with a small standard deviation. The greater the deviation, the less predictable they will be.
On a side note: you will probably reject geniuses.
**Long answer:**
"Success" is now defined as "publishing", it's pretty clear how to define it since people that have to define quality and metrics are focusing on this.
This may be wrong - some people say it's wrong to use P-values to test your hypotheses, some people say it's wrong to use h-indexes to measure the quality of researchers - but it's certainly becoming more and more common. These values provide a warm feeling of objectivity, and it's very hard to resist that. IMHO, they are here to stay, and without any doubt, they are here.
Having clarified that, I have a personal hypothesis (not verified at all, sorry) that people who get good grades are better at publishing. The reason is that the marking of an exam and the reviewing of a paper can be considered similar processes.
In my experience, in both cases it is not about how much you know or how much you can do; on the contrary, it is about:
* *Conformity*. Using the same language and terminology, and not producing something shocking or hard to understand that would make you fail the exam or get the paper rejected.
* *Writing skills*. Trying to predict (consciously or not) how the person reading your paper/exam is going to interpret it, avoiding misinterpretations, and showing self-confidence, clear ideas, clear structure, etc.
* *Concision*. This is more than a writing skill. Time is limited in exams, and pages are usually limited in papers (and the fewer pages per paper, the more papers, which is also good, in principle). But it's even more than that, because most of the time it's not about how much you know or how much you have done, but about avoiding mistakes. An exam that answers perfectly half of the questions (and only that) will look better than one that answers perfectly 90% of them but then makes really stupid mistakes in the other 10%. A paper with a small contribution may get accepted (depending on how small it truly is), but a paper with an important contribution and then an important mistake will get rejected (even if the mistake exists only in the mind of the reviewer, because the terminology used does not conform to what is usual and this confuses reviewers).
So it's really about conforming to the state of the art and moving further step by step, with small steps, [baby steps](http://www.cad-comic.com/cad/20130114), avoiding mistakes. How can you know whether someone can do this? Look at their grades, and especially at the deviation of their grades from the average: it's not just about high grades, but about consistent grades. It's not about how much they know, but about how often they mess up, because if they mess up often, chances are they will do it at least once per paper, getting them all rejected.
This was the case for me: from time to time I'd do something really *"brilliant"* in an exam, the teacher would not understand it, and I would get a mark of 0 on that exercise. I have never cared about marks, only about learning, but now publishing is just like any other mark. *Success* is just like any other mark.
Upvotes: 1 |
2014/03/02 | 839 | 3,452 | <issue_start>username_0: I have several math research articles on my site.
Some of my articles are published in open access journals.
Some of my articles are currently available only from my site.
I will probably publish something in a closed-access journal at some point.
The question: Should I put these kinds of articles on arXiv (and possibly replace the articles on my site with redirects to arXiv)? I think yes, because it would increase the visibility of my articles, as many people search on arXiv and/or receive arXiv mailing lists.
Is it ethical to put an already published article also on arXiv?<issue_comment>username_1: Well, you can put your published articles on arXiv just for visibility purposes, but you may run into problems with the copyright agreements of the journals and conferences in which your work is published. Essentially, even if the editors are too busy to track you down, it is still not ethical.
There is one way to get around this problem: publish on arXiv the draft versions of your articles, the ones that are a little bit different from the published ones. In that case you would not have any kind of ethical issue (you can also put them on your webpage, but always check the copyright forms).
Good luck!
Upvotes: 4 [selected_answer]<issue_comment>username_2: I suggest you first of all check
* the copyright transfers you signed
* the publisher's FAQ on rights you retain as author
Many publishers nowadays allow you to self-archive the version of the manuscript that passed the review. Some do not allow self-archiving on public repositories (but e.g. Elsevier makes an exception explicitly for arXiv). For a quick overview have a look at the [SHERPA/RoMEO site](http://www.sherpa.ac.uk/romeo/).
* and your local copyright legislation.
E.g. the [German UrhG](http://www.gesetze-im-internet.de/urhg/__38.html) now allows secondary publication (e.g. to arXiv) of your manuscript (including the version with exactly the content of the published paper) for journal contributions that were financed mainly by public grants.
Upvotes: 5 <issue_comment>username_3: Albeit not a mathematician, I'd like to add this: if your work was supported by a public agency and got accepted by a peer-reviewed journal, the agency may have a policy of making such manuscripts available to everybody.
See (as an example) the [public access policy](http://publicaccess.nih.gov/) of the U.S. National Institutes of Health:
>
> The NIH Public Access Policy ensures that the public has access to the published results of NIH funded research. It requires scientists to submit final peer-reviewed journal manuscripts that arise from NIH funds to the digital archive PubMed Central immediately upon acceptance for publication. To help advance science and improve human health, the Policy requires that these papers are accessible to the public on PubMed Central no later than 12 months after publication.
>
>
>
Depending on the journal listed in [PMC](http://www.ncbi.nlm.nih.gov/pmc/), some articles are "Free Access" immediately (if published in journals like the *European Journal of Histochemistry*), after a delay of six months (like *Organogenesis*), or after twelve (like *Optics Express*), for example.
While not a mathematician, I'm glad to see NIH *does* fund work in mathematics, too (according to their publication database). Of course, NIH's public access policy applies to the individual articles published, not to the journals listed as a whole.
Upvotes: 1 |
2014/03/03 | 821 | 3,349 | <issue_start>username_0: I have a sentence with two concepts and two quotations from two different authors. It goes like this
>
> Something is true because of **concept one**, that is "*quotation one*",
> and **concept two**, that is "*quotation two*".
>
>
>
**Concept one** and **two** are both from Author A, while *quotations* one and two are from Author B.
What would be an elegant way to cite both authors at the end of the sentence making sure:
* The reader will be able to attribute each concept/quotation to the right author,
* The reader will be able to understand which work (and which page) the quotation is from.<issue_comment>username_1: If this is supposed to be a **research paper**, one can cite both authors in a **chronological** manner.
In your case "concept one" is from Author A and "quotation one" is from Author B and those are repeating in the same order, so citing them as \cite{A, B} will work fine.
Upvotes: -1 <issue_comment>username_2: First off, citing papers is not about giving credit to first authors; it is about making literature traceable for readers. This is a key part of scientific writing: providing sources. The format for citations of course focuses on first authors, who may or may not be the main contributor (remember that author order varies between disciplines). A secondary aspect is the fact that many evaluations of academic status are based on authorship, and as such authors may not be credited as much as they should be. This is, however, not the reason why we reference the way we do. So, from this perspective, I do not see why you necessarily need to emphasize the name of someone other than the first or second author (I am now thinking of Harvard-style references, where two-author papers have both names listed in the in-text reference).
If there is a scientifically based reason for highlighting the originator, one could write
>
> Concepts One and Two (reference to B) were first developed by A [then I would argue some form of explanation of why this distinction is *scientifically* important should follow or be included]
>
>
>
or
>
> A originated concepts one and two (reference to B) [then I would argue some form of explanation of why this distinction is *scientifically* important should follow or be included]
>
>
>
Note that this would seemingly take away the importance of B, which in many reference systems would look strange and imply that something may not be right with the articles. I therefore think it is wise to clarify why you feel the work of A is such that it requires highlighting. Clearly, I cannot judge the case since all details are unavailable. As a side point, reviewers will likely pick up on any inconsistencies and ask for clarification in a case such as this, unless the reasons for the formatting are either clear from your writing or well known in the community.
Upvotes: 1 <issue_comment>username_3: If the reference is at the end of the sentence and you don't have any other clue in the sentence, then the only connection is the order of appearance in the citation, e.g. [A, B], which means that the first quotation is from A and the second from B.
I would suggest using citations in the sentence and not only at its end. E.g.
>
> From [A] the first is true because [..] and from [B] the second is [...]
>
>
>
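To make this concrete for the case in the question, here is a minimal sketch using natbib's author-year commands; the keys `authorA2001` and `authorB2005` and the page numbers are placeholders, not real references:

```latex
% Preamble: \usepackage{natbib}
Something is true because of concept one \citep{authorA2001},
that is ``quotation one'' \citep[p.~12]{authorB2005},
and concept two \citep{authorA2001},
that is ``quotation two'' \citep[p.~34]{authorB2005}.
```

An author-year style keeps the attribution readable at a glance, and the optional post-note carries the page number that the question asks for.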
Upvotes: -1 |
2014/03/02 | 851 | 3,376 <issue_start>username_0: The problem: I have a professor who agreed to take me on for a thesis when I'm ready, but another one also offered me one. The second one made the offer while the first professor was in the room, and I felt obliged to decline, even though I communicate much better with the second one.
Both of them know each other very well (they even have neighboring offices), and turning one of them down may complicate things; the worst case is that neither of them will give me a thesis because I will seem too selfish.
The question:
How can I choose the second one (if the offer is still on) without making a mess?<issue_comment>username_1: [I am assuming that you are in the American system where PhD students are admitted to and funded by the program as a whole rather than any particular advisor. I can't speak for the etiquette in other systems.]
I don't really see a problem here at all. At any time you can work with any faculty advisor who will have you. Switching advisors may seem awkward from the student perspective, but in fact it is very common. If you haven't even started working with one advisor, then no time has been invested in the advising relationship, and you can start working with someone else without any qualms whatsoever. (Even after you have started working with one advisor, you can still switch at any time, but if you've worked with one advisor for a while then it does start to feel a bit awkward. Sometimes one must do awkward things...)
If the two professors know each other well, then of course they will find out about it, yes, but it should not be embarrassing or problematic for them: it's just the way things work. If you are sure that you want to work with the second professor, talk to the second professor to make sure that this offer is still on the table. Then accept it and immediately tell the first professor that your plans have changed. No biggie.
Upvotes: 2 <issue_comment>username_2: I recently posted a [question](https://academia.stackexchange.com/questions/17478/is-it-ok-to-hire-another-labs-student) about being on the other side of this scenario - a professor in my university offered to take on a student who had already agreed to work with me (and had been working with me for the previous year already).
You'll notice from reading that question and its responses that while I was a bit annoyed with the professor, I wasn't upset with the student. In fact, I advised the student to choose the advisor he thought would be best for him, and gave him a good recommendation to the other professor in case that's what he chose.
That's because **the student is supposed to act in his/her own best interest** (while still being responsible and professional, of course). It's not being selfish, it's being smart.
The first professor shouldn't get upset with you for pursuing an opportunity that is better for you - if he/she does, then you *really* don't want to work with someone like that, anyways.
So, go talk to the second professor: "I was thinking some more about your offer to advise me on my thesis and I have reconsidered my original decision. Is the offer still available?"
If he/she says yes, accept the offer and go talk to the first professor: "I really appreciate your offer to advise me on my thesis, but I've decided to work with X instead. Thanks, again."
Upvotes: 5 [selected_answer] |
2014/03/03 | 4,897 | 21,237 | <issue_start>username_0: We have undertaken a small statistical study of M.S. students in our department, including their application information and their eventual performance in our program. The goal is to develop criteria for making admissions decisions for new applicants based on their likelihood of success in our program (as predicted by the performance of recent, similar students). I won't get into the details of the methodology.
One outcome of this study was that different attributes predict success for students with undergrad degrees from different countries. For example, for applicants to the program from schools in country X, but not schools in country Y, undergrad GPA is correlated with the students' GPA in our program; for schools in country Y, but not X, GRE scores are correlated with the student's GPA in our program. (I'm simplifying a lot here.) A "toy" example I just *completely made up* to illustrate is shown below:
*[toy figure: illustrative scatter plots of the two made-up patterns]*
(of course, for real applicants the criteria and the relationships are more complicated)
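Since the original figure is not reproduced here, the following minimal sketch (in Python, with purely invented toy numbers mirroring the made-up example) shows the kind of pattern described - each predictor correlating with program GPA in one group but not in the other:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100  # toy sample size, invented for illustration

# Synthetic applicants; all numbers are made up, as in the toy example.
ug_gpa = rng.uniform(2.0, 4.0, n)   # undergrad GPA
gre = rng.uniform(280.0, 340.0, n)  # GRE total score
noise = rng.normal(0.0, 0.2, n)

# Country X pattern: program GPA tracks undergrad GPA, not GRE.
prog_gpa_x = 0.8 * ug_gpa + 0.6 + noise
# Country Y pattern: program GPA tracks GRE, not undergrad GPA.
prog_gpa_y = 0.02 * (gre - 280.0) + 2.8 + noise

def corr(a, b):
    """Pearson correlation coefficient between two samples."""
    return float(np.corrcoef(a, b)[0, 1])

print("X: corr(UG GPA, program GPA) =", round(corr(ug_gpa, prog_gpa_x), 2))  # high
print("X: corr(GRE,    program GPA) =", round(corr(gre, prog_gpa_x), 2))     # near 0
print("Y: corr(UG GPA, program GPA) =", round(corr(ug_gpa, prog_gpa_y), 2))  # near 0
print("Y: corr(GRE,    program GPA) =", round(corr(gre, prog_gpa_y), 2))     # high
```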
There are many possible reasons for this: for example, we could think that the grading system is more consistent in X so undergrad GPA is a better predictor there, and in X students study "for the exam" so the GRE becomes virtually meaningless as a measure of knowledge. I could speculate, but I don't think it would be helpful. The bottom line is, we find that different factors predict student success among different populations.
Therefore, if we wanted to admit students based on their likelihood of achieving a certain GPA in our program, we would apply an undergrad GPA cutoff for students from X, and a GRE threshold for students from Y. (Again, this is vastly simplified from the criteria our study actually suggested.)
**Is it fair to apply different criteria to students with undergrad degrees from different countries in admissions decisions?**
Does the answer change if this would significantly skew the admissions decisions in favor of a particular country of undergrad study (because statistically, applicants to our program whose undergrad degrees are from X have done much better than those with undergrad degrees from Y)?
My concern is that we're effectively saying, "Students with undergrad degrees from Y with a GRE score < T will be rejected, but students with undergrad degrees from X with GRE < T may still be considered for admission (pending other criteria)."
On the other hand, if we ignore these statistics and reject students with undergrad degrees from X with low GRE scores, we are rejecting applicants even though we have no valid reason to believe that they won't do well in our program.
**To those who doubt the results of the study:**
* When I say the study is "small" I don't mean it isn't statistically significant - just that it was not designed to be generalizable beyond our applicant pool. (This is the same reason why I won't give too many details about the study - I don't want anybody to read it and try to generalize from our results.)
* As we know, the sample size is not the only factor that determines whether a given effect is significant. We found that the results are significant, given the sample size.
* The results also seem "sane" (which of course is subjective). It's not unexpected that undergraduate grading standards (and the standard-ness of grading standards) differ by country; or that different educational systems and cultures prepare students differently for standardized exams like the GRE and TOEFL; etc. The specifics of the results (i.e., which criteria are good predictors for which undergrad country) are consistent with what students who studied in those countries have told us about grading standards, student culture, and exam prep. So, we really have no reason to doubt them.<issue_comment>username_1: I'm going to advise against.
* You say your sample size is small. And honestly, unless you hired a professional statistician, I doubt the analysis was carried out properly or shows exactly what you think it shows. Good statistics are hard to do.
* Assuming my Google results are truthful, National Origin is a [Protected Class](http://en.wikipedia.org/wiki/Protected_class) in the US. You can't discriminate against people because of it. I assume this is an American institution, as you talk about the GRE, but something similar applies in other places. **EDIT: OP has since clarified a misunderstanding of mine - the judgement is based not on the student's country of origin, but the country of the school they attended, which makes me less certain this point still applies.**
* Your department is actually, seriously proposing to say to people "Sorry, while you have the same official qualifications as another candidate we accepted, we're rejecting you because of your country of origin"? I mean, really? Whoa. Just stop and play back how this kind of justification would sound in almost any other circumstance. Think about how the *press* will make it sound when they (inevitably) get wind of it.
**SUBSEQUENT EDIT**
So, this question interested me, and I've bounced it off a couple of other people in my lab. Their main concern seemed to be that grouping by country was just too coarse a measurement, as every country has its own mix of good and bad schools. The basic idea, though, didn't seem to elicit the same gut reaction as it did from me.
Moreover, I've googled and found at least two examples of courses that vary their requirements based on country of origin, so there is precedent:
<https://www.auckland.ac.nz/en/for/international-students/is-entry-requirements/is-minimum-overseas-entry-requirements.html>
<http://sydney.edu.au/business/futurestudents/postgraduate_study/pg_coursework_studies#app_req>
So maybe I've got this wrong, and should just be ignored. At any rate I'm no longer sure it's quite so clear cut.
Upvotes: 4 <issue_comment>username_2: >
> Does the answer change if this would significantly skew the admissions decisions in favor of a particular country of undergrad study (because statistically, applicants to our program whose undergrad degrees are from X have done much better than those with undergrad degrees from Y)?
>
>
> My concern is that we're effectively saying, "Students with undergrad degrees from Y with a GRE score < T will be rejected, but students with undergrad degrees from X with GRE < T may still be considered for admission (pending other criteria)."
>
>
>
To my ears this sounds analogous (with some exaggeration) to: "*we only want rich, white men from successful parents that could afford to send them to good schools throughout their childhoods.*"
I realise that this is not what you intend, but you have to realise that having different criteria based on the country of undergrad studies is unethical and biased, and I can't see how you could implement something like that without major headaches. You can, by all means, have different criteria for an *individual* (for instance by having an interview), but if you clump up an entire population based on some statistics, which you can't elaborate on for some reason, I call that discrimination.
Bottom line is, you cannot evaluate the chances of an *individual* being successful or not without giving that person a **fair** chance. If you deem a particular GPA/GRE score to be "good enough" for your graduate program, all applicants that satisfy those criteria should be considered good enough, regardless of where they come from. Any additional selection criterion could be justified only if it provides additional information, such as a TOEFL score for those who don't have English as a mother tongue.
One other option would be to calculate "success rate coefficients", something like a multiplicative factor for the GPA/GRE score of applicants from different undergrad institutions, which *could* in theory be a fair assessment, but is practically unfeasible considering the number of possible institutions involved.
Another alternative would be to devise a test for your institution, that you consider to be more fair than using GPA or GRE score. But even that, judging by your comments, is not an acceptable solution. Honestly it sounds a bit like you just want confirmation of some sort.
Upvotes: 1 <issue_comment>username_3: I don't think it has to be bad. The GRE is a measurement of how good you are at doing exams, and not necessarily the kind most relevant to your graduate work. Some universities and education systems are very good at making people excel at set, fixed-format exams; but they don't teach them how to think for themselves and be creative.
In my opinion, the GRE is a very biased measurement in itself, and it is not very well designed (just look at the ridiculous maths part). Also, it is based on a lot of fairly simple exercises, whereas some universities use a completely different kind of exam: a few long and complicated exercises. Students from such systems may lack practice working quickly through simple and repetitive tasks.
Upvotes: 3 <issue_comment>username_4: I think the underlying questions are (ought to be) first, whether it is fair to reject a student because you cannot accurately measure their potential to do well in the program; and second, if you can perform equally accurate measurements on two groups as long as you use different criteria, is it okay to do so?
Answering the second question first: I think the answer is "yes". The reason is made apparent by advocating the alternate view: should you take students who are likely to do poorly simply because you fail to apply a more sophisticated measurement? That seems pretty boneheaded, and not very fair to the best students. Leaving the analysis up to computers is sensible, as they are good at this sort of thing and impartial, but only if you can implement it well. Don't forget, though, that data on whether a score distinguishes students within your program does not tell you whether it helped you reject students that didn't make it in! So you should be skeptical, but open to the idea if that's where the data leads.
The first question could be rephrased as: "if I know I'm not getting a good sense of some students, can I just ignore them all?". And that guides my advice there: no, that is not fair. Find a way to do your job better--to get more information so that you can get a good sense of these students, or just accept that you would rather make mistakes in acceptance than find a way to do better on your predictions.
Putting these two together: if you have equal statistical power across groups when subdividing one population, then great! Use the information. But if you end up with one group better-measured than the other, you should only apply different criteria if you can show that students who are great and do all the right things in their less-measured context still have a shot at being admitted.
(Also, simple thresholds for individual scores, e.g. "We only take people with GREs above X" are rarely a smart way to run admissions unless those thresholds are set amazingly low. You should be thresholding an overall score to get the top N students, so you'd be applying different weights for GRE in one context vs. another.)
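To make that last point concrete, here is a minimal sketch in Python. The weight profiles, field names, and numbers are purely hypothetical assumptions for illustration, not values from any real admissions process:

```
# Sketch: rank applicants by a weighted composite score instead of hard
# per-criterion cutoffs. All weights and field values are hypothetical.

def composite_score(applicant, weights):
    """Weighted sum of (already normalized) criteria for one applicant."""
    return sum(w * applicant[k] for k, w in weights.items())

# Different (hypothetical) weight profiles per undergrad context:
WEIGHTS = {
    "X": {"gre": 0.2, "gpa": 0.5, "letters": 0.3},  # GRE weighted less here
    "Y": {"gre": 0.5, "gpa": 0.2, "letters": 0.3},  # GRE weighted more here
}

def admit_top_n(applicants, n):
    """Top-n applicants by composite score, ranked across both groups."""
    ranked = sorted(
        applicants,
        key=lambda a: composite_score(a, WEIGHTS[a["country"]]),
        reverse=True,
    )
    return ranked[:n]

applicants = [
    {"name": "A", "country": "X", "gre": 0.91, "gpa": 0.95, "letters": 0.8},
    {"name": "B", "country": "Y", "gre": 0.97, "gpa": 0.70, "letters": 0.9},
    {"name": "C", "country": "Y", "gre": 0.60, "gpa": 0.99, "letters": 0.7},
]
print([a["name"] for a in admit_top_n(applicants, 2)])  # -> ['A', 'B']
```

The point is that no single score acts as a hard gate: a weak GRE can be offset by strong grades and letters, with the trade-off calibrated separately for each context.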
Upvotes: 3 <issue_comment>username_5: I would argue that you are interpreting your data incorrectly and using the results in an unethical and discriminatory way. Your model is not identifying groups of individuals who should not be accepted, but rather groups of individuals that require additional support so that they can reach the full potential predicted by their past performance.
Consider the following example: For a variety of reasons that have nothing to do with ability or commitment, women have historically been less likely to become full professors in STEM fields than men. To reject female applicants based on a model that captures this historical effect is completely unethical. What the model would show is that female applicants need additional support to maximize their potential (e.g., flexible working hours and mentoring).
Upvotes: 2 <issue_comment>username_6: I was waiting and hoping that [@JeffE](https://academia.stackexchange.com/users/65/jeffe) would expand his comment, since I share the opinion he expressed there, and moreover I think he is far more qualified to give advice on the subject. (Quite possibly, if he decides to expand his comment to an answer that has better facts and arguments than what I am about to offer, I will delete my answer).
*DISCLAIMER: I do not know where you are, what the laws are like there, or whether your final decision can have any legal repercussions. I advise seeking legal counsel for any questions of that sort.*
I think **it is fine to have different criteria for students that obtained their undergrads in different countries** (and/or in different institutions), as long **as you can ensure the criteria are fair**.
Now, how to approach ensuring that the criteria are fair, I have no idea, but I definitely know that while in some countries the GPA is indicative of a person who has potential as a researcher, in other countries it is simply not. (I, for example, did my Masters in Croatia, and I know people with around a 3/5 GPA who made excellent PhD candidates, as well as people with a perfect 5/5 GPA who I do not think would be capable of much independent thinking, and who were happy to take jobs with strictly defined output and no research to speak of.)
I have two different examples of using "different" criteria in the admission process, but unfortunately both of them are for the wrong level of study:
* The first one is about undergrad admissions to my former Computer Science Faculty\* in Croatia. While the big majority of the candidates take a standardized test and are admitted based on that, a small number of students (maybe 1%, maybe even much less) are *invited*.
These invitations work as follows: the Faculty regularly does a continuous performance study of the previous invitees according to their High schools, as well as (I assume) of the overall performance of students from different High schools. Each year, a number of invites is extended to High schools, which can then "award" their students with those invites.
Based on the performance study and possibly High school size, some schools get a larger number of invites (up to 5, I think), while some get only 1 invite or even none.
And, actually, I think it's fair. E.g. the strong mathematical High schools will get the most invites (those are the schools "prepping" their students for technical studies, after all), strong general schools would get some, and weak schools would get none. And still, everything would be re-evaluated year after year. Additionally, nobody loses their fair chance to enroll: these "invited" students make up less than 1% of the enrolled students, while everybody can still take the standardized test.
* The second example is about the interview for a PhD position / pre-PhD internship.
Recently, a permanent professor from my lab started looking for a person to hire for an internship, with a strong chance of offering them a PhD upon the completion of the internship. I mentioned that my ex-supervisor used to supervise people with a profile similar to what was required, and that I could ask him whether he had somebody interested (and good enough) to apply.
When he received the preliminary application documentation, he asked me to comment on the profile because he was not familiar with the Croatian University system at all. I said that good grades were *usually* an indication of a good student, but that bad grades do not have to be an indication of a *bad* student.
There were also some other points in the CV adding value to the application, not directly obvious to somebody non-Croatian. After talking to an ex post-doc of his (who had worked a bit with the applicant), the professor decided to interview the student.
A few days later, he told me that based on the interview, he had offered the applicant a position. He also told me that *he would most probably not have interviewed a person with that profile*, if not for what I and his Croatian ex-post-doc had said about the "interpretation" of the profile.
I agree with both these cases. The ("absolute") criteria are indeed not the same for everybody. But still, in both these cases, the goal was to be fair, to base the decision on the applicant's abilities, and finally **to judge the applicants' abilities and potential "on the same scale" for all applicants**, just based on the different information that was available, which finally results in different "absolute" criteria.
\*In Croatia, the Universities are not unified entities, and all the administrative decisions are made at the level of the Faculty. Students in Croatia do not identify with the Uni; if you ask a Croatian studying "at home" where he is studying, he is going to provide the information about his Faculty.
---
**ADDITION**. I wanted to add that, despite what any statistical research might show, **if students from different countries take the *same, internationally standardized tests*, then I think applying different criteria to them would be wrong.**
Maybe, for country X, those tests are not a good indicator because students get prepped exactly for those types of exams, but there just might be a few students who learned the material in the "proper way" (as do the majority of students from country Y), who might get rejected for no real reason.
So, bottom line, I think
* **studying, and then declaring your own criteria to interpret *different* (non-standardized) national grading schemes is fine** (*even* if they use the same scale "on the outside" -- e.g. two countries which both have a national standard grading schemes on a scale 1 to 5)
* but, **if a criterion is based on an internationally standardized exam, that same criterion should be used with everybody where applicable** (e.g. everybody who took that standardized exam).
As a compromise suggestion, I think it would be fine to say that you accept all students from everywhere if their score is *extremely high* (e.g. >95%), and all the students with *medium-to-high* scores (e.g. between 75% and 95%) will be considered based on additional criteria (where you can introduce personal interviews, a standardized test from your institution, a personal research statement + recommendation letters)...
*Edit2:* ***This opinion was supported*** *when I talked to* ***a person with a background in law*** *(not in academia though)*:
The final verdict was: interpreting criteria that are initially different (e.g. GPA) differently is OK, but if something is standardized on an international level, it would only be OK to either take it into account equally for everybody, or not at all. (I purposefully use OK instead of "allowed" or "lawful" since I don't know the laws at your place.) With a note that, unfortunately, *what is lawful and what is just and fair is sadly not always the same.*
Upvotes: 4 <issue_comment>username_7: You asked an ethical question and got a lot of scientific, legal, and political answers.
Ethically, you should:
1. Do your best to admit people based on merit.
2. Be open about your policies.
3. Make a serious and competent effort to ensure that your statistical methods are valid logically and empirically, and that they can withstand scrutiny from people who are experts in the field.
4. Carefully consider the historical legacy of racism, colonialism, and nationalism, and work hard to make sure that you aren't inadvertently reinforcing this legacy.
Your behavior so far is probably far more ethical than that of most people in your position, since most such people are probably secretive about their practices (you publicly asked for advice) and probably apply various heuristics without carefully considering whether those heuristics could withstand professional scrutiny.
You might want to expand this small, informal, unpublished, nonprofessional study into something more serious and systematic, done by people who have expertise in psychometrics and the (very difficult) methodological issues of the social sciences.
You are unfortunately operating within real-world constraints imposed by (1) the existence of countries where GRE scores are fraudulent, and (2) the existence of countries with such poor undergraduate education that undergraduate GPAs don't mean much.
Upvotes: 5 [selected_answer]<issue_comment>username_8: I would "massage" this problem by including a dummy statistical variable for the country. E.g. 0 for the country where grades are more important, and 1 for the country where GREs are more important. You could almost separate the applications into two piles this way, and tackle one pile at a time.
If asked about it, I would answer that neither grades nor GREs alone are dispositive, and that they both have greater explanatory power when you introduce the third (country) variable into the equation.
As far as I'm concerned, it is ethical to use any combination of statistical variables that satisfactorily explain performance, and to strive for "best fit" (statistically).
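As a sketch of what such a fit could look like (assuming the pandas and statsmodels libraries; the file name, column names, and the outcome variable are made up for illustration):

```
# Sketch: regress past-student outcomes on GPA and GRE with a country dummy
# interacting with each predictor, giving country-specific slopes.
# All file and column names here are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("past_students.csv")  # columns: outcome, gpa, gre, country
df["country"] = df["country"].astype("category")

# In the formula syntax, 'gpa * country' expands to
# gpa + country + gpa:country, i.e. a separate GPA slope per country.
model = smf.ols("outcome ~ gpa * country + gre * country", data=df).fit()
print(model.summary())
```

If the interaction terms come out significant, that is evidence that GPA and GRE carry different explanatory weight for the two countries, which is exactly the "best fit" the answer above argues for.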
Upvotes: 0 |
2014/03/03 | 2,233 | 9,685 <issue_start>username_0: This question is about maintaining email records when changing institutions. I'm writing it here because it seems to be a problem quite specific to academia: managing long-term relationships across the series of short-term posts that seems to typify post-doc life.
Now that I'm in a new post-doc position, my student email has about 6 months left to run before everything is deleted when the account is closed; it probably contains several thousand carefully filed emails (academic collaborations and correspondence, supervision and PhD project, university milestones, passwords, agreements...). Probably only about 30-40 of them will be crucial to me over the next couple of years, but it's difficult to know which ones. I'd like to keep a fairly complete paper trail. I am also preparing to encounter this problem again when my post-doc post finishes in 18 months' time.
A colleague who was in a similar fix ended up mailing hundreds of emails to her new account, which is clearly less than ideal. Another colleague just prints everything and stores it in paper files. If I wasn't in academia I would switch to a more permanent email address, but it is essential to use the proper address for work correspondence, so this doesn't appear to be an option, unless anyone has any ideas.
(Technical bit: The previous system used Outlook and Exchange on the web; the current system uses Outlook and outlook.com; the next system could use anything. To complicate things, Outlook has been auto-archiving files, which means there are multiple .pst files, so this option looks a little nightmarish. I'm a mere social scientist, so I am eager to avoid a very technical solution.)
Has anybody found a reasonable system for handling this sort of problem, whether manual or automated, other than finding a permanent academic post?<issue_comment>username_1: You can use a permanent account while still having the emails sent to your official address. I do this by getting gmail to check my university email accounts for me (Settings > Accounts > Check email from other accounts). I am then able to reply to those emails directly in gmail, while having them appear to come from my university address (there is an option to always reply from the same address to which the email was sent).
I'm not sure whether this solution can be any use for the emails you have already received (you may be able to use it, if you can for example mark all your emails as unread, so that gmail maybe sees them as new), but it might be worth starting now to reduce future hassle.
Upvotes: 4 <issue_comment>username_2: I've had similar problems in the past. I think for the future, @TaraB's solution is the best one, given that gmail's reply-to features and identity management are quite good, and they also permit archiving of email.
But for the past data, if you're technically savvy, it's not hard to write a small script that can ping your server, download all the email and store it locally. I in fact run such a script once a month to flush out my mailbox and organize emails in monthly folders.
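For example, a minimal version of such a script using Python's standard imaplib module might look like this (the host, credentials, and output path are placeholders you would replace with your own):

```
# Sketch: download every message in the inbox as a raw .eml file.
# Host, credentials, and paths are placeholders.
import imaplib
import os

HOST, USER, PASSWORD = "imap.example.edu", "lplatts", "app-password"
OUTDIR = os.path.expanduser("~/mail-archive")
os.makedirs(OUTDIR, exist_ok=True)

with imaplib.IMAP4_SSL(HOST) as imap:
    imap.login(USER, PASSWORD)
    imap.select("INBOX", readonly=True)  # readonly: don't alter the server copy
    _, data = imap.search(None, "ALL")
    for num in data[0].split():
        _, msg_data = imap.fetch(num, "(RFC822)")
        raw = msg_data[0][1]  # the full raw message, headers and all
        with open(os.path.join(OUTDIR, num.decode() + ".eml"), "wb") as f:
            f.write(raw)
```

The saved .eml files are plain text and can be opened or imported by most mail clients later, so the archive doesn't lock you into any one program.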
Upvotes: 0 <issue_comment>username_3: In Outlook, your best bet is to **export** all of your mail.
* Click *File*, *Open & Export*. Select *Import/Export*.
* Pick *Export to a file* and click *Next*.
* Pick *Outlook Data File (.pst)* and click *Next*.
You can export a particular folder, or select your account if you want to save everything. (Make sure *Include subfolders* is checked.)

Outlook will save your data into a single PST file you can keep and open in Outlook at any time. Everything is saved in the PST file; you need not have access to the original email account.
Upvotes: 2 <issue_comment>username_4: Disk space is cheap, so you might as well archive (almost) everything. You never know when you might need it later. But I think the technical details of how to do so are outside the scope of this site.
One consideration for academics, though, is that some of our email may have confidential information: student grades, or correspondence about disciplinary issues, or search committee business. Your institution may not want to you to keep a personal copy of such emails after you leave. If you keep them, and they later leak out, you could conceivably have legal problems.
Before archiving a personal copy of my emails, I search through them to purge information that should not leave the institution, or is otherwise too risky to save.
There's a similar issue if you want to keep your email in the cloud (gmail or similar); you and your institution both have to trust the provider to keep the data private. I know some institutions forbid users to forward their email to gmail for this reason.
Upvotes: 3 <issue_comment>username_5: There are two distinct problems: having a permanent email address so that others can reach you even if they don't have your current details, and keeping an archive of your past emails.
### Email address
I have an email from my alumni association that's pretty much guaranteed to be for life. If you don't have this chance, you can rent a domain name for your own use for around $10/year (if you aren't picky about the choice of [top-level domain](http://en.wikipedia.org/wiki/Top-level_domain)). It's impossible to predict what the Internet will be like in the 40 years or so that an academic career lasts, but it is plausible that such methods of contact will remain relevant and affordable. At this price, you get the opportunity to make `http://your-chosen-name.tld/` point to some website and `[email protected]` redirect emails to some email provider; hosting the actual website and storing emails is a distinct service. You would typically set a web redirection to `http://example.edu/~lplatts` and an email redirection to `<EMAIL>` and update them when you change institutions (or redirect to some other service of your choice).
Some institutions may insist that you write the email address they provide on papers that you publish while they're paying you. Journals often allow you to specify two email addresses, though this can be crowded if you're co-financed by several institutions already. If you don't have your permanent email address in your published papers, a web page that's easy to find by typing your name in Google (or whatever becomes the de facto standard search engine) is helpful.
### Email archive
I strongly recommend that you keep an archive of all your academic emails on a computer that you personally own. Keep your emails also on an online service to be able to access it anywhere, but don't leave your data at the mercy of an institution that you'll be leaving sooner or later or of a commercial service that could fold or become unusable (e.g. due to an unacceptable change in the terms of service) at any time. In other words, uploading all your emails on Gmail isn't enough.
Most academic institutions consider that academic emails are related to your academic career and therefore your property. On the contrary, most companies consider your emails to be company property and won't let you walk away with them. If you have some data that may be considered confidential to your institution, they may not like you to walk away with it. Make sure to check your institution's policies. If you're only allowed to retain some of your emails, classify them in separate folders and export only those.
I recommend making sure that all your email is available in [Thunderbird](http://en.wikipedia.org/wiki/Mozilla_Thunderbird). Thunderbird is the email client from the [Mozilla Foundation](https://en.wikipedia.org/wiki/Mozilla_Foundation) that also makes the Firefox web browser. Thunderbird runs on the major desktop operating systems (Linux, Windows, OSX) and has a decent GUI.
You can [import Outlook's `.pst` files into Thunderbird](http://kb.mozillazine.org/Import_.pst_files). This way, you won't depend on a proprietary tool, you can move your archives onto any machine that has Thunderbird installed. Do this regularly even if you keep using Outlook. On your last day at an institution, import your last emails into Thunderbird and burn your mailbox to a CD. When you arrive in a new institution, either use Thunderbird or export your old emails to whatever the standard format there is.
You can *additionally* upload your emails to a service such as Gmail (currently free and with a practically unlimited mailbox size). This has two benefits: you can access your emails from anywhere, and Google's search is better than anyone else's. Once you have your emails accessible in Thunderbird, you can upload them to Gmail by configuring Thunderbird to access that account and copying the emails to Gmail. Do this only if you're willing to trust Google's privacy (depending on your field, you may not be willing to allow a potential competitor to process your emails, for example if you're researching search algorithms).
Upvotes: 4 [selected_answer]<issue_comment>username_6: This is a quite different answer from all the others, but many email services are accessible via IMAP. Via [offlineimap](http://offlineimap.org/) one can synchronize between two IMAP servers (say, the old and new university), as well as between your computer and an IMAP server.
This allows for moving mails, keeping a local backup of all mails, as well as using this as a general-purpose email solution.
I synchronize between one IMAP server (my university's) and three different computers.
The disadvantage is that this requires some knowledge of unix-like tools, and reading the offlineimap manual.
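For reference, a minimal configuration sketch (a `~/.offlineimaprc`); the host, user, and local path are placeholders, and the offlineimap manual remains the authoritative source for the available options:

```
[general]
accounts = University

[Account University]
localrepository = LocalUni
remoterepository = RemoteUni

[Repository LocalUni]
type = Maildir
localfolders = ~/mail/university

[Repository RemoteUni]
type = IMAP
remotehost = imap.example.edu
remoteuser = lplatts
ssl = yes
```

Running `offlineimap` then mirrors the remote account into a local Maildir, which any Maildir-aware client (e.g. mutt or Thunderbird) can read.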
Upvotes: 0 |
2014/03/03 | 1,013 | 4,117 <issue_start>username_0: I wanted to get some directions on how to prepare for an MS Degree in Mathematics.
Background:
1. I'm interested in getting a Ph.D. in Statistical Learning or a related area in 5-6 years.
2. I took some courses in Mathematical Statistics and I struggled because I do not have recent coursework in Analysis, Measure theory, etc.
3. I studied electrical engineering with a very heavy mathematical component at a very decent University 20 years ago; however, it is amazing how much I've forgotten.
4. I've always been fascinated by mathematics and I'm very tempted to build a solid foundation before partaking in doctoral study.
5. I am working at the moment - my job is flexible and I'm saving to take off a year or two for the final years of my doctoral work.
6. I have a couple of graduate degrees in the area of Computer Science and Artificial Intelligence.
Plan:
1. I'd like to build up to where I was 20 years ago: calculus, linear algebra, diff equations, calculus of complex variables, frequency domain analysis.
2. I'd also like to take courses that are typically reserved for math majors like proofs, analysis, group theory, algebra, etc.
**I think the best way to accomplish the plan would be a decent community college or extension program like (UC Berkeley extension) that offers online classes -- any recommendation?**<issue_comment>username_1: I fear that not so many community colleges would offer the upper-division courses a math-major sort of person would want, especially to aim toward graduate school in mathematics. Further, you'd be needing letters of recommendation for grad school, and community colleges would not generate letters that would help you, since the letter writers (by far most often) would not be familiar with grad school from the side of mentoring and supervising grad students (even if they themselves did have a Ph.D.).
It is true that community colleges are usually much cheaper than "universities", but the coursework, the context for that coursework, the outlook of the faculty teaching the upper-division courses you need, and their letters on your behalf are all things that you cannot do without.
Upvotes: 4 [selected_answer]<issue_comment>username_2: Unless you are using the term "community college" differently from what I am used to in Canada and the US, then **no**.
If you mean a local state- or government-funded, **degree**-granting (4-year) university, then perhaps. Another possibility, if they exist anymore, is a one- or two-year *junior college*, which acts like a feeder or extension campus to a 4-year university.
Not the 9-month to 3-year **diploma**-granting colleges, which tend to be vocationally oriented (similar to, but of a somewhat lower academic standard than, polytechnic post-secondary schools in Europe).
---
You should be able to find introductory classes (years 1 and 2) via distance education throughout the world easily.
In the interests of cost and legitimacy, I recommend avoiding privately owned/run distance education programs including online universities. This does **not** mean government owned or run distance education like [Open University](http://www.open.ac.uk/) or [Athabasca University](http://www.athabascau.ca/), those are great and affordable.
At a university you should be able to register as a "non-degree program" (or similar) student, either part-time or full-time. This is commonly used by students who are not fresh from high school to prepare for entering a degree program in the future.
---
>
> I have a couple of *graduate degrees* in the area of Computer Science and Artificial Intelligence
>
>
>
(emphasis added)
This makes *no sense* when you previously said "I'm interested in getting a Ph.D in Statistical Learning", if you already have an M.Sc. or Ph.D. in Computer Science (I don't know off-hand of any place that grants *degrees* in AI).
Do you mean you have previously taken graduate-level *courses* in CS and AI?
If so, I would expect you to start your search at that university unless geographic reasons prevent it; at the least, speak to them as a starting point for recommendations.
Upvotes: 0 |
2014/03/04 | 714 | 3,127 <issue_start>username_0: For paper submission, I have recently spent some time struggling to find an appropriate classification. The main question is: who needs this information, and why?
At first I thought that these classifications could be used by the editor to find an appropriate handling editor or referees, but I have had two experiences contradicting this hypothesis. In the first one, I was only asked to provide the classification AFTER the review process. In the second, I was asked to suggest appropriate editors and referees myself.
In my experience, I do not look at those numbers, and that is true for other colleagues. The only exception is when I am trying to find appropriate classification and look at other related papers for inspiration. In such occasion, I sometimes found classifications which did not seem to match the content of the paper. Apparently people do not care much for this. Another aspect of the question is: how bad is it to have a bad classification?<issue_comment>username_1: From the ACM page "[How to use the Computing Classification System](http://www.acm.org/about/class/how-to-use)":
>
> An important aspect of preparing your paper for publication by ACM
> Press is to provide the proper indexing and retrieval information from
> the ACM Computing Classification System (CCS). This is beneficial to
> you because accurate categorization provides the reader with quick
> content reference, facilitating the search for related literature, as
> well as searches for your work in ACM’s Digital Library and on other
> online resources. It also ensures correct placement when a review
> appears in Computing Reviews.
>
>
>
There's [similar verbiage](http://www.ams.org/msc/msc2010.html) for the AMS MSC.
In other words, the institution needs it for their own databases and search mechanisms. The AMS probably needs it for something similar. So your use for the classification system depends on how much you expect people to search for your work using the classification structure provided by the institution. For math, I'd expect it to be used a lot; for CS, not so much.
Upvotes: 2 <issue_comment>username_2: This classification would normally be used by librarians to catalog books/journals/proceedings accordingly. There is more than ACM and AMS - there are at least 10-15 other frequently used systems, each meeting the needs of a specific customer, be it a library or a consortium of academic institutions. Almost any publisher, beyond ACM, would use one or several classification schemes.
In theory, the classification of proceedings should depend on the terms you provide for the article; in practice, this is not necessarily the case, as you say. There is a lot to be improved in classification, including semi-automated approaches for finding the right keywords for papers.
Upvotes: 2 <issue_comment>username_3: It's for classification purposes.
On the practical side, I've heard that editors use it for matching referees, or for seeing what you could referee.
I've never met any scientist searching by such classifications. (Except for scientometrics or similar purposes.)
Upvotes: 1 |
2014/03/04 | 902 | 3,921 | <issue_start>username_0: I am currently located in central Europe. When I was hunting for an assistant professor position some months ago, I was also planning to apply to some US institutions for Tenure Track positions. However, one senior professor with some experience working in the US told me pretty much straight-up that this will be a waste of time, as "US universities do not hire people from outside the US/Canada on Tenure Tracks". Relativizing somewhat, stated that "of course exceptions exist, especially if they personally know the applicant, but generally you will get onto the *reject* pile immediately as they don't know your school well enough.". I was counselled to apply for a postdoc at an US institution first, if I really wanted to get into an US school.
Looking over the CVs of some existing assistant professors in good schools the statement could be accurate (almost nobody with the job title Assistant Professor seems to come directly from outside the US - many *graduated* somewhere else, but the last position before was almost always an US institution).
**In your experience, is this sentiment correct? Does it even make sense to apply for a TT position in the US from outside (under the assumption that your CV is reasonable for a TT in the first place, of course)?** Computer Science is most relevant to me, but any information from any STEM fields would be interesting as well.<issue_comment>username_1: While the "I don't know your university" element can have an influence on whether your application gets summarily dismissed to the reject pile, it's less of an issue than one might think. We all travel to Europe and Asia for conferences now, and meet colleagues who come from different countries. I can probably name the top few universities in my field in many European countries, as well as personally know people in each of them.
But there's a more mundane logistical reason for potentially avoiding applicants from other countries: expenses. There's always a risk in getting someone to come for an on-site interview, but with a foreign candidate the expense and logistical work (visa processing, payment methods and so on) are more onerous. We always want to find strong candidates, but when there's a large pool of highly qualified candidates in the US, it can be convenient to focus on those that we expect have a chance of actually making it through the interview process and coming.
There's also the question of whether someone from Europe (as opposed to someone working there) really does want to come to the US, or is just casting a wide net. But that can be addressed in candidate statements or even conversations.
None of this means that foreign candidates are disqualified. Far from it. But it creates a moment of doubt.
Upvotes: 6 [selected_answer]<issue_comment>username_2: I'm going to speak from personal experience. I did my PhD in Japan, and then started doing a Postdoc in the US. My University in Japan is regarded in most rankings as a better institution than my current one in the US (not by much, but still).
I applied to roughly the same number of postdocs while in Japan as I have now that I'm changing gigs again. In Japan I got only 3 replies, while now I have gotten many more replies and many more requests for interviews.
So there's that. I'm also Mexican, with a valid travel visa (yes, they asked that), so as part of NAFTA we can get relatively easy working visas (we only need a letter of acceptance). Also, a trip from Mexico City to LA is cheaper/shorter than a trip from Washington to LA.
I think our experiences have many differences, but from my personal point of view, yes, it is easier to get positions if you are already in the US.
Also, to be blunt, few people in the US are going to take you seriously if you do not have at least a postdoc (in whatever institution). Even <NAME> (Nobel Prize Laureate) did a Postdoc.
Upvotes: 2 |
2014/03/04 | 1,159 | 5,073 <issue_start>username_0: My Background: I am an EE graduate, working in a Software Company for the last 2+ years as an Application Developer (Java & J2EE). Recently I started thinking of going back to college to do my Master's in CS, as most of the jobs in industry require a degree in CS.
Country of Residence: India
Countries of Interest for University Applications: US, UK, Australia and EU.
I have noticed that there are prerequisites to get admitted to those programs. Specifically, I need to have completed courses in Data Structures, Algorithms, Operating Systems, Theory of Computation, Compilers and Computer Networks. My undergraduate degree did have courses like Computer Networks, but nothing else.
In some other Universities, I can get admitted to an MS in CS, but then I need to complete those courses at the college before starting graduate coursework. I understand all of that material, as I have taken various Computer Science courses online and know what an undergraduate Computer Science curriculum covers. So here are my options:
1) Take a post-bacc course in computer science. (That costs a lot of money and time.)
2) Do Nothing and apply without requirements being completed and let them decide whether or not I should enroll in UG CS courses at college.
3) Do some online Computer Science courses on edX, Coursera and show them that.
Which one of these is most favourable for a person working in the software industry with an EE background?<issue_comment>username_1: Maybe you should add your country of residence to your question since this might affect the answer.
In Germany, only 1) would be accepted since all others are not from an accredited institution.
It might be possible to enroll in a program and do all the exams in one semester, but you should check this in advance.
Upvotes: 0 <issue_comment>username_2: I had two bachelor's degrees, in Music and Art, with minors in Marketing and CS. I then pursued a career in Computer Science. When I went into the CS Masters program 7 years after obtaining my last bachelor's degree, I had several deficiencies. I was accepted with two understandings: (1) I would complete the deficiency courses that are prerequisites to the Masters courses I was to take, and (2) my current career in CS demonstrated I had the aptitude to succeed despite entering the program with deficiencies.
I believe your best option is your #2, to enter the Masters program and complete the deficiencies in your coursework. When you write your letter of intent for application to the Masters program, I would describe these two aspects as they apply to your situation, personality, and drive to complete your degree.
Your other option for the US market is your #1: to complete the deficiencies at an accredited institution, then transfer the completed coursework with you when you apply. You must be careful with this option, to ensure that the university to which you will apply for a master's will accept the coursework from the other institution. Before taking the coursework, it is best to call the university to ensure the courses will transfer.
This other option is best if you are under the following circumstances:
1. Currently working, and can take the deficiencies at night through an online University, or through a local college offering.
2. You will save money/time from taking the deficiencies from the alternate institute.
Your option #3 will not work, as they require the coursework credit to come from an accredited institution. Internationally, they usually require a demonstrated bachelor's degree in CS. But post-bacc work needs to be completed as previously described.
Otherwise, you are best to just enroll in the University Master program and take the deficiencies there.
When I took the deficiencies for my masters program, I'd take deficiency courses one semester, and then the Masters courses that required the deficiency the next semester. In this way, except for the first semester, I was taking at least one Masters course every semester. Look at the University course catalog to see which courses have the least number of prerequisites that are your deficiencies, and then take those deficiencies first.
Upvotes: 3 [selected_answer]<issue_comment>username_3: In the US, it will depend on the school.
At my university they rely heavily on testing for foreign students, because the course on algorithms that is accepted as standard in your country may or may not be comparable to the algorithms course here. This is bad news if you pay a lot of money for a course that doesn't give you the preparation to pass the exam, but great news if you learned the material on your own (or through Coursera, for example) and were able to pass the test.
Upvotes: 1 <issue_comment>username_4: New York University offers a preparatory accelerated course in computer science specifically for people in your situation. Check the "What material is covered in the PAC Program?" section in the link below to see what they expect an incoming master's student to know:
<https://cs.nyu.edu/webapps/content/academic/PAC/faq>
Upvotes: 2 |
2014/03/04 | 1,057 | 4,388 <issue_start>username_0: Do U.S. college accreditation agencies forbid teachers with only a bachelor's degree to be on the faculty of an accredited U.S. college?<issue_comment>username_1: Supposedly yes, but mostly they ask for people with a master's degree at minimum (depending, of course, on their publications in the field in both cases). If not, they aim for PhD holders or post-docs.
Upvotes: 1 <issue_comment>username_2: It depends on what you mean. If you are asking a theoretical question about whether there are rules that forbid it, then the answer is that it is theoretically possible, at least at some universities. For example, [<NAME>](http://en.wikipedia.org/wiki/Andrew_M._Gleason) was a tenured professor of mathematics at Harvard from 1953 through his retirement in 1992, without ever having attended graduate school. (Technically, Harvard awarded him an honorary master's degree when he became a faculty member, but he had no master's degree when he was hired and never received a Ph.D.) The rules vary between universities, but I do not believe Harvard's have changed since Gleason was there. For another example, if you [invent the World Wide Web](http://en.wikipedia.org/wiki/Tim_Berners-Lee), you can become a professor with no master's degree.
On the other hand, it is impossible in practice. Unless you have received some sort of major academic recognition (a big prize, universities specifically soliciting an application from you despite knowing you have no master's degree, etc.), it's not even worth thinking about, since the chances of being hired are almost indistinguishable from zero. If you are aiming for an academic career, choosing not to go to graduate school means giving up on that career.
By the way, I'm assuming here that you are asking about fields in which there are very few famous practitioners without advanced degrees. I can imagine that in certain fields (perhaps art, business, or politics), there might be more people who would be attractive to universities despite having only a bachelor's degree. But even in those cases, it would require truly impressive achievements.
Upvotes: 4 [selected_answer]<issue_comment>username_3: Yes, it is possible (at least for part-time appointments). I am part-time faculty at the University of Washington, and not only do I have just a bachelor's degree, my degree isn't even in the same field as my faculty appointment. My real world skills and knowledge combined with my mentoring/leadership experience are all that were required.
However, it would be very rare for full-time faculty to not have at least a master's. UW policy requires the master's degree for full-time faculty and a PhD for professorships, and some departments are adopting rules requiring a PhD for any full-time position.
Upvotes: 3 <issue_comment>username_4: Extensive and creative work experience can trump higher ed degrees in cases where the person has something exceptional to bring to the faculty.
The idea behind attaining higher education degrees is that one becomes a specialist in a discipline, and that lends credence to teaching and research. Becoming a faculty member isn't instantaneous; there is a tenure-track process which has its own requirements.
There are several accreditation entities; you have to be more specific about which discipline and accreditation entity you are referring to.
Upvotes: 2 <issue_comment>username_5: There is a category known as "Professor of Practice", which some schools use to explicitly recognize folks whose teaching authority comes from experience and demonstrated skill in the field rather than from academic credits.
However, if you're talking about tenure track, most schools will ask even these folks to have (or quickly obtain) a "terminal degree" in their field. That may only be a Master's for some fields, but a Bachelor's generally won't cut it.
Upvotes: 2 <issue_comment>username_6: It is not forbidden. Accreditation is based on many factors and percentage of terminal degree holders is one of those statistics that is considered. Even the most prestigious colleges may have a number of MS/MA/MFA on their faculty. In rare instances, even people without degrees might be on a faculty. You may find these rare people on performing or creative arts faculties -- writers, actors, painters, filmmakers, etc. that have outstanding bodies of work or accomplishments.
Upvotes: 2 |
2014/03/04 | 253 | 992 <issue_start>username_0: I am an Indian citizen, applying for Fall-semester graduate studies in the USA.
Are international students like me eligible to apply for Federal aid (FAFSA)?<issue_comment>username_1: [No](http://studentaid.ed.gov/eligibility). You must be "a U.S. citizen or eligible noncitizen" (i.e., a permanent resident with a green card) to get federal student aid.
Upvotes: 4 <issue_comment>username_2: No, Federal aid is only for U.S. citizens.
Even when it comes to jobs, international students can apply only for college payroll positions, not work-study.
Federal aid and work-study are funded by the federal government (the country's central government), so these funds can be allocated to or redeemed only by students who are citizens. So if you are on an F-1 or J-1 visa and want a scholarship in the U.S., either your college or your department should sponsor you, or, if you are under a grant, the granting organisation would sponsor you.
Upvotes: -1 |
2014/03/04 | 2,538 | 10,955 | <issue_start>username_0: I applied for several postdoc positions recently. One made an offer which I accepted. Since then I've been invited for an interview for one of the other positions. Is there any benefit to attending? Is it an opportunity to make potentially useful contacts (in a relatively small field)? I should add that I have no experience with academic interviews (the position I secured was through contacts). Expenses will be paid.<issue_comment>username_1: Once you've accepted a job offer, you are supposed to inform other places that you've applied that you would like to withdraw from consideration.
If they still want to invite you over to give a talk knowing that you can not be considered for the position, go ahead. But, **you must tell them**.
To do otherwise would be a **serious breach of ethics**. You do *not* want to gain a reputation as someone who engages in unethical behavior (don't assume they won't find out).
Upvotes: 7 [selected_answer]<issue_comment>username_2: You **must inform them** that you've accepted a position.
If they *still* invite you for a talk, you should go because presenting to new people will give you more visibility, more feedback on your work, and if you make a good impression then a potential place of employment in the future.
They might still invite you for a talk because it is fun for them to attend good talks and learn about interesting papers. Even if you don't work for them, by spending a day there you might meet someone interesting and you might wind up co-authoring with someone there on something in the future. Only good things can come out of spending a day with other researchers who are excited about stuff similar to yours.
Upvotes: 3 <issue_comment>username_3: Let me emphasize a point made in other answers.
As soon as you accepted one offer, you should have withdrawn your application from the other employers **immediately**. If you had done so, this situation would not have arisen.
You should immediately write an email to the school offering this interview, saying you have accepted another offer, and apologizing for not letting them know sooner. They have a right to be a little annoyed with you; you've wasted some of their time, and if they'd known you were off the market, they could have moved on to pursue other candidates.
So you made a mistake; well, people are human and these things happen, but you should act quickly to put it right.
In a comment on another answer, you consider the possibility of not telling them, and attending the interview anyway for the experience (and the free trip). **Don't do that.** In academia, attending an interview for a job you know you won't take would be considered **extremely** unprofessional and possibly unethical. Interviewing a candidate is expensive, in terms of time, money, and opportunity cost (time they spend interviewing you is time they don't spend interviewing candidates who might actually come; the longer they take to get to those candidates, the greater the chance they will take another job first). Don't think they won't find out; academia is a small world, people talk, and you can easily make enemies doing something like this. Moreover, the people at your chosen institution can find out too (the other institution's colloquium schedule is probably public) and it won't make them think well of you.
In principle they could ask you to come anyway just to give the talk, but it's unlikely. They will probably want to use that time and money right now to speak with another candidate. It's quite possible they are still interested in hearing about your work, but they'd be more likely to invite you to visit sometime in the future, after job season.
Upvotes: 5 <issue_comment>username_4: I personally don't see the harm. However, if that interview went well and you were to be offered the job, accepting it would obviously be a crappy thing to do.
But I don't see any issue in going for an interview, even with no intentions of accepting the role. Experience in interviews is really important. And on the plus side, you don't have to go in completely nervous because you've already got a job, so it'd be easy for you to sit back and relax!
Upvotes: -1 <issue_comment>username_5: > But why not take advantage of those benefits by not telling them I've already accepted a position?
The fact that you are asking that question after seeing the answers here, and acknowledging that it is "unfair" (actually, it is dishonest), raises serious questions in my mind about your integrity and character. To be fair, you are possibly a young man who hasn't quite made up his mind whether he is also going to be an honest one.
If you feel that you have something to gain by dishonesty, you're in a great deal of company (see username_4's answer, for example). You will always find those who applaud your dishonesty as willingness to make "tough decisions", "take on the grey areas", "get the job done" and so on, who all the while find ways to manipulate you in dishonest ways for their own perceived gain, and to your perceived loss. If you wish to be that sort of person, then you may expect to draw persons of like mind into your circle of acquaintances.
You will also find that honest people do not respect and trust you; you will furthermore find that dishonest people pretend to respect and trust everyone and actually respect and trust no one.
We are in a phase of existence where we as a society are deciding whether to base our actions on "survival of the fittest" or "the whole is greater than the sum of its parts." To not decide is to decide for the former. Cast your lot and reap as you sow.
Upvotes: 3 <issue_comment>username_6: The second interview: Of course you can't tell ahead of time if they will make you an offer, or if their offer would be one you would accept, but the question is, if the second interview resulted in a job offer, would you ***consider*** accepting their offer?
Perhaps for you, the second place is the place you'd rather work. Maybe it's the place you've always dreamed of working. If this is the case, you should go to the second interview, without telling them about the other position you have accepted, and see what comes of it.
In regards to the "second place", there is nothing wrong with that. In light of their offer to interview you, you are reconsidering your acceptance of the previous job offer. It is not a "wasted effort" on their part.
In regards to the "first place": If the second place offers you a position, and you accept it, you will have to tell the people at the first place that you have reconsidered their offer and have decided to accept another position. I can tell you that this should not shake them up too badly. I'm sure it's happened to them before.
They are free to continue interviews in spite of making an offer to you that you have accepted, and many places often do. You may not work out for them, or they might find someone "better" that they would rather have working for them. Unless there is some sort of "contractual obligation" that you have not described, if you were to go and work for them, they could replace you, and you could move on to another position, at any time.
On the other hand, if you really want to work at the first position, or if you feel you have some "social obligation" to proceed with the position at the "first place" due to your relationship with your "contacts" there, and wouldn't consider any offer the second place might make, you should tell the second place you have accepted another offer before the second interview so they could decide if they want to proceed. And they might... If for some reason, they really want you, they might feel they could woo you away from the other position with a great offer.
Upvotes: -1 <issue_comment>username_7: For reasons everyone has mentioned, you must let the second place know that you've accepted another offer. In addition, once you have accepted that offer, you really must commit to it.
However, if this second place is truly amazing, you can inform them that (1) you've already accepted another offer and (2) would still be interested in giving a talk (as people have mentioned) and (3) would be **interested in taking this position if it were deferred for a year**, in the chance that they don't find a good candidate this year (this is rare for post docs but does happen on occasion, depending on the funding source). This assumes that your current job only has a 1 year commitment (many jobs these days are 1 year with a possible extra year of support). This is a huge long shot. They most likely will say no, unless they have no other qualified candidate. This is the only situation I could imagine it being ethical for you do do the interview (with their knowledge).
Upvotes: 2 <issue_comment>username_8: **Going to the second interview:**
* Gain: insight about the other position, experience, networking.
* Lose: time, energy, your reputation, especially if it is a small field.
**Declining the second interview:**
* Gain: energy, integrity, "you play nice".
* Lose: you don't know what you miss, experience, networking.
People usually recommend "not losing reputation", and therefore declining the second job interview after you have already accepted a job offer. This is what I would do, too.
However: **you only really have the job from the first day after the probation period.**
Until then, anything can happen. They can cancel your application before you sign the contract (happens many times) for many reasons: budget cuts, change in management, change in priorities, etc. Or they decide during the probation period that they don't want you (happens also).
Therefore, I recommend not burning all your bridges! There are different ways of saying no - how about doing it in a way that shows professionalism, and keeps some doors open for you for the future.
I recommend the following:
**Write a letter to the other employer saying that**
1. You have already accepted a job offer that fits your expertise.
2. However, you truly appreciate that they considered you for an interview, seriously like their organization, could have imagined working for them, and would like to be in touch.
3. If for whatever reason your current offer falls through, you will contact them again.
You can also offer to keep in touch for professional reasons, and/or offer to forward their job announcement within your network. If it's a small field where people with the right expertise are hard to find, they will appreciate it.
It happened to a friend of mine that, during the probation period, it turned out the position was not as advertised, and he quit. He re-contacted the company from the parallel interview he had declined. They were happy to call him in again: the original position was no longer available, but they offered him a similar one.
If you just simply say "No", they won't know that you would have been interested.
Upvotes: 2 |
2014/03/04 | 648 | 2,373 | <issue_start>username_0: Is there a free, authoritative, trustworthy online database where one can look up the accreditation of any college or university in the world?<issue_comment>username_1: In the US, the U.S. News college rankings will give you this information
<http://colleges.usnews.rankingsandreviews.com/best-colleges>
Other databases that you can search are
US Department of Education databases
<http://ope.ed.gov/accreditation/>
<http://ope.ed.gov/accreditation/search.aspx>
Upvotes: 2 <issue_comment>username_2: The Council for Higher Education Accreditation (CHEA) has a [Database of Institutions and Programs Accredited by Recognized United States Accrediting Organizations](http://www.chea.org/search/default.asp) listing schools accredited by US accrediting agencies; the list also includes schools in other countries that are accredited by these US organizations.
Upvotes: 1 <issue_comment>username_3: For information about **European** institutions, the best starting point for such an online search is possibly the [ENIC-NARIC network](http://www.enic-naric.net/) (European Network of National Information Centres on academic recognition and mobility) which lists the national accreditation organisations.
A list of accredited **Swiss** higher education institutions can be found at the [site of the Rectors' Conference of the Swiss Universities CRUS](http://www.crus.ch/information-programme/recognition-swiss-enic/recognised-or-accredited-swiss-higher-education-institutions.html?L=2).
A database of accredited study programmes in **Germany** can be found at the [site of the German Accreditation Council](http://www.hs-kompass2.de/kompass/xml/akkr/maske.html).
The [Accreditation Organisation of the Netherlands and Flanders NVAO](http://search.nvao.net/home&tab=programme) has a similar database of **Dutch and Flemish** study programmes and institutions.
Upvotes: 4 [selected_answer]<issue_comment>username_4: There is an international database called [UNIVCHECK](http://www.univcheck.org) that allows you to validate whether a university is accredited. It serves as both a whitelist and a blacklist.
Upvotes: 3 <issue_comment>username_5: The [World Higher Education Database](http://whed.net/home.php) contains around 18,500 institutions from 186 countries. It's maintained by the International Association of Universities, a UNESCO-affiliated NGO.
Upvotes: 1 |
2014/03/05 | 8,454 | 34,374 | <issue_start>username_0: A [question](https://academia.stackexchange.com/questions/17718/campus-dress-codes-for-a-student-in-university) about university dress codes reminded me of an incident that happened when I was an undergrad, in which a classmate came to school wearing a really offensive and **misogynistic** t-shirt.
I was *extremely* uncomfortable, especially since this was an engineering program and I was one of only three or four female students in a class of about fifty. I had another class with the same student later that day and he was still wearing the shirt. I remember wishing at the time that a faculty member or *someone* with more authority than me would do something about it.
So, my question is as follows:
* Should a professor intervene if a student in their class is wearing clothing that is likely to be offensive and hostile to other students? If so, how?
* If yes: are there any scenarios in which a professor should *not* do anything even though a student's clothes contains material that is hostile towards another student or group of students?
And finally,
* If I come across this scenario as a TA, in which a student (who may be a peer in my program of study) in my class is wearing something offensive, what can I do about it? I don't feel comfortable (or safe, for that matter) as a woman confronting a male student about an item of clothing that is offensive to women. On the other hand, I feel like it is my responsibility to keep a non-hostile environment in my classroom.
**Discriminatory harassment** is forbidden by the university's code of conduct and includes: placing written or graphic material which demeans or shows hostility or aversion toward an individual or group because of race, color, religion, gender, national origin, age or disability.
The item of clothing in question contained a slogan and image that is indubitably demeaning and hostile towards women.<issue_comment>username_1: A professor definitely has some shared responsibility for maintaining a harmonious atmosphere in the classroom. Given that the university has a code of conduct in place (as per the edit) it gives the professor some leeway to address the situation. But it might be difficult to do so without some initial prompting from the concerned students (because as a professor I can't claim to know what is likely to be offensive to students).
So to answer question 1, yes, if the issue is brought up or if it's otherwise clear that the T-shirt is disrupting class. As for question 2, it follows that if no one brings up the issue, the professor might not do anything.
If you're a TA, then there must be a professor. In that case, you should bring it up with them. Maybe they can "drop by" by accident when the student comes, and then they can deal with it without needing to imply that you're the one who brought up the issue.
Upvotes: 5 <issue_comment>username_2: If you feel that the clothes a student is wearing are fostering a less-than-nurturing atmosphere in your classroom, it is definitely in your best interest to put an end to it. When you're in a STEM field, it's concerning to see a misogynistic message on a shirt, and I think you should find a way to end it.
Depending on the content of the shirt, I would debate whether or not I brought it up in front of the rest of the class. If it had some relation to the course and performance, I would have a hard time not bringing it up front-and-center to the class in order to stop any type of stereotype threat that may pervade the course. My debate on whether to confront during class would be based on thinking about the mentality of the students that are being oppressed in this case and what they may think, whether it be "that student is not wearing that anymore" or "I cannot believe the instructor did not say anything about that shirt," the latter of which was your response in undergrad.
I would stray away from anything accusatory, or from telling the student what to do, and would instead focus on asking leading questions that bring out why the shirt is not appropriate. This approach depends on the personality of the instructor, and your mileage may vary.
A more neutral thing to bring up with the student would be professionalism in the classroom: have a discussion about college being preparation for a profession and/or higher scholarship, and then ask the student whether the classroom is really an appropriate venue for his wardrobe choice. Really, how to approach the student depends on your comfort level, the context of the whole situation, and the explicit wording of the Non-Discrimination and Anti-Harassment Policy at your university. A higher-up may be able to help with regard to that.
This isn't an answer to your question per se, but I think this raises a good mindset in order to answer this question for yourself.
Upvotes: 4 <issue_comment>username_3: As an instructor -- or a TA, or whoever is leading a formalized academic session -- you have not only the right but some responsibility to enforce at least minimal standards of acceptable behavior. Some behavior is borderline and you do want to look to the other people in the room to see whether it is bothering them. Some behavior really isn't, e.g. discriminatory harassment as mentioned above. In particular if a student wears a tee shirt bearing what is clear to you is a slur related to
* Race, ethnicity, skin color ([examples](https://en.wikipedia.org/wiki/List_of_ethnic_slurs) include the n-word)
* Sexuality ([general examples](https://en.wiktionary.org/wiki/Appendix:English_sexual_slurs))
+ Homophobia ([examples](https://en.wikipedia.org/wiki/Category:Homophobic_slurs) include the f-word)
+ Misogyny ([examples](https://en.wikipedia.org/wiki/Category:Misogynistic_slurs) include the c-word)
* Religion ([examples](https://en.wikipedia.org/wiki/List_of_religious_slurs))
* ***Any*** other group or person
then as an instructor you should get them to leave right away. You say that you don't feel "safe" confronting a male student about this. This concerns me a little bit, as you are an authority figure even as a TA and especially as an instructor. If you are not willing to enforce your authority directly then I think you need to have alternate arrangements in mind that will do so: e.g. you could try to call campus security and not continue the class until they arrive. But I think one should realize that one absolutely has the right, and sometimes the obligation, to ask a student to leave the classroom under certain circumstances. If I were in this situation and the student were a 250 pound athlete, I would still ask him to leave unless I had some specific intuition that he would react physically or violently to that request. I don't have to feel like I can physically overpower someone in order to exert authority over them.
Upvotes: 7 [selected_answer]<issue_comment>username_4: The first step in any conflict should be communication. If I were in your shoes, I would likely have gone up to the person in question and asked them what message they wanted to convey by wearing that particular attire.
The reason I have this belief is that what constitutes offensive is **very** subjective, as one might take offense at anything really. Please note that I am not saying or implying that it was the case for OP, but without knowing the level of "offense" in question, it's hard to make a generalized judgement. In that case, it's always a good idea to *peacefully* confront the person and tell them that you feel offended. That's my first point.
The second point I would like to make is that the primary responsibility for sorting out your disagreements is on your own shoulders. It is in general frustrating to expect someone else to intervene and fight your battles for you. People of authority (the teacher in this case) might not notice the offense, or not realize how uncomfortable it makes you feel, unless you actually make that clear for everyone involved.
In the specific scenario depicted in the OP, I cannot imagine why you would not be allowed to point out that your classmate's attire is offensive and not suitable for public spaces, let alone a classroom. If the person reacts badly, then you have more of a case for disciplinary action against the classmate with the offensive clothing, and the professor/TA/security and even other classmates would likely be on your side.
If the person reacts in a favorable way (i.e. apologizing for the offense, and explaining that they did not mean to offend anyone) you have even taught your classmate something about good manners.
Upvotes: 3 <issue_comment>username_5: Should professors intervene? Yes. Now, I think there's a level of personal judgement to be made here. Some people might overlook certain shirts. Some might think certain types of shirts are more offensive than they really are (ex: someone who's vegetarian might not like [this shirt](http://site.shirtmandude.com/pot-leaf-vegetarian-t-shirt-sq.jpg) and I'd personally wonder if they have a sense of humor).
So, as discussed above, the "you can't go wrong" categories are race/national origin, gender, religion, sexual orientation, age, and disability.
If I were a teacher (professor or a TA) and I identified something (or it was brought to my attention by a student) that someone is wearing an offensive shirt, then I would probably start by taking the student aside and letting them know that their shirt is offensive to some people, and that it displays remarks that make others uncomfortable, and that the student should not wear that shirt (and others like it) to this class again.
I would do this in private mostly because I don't feel that there's really a lot of benefit to publicly shaming someone who chose such a shirt - maybe they're a new first-year student who hasn't quite learned appropriate behavior yet, or maybe they're going through a phase, or maybe they just didn't think when they put the shirt on because they were drunk one night in Vegas when going T-shirt shopping. Give them a chance to improve. If they never wear the shirt again to class, to me that's a win.
If it happens again then I would not hesitate to walk up to the student after lecture starts and say quietly, "We discussed that you were not to wear shirts like this in class. Did you understand me last time? Do you think that this shirt is appropriate?" If the student isn't able to change the shirt or cover it up then I'd ask him to leave the class and then at that point would make an announcement about appropriate shirts.
Finally, if you are someone who is made uncomfortable by a shirt that someone is wearing, then you should tell someone about it. Don't hold it in. Unfortunately, lecturers generally only have control of their own classroom (for example, it's hard for a faculty member to kick someone out of a building), but things like this can be reported.
Upvotes: 5 <issue_comment>username_6: >
> Should a professor intervene if a student in their class is wearing clothing that is likely to be offensive and hostile to other students? If so, how?
>
>
>
Maybe.
A professor should hold a discussion with a student when that student is in violation of the school's code of conduct.
If the clothing isn't in violation of the school's code of conduct, a professor will have to decide whether asking the student to desist is worthwhile. In some situations, asking a student to stop doing something may actually bring about a situation where they start intentionally coming close to, but not quite, violating the university code. In that case they may be offensive more frequently than they are now, when they randomly pick out what to wear each day.
>
> Are there any scenarios in which a professor should not do anything even though a student's clothes contains material that is hostile towards another student or group of students?
>
>
>
If it isn't against the school's code of conduct, the professor has little room to insist that certain clothing not be worn, but they can request a student stop wearing such clothing.
As above, though, it may actually exacerbate the problem.
>
> If I come across this scenario as a TA, in which a student (who may be a peer in my program of study) in my class is wearing something offensive, what can I do about it?
>
>
>
We will assume, for the moment, that the clothing in question is not against the school's code of conduct, but is offensive to everyone, in every time, situation, culture, place, etc.
First I'd evaluate how often it occurs. Is this student consistently bringing offensive messages to class, or is this a once or twice a semester problem?
Second, I'd evaluate how much it affects the class. Is the message visible to every student in class throughout the period, printed on the upper back with the student sitting in the front row, or is it hard to see except when they are standing up with arms at their sides, and then only by the instructor? In either case, does it prevent other students from paying attention, learning, or asking appropriate questions?
Third, I'd ask others how they felt about the issue. Does it actually bother them, and did it bother them before I brought it up? I'd make sure this isn't a slight against me alone.
Lastly, I'd decide, based on this information, if intervention is necessary. If it poses a significant, frequent problem, then I'd probably bring it up. If it poses a significant infrequent problem for a few students, I'd probably bring it up.
A simple, "Please don't wear that shirt to this class again," privately and quietly as they walk out of the class might be sufficient for most cases. Some professors excel at public shaming in a simple effective way. A humorous comment during the lesson referencing the student's poor taste in clothing might dissuade them from wearing similarly offensive clothing.
>
> I don't feel comfortable (or safe, for that matter) as a woman confronting a male student about an item of clothing that is offensive to women.
>
>
>
That's a real problem. If they are communicating something, and you, who are in charge of the classroom, choose not to communicate, then who is going to handle the problem?
If you must, get a third person to back you up. Preferably someone with authority, and make sure the student understands not just that it's inappropriate, but how it makes you feel. If it's not just offensive, but threatening, to you then you have all the more reason to make your work environment safe. Tell your instructor that you can't teach a class where students are threatening you, and that you find certain articles of clothing threatening. Make your case according to the student code of conduct and it'll be that much stronger.
But you really shouldn't take a passive role in your teaching. You are learning skills now that will benefit you as an educator later, if that's the career you choose, and you need to learn how to do hard things. This might be one of them.
>
> On the other hand, I feel like it is my responsibility to keep a non-hostile environment in my classroom.
>
>
>
Not just for the students, but also for yourself.
Upvotes: 3 <issue_comment>username_7: I'm a bit disappointed at the number of comments from people who say they need to know what the exact statement on this particular T-shirt was so that they can judge whether it was truly misogynistic before answering the question. The question is clear: What is the appropriate response *given that* a student is "wearing clothing that is likely to be offensive and hostile to other students"? Debating exactly what constitutes misogyny (or any other form of hate speech) is not the point here. Surely a question about whether a particular slogan is offensive would be too localised for academia.se, whereas the question of how the role of professor and/or TA affects how/whether one calls out offensive speech/behaviour/etc is excellent.
A couple of the other answers express surprise/concern at the OP's comment
>
> I don't feel comfortable (or safe, for that matter) as a woman
> confronting a male student about an item of clothing that is offensive
> to women.
>
>
>
Many of the comments on this page illustrate why it can be so hard to call out misogyny as a woman. Women who call out misogyny are regularly accused of being "too sensitive" and told to "lighten up". On this page we've seen people who think they'd be a better judge of whether something is offensive than the person who *actually experienced* it, suggesting it may have been all in her head, refusing to trust her judgement, and claiming that no young people are misogynists. All this just from outlining a story that inspired a general question about calling out offensive behaviour. Is it any wonder that women may find it difficult to confront the person who's actually wearing the offensive T-shirt?
Finally, to actually answer the question:
Most universities should have something like a code of conduct which forbids discriminatory harassment. The one quoted in the question certainly seems to apply to a T-shirt with an offensive slogan. In this case the professor (or any student in the class) would be within their rights to object to the T-shirt. I might say something like "that T-shirt seems to be in violation of the code of conduct; please don't wear it to this class again", ideally in much the same tone as I would say "If I don't have your homework by tomorrow you will get a zero", but I'd say it loudly enough that anyone paying attention could hear. As with any instance of calling out something offensive, I would only do this if I felt safe enough: you should try to create a safe environment in your classroom, but not at the expense of your own safety.
If you don't feel safe or comfortable enough to call out your student (and I can see this happening especially if that student is also your peer), there might be other people you can talk to, for example the professor of the class you're TAing, or the head of the graduate program in your department, or the student's advisor. This might also be helpful if the student does not respond well when you first address them.
Upvotes: 7 <issue_comment>username_8: I don't get the people trying to suggest a private conversation etc. The whole point is to restore the confidence of female students to work in a reasonable atmosphere.
This just calls for "You are not wearing this T-shirt to my classes. Get out and come back once you are wearing something appropriate." His bad luck if he relied on wearing that shirt through the day.
Don't start class until he's gone, if necessary calling campus security. Ask the other professors to do likewise when encountering similar material in order to maintain a professional and workable atmosphere, to avoid being considered the only one with standards.
Upvotes: 3 <issue_comment>username_9: As an undergrad I habitually wore black metal T-shirts, which can have offensive lines; this never got me into trouble. But some people asked me why I liked this kind of T-shirt, and I tried to explain my reasoning.
This worked very well and I formed strong friendships with some of these people.
And I, personally, tried hard not to judge people by their appearance - which was never 100% successful, but I think it is a worthwhile attempt.
So, I think the best first step is simply asking the person WHY he is wearing this T-shirt. Every person with a provocative T-shirt gets asked now and then, so it should be no big deal.
Running to your teachers first is similar to running to your mommy, and not really grown-up behaviour.
Disclaimer: I come from Germany, where a much wider range of clothing is deemed acceptable in academia, especially in the social sciences and CS.
Upvotes: -1 <issue_comment>username_10: >
> Should a professor intervene if a student in their class is wearing clothing that is likely to be offensive and hostile to other students?
>
>
>
It depends on the situation, although IMO the answer in this example is yes. People do not have a right not to be offended, and offending people may be a positive thing in an academic environment, where people need to have their assumptions challenged. Nor is hostility, in and of itself, impermissible in a school environment. But:
1. There's a problem with behavior that is offensive toward a group that is underrepresented in the field being studied.
2. There's a problem with hostility that creates reasonable fear in other people, or that inhibits collegial discussion, or that is directed toward an underrepresented group.
For example, if an 18-year-old comes to my classroom in Goth clothing and acts resentful toward the world, it's not a big issue. It's hostile, but it's hostility that isn't a big problem. If a student wears a heavy metal t-shirt with a satanist message on it, it's not a problem because Christians aren't an underrepresented group; they're the dominant group, and it won't hurt them to be exposed to contrary ideas. Ditto for a t-shirt saying "Darwinists burn in Hell." But in an engineering class, a misogynistic t-shirt creates a hostile environment for women, who are an underrepresented group in engineering. A t-shirt reading "one faggot, one bullet" is also a problem because it could reasonably cause people to be afraid for their safety.
So IMO the t-shirt you describe *is* a problem in the context in which you describe it. The question is then how to handle it. If possible, do your homework and get bureaucratic buy-in before confronting a student about this kind of thing. Otherwise you can end up not being supported by your administration; as we've seen in the answers to this question, reasonable people can disagree about these things. In this situation, I would probably not say anything at all to the student during class. I would then go and have a five-minute conversation with my dean about what school policy is. If it's clear that school policy puts me on strong ground and that my boss will back me up, then I would email the student and say, "Your t-shirt that said X was unacceptable in my classroom for reason Y. I have discussed this with my supervisor and we are in agreement on how school policy applies here. Please do not wear it to class in the future." This private method of handling it lets the student not be embarrassed in front of others (which is a *big* deal to many 18-20 year olds) and makes it unlikely that we'll have a big classroom confrontation that would detract from instruction or possibly put me in physical danger. If the student then shows up wearing the t-shirt again, despite the email, I would tell him to leave class, citing the email warning and chapter and verse as to my authority to kick him out. (In my case, there is a specific provision in the state education code that gives me that authority.) If he refused to leave, I would call Campus Safety.
Upvotes: 4 <issue_comment>username_11: Yes, teachers (including both professors and TAs) should absolutely intervene in such an instance. As everyone else has said, non-discrimination policies would most likely prohibit such offensive misogynistic expression in a *classroom* environment, and these are policies with which I would recommend all teachers familiarize themselves before they enter the classroom.
Let me mention one more reason that you should not just overlook such an incident: **to prevent such things from happening in the future.** Not all students are completely familiar with what constitutes improper behavior in a classroom. If nobody does anything, what's to stop that student from doing the same thing again?
Here's a related story from my own experience. One time, when I was a TA for an engineering calculus course, I got an in-class group assignment back from a pair of students, where one student circled the other student's name and wrote "is gay" as a joke. Being gay myself, *I was pissed.* (I had also hoped that the idea that "gay" could be used as an insult had gone out of acceptance by that time, in 2012. Guess not.) At the time, I didn't know who did it; it could've been someone outside the pair, but it certainly wasn't the student whose name was circled, due to the handwriting being different. I also didn't want to falsely accuse anyone, or put anyone on the spot. Quite frankly, I didn't care who did it; I just wanted to make sure my students knew that this behavior was unacceptable.
So the next class, I read the university non-discrimination policy to *both* my classes and I told them what had happened, without naming anyone (or the class it happened in). I said that I didn't care what anyone said or did in their free time, but in my class, I wouldn't stand for people doing things like this.
My first class, I couldn't hide my tension or my anger when I was saying all of this. The second class, which was the one with these two students, I did the same thing, but I was much calmer because I had already done this with the first class. The two students ended up apologizing to me when I handed back the paper, and I (calmly) said that I wasn't accusing anyone of doing anything, but I just needed to make sure that everyone understood that this wasn't acceptable.
In retrospect, the only thing I would have done differently in my case is *practice my speech beforehand* so I could convey the seriousness of what I was saying without the tension and anger I had during the first class. (Some people don't react as well to tension and anger.) That aside, I did feel good affirming for my students, some of whom were likely LGBT themselves, that my classroom was not a place where I would accept any such inappropriate or discriminatory behavior. And finally, *I made it way less likely for anything similar to happen in the future on my watch*.
Your case is different, because it involves quite the open display of inappropriateness. In your particular case, I would walk up to the student and tell this person that wearing such a shirt in the class is inappropriate and goes against school policies, and that he needs to leave and change into something else before he returns to your class. The students who are concerned about the shirt will most likely notice your action and feel relieved that you are addressing it.
I'm not so sure that you should call him out from the front of the classroom, although you certainly have the right to do so. The student might find it humiliating to be called out in front of the entire class (especially in a large lecture), and moreover it's a little impersonal. (Although I addressed my situation with the entire class, keep in mind that I didn't know who had done what and I didn't name anyone. Also, if you just read the non-discrimination policy out loud in your case, it will be pretty obvious to everyone who you're addressing.) I think the best outcome would be with a private or semi-private, direct conversation as I suggested above.
On the other hand, if you're intimidated and worried about possible physical violence, then you could opt to ask this person to leave in a semi-private manner but *with a reasonable physical distance between the two of you, while many other students are around*. This allows for witnesses in case anything goes awry. (This does seem like an unlikely scenario, but your safety is paramount.) If this option is not safe enough to you, then as username_3 has suggested, you should call campus security and wait until they arrive before starting your class.
Upvotes: 2 <issue_comment>username_12: >
> Should a professor intervene if a student in their class is wearing
> clothing that is likely to be offensive and hostile to other students?
> If so, how?
>
>
>
As with most things in life, it can be somewhat complicated. In professional settings rule of thumb is usually to "Praise in public, reprimand in private." Thus, while it may make sense for the professor to intervene, the best time to do so would likely be after class has concluded. How good or bad of an approach this might be in an academic setting might be up for debate, since a "teachable moment" might be lost for the rest of the class, but it also allows the student opportunity to save face.
Another thing of note along these lines - even more so since nobody else mentioned it - is that it also gives the professor a chance to check that the student really understands the meaning behind things. I've been in environments where ten to twenty percent or more of the students in a given class might be exchange students. Their cultural norms can be drastically different, and they might not even fully understand what a slang term can mean. This allows for a much more robust conversation about things than just a "Don't wear that shirt again." being directed at the student.
>
> If yes: are there any scenarios in which a professor should not do
> anything even though a student's clothes contains material that is
> hostile towards another student or group of students?
>
>
>
This is likely going to be very subjective since the professor may not always be aware of the situation (e.g. quote in a foreign language, very large lecture hall where the professor can't even see the student, etc.). To a certain extent the student body needs to assist the faculty in being aware of some of the situations so they can be dealt with. Another scenario is protest campaigns to reclaim certain terms by effectively displaying those terms yourself. So at the end of the day, situations are going to arise, but likely it would need to be dealt with on a case-by-case basis.
Upvotes: 1 <issue_comment>username_13: The word "misogyny" is not strictly defined, and it is applied very loosely and emotionally by different people. You may say *I know misogyny when I see it, I know when I'm offended*, but your shirt-wearer almost certainly didn't consider himself a misogynist. Considering how the word is applied across the internet, I can certainly understand why people feel that the best course of action still depends strongly on the actual content of the shirt, even if you personally feel it's objectively misogynistic.
Take the case of [Matt Taylor](https://en.wikipedia.org/wiki/Matt_Taylor_%28scientist%29), who wore a shirt covered in pin-up imagery during the televised Rosetta comet landing. To some, that shirt represents a fun, kitsch re-purposing of 1950s sci-fi pulp imagery. To others, it's a very public sign of how unwelcoming STEM fields can be for women. Again, Taylor probably didn't consider himself a misogynist, but people have condemned this shirt in terms exactly as strong as those used in the question. (For what it's worth, I expect that in your case, the shirt was actually much more offensive. I googled "[misogynistic t-shirt](https://www.google.com/search?q=mysogynistic+t-shirt&source=lnms&tbm=isch&sa=X&ved=0CAgQ_AUoAmoVChMIntbT1PXDyAIVhAgaCh1LSw60&biw=1440&bih=798#tbm=isch&q=misogynistic+t-shirt)", and there are certainly some horrendous examples there.)
Another case-in-point is the [punks of the late seventies](http://uk.complex.com/style/2013/05/29-things-you-didnt-know-about-punk-style/swastika-armband) wearing swastikas. They weren't Nazis, quite the opposite, but they felt they needed the strongest, most shocking symbol they could think of, to get the establishment angry. So if we imagine lecturing in the seventies, we could have a situation of a student wearing swastikas to class. That seems like the most clear-cut, unambiguous situation possible, but still, the student is not doing it for the reasons we think they are. And in fact getting angry, singling them out and getting security to escort them off the premises is just what they're hoping for.
People wear what they wear for strange and inscrutable reasons. For outsiders, a hijab may be a symbol of oppression, while for the wearer, it's actually a statement of emancipation. The same goes for t-shirts with movie posters, or death metal paraphernalia. Even swastikas. Of course, **things can still be offensive if they're not intended to cause offense**, but the lack of intent does change the situation, and what the best course of action is.
So let's take an extreme example: say a student comes to my class with a t-shirt that is absolutely shocking and reprehensible, and contains deeply disturbing imagery. Certainly, I would agree that something needs to be done, and it can't wait until the end of the lecture. So do I single him out in front of everybody, make a loud and public stand and force him to leave, possibly with the help of security? Or do I ask him to step outside with me and give him an opportunity to explain his reasons, and generally explain to him why I can't allow him back in?
Other answers have mentioned that the other students need to see that their safety is being guarded, and that the matter isn't ignored. Justice must be seen to be done, that sort of thing. Even so, I would still argue *against* the first strategy. I think there are four main reasons:
* If I single the student out, I will antagonize him, and strengthen his belief that he did nothing wrong. I will lose any chance of actually influencing his behavior. You may feel that he doesn't deserve such considerations, but if I want to actually change things for the better, I have to be pragmatic.
* As the responses here show, even if I think the issue is unambiguous, others may not. Especially with the misogynistic shirt, other students may take the side of the shirt-wearer. So while I'm making a stand and feeling good about myself, I'm actually creating a division in my classroom. This will make the atmosphere **less safe** in practice.
* The student may be suffering from something bordering on mental health issues. Perhaps a compulsion to be socially inappropriate, or a deep self-loathing causing him to lash out at others in whatever way he can find.
* Finally, and I think most importantly, everybody has a fundamental right not to be *ascribed* an opinion. Even if the guy's covered in swastikas, he gets at least one chance to explain himself, and to do so in a non-public setting. I may be wrong in my interpretation, I may not. The point is, **everybody deserves at least one opportunity to explain themselves.** I think that's a fundamental right, and it's not lost simply because you wore something I didn't like, however objectionable it is.
Upvotes: 3 |
2014/03/05 | 1,182 | 5,264 | <issue_start>username_0: I intend to go to graduate school for applied/computational mathematics, specifically a program like this <https://icme.stanford.edu/>.
At this point, I'm trying to decide whether to take graduate level theoretical math courses in areas like Algebraic Topology, Differential Geometry, etc. (which I don't currently have any experience with, but I could still take), or just take courses in application areas (the ones I'm interested in are statistics, biology, chemistry, computer science).
My math education in the latter case will consist of basic linear algebra+calculus, a course on PDEs, a course on abstract algebra, a complex analysis course with an applied focus, a couple mathematical modeling courses, basic number theory, basic probability theory, a couple numerical analysis courses, and upper-level real analysis. So not a ton, but not negligible either.
There's also a lot of math in the non-math courses I will take (algorithms, theory of computing, quantum mechanics, optimization, stochastic processes, machine learning, etc., some of which are grad-level). However, this schedule is perhaps lighter on mathematical theory, and contains zero grad math classes.
Should I drop some of the courses in application areas (although courses with significant mathematical content that yet focus on applications are perhaps my favorite type of courses) and take graduate-level theory courses to increase my readiness for and chances of getting accepted to graduate school? Or will I have ample time for that in graduate school and should I take courses I enjoy more (and have greater aptitude for) while informally studying theory on my own?
Or do I need to take more theory even as an undergrad, and even though I'm not gunning for pure maths? (I do not plan to go into academia after graduate school, in case that matters.)<issue_comment>username_1: I would take the applied maths courses (the second option). If that is what you enjoy and would like to pursue in grad school then it will benefit you more than theoretical courses.
I take more theoretical courses to mean a lot of epsilons and deltas and theorems etc. If you are more interested in statistics, biology, chemistry and computer science then these courses will not be as useful as more applied mathematics courses. In applied mathematics you're more interested in constructing interesting algorithms or models that work without worrying about the theoretical details too much. That's not to suggest that applied mathematics is any easier, it's just that the focus is different.
Do you have any specific ideas what area you'd like to work/study in?
Upvotes: 2 <issue_comment>username_2: This is really quite a subjective question. However, even if you are planning to work in applied areas, it probably makes sense to get a good grounding in basic theory first. For example, you say you are interested in statistics. If so, you should definitely take at least one advanced probability course which uses measure theory, and also a measure theory course. These should definitely help later on, even if you don't wind up using advanced probability theory. Some material at the level of the Billingsley book "Probability and Measure" is kind of what I am thinking of.
More strictly pure courses like Algebraic Topology, Differential Geometry are a bit more debatable. They *might* be useful in applied areas, depending on what you are doing, but will probably not be. They might be worth it from a mental broadening perspective, but that is really subjective. I've worked in applied areas some, and have never needed to reference theory of this kind.
Also, I think if a course is well taught (which may not be the case, of course) and forces one to work on the material, then it is better than self study.
Disclaimer: I have a PhD in Statistics, which may cause some biases.
Upvotes: 2 <issue_comment>username_3: I think that at this stage in your career, you should set yourself the goal of reaching some sort of [**mathematical maturity**](http://en.wikipedia.org/wiki/Mathematical_maturity)
>
> ... fearlessness in the face of symbols: the ability to read and
> understand notation, to introduce clear and useful notation when
> appropriate (and not otherwise!), and a general facility of expression
> in the terse—but crisp and exact—language that mathematicians use to
> communicate ideas.
>
>
>
Courses like abstract algebra, algebraic topology, differential geometry, and combinatorics will give you a combination of breadth and depth that will make it much easier to quickly master whatever applied field you choose in grad school.
In my experience (PhD in string theory, currently applied economist), mastering the really advanced math courses should be done as soon as possible, when you have all the time and energy to immerse yourself. It's also my experience that it is rather straightforward to apply abstract patterns when you already know them, but the reverse (the emergence of abstractions from concrete applications) is much, much harder.
Of course, by all means mix and match your fundamental math courses with a small selection of interesting applied courses. There's no sense in not enjoying yourself.
Upvotes: 4 |
2014/03/05 | 386 | 1,550 | <issue_start>username_0: I'm applying to a Ph.D. program (in the field of robotics) and I am having a hard time filling in the personal-interests section. I initially was going to avoid that section, but then I got convinced that it's not a bad idea to have it. But I digress...
One of my hobbies is reading\* about science, mostly in mathematics, [astro]physics and science in general. The problem is, I don't know how to write this in a clear, honest and concise fashion. I don't want to simply write *"reading"* because I don't want to be that generic applicant! Also I want it to be clear that this reading is not related to my professional field. And last but not least, I don't want it to sound fake!
---
\*also watching or listening. Sometimes I also code for fun; generally speaking, I *fool around with* science.<issue_comment>username_1: I believe mentioning that is irrelevant to the contents of a resume. But if you *really* want to mention it, do so in the hobbies section:
* *reading science digests every morning*
* *used to reading scientific papers*
* *used to reading peer-reviewed publications*
Upvotes: 2 <issue_comment>username_2: Just don't.
As an applicant to a scientific PhD program, if you *weren't* doing a lot of scientific reading, that would just be weird.
Thinking it's something special to be reading scientific literature every day is a small warning sign in itself.
And the interests section is there to indicate some kind of balanced personality and the existence of a life away from narrow study.
Upvotes: 5 [selected_answer] |
2014/03/05 | 2,727 | 12,172 | <issue_start>username_0: I teach undergraduate level courses in the humanities. Following practice in my department, I have let students take their texts for consultation during their written exams. They can choose a number of questions they want to answer from a set of questions. After a few semesters, I have begun questioning the validity of such an approach.
I usually had one exam in this form and one final essay at the end of the semester. The issue is, although the subject dealt with in the first part of the course is more or less objective, I find students "copying" my lectures much more than using the texts to answer the questions. A complicating factor is that, presumably, most students cannot understand the material, which is available only in English. To make it clear, most of my students cannot read English (I could mention the material is not available at the library, but that is another matter). And it is a required course.
What bothers me is that with this approach I cannot, as suspected, measure the level of understanding of the students. Some of the questions deal with very basic issues and concepts. Even then, the overall level of reading, understanding, and writing, as evidenced by their exams and final essays, is very low.
I have thought about changing the syllabus next semester, to one exam (without consultation), perhaps another exam and the final written assignment, but I am quite unsure of the results. Perhaps a lot of students will fail.
Am I too concerned, or is this the way to go?<issue_comment>username_1: Exams that allow students to use their text materials usually do so because of the density of the topics: there is a lot of material to cover, which is therefore not easy to memorize, and in some cases it is futile to make students remember specific details (as in my field of CS).
From what I can see, the students prefer to paraphrase your material instead of what is covered in the textbook; that could be a direct consequence of the exam questions being too easy to answer, with the answers directly related to your lecture material. In this situation I recommend raising the difficulty a bit, so that students are forced to read the material from the book beforehand; this can also show that they know the basics.
In general it is not bad to let students use textbooks during the exam; the bad thing is failing to tune the difficulty of the exam accordingly.
Good luck!
Side note: English is close to a universal language in academia and should be a prerequisite for some courses. In any case, if they are too lazy to learn a new language, then try to get translated copies of the required textbooks (or just prepare one of your own).
Upvotes: 2 <issue_comment>username_2: I am in mathematics, so my experience will be different from yours. What I have found with tests on which I have allowed students to use their text or notes is that the students have not prepared as well as they should have, and waste a lot of time looking for things in their notes. Ultimately, they end up doing worse, as a class, than they usually would. Now, in math we have a lot less material for a test than in, say, history or political science, so there may be some legitimate reasons for allowing the students to consult other sources during an exam, but I do not recall ever being allowed to do so in the humanities courses I took as an undergraduate (mainly in political science and diplomatic history). I think a significant component of a college level education is learning how to absorb, and synthesize, relatively large amounts of material. So, my students have only their own brains to consult during an exam.
Upvotes: 5 [selected_answer]<issue_comment>username_3: I take a compromise approach to the problem of bringing materials to an exam. I allow students to bring one or two sheets' worth of notes **that they have prepared themselves**. No magnifying glasses or other "reading aids" other than standard prescription glasses are allowed, so they can't simply photoreduce a whole bunch of pages and then use it—it's something they have to hand write or copy themselves.
Such an approach forces students to prepare the material, but still gives them some flexibility not to have to commit everything to memory (Is that sign positive or negative? Is that denominator regular volume or molar volume?).
Upvotes: 5 <issue_comment>username_4: I come from an engineering background, and we were allowed to bring our books and notes. This was in the '80s and '90s, so there was no Internet to access. I found that if I didn't know the material, the books and notes were no help, and that searching the materials just wasted time.
Unless the test asked the exact same question as a covered example, the materials were not useful as a resource during the test.
It should take only one experience like that to persuade a student to learn the material rather than rely on the books and notes as a crutch. Consider it a bonus life lesson for the student.
Upvotes: 2 <issue_comment>username_5: I would ask that teachers also try to be understanding toward those of us with test anxiety. I was always very thankful to have notes or the text with me during tests because it allowed me to relax some and actually be able to focus when answering questions, without fear of forgetting some small detail. I also found that I prepared more when allowed to bring notes, because I took time creating them, which made my foundation in that knowledge stronger. When I didn't have notes, all bets were off.
Upvotes: 4 <issue_comment>username_6: The real question is what you're attempting to test.
If the proposed notes are material which the students are going to need "off the cuff" -- data and operations which are absolutely basic to the discipline they're learning -- then it makes sense to test whether they have memorized it, since consulting reference materials every time will slow them down too drastically for them to work productively at the next level up.
If they're material which a practitioner will generally not have memorized and will look up anyway, then it isn't unreasonable to make references available to the students... while pointing out that being able to work without the hardcopy references will let them solve the problems faster and with more confidence and thus may improve their grades.
Note that these cases presume two very different sets of exam questions.
Upvotes: 3 <issue_comment>username_7: This has been touched upon by the two lowest voted answers, so I am going to give it a go myself to make it clearer.
First of all, except for orientation courses where memorization is the central issue, open-book examinations are incredibly good; *however*, they are only as good as the questions you ask. The thing you should be aware of is that with examinations like this you're able to ask far more complex questions, which are **not** about specific sections of the text, but rather about the course as a whole. For example, rather than asking what the effects were of the actions of a specific individual on his field of study in his time, you can ask students to describe the trend over a far greater time period. Or you can ask for an analysis of the approach of a set of individuals to a very specific detail. What you should be aiming for is comprehension of the matter: questions which cannot be paraphrased from the covered materials because they simply were not explicitly covered.
Another existing 'trick' is to make the exams so huge that students never have any chance of finishing the entire exam. If you go down this route you have to make very clear what depth and length you expect for each question, but the nice thing about this technique is that you can see quite clearly how well students know the full breadth of the subject.
Now, I have to be honest that a lot of these kinds of examinations - open-book exams in general - are nonetheless often not well done. They become more about learning the index of the books well than about understanding the subject matter. But in essence I believe it is crucial for a good modern education not to be based around memorization - we have the internet for that - but rather around comprehension and recombination.
Upvotes: 0 <issue_comment>username_8: It depends heavily on what you're testing.
Is it a computer science paper, where you want to test your students' knowledge of basic data structures and algorithms? Then you probably don't want to let them have text materials.
Is it a first year linguistics mid-semester test, where you want to test the students' knowledge of the International Phonetic Alphabet and different places and manners of articulation? Then you probably don't want to let them have text materials.
Is it a second year linear algebra exam, where you want to test the students' ability to apply different methods to solve various systems of linear equations, but you don't need them to remember whether a projection is `v.n/n.n * n` or `v.n/v.v * n`? Then you should probably let them handwrite their own notes.
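(As an aside, for anyone second-guessing that detail: the first variant is the correct one. A quick sketch in standard LaTeX notation, using the ordinary dot product:)

```
% Orthogonal projection of v onto the line spanned by n
% (the `v.n/n.n * n` variant above):
\operatorname{proj}_{\mathbf{n}}(\mathbf{v})
  = \frac{\mathbf{v} \cdot \mathbf{n}}{\mathbf{n} \cdot \mathbf{n}} \, \mathbf{n}
```

One way to see it: the second variant, `v.n/v.v * n`, doesn't even scale correctly: replacing `v` with `2v` should double the projection, but that formula halves it. This is exactly the kind of slip a handwritten note sheet guards against.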
Are you testing the knowledge of something? Or are you testing the ability to apply that knowledge to solve a problem or compare two algorithms, historical events, methods of solving an equation, etc.?
Upvotes: 0 <issue_comment>username_9: Does your school allow tests to be given and answered in your native language? If so, then ask questions in your native language and have them answer in your native language. Those who do not understand the text will fail (they cannot pass by just copying the English text, as you have required their answers to be in the native language), and those who do understand the text will pass.
Upvotes: 0 <issue_comment>username_10: What do you want to create? Robots that memorize, like Google? Or people who can solve problems, analyze, and respond creatively? Let robots memorize, and let humans create and be creative.
Upvotes: 0 <issue_comment>username_11: My answer is based on my own experience as a college graduate, and, in short, is: "No, you are not too concerned, and no, open books during tests are not the way to go." I feel that a teacher's two responsibilities are to encourage students to learn on their own, and to effectively relay information to them: few things are more capable of thoroughly sabotaging this process than offering an open-book test.
Teachers of mine have tried all kinds of policies during my college career, and, indeed, my entire student life: open notes, open book, open book and notes, closed everything, study sheets, index cards, you name it.
I found these two strategies in particular to be the most effective:
2nd-most effective:
Usually about a week before each test, our teacher offered to let us turn in a blank blue book with our name on it (standardized bound sheets of paper for essay exams), which she would mark in a supposedly tamper-evident way and give back to us the next day, allowing us to fill it with hand-written notes to our heart's content. We were allowed to use these notes, and these notes only, during the test, during which she would verify the authenticity of her markings on any study guides in use, at her leisure. This was incredibly effective for me because taking the time to *write down* information from scouring books, class notes, the Internet, and even collaborating with others doing the same thing, was, in effect, *actual studying*. I usually found that I seldom needed it at all during the test because I'd committed so much of it to memory while creating its content! Additionally, it was a confidence boost to have it available, and any time I felt like referring to it, I usually knew specifically which page of notes held the answer!
Absolutely the most effective:
The first day of class we were handed a list of 150 questions and were guaranteed that our *entire* final exam would be *exactly one randomly selected question* from the list (possibly different for each student), which alone would be worth *100% of our course grade*. I had never studied so diligently for a class in my life!
Upvotes: 2 |
2014/03/05 | 593 | 2,480 | <issue_start>username_0: I am doing an undergrad in Computer Science, and am seriously considering entering in the honors program at my school. I fulfill all the requirements (certain courses, GPA, etc) and now I just need to find a professor that is willing to work with me.
An honors project at my school is a two-part process:
1. First there is a Directed Study where the student and the professor work one-on-one to bring the student up to speed on the subject material that may be required for the thesis project. This happens over one semester like a regular course
2. After the Directed Study, the student sets to work on the actual project/research, and presents their findings whenever they have completed their project (usually two semesters).
I have read the backgrounds on all the professors in my faculty, and have identified half a dozen who do research which would be similar to my project.
My problem now is that I don't know how to "apply". I have prepared a one page "pitch" which talks about my background, my project idea, and why I think that professor would be a good fit (based on their research interests), which I was going to email to my half-dozen potential professors.
Is this too formal? Too informal?
---
Extra info:
* I go to a Canadian University.
* I study Computer Science.
* I am away on an internship, or else I would go talk to the professors in person.<issue_comment>username_1: For what it is worth, I successfully 'applied' for a similar program as an undergrad simply by sending emails to the potential mentors. I used a one or two paragraph pitch, then asked if they would be willing to explore the possibilities. This meant that the professors could have agreed to meet me to discuss the project without committing themselves to anything. It also meant that if I had decided that I didn't want to work with a particular professor after meeting them in person, I had left myself an option to gracefully decline.
(I ended up with a very successful honors project in this way. Good luck with your endeavors!)
Upvotes: 3 [selected_answer]<issue_comment>username_2: An additional data point: Consider approaching the profs whose classes have intrigued you the most, and talking to them about what projects **they** have on their books, with no manpower to do them.
Sometimes\* they'll have fantastic projects even better than what you come up with, because they have expertise in the area.
---
(\*) not always though!
Upvotes: 1 |
2014/03/05 | 967 | 3,789 | <issue_start>username_0: Some background:
I graduated 3 years ago from a big state research university. I had an irrelevant sociology major and graduated with a 2.9 GPA, mainly because I had no inkling that I would ever want to go to graduate school and thought it was more important to just make sure that I had a job to support myself and graduated on time.
After graduation I had a crappy office job for 2 years and took some random IT graduate courses, which led me to eventually get a job as a programmer at a large marketing firm for about a year. There I got really interested in the more complex world of computer science, especially computer vision, modeling, and simulation.
Which brings me to now.
I will be beginning a CS Master's program this summer at a medium-sized state university with a concentration in Modeling and Sim. I plan to do the thesis option and I've already begun contacting professors about research opportunities.
My GRE score is 162/155/4.5 but I didn't study because I found out I had to take it at the last minute. With some prep I think I can at least bring that up to about 165/160/5.
All that said, do you think that with my thesis, a publication or 2, and a stellar GPA, my master's work would overshadow my crappy undergraduate career enough to get me into a good PhD program?
My *dream* department would be Caltech. But I'd at least like to go somewhere reputable if I'm going to bother with a PhD.
Diversity note in case this helps my cause: I'm female, 1st generation college student, armed forces veteran.
TL;DR
BAD UNDERGRAD: 2.9GPA/unrelated major
If I do really well in my CS Master's program, get some research published, and get my GRE score up to like a 165/160/5, do you think a top program would ignore the transgressions of my youth?<issue_comment>username_1: Given how far away from your original undergraduate degree you've moved (several years' work experience, change in degree program, and so on), it makes it very hard for a PhD program to give your undergraduate transcript *too* much weight in the admissions process. You will also be able to stress how far you've come as part of your application, either in the statement of purpose or in an "other notes" section—and you should avail yourself of the opportunity.
As for being female and a veteran, that will matter much more to schools than being a first-generation college student. (Being a female applicant in CS certainly can't hurt your chances if you're a qualified candidate.)
Upvotes: 3 <issue_comment>username_2: My opinion is that you will be a shoo-in somewhere if you can do good work at the Master's level, while building relationships with the faculty. The fact that your undergrad degree was in sociology almost makes your low GPA irrelevant.
Definitely aim for Caltech if that's what you want, and there is faculty you want to work with. But don't get too hung up on big name schools. There are lots of other schools which maybe aren't as highly ranked but could be a good fit for you.
I recommend reading [this book](http://rads.stackoverflow.com/amzn/click/B0097X0FOM), which helped in writing my statement of purpose. The author is another person who graduated below 3.0.
Here's a personal anecdote to give you encouragement: I was a mediocre undergrad at a big public university, then went to work for a number of years before deciding to enroll in a part-time Master's program at a smaller state school. I had a great experience working with my final project advisor, who encouraged me to apply to a PhD program. I am still working on publishing the results, but in the meantime I have been accepted to a PhD program for the coming year. I never imagined it would happen, but it did. If I can do it, so can you.
Upvotes: 4 [selected_answer] |
2014/03/05 | 10,418 | 45,308 | <issue_start>username_0: I am a software engineer and I have been working with people with academic backgrounds for several years. Many times, I've noticed that even otherwise brilliant scientists produce code of extremely low quality (unless their background was precisely Computer Science).
Since those people are very good at doing their research - and eventually obtain remarkable results - it seems they should be clever enough to write *decent* code. Is it just that they don't think it's worth the effort? Plain arrogance? Lack of time?
Examples
--------
It seems to me that in academia the most popular languages are C/C++ and Python (neglecting MATLAB and other vendor-specific languages). The language where I have seen the most amazing pieces of junk is actually C++. The main points are:
* Really, really naive C++ code. They claim they chose C++ over Java/Python/whatever because "it's faster", but they `new` everything, even an array of 3 `float`s that is deallocated a few lines later, where 3 is known at compile time (see the sketch after this list).
* They have learned pointers from C and use nothing else.
* Some of them (not most of them) have read some random blog posts about OOP and now put *virtual* everywhere, using abnormal levels of abstraction.
* They insist on pointless optimization choices.
* They lack proper memory management.
* They copy/paste massive amounts of code from project to project and within the same project as well.
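To make the first few points concrete, here is a minimal, made-up sketch of the pattern I keep seeing, next to the idiomatic alternative (the function and the numbers are invented purely for illustration; this is not code from any particular project):

```
#include <array>
#include <numeric>

// The pattern I keep seeing: heap allocation for a tiny,
// fixed-size buffer, managed through raw pointers.
float mean_naive(float a, float b, float c) {
    float* buf = new float[3];   // heap allocation, although the size 3 is known at compile time
    buf[0] = a; buf[1] = b; buf[2] = c;
    float sum = 0.0f;
    for (int i = 0; i < 3; ++i) sum += buf[i];
    delete[] buf;                // easy to forget on an early return -> memory leak
    return sum / 3.0f;
}

// The idiomatic version: a stack-allocated std::array, no manual
// memory management, and at least as fast as the "faster" raw version.
float mean_idiomatic(float a, float b, float c) {
    std::array<float, 3> buf{a, b, c};
    return std::accumulate(buf.begin(), buf.end(), 0.0f) / 3.0f;
}
```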
And that list covers only the *product*; the *process* is no better. Scientists use:
* no version control,
* no automated builds,
* no documentation,
* no software process at all (neither [agile](https://en.wikipedia.org/wiki/Agile_software_development), nor traditional [waterfall](https://en.wikipedia.org/wiki/Waterfall_model)).
The workflow is:
>
> I devise the algorithm, I write it as a massive 10k LOC piece of C++ and I click `build` somewhere.
>
>
>
As this assessment is probably biased by my own experience, I have inspected some open source projects run by researchers (and maybe a few software engineers) and cited in many important papers. Virtually all of them:
* crashed on corner cases,
* had ugly GUIs,
* and the code - in my opinion - was ready for a complete rewrite.<issue_comment>username_1: I'll have a go at this. This is mainly my personal view based on my use and implementation of academic software. Like many of the comments already mentioned, I don't think bad software is specific to or even more frequent in academia. That said, I think there are a few reasons why it occurs that are specific to the field.
Software is not a priority
--------------------------
In academia, the key performance indicators are all about paper publications. Software, while highly useful in my opinion, has very little value. More often than not, software implementations are side tracks or, at best, proofs of concept used to bump up the citation count (of course exceptions exist).
As a software engineer, I am sure you are aware of the knowledge, time and effort required to produce quality software. Given the *publish or perish* mantra in academia, this time is often spent writing a new publication instead. It's a risk-reward consideration.
My personal point of view is that quality software is very important. For example, often, the first step in comparing a new algorithm with previous state-of-the-art is reimplementing the state-of-the-art due to a lack of implementation. Obviously this introduces a time delay and can cause a series of bugs. That said, I think in general little will change until software gets valued by performance indicators somehow.
Researchers are not programmers
-------------------------------
Most researchers have little or no programming experience, though YMMV depending on the field. I think it's fair to say the majority of researchers learned to program by themselves when confronted with problems that required it. Superficially learning a language is typically not the problem, but you need more than superficial knowledge to produce quality software (choosing data structures, design patterns, deep knowledge of the language, ...). This is a hurdle for self-taught programmers without a computer science background.
Another problem that less experienced programmers face is not realizing when refactoring is necessary. You can find countless pieces of poorly structured software. In research, this is a natural consequence of iteratively implementing while designing an algorithm, which evolves into a monstrous piece of software full of hacks. That monster might still do what it was meant to, though, if you know exactly how to ask it nicely. For many researchers that's where the story ends: get the results and put the beast away forever. It often takes a serious time investment to wash the monster prior to taking it out for a walk in public. This is not always worth it.
Trends in machine learning
--------------------------
My field is machine learning, in which software is being valued increasingly. Examples of this trend include a growing [software repository](http://www.mloss.org/software/) and a [position paper by some big names in the field](http://jmlr.org/papers/volume8/sonnenburg07a/sonnenburg07a.pdf). I am very happy with this evolution, because quality software allows the field as a whole to progress faster and increases reproducibility.
A current patch to the problem is being able to publish papers about peer-reviewed software implementations. I know this is possible in [machine learning](http://jmlr.org/mloss/) and [statistics](http://www.jstatsoft.org/). Such software is usually of higher quality.
Upvotes: 7 <issue_comment>username_2: In addition to @MarcClaesen's answer let me add a chemist's point of view.
* I'm a programming-affine chemist. From my experience, that's a rare species. Maybe less rare on these sites. Though maybe not rarer than a computer scientist in a chemistry lab who implements good laboratory practice...
* One important point to keep in mind is that students (at least chemists) have **no introduction whatsoever to computer programming during their studies**. They may have to take an introduction to using spreadsheet programs and literature database search, but that's it.
I meet students that come to do their research practicum and theses in a subfield that is heavy on data analysis. It is extremely rare to meet a student who already has any kind of programming experience, even though this specialization "concentrates" maths-affine students. I'm still looking for good courses at the university where I could send them to get an *introduction* to programming.
I like the [software carpentry](http://software-carpentry.org/) concept. But note that even there, there is no introduction to the ideas and mindset of programming. Similarly, the introduction materials I know don't really start where the students would need to start.
(I've looked hard for good introductions, because I find it hard to remember how the world looked before I started programming as a teen.)
* So most non-CS scientists I know **learned programming in an autodidactic, sink-or-swim way**. Note that most **natural scientists** have a **mindset** that will **explore a programming language just like they explore the behaviour of unknown substances or instruments**. As programming languages are deterministic, it is comparably easy to get enough understanding this way to put together some script to calculate something. (I remember a colleague who had his first contact with a programming language in Matlab during his Diplom thesis. Some years later, he "invented" the concept of functions.) But during your thesis, you don't have time to learn good programming practice, even if you'd like to. And afterwards you're expected to work and produce results. Learning programming languages is not impossible, but it is usually quite outside the expected scope of what you are doing. The expected scope is usually that you know your way around just enough to get through some calculations.
* A related practical problem I see is that there are **no professional programmers at hand, so no good programming practice tutoring for the students**. I think this still is a blind spot in work group organization. None of the groups where I have worked so far had a professional programmer. Some groups were objectively too small to afford one (at least not unless that programmer were also good in the chemical/spectroscopy lab... - and finding that is even more difficult than finding a chemist who has learned some basics of good programming practice).
* Most of the "monster" programs I know started their life as a tiny little script by someone who just learned enough programming to put together the first useful lines of his life. During the next 2-3 years (usual setting: PhD studies) this steadily grows, and time will always only permit to change just what is needed right now. As people are nice, they give the monster to other people who are even less of a programmer. **At the point where** one would say there is enough experience **to** step back and **do a thorough refactoring** based on the gained experience, the PhD is defended, the student typically moves on to completely different work and **the programming project is abandoned**.
* As a scientist I have to say that the **software development processes you refer to are not very well applicable to much of the scientific programming** I do: they require that you already have an idea of how the problem can be solved (How do you produce a deliverable [or design your architecture] if you don't have any idea how it could work?). Often, not even the outcome is known.
From a basic research point of view, when you know how it works, you've reached the *end* of basic research. Then applied development starts, and there I see how the software development processes can be applied - but this is by definition out of scope for basic research projects.
The basic research part, OTOH, may be described as a trial-and-error approach to producing the first bare-bones glimpse at a deliverable.
You may be seeing only the **tip of the iceberg** of scientific code where it would be (have been) justified to put in the effort of writing properly documented code with a defined interface etc. (other good practice like unit tests I'd prefer to see already with very early attempts...): possibly you don't see the huge amounts of code that are produced for research ideas that then turn out not to work that well and are abandoned.
* There is a **major difference in the usage perspective between most scientific programming I see and a traditional software project**. Most of that programming occurs in Master's or PhD theses. The scope of the whole thing is rather limited. It will usually be a one-person project, because a thesis by definition is a one-person project. So for the one-and-only developer the scope of possible use of the software is at most a few years or this one project, and usually also just this one developer is going to be the only user. This is radically different from "normal" software development. The contrasting basic research perspective is that even if the method should become widely used and it doesn't turn out that the idea worked just for the one problem it was invented for, the next thing that happens is that someone will improve the algorithm, possibly/usually leading to a totally different implementation. Or find out that it fits into a much more general framework, which would correspond to a totally different interface, etc.
In this situation it is not even clear that carefully designing an interface for the method will pay off at all.
I'm not saying that this couldn't or shouldn't be done with a readable implementation. But it is not the situation that encourages putting in the effort to learn how to write readable code.
* A **large part of the programming** I do is in scripting **data analyses tailored to a specific experiment**. Pragmatically, I generalize the code into reusable packages only when I either know from the beginning that I'll need it again, or when I actually encounter this situation. However, this is astonishingly rare compared to e.g. what I encountered when working as a student as a "normal" developer of a database application. Partly this is probably because for some projects where I know that code will be reused, I set up a package/library from the beginning that I develop in parallel to the data analyses at hand. But then my perspective on that is completely different from the student scope, as I expect to keep using this code base for years.
---
* One of the nicest and most *astonishing* (totally unexpected) experiences I've had wrt. scientific programming was this: I submitted a paper and released the software implementation in parallel. One of the reviewers asked how I ensure that the calculations are correct - which allowed me to answer that I use unit tests, and that the package actually contains about twice as much code for testing as for calculations. This was unexpected because it is already quite unusual in my field to release the code with the paper - but I had never before seen a paper explaining which automatic tests are provided for the implementation - so I hadn't expected that this information could make it into the actual paper.
I take this as an extremely promising sign! (A toy example of the kind of test I mean is sketched below.)
* Another very promising sign is that when I explain to my collaborators\* the concept of version control systems, many like the idea and describe it as something they thought should exist but hadn't known actually exists outside the scope of Word change tracking. (Though for the research workflow, I still think the VCS I know (svn, git) work as well as they do for pure coding projects.)
**update a few years later:** getting non-programmer colleagues to use version control is still very much an uphill discussion (also because VCS handling of binary files did not improve as fast as I was hoping). I mostly went back one step and we now use nextcloud for sharing data, which, while it does not provide real version control, at least ensures everyone is talking about the same state of the data/files.
\* also the not-at-all-programming ones who feed the measured data into the system, or who do e.g. the medical/biological interpretation.
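To illustrate what I mean by automatic tests guarding the calculations, here is a toy example. Everything in it is invented for illustration, and I sketch it in C++ to match the question, although in practice the tests live in whatever language the package itself is written in:

```
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Hypothetical calculation under test: scale a spectrum so that
// its maximum intensity becomes 1 (made up for this example).
std::vector<double> normalize(std::vector<double> spectrum) {
    double peak = *std::max_element(spectrum.begin(), spectrum.end());
    for (double& v : spectrum) v /= peak;
    return spectrum;
}

// The kind of automatic check I mean: a known input with known
// expected values, plus a property (the peak must map to exactly 1).
void test_normalize() {
    std::vector<double> s = normalize({1.0, 4.0, 2.0});
    assert(std::abs(s[0] - 0.25) < 1e-12);  // known value
    assert(std::abs(s[1] - 1.00) < 1e-12);  // the peak becomes 1
    assert(std::abs(s[2] - 0.50) < 1e-12);
}

int main() { test_normalize(); }
```

Each such test is trivial on its own, but a few dozen of them are what allowed me to answer the reviewer's question with confidence.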
---
**update:** From the experience I gained since I first wrote that answer, I'd now put the organizational aspects much further up:
I've mostly left academia; I'm freelancing now but still have research projects where I'm in a subcontractor role for (academic) research institutions. For a certain pre-processing procedure I was commissioned to supply, I met flat-out refusal to pay for "unnecessary stuff" like unit tests and encapsulating the working code in a library/package with its own namespace. The method is a data analysis procedure, so the most important type of bugs are errors in programming logic (against which unit tests can provide a certain level of protection). It is to be used in R, i.e. in interactive work scenarios, which implies a high risk of messing up the user's workspace if third-party functionality is not encapsulated in its own namespace.
That refusal came from the upper management level of the research institute.
@gerrit comments that university bureaucracy may not allow a group to have a professional programmer. I think this is likely true as a day-to-day reality. However, I also think it is related to this organizational blindness I'm talking about here. If upper management in academia saw the importance of using state-of-the-art working techniques in data treatment and software development, university administration would likely have a different view on this as well. And if grant proposals included professional data and software management topics, things would maybe improve on that level as well. I think academia is in somewhat of a vicious cycle here: everyone but students is considered very expensive, so project proposals don't dare include any "technical" staff. If the topic of data management or software development was brought up, our academic management never considered anything but whether "we could have a CS PhD student", which is not a good fit: the projects need well-established and reliable working approaches rather than anything that would be considered sufficiently new to count as CS research so that the student could earn their PhD.
And of course, upper management may be convinced by their colleagues or by examples of how much professional treatment of these aspects helps, but as long as they are not convinced, it will be extremely difficult to find money and permission to try out whether and how much it helps.
Upvotes: 6 <issue_comment>username_3: In many aspects writing software is an art. Many great painters weren't born that way; they learned the art through years of training, with many bad results along the way.
Take a sheet of paper and try to draw anyone you know. Even if you have a picture in front of you, it will most likely look comical. Now a real artist would ask you why you didn't see the shadows, couldn't come up with the right perspective, or put the ears in completely the wrong position even though you had a clear picture in front of you. It is, however, not your arrogance that caused that; it is your lack of experience in looking at the model in the right way. Technically it has something to do with you using the wrong half of your brain, and you can be trained to change that. Just not by tomorrow.
Scientists work theory-based. They have a theory, they want to prove it, and they write code focusing only on the actual theory at hand. That is you seeing a nose, but not the shadows around it. If you taught them for months or maybe years to use the right techniques and "strokes" to do it the right way, they might change. However, you should ask yourself if that is worth the time. Sometimes it is, but sometimes scientists should just stick to scientific stuff, much as painters should not suddenly take up radioactive chemistry out of the blue.
If you decide on teaching them, keep the idea of the artist who starts on his first day in mind: it is not their arrogance, it is their lack of knowledge on how to use the right parts of the brain. There is a reason why "Software Development" is a 3-year apprenticeship, after which you are considered "beginner" level.
Upvotes: 3 <issue_comment>username_4: Just to supplement the great answers by *username_1* and *cbeleites*. In academia, when people write code, it is common that:
* **Problems are often open-ended** - so perhaps the data structure people start with will be used later for something else, for which it is suboptimal. Also, many things are not properly designed, because everything changes. Compare it to writing typical commercial software, where things are specified from the beginning and usually far from cutting-edge (even if demanding, not "the first time").
* **Maintainability is not a requirement** - a small piece of software, not used later, with no-one else going to take over the code (and it is way easier to understand one's own code than code by others - even if it is chaotic, you know what is where). Compare it with the situation where, after the author leaves, someone else is going to look after the code (or one needs to constantly consider hiring one more developer to speed up the progress).
* **People work alone** or on **legacy code** - very opposite situations, but giving similar results. In the first case (as above), people can understand their own chaotic code; in the second (e.g. modifying pieces in old Fortran code), people have to adapt, making small (and often unanticipated) changes that conform to the existing code base.
Personally (coming from a pure academic background), I've learnt most of the good coding practices when collaborating with others:
* by **learning from others** - some coding practices do not require a lot of brainpower, but a lot of experience - the wisdom to tell that a given "smart solution" will become problematic to maintain in the long run (as do most [kludges](https://en.wikipedia.org/wiki/Kludge) (= ugly hacks)),
* by **collaborating** - many times I realized that what was reasonably clear to me was a totally unintelligible `cthulhu fhtagn` (yet powerful) for others (and the other way around as well - code that was nice for someone else was a challenging riddle for me),
* all in all, many good practices in fact amount to writing to the **least common denominator of skill** (and not-so-smart people are already close to it); clever code by one person *will* be difficult for others.
And as a dessert - **the curse of the gifted**, a comment on the last point:
>
> You are a brilliant implementor, more able than me and possibly (I say
> this after consideration, and in all seriousness) the best one in the
> Unix tradition since <NAME> himself. As a consequence, you
> suffer the curse of the gifted programmer -- you lean on your ability
> so much that you've never learned to value certain kinds of coding
> self-discipline and design craftsmanship that lesser mortals *must*
> develop in order to handle the kind of problem complexity you eat for
> breakfast.
>
>
>
(Source: <http://lwn.net/2000/0824/a/esr-sharing.php3>; or abbreviated: <http://www.linuxtoday.com/infrastructure/2000082800620OPCYKN>)
And it was addressed by [<NAME>](https://en.wikipedia.org/wiki/Eric_S._Raymond) to [Linus Torvalds](https://en.wikipedia.org/wiki/Linus_Torvalds)...
And as a side note (as programming becomes more and more prevalent), scientists now realize that good practices and workflows are important; see e.g. <http://software-carpentry.org/>.
Upvotes: 4 <issue_comment>username_5: There are some great answers out there and I would like to throw in my two cents on the subject, as it is a very relevant issue I often think about or discuss with my colleagues. There will inevitably be some overlap with parts of existing answers; I only hope that I can give a slightly different perspective in those cases.
---
I did applied maths as my major and did my masters in biological & medical modeling (whatever that means). I am past the halfway point of my PhD studies in [bioinformatics](http://en.wikipedia.org/wiki/Bioinformatics) and [systems biology](http://en.wikipedia.org/wiki/Systems_biology). I almost exclusively work *in silico* and have come to sire many of those monstrous, ugly and sad pieces of software.
First off, I think you are making a small but important mistake in framing your question. You say:
>
> "*Why do many talented scientists write horrible software?*"
>
>
>
I would instead suggest
>
> "*Why do software written by talented scientists end up being
> horrible?*"
>
>
>
The difference is subtle but essential for the rest of my answer. After all it's not like scientists gather around a table and *decide* to write horrible software.
### Many scientists who write code are not educated to write software
There is a serious difference between knowing how to *code* and knowing how to *write software*. I did almost as many courses in the CS dept as I did in maths, during my undergrad and masters, so I felt pretty confident with my programming skills. That is, until I was faced with questions like packaging, dependency management, lifecycles, licensing etc. None of these were remotely within the curriculum during my studies. I don't know if those who do CS as undergrads learn these concepts, but I sure as hell never needed them until all of a sudden I did.
### Bosses/supervisors of many scientists who write code are not educated on writing software
Not only do you need to learn a bunch of new stuff, but imagine not being able to explain to your boss why learning that stuff is important. I have this issue pretty often, as writing code is often held comparable to doing labwork at our department. People think writing code just happens on its own, and preferably quickly. I have often had discussions with colleagues where they jokingly mentioned that all they want to hear from me is "computer say yes/no?" How long something new might take is very often underrated, and having to write tests continuously is typically seen as a waste of time. Which brings me to my next point....
### Good software is not valued in academia, at least not in the same way as in industry
The measure of competency in academia is publications, and the form of currency is citations. You are constantly in a form of competition to come up with something new and useful, and only the first one out there will get the prize. Clones do not exist or survive particularly long in academia. In contrast, in industry you can win market share by better advertising, a cooler GUI or a lower price. In academia, if some method is already published, you need to do something else.
Similarly, if you have already published a method, then additional features, clean-up, optimization etc. of that proof-of-principle software are often not enough to warrant a new publication, which practically means that you have wasted months of work for nothing. Sad but true...
### Expectations change, so you've got to expect the unexpected
This might be a small point, but I can't stress it enough, as it has come back to bite me over and over again. You simply don't get proper specifications for a new project. They are often either all too vague or way too strict (unrealistically so).
### You typically don't get the support you need
The few of us programming PhD students at my dept try to improve ourselves by keeping up to date with the trends, learning best practices for instance via SO. But more often than not, when you want to try something new you hit obstacles: either the IT dept thinks you are too much of a nuisance, or the boss thinks you are slacking off, or the people you are asking for help think that you don't know sh\*t and you are wasting *their* time. For instance, it's taken me several months of negotiations and mailing back and forth to be able to access our version control server from home. Eventually it's just faster to skip certain best practices.
### The newest coolest CS trends aren't always well documented for people who are not experts
I have tried to get my hands dirty with several "new" technologies, which often have a steep learning curve. Sometimes it's really not worth the effort. The best example I have is Maven. As I often work in Java, I thought I should use modern tools for packaging and dependency management. But my conclusion, after having battled with it for so long, is @%&$ it! I really don't have the energy or time to go through that mess of documentation.
### Bottom line
After giving myself grief over these things in the past years, I came to the following conclusion, which gave me some inner peace:
"*I am not a software developer. I am neither educated nor paid to write software. Writing software is **not** my job; learning to solve certain problems **is**.*"
Hope this answer gives you some insights as to why software written by scientists (exceptionally talented or otherwise) often doesn't live up to the standards established by software developers.
Upvotes: 5 <issue_comment>username_6: Great discussion, and I have thought the same many times. There is another kind of software as well, which isn't really mentioned above - programs developed within academia which aren't part of a research project. There are lots of software development projects that are more focused on logistics, teaching etc. (clickers, video capturing, intranets, library software etc.). Sometimes these are handled professionally and become collaborative projects etc., but very often they're the result of someone having some graduate assistants who know a bit of programming, who code up something that perhaps works - but of course has no documentation, testing, version control, requirements documentation etc. - and when they graduate and move on, good luck to anyone who wants to try to maintain it... Part of this is also the "false economy" of academia, where certain kinds of labor are extremely cheap/free (for the people who benefit from it).
I am personally probably responsible for some crappy software as well, as a PhD student in computer-supported learning. I'm hopefully a bit more aware of best practices, version control etc than many, but I've never had any professional training, and what's perhaps more important, I've never been part of a community of practice, mentored by better researchers etc. In education, it's in a way even more difficult, because there are very few technically inclined students/professors. I of course use SO, mailing lists etc very extensively, but I am sure my code could be massively improved if I had a senior colleague down the hallway who was reviewing my code and providing feedback, etc.
In fact, one of the comments above made me think of this -- it's quite common for universities to have statistical consultants who are to some extent available to researchers. (We have someone in our library whom we can access for free for one or two hours, and then we have to pay a fee, but I know a lot of people take advantage of it, and found it very helpful to sit down with them and go through their research design, their assumptions, the statistical design etc.). It would be an interesting concept to have a "software development consultant" (not sure about the title), who would basically be a professional code reviewer... But the role could also extend to helping people think through their needs, figure out useful frameworks or libraries, and navigate version control, open source licenses etc.
And of course, changing the incentive system to reward releasing (or improving upon!) high quality code is incredibly important, but very difficult. I think Mozilla's [Academic Code Review](http://mozillascience.org/code-review-for-science-what-we-learned/) exercise is a really interesting experiment in this regard.
Back to writing Python scripts to parse MOOC clicklogs :)
Upvotes: 3 <issue_comment>username_7: summary: **replication** or lack thereof.
details:
My observations (unfortunately, small n and a single POV) are that the main reason scientists, talented or not, write horrible software is simply "the opposite of replication." They see no value in reproducible research, they don't anticipate their work will be replicated, and they certainly don't desire that their work be replicated. (They just want their work to be **cited** :-)
I'm a BSCS who did time "in industry," including one well-known faceless acronym. All the coders I knew at least used and valued open-source software, and many contributed (esp @ my last straight-up code gig). OSS is only valued to the extent it is used and extended. (AFAICS--am I missing something? exotic languages that are studied but not used?) Of course, a given OSS is only used if it's robust, tested/testable, well-documented, etc (and only extended if it's public).
Now I've gone back to school as an environmental modeler (mostly atmospheric). The folks with whom I've worked mostly don't even put their code in public repositories (even the ones much younger than I--this is not a generational issue AFAICS), much less create documentation, modularity, comments (whether in code or in commits), and the other affordances one expects in OSS. This appears to be due (based solely on conversation--not strong empirical data) to their assumption (and, usually, hope) that their code will not only never be used by anyone else (and indeed probably never be used again by the coder), but never even *be seen*.
Unfortunately I didn't understand this when I "hired on" as a grad student. (I knew almost nothing about graduate academia--I had just "jumped out a window" at the faceless acronym and only knew what I wanted to work on--and was esp clueless regarding the cultural differences between informatics ("computer science" being famously misnamed) and "hard science.") I chose as my advisor the professor whose area of work most interested me. Once I started looking at his code, my vomiting was projectile. I tried to engage him about it, and his attitude was approximately (i.e., not a quote) "papers matter, code does not." He never submitted code with papers or made code public, and used almost exclusively a fairly obscure, {proprietary, expensive} {language, development environment} (as, to be fair, do many of his colleagues, some of whom are Very Big Names in our smallish field). Having some philosophy-of-science background, and knowing that the models on which we work have real public-policy implications (e.g., serious spending), I asked how one might reproduce his results. He said (and this is a quote) "they hafta trust us--we're scientists." He is no longer my advisor ...
While I suspect (again, on small n) the observations above do "measure central tendency," all is not darkness and void :-) In my own field, there is exemplary software like [GEOS-Chem](http://geos-chem.org/). Unfortunately, GEOS-Chem "is what it is" largely (IMHO) due to the [GEOS-Chem Support Team](http://acmg.seas.harvard.edu/geos/geos_chem_support.html), which provides a sort of infrastructure astonishingly rare in my field. Hence I suspect GEOS-Chem is, were software quality being measured over this domain with high coverage (am I missing something?), probably 2-4σ better than the mean.
Upvotes: 2 <issue_comment>username_8: Even being a professional software developer, could you develop what you would call good software when requirements vary in unpredictable ways on a weekly basis? This is the world where researchers live and survive.
Doing scientific research means going into unexplored areas. Researchers do not know which features they may need from the monster tomorrow morning. That depends on the results they obtain at midnight today. A scientific program accumulates too many iterations, adding features that nobody thought would ever be needed.
Any attempt to leave room for new features or add modularity often makes things even worse, because these "generic approaches" must later be hacked around when further alterations turn out to be far more drastic than what was expected (and supported) by the "generic framework".
As a result, a program that evolves directly during the research process is often only usable as a prototype and must be rewritten before being released as commercial or FOSS software. A professional programmer, if hired, could probably do somewhat better, but the instability of requirements would most likely prevent arriving at a really great final design anyway.
Upvotes: 4 <issue_comment>username_9: Why didn't <NAME> pave a highway to the South Pole?
Why didn't <NAME> build a ski lift on his way up Mount Everest?
The job of academics is to find solutions to problems previously thought impossible, to teach others (and despite the boilerplate in their grant proposals, this target audience is *other researchers*) how to solve the problem, and to do the above as efficiently as possible.
Academics care about the quality of their code only to the extent that it works "well enough" as a proof of concept of their ideas, and, possibly, that it can be reused in future projects. Refactoring code, writing documentation, careful error checking, setting up automated builds, etc. is a waste of time *unless* the time invested improving the software saves them at least as much time generating working results. For the four items I've listed, this is almost never the case.
To be sure, when their research turns out to be practically important, many researchers will go back and write well-engineered implementations of their earlier algorithms (usually as part of a consulting agreement with professional software developers), and they had the training and talent to do so earlier -- it just wasn't worth the time.
Upvotes: 4 <issue_comment>username_10: Qualities that make for a good scientist don't always make for a good software programmer (except, as the OP pointed out, when the "science" happens to be computer science).
Computer coding is a highly precise art. Many people, including good scientists, aren't sufficiently precise to write good code easily. This is particularly true of the more "intuitive" types of scientists.
Many scientists find computer programming "boring" and for this reason, don't do well at it. It's true that "regular" science requires detail work, but not to the degree of programming, which many (including yours truly) find "mind numbing."
Basically, if there is zero (or a very weak) correlation between being a good scientist and being a good programmer, you will get the whole gamut of programming ability - good, middling, and terrible - from a population of scientists.
Upvotes: 1 <issue_comment>username_11: Because programming is like driving a car:
Everyone needs it but most of us are not professionals.
How does one learn programming? Generally by buying/borrowing a book like "Python (or whatever) for beginners". It will be all about how to save a file, how to run a program and how to call a function. After this, I can do most of what I need urgently now or yesterday.
Where will I learn about design patterns, good software dev practices, agile development, how to write nice code? NOWHERE!
Just as after taking a driving course I believe I am OK, and do not read 5 hours a day about how to drive faster on a wet road, I don't go to bookshops and read every single CS book that may be relevant to me. Even if I go to the net, Software Carpentry is maybe the only relevant resource! Seriously, if anyone knows anything similar, please post it somewhere here!
I will not moonlight through a full CS curriculum on MIT courseware just to hack together my "Hello world" of the day. And I wouldn't be surprised if even the CS guys at MIT learned half of the good programming practices outside of school, on the job, and not during 4-5 years sitting in school.
Upvotes: 3 <issue_comment>username_12: I don't think I'm a talented researcher, but I have done research **in** software which actually produced a piece of software. For that piece, I also collaborated with a bunch of folks at a major tech company. Obviously, a lot was different between our approaches to software.
* When I wrote the software, I chose the easiest path to demonstrate that my ideas should be correct. I did not spend a lot of time engineering the software, because I did not have time! I tried hard to get it to work, and I did try to make it readable and hackable (because I know someone else will take over eventually), but I did not spend a whole lot of time engineering it. I was mainly interested in getting it to work so that I could do stuff with it (and show that my ideas were correct!)
* That was actually the good part. In my research, we used a research software library created by another research group in another country. They were actually not aware that people were using their software! As a result, they checked in changes which broke the build. Moreover, the code was hard to read and we couldn't fix it ourselves (it was C++, so the error messages weren't that helpful either). We had to contact them personally to get that fixed.
* So, can academics write good software? Yes, at least many could (or they wouldn't be able to teach programming courses). In fact, there was a professor of mine who was a phenomenal programming instructor, but whose personally-written code isn't that pleasant to read. Academics simply do not have time to hack around and make their code good-looking. If they had more time to improve their software, I am sure they would.
* OTOH, the folks at the tech company were actually interested in producing software they could use (and, by way of that, perhaps produce some research papers). They followed their coding standards. They engineered it deeply. They used build management, integration tests, and coverage tests. They did it because (1) they had time and money, and they were paid to do it, and (2) they were going to actually use it!
Upvotes: 2 <issue_comment>username_13: The short answer to this question is that **research scientists are (mostly) not software programmers** (although they do publish software from time to time).
I work in a computationally heavy field, which means that a lot of the miscellaneous stuff that I have worked on in the past could be packaged into software. From my own experience, here are some obstacles that are keeping me from writing a full-fledged piece of software based on my work:
1. A lot of my **research involves open-ended investigations, hence the code is also written for that purpose**. To write software implies that I have an essential understanding of everything I have researched so far, which is not the case (and will not be the case in the foreseeable future).
2. Each of the things I have worked on is **too small individually to be written as software**. To expand the scope of these things requires research, not software.
3. A lot of the research is outside of my control. There will always be new opportunities and directions for research, which means **some research (and the code written along with it) will be aborted**; this goes back to the points made above.
4. Most of the time I write MATLAB code on Windows. Even if I don't switch my OS (to, say, Linux), my options are to translate the MATLAB code into some .NET language or to export it as a C/C++ object. I might be wrong, but I think **software development in either language is hard and time-consuming**.
5. The reason most research engineers write code using MATLAB is that the turn-around time in research has a high variance. Most of the time, things need to get done on a weekly basis, and this may involve very novel experiments. When it gets busy, the turn-around time might be within the day. These **experiments are optimally done using a very stable platform such as MATLAB or Mathematica**, whereas, in some other languages, your code can't compile if you accidentally insert a tab somewhere or miss a colon. Again, this fuels the things I have mentioned in the points above and further deteriorates proper software development skills, even though you are writing code.
6. A commenter mentioned that software development is doing great in machine learning. From my perspective, the reason is that, in fields such as optimization, machine learning, and signal processing, A. things are visualizable, B. things have been studied since the 1950s, C. a lot of people are trying to get into the field, and D. a lot of things are honestly easy, computationally speaking, and many times an inexact solution (that works in very special cases) suffices. Unfortunately, **most of us do not work in such convenient fields**.
7. Frankly, **we all have lives beyond research**. Software development could require personal commitment or collaboration that lasts longer than the entire duration of a research project.
Upvotes: 2 <issue_comment>username_14: There are three main reasons.
One is that scientists are not professional software developers. That's true even for computer scientists, and more so for mathematicians, physicists, chemists, biologists, social scientists and so on. Not that they couldn't be: most people who are reasonably clever could become professional software developers if they wanted to, but most do not.
Two is that scientists are not interested in creating whatever would be the opposite of "horrible software". They are usually only interested in the results. Where this is bad is if their software contains bugs that produce results that are wrong, but close enough to the truth to seem plausible. Fortunately, many bugs will produce results that are obviously wrong. It is also bad if the software is confusing enough that nobody can declare for sure whether it is correct or not, but to my knowledge there are not many complaints about that.
And three is that scientists are often under time pressure. They might write software quickly that they know *should* be improved, and they might even know *how* to improve it, but they just don't have the time.
One thing that I really, really hope is not the reason is that some people think anything they understand must be simple and anything they don't understand must be hard. With these assumptions, any scientist writing software that nobody can understand would be assumed to be a genius, while anyone writing software that is easy to understand would be not very impressive at all. So doing what a professional software developer does - making software that is easy to understand - would be damaging to your career in the view of these people. I really hope this is not what happens, but I wouldn't be surprised.
Upvotes: 0 |
2014/03/05 | 1,348 | 5,330 | <issue_start>username_0: I am starting an MS in CS program this summer. My department is small and my particular concentration is both narrow and new. Currently only 3 professors at the school are listed as having research interests in my area of interest.
I have contacted 2 of them. One is tenured and one is an assistant professor.
I know that the tenured professor has *lots* of published work in the field already and would probably be the better recommender come PhD application time.
I'm not eligible for assistantships until I finish some prerequisites I lacked from my undergrad. But I want to get in and prove what a good little research assistant I can be as soon as I can, so I can beat out the other assistantship applicants when the time comes.
**My question is:** If they don't respond to my emails, how long should I wait to follow up with them so as not to be annoying? Should I just wait until classes start and go talk to them then? I'll be taking a class from the assistant professor this summer in a subject outside my research interest. What's the best approach to get people to collaborate with me?
(Sorry this sounds like lots of questions in one, but I only have a year and a half to become the best PhD applicant ever and I want to make sure I do it right!)<issue_comment>username_1: >
> What's the best approach to get people to let me help them?
>
>
>
Impress them. Demonstrate that you genuinely can help them.
The problem you face is that professors are busy. Though professors need students, any decent professor that "takes on" students (be it in an advisory, mentorship or official supervisory role) knows that it's a commitment. In as much as working with a good student can be rewarding and productive, working with a bad student can be a huge time-sink and personally draining.
And good professors often get lots of offers of "help".
So you need to demonstrate that you'll be one of the good ones. You need to surprise them, show your motivation, your interest, your enthusiasm, your skills, your ideas.
Just a couple of thoughts:
* Read a difficult paper authored by the professor in the area. Approach them to tell them you found the paper interesting and to talk about the finer details of it. Try to challenge them about weaknesses of the paper (<- depending on their character). This demonstrates your ability to read papers independently, as well as your knowledge of the area, your ability to think critically and your enthusiasm for the subject.
* Try putting together a list of elevator-style ideas for research topics in the area. Tell the professor that you are interested in doing research, why you want to do research, and try pitching some ideas to them. Try to flesh out an idea or two with them: ask them questions.
Like any sort of work relationship, if you meet with them face-to-face, it's also important that you come across as someone *easy-going* who would be pleasant to work with.
>
> My question is: If they don't respond to my emails within some time frame, when is it no longer annoying to follow up?
>
>
>
Approach them in person. Emails from strange students don't last long in the harsh environs of the INBOXes of senior professors.
Upvotes: 3 <issue_comment>username_2: I agree with username_1's answer. I would also like to add the following:
I don't know what was in your original email, so this may not apply.
However, if your email requires some thought or effort on the part of the professor to respond to, then it will sit in his inbox until he has a chance to sit down and compose a proper response. Since you are an unknown student and he is very busy, this is low on his list of priorities and he may never get to it.
On the other hand, if you politely follow up a week or two later with an email in which you ask to *meet*:
>
> Dear Professor Y,
>
>
> I am going to be a student in your department next semester.
>
>
> I have been reading about your paper on Y and I am very interested in talking to you about it; do you think we could schedule a time to meet next week?
>
>
>
This email is easy and quick to respond to ("Yes, how about Tuesday at 10?"), so you may be more likely to get a response.
(I know this doesn't make sense, since the professor would still have to expend time and effort to meet with you. But spending a half hour on a meeting on some future date seems much less of a burden than spending ten minutes *right now* to send an email to an unknown student.)
Upvotes: 4 [selected_answer]<issue_comment>username_3: I did several internships when I was an undergrad. Here is my advice:
1) Meet the professors in person.
2) Look outside of your department; the best internship I did was for NASA. A CS degree is highly sought after in other colleges.
3) Talk to and buddy up with other students that are in an intern position. They will make good references and will tell you when there is a spot open.
4) Be prepared to take the first one or two for no pay. My first two were for credit.
5) Talk to all the professors. Some of them will take you on mostly to do some quick task they don't want to do.
6) Don't be afraid of a few no's. It's not personal; it's just that they may not have the time to train you.
My internships are what set me apart for my first job, and they really will help out.
Upvotes: 2 |
2014/03/06 | 1,540 | 6,971 | <issue_start>username_0: I noticed that my university has been considering "internal" candidates more strongly for permanent faculty positions. These are candidates who have received their master's degree and/or PhD at the same university.
So when candidates are interviewed, the top contenders are invited for an "on-campus" interview/presentation.
Are there any ethical issues involved in one internal candidate attending the other internal candidate's "faculty interview presentation"? Wouldn't that introduce bias for or against both candidates, since one candidate would (in)advertently receive "pointers"?
Would the situation be different if the internal candidate's competition were an external candidate?<issue_comment>username_1: As an internal candidate you should talk to the search chair about which events you plan on attending and not attending. One of the jobs of internal candidates, like all members of a department, is to help recruit the individuals being interviewed such that whoever is given the offer, hopefully you, is more likely to accept the offer. That said, you do not want it to appear you are trying to gain an unfair advantage. Many departments interview the internal candidate first, which helps deal with this issue. When you are interviewing after an external candidate, the thing to remember is that in most cases watching what someone else does is not going to help you. One case where there could be a definite advantage is if the search committee has some fixed interview questions and you attend an event that gives you access to these prior to your interview.
A case where it might be appropriate to attend a job talk would be if your department struggles to reach critical mass at job talks. In this case, having an extra body present can be very helpful. Unless attendance at the talk is pitiful, I would suggest you refrain from asking questions or talking to colleagues about the quality of the talk. Another example would be if you are currently a PhD student applying for a TT position; in that case it might make sense for you to have lunch with the external candidate along with the other PhD students. This is especially true if you have a postdoc lined up in the department such that you will be there the next year and potentially collaborating with the new candidate.
Upvotes: 2 <issue_comment>username_2: First, as I inquired about in the comments, most job interviews have both public and private aspects. Public means that they are open to the entire department (or maybe, and probably in some formal sense are, open to the entire university community): e.g. there should be at least one *job talk*, and most often this talk will be advertised on the calendar like any other talk, sometimes called a "special colloquium talk" or something like that. That makes it a departmental event. As a department member you certainly have the right to go. There are also usually some private "interview" portions: this is several faculty members asking questions of the candidate, either one-on-one or in groups (or at meals; even lunches and dinners are part interview, really). These are usually not public events; rather they are activities on the part of the faculty search committee, though in some contexts, e.g. at a small liberal arts college (SLAC) students and other non-faculty members may play an auxiliary role. Certainly as a candidate for a job you can't be part of the search committee to hire into that job: that is the Cadillac of conflicts of interest! So you should try to distinguish in your mind between "search committee activities" and "public events" and be sure not to attend the former. (As usual, if you are in doubt, ask.)
Now I want to make a few comments on username_1's (good, I upvoted it) answer.
>
> One of the jobs of internal candidates, like all members of a department, is to help recruit the individuals being interviewed such that whoever is given the offer, hopefully you, is more likely to accept the offer.
>
>
>
At least at a large research university, it is probably not the case that all department members are involved in recruiting for faculty jobs (either temporary or tenure-track). For instance, students are usually not involved at all other than maybe wandering into the job talk (though at a SLAC they might be), and temporary faculty are not on search committees for permanent faculty. So I don't necessarily agree with the above sentence. Moreover, being involved in "recruiting" for a position one has applied for again sounds like a whopper of a conflict of interest.
>
> When you are interviewing after an external candidate, the thing to remember is that in most cases watching what someone else does is not going to help you. One case where there could be a definite advantage is if the search committee has some fixed interview questions and you attend an event that gives you access to these prior to your interview.
>
>
>
Yes, I completely agree with that, and it answers one of the OP's main questions. Most academic interviews I know are not sufficiently formalized that watching someone else's in advance would help you with yours. Most questions asked in academic interviews are not content questions or "gotcha" interview questions: they are questions *about you*. I also think that a job talk is a strange place for asking pointed interview questions.
So here is my answer: being an internal candidate for a job is potentially awkward enough so that you should seek some guidance about what to do. If the job talk and the interview aspects are not separated sufficiently clearly in your department, then I think the ethically correct thing to do is just stay away from the whole thing (but tell the faculty in advance that you are planning to do this, in the unlikely event that they have other plans). If the question really is attending another job candidate's talk: I see no ethical problem with it, and if you're in a department where it really is expected that faculty in your position attend all such departmental events (again I'm thinking of a small department) then maybe you should go. It is an academic talk after all and you might learn something. However I think that you should really (forgive my colorful language) shut the hell up as an audience member in this situation. Interacting with another candidate in any active way is also a huge conflict of interest. If you have a sincere academic question, of course you can find a way to communicate your question to the candidate later on.
I don't see how multiple internal candidates makes much difference. I guess that if I knew the other candidate very well and was very friendly with her, I would be more inclined to come to her talk rather than skip it...and especially I would be more inclined to ask what her preference is. I would not do that for an external candidate because it's a potentially loaded question that they shouldn't have to deal with.
Upvotes: 3 |
2014/03/06 | 1,463 | 6,402 | <issue_start>username_0: I recently reviewed a paper for a (reasonably reputable) journal and found that it was already published (verbatim, by the same authors) in another journal (a [fake](https://academia.stackexchange.com/questions/17379/what-are-fake-shady-and-or-predatory-journals) one).
I wrote the following review: "*This is a duplicate publication, it appeared two weeks ago in Journal X, here is the link to the copy on Journal X's website*."
The timing suggests that they submitted the paper to Journal X around the same time they sent it to us.
In retrospect I should have emailed the editor instead of going through the review site. But anyways, within a day or two the review site showed that the associate editor had seen my review, and the editor listed for the piece changed from the associate editor who was originally assigned to the editor-in-chief.
However, the decision letter that went out to the authors was just a standard rejection letter, with my review appended to the bottom. (The standard rejection letter thanks the author for submitting the piece for consideration and wishes them success in finding another venue to publish it in...)
It seems to me that this is a poor strategy for disincentivizing attempts at duplicate publication; at worst, one risks a rejection if found out.
(Disregarding for the moment what the consequences would be if they had been successful: both papers published, and the duplication later discovered by someone else.)
I checked the publisher's website, and though it clearly specifies that duplicate publications are not permitted (authors have to certify at the time of submission that the piece is not published or under review somewhere else), I didn't find any specific details on what the consequences might be.
**Is this normal procedure? Do editors usually follow up and try to impose consequences for attempted duplicate publication?**
If so:
**What kind of consequences are usually imposed for *trying* to publish the same paper twice, if caught by a journal while in review?**<issue_comment>username_1: Good question. First of all, as far as I know, there is no penal system in academic publishing. Reputation is everything. (BTW, many of us have made big or small mistakes when we were young and ignorant, learned our lesson, and everyone moved on.) Of course, a journal could always shun authors "indefinitely" if they attempt to violate codes of conduct. There is not much more a single publisher can do. In cases like yours, the responsibility falls back entirely on the academic community - whose members are also editors and referees like you - and the controlling power of one's reputation within it.
In your particular case, as you said, the editor presumably consulted with the editor-in-chief, which shows it is not a trivial problem to handle, and they decided to proceed as they did. If the journal had rules in place, such as refusing future submissions from the same authors, they would have informed the authors, but that's obviously not the case.
You did not specify if those authors are known in your field or obscure. If the editor knows the author personally (which I doubt in your case), the dynamics are different.
Upvotes: 4 [selected_answer]<issue_comment>username_2: I stumbled last year upon a "black list" of authors in the context of an IEEE conference. I don't know if such lists are common or known among editors. I think that each area of publishing (engineering, medicine, biology, etc.) has its own habits. But again, this is at the editor/journal level and not at the reviewer level.
On the other hand, I think that you should give the authors some benefit of the doubt.
Take one busy PhD coordinator, an eager-to-publish student, some communication discontinuities, add the publishing-invitation spam to the mix, and you get a good paper submitted to a fake journal.
Maybe, after the advisor recognized the error, it was too late to withdraw the paper. So they sent the paper to a regular journal so that a hard-worked paper wouldn't go to waste.
My 2 cents.
**EDIT**
Usually it is allowed (and sometimes encouraged) to extend a paper already presented at a conference with new results, new comparisons, etc., and submit it to a journal. However, in the cover letter, the authors must clearly specify that! And cite the older paper.
Upvotes: 3 <issue_comment>username_3: I am personally aware of a case in which as little as several sentences from the *introduction* of the authors' own article were reused in another article (not even results or conclusions). The case became public, dragging along a chain of events that were not very severe, but unpleasant enough to be worth avoiding.
So I suggest not doing this. Of course, it also depends on the policies of both journals, but most of them disallow duplicate content. To make a long story short, there may be sad consequences.
Upvotes: 0 <issue_comment>username_4: No, I don't think that there is any standard policy for treating these cases.
Still, remember that editors and reviewers are humans like everybody else; they do notice when someone does this, and they are usually from the same field and meet the authors at conferences, etc. No matter what, some information always leaks out, and such strongly negative information seems to leak out faster.
Breaking ethics in such a way is seen as a strongly negative thing by many people, and they will likely treat you accordingly, while never saying that they know. You probably can't call it career suicide, but think of it a bit that way.
However (to make the post less negative and more fair): As the other answers say, we all do make mistakes, and others know this.
**What should you do?** Probably nothing, especially if one of the journals was a fake journal, I would say.
Upvotes: 1 <issue_comment>username_5: I am familiar with one case of duplicate submission (simultaneously, to two reputable journals). I know that one of the publishers banned the authors for five years from submitting to any of their journals.
In my opinion, that is an appropriate consequence. After all, the authors gave their word to both journals that they had not simultaneously submitted the paper to another journal. In reply to some of the other answers posted here, I would say that **lying** is not the same thing as **making a mistake**.
Upvotes: 3 |
2014/03/06 | 4,404 | 18,283 |<issue_start>What is the proper course of action if, while teaching an undergraduate or even secondary school course, an assignment violates the religious beliefs of a student?
For a more concrete example of where this might happen, let us consider an art class with a Muslim student (*Disclaimer: I am not an adherent to, or scholar of, Islam; please forgive me for any misunderstandings this post might contain*):
Within Islam it is considered [*haram*](http://en.wikipedia.org/wiki/Haram) (forbidden by God) [1] to [produce images of non-plant living creatures (including humans)](http://islamqa.info/en/39806); this is called *tasweer*.[2]
Now, if I were to assign a portrait of a person to the class, what would be the most ethical option should a student raise a concern with me about this? Would it be appropriate to offer an alternative assignment?
---
[1]: Similar to a Christian sin, but with a stronger connotation from what I can tell; literally: taboo.
[2]: I believe this is from a [hadith](http://en.wikipedia.org/wiki/Hadith), but one that is deemed to be the most accurate/reliable.<issue_comment>username_1: If you can make an accommodation that allows the student to participate
* without violating his religious observance, and
* without compromising the educational goals of the class, and
* without requiring an extreme amount of effort on your part,
then it is reasonable to make the accommodation.
I regularly miss classes and exams due to religious observance. My school has a very clear policy on the matter:
* If students have to miss a class session, exam, or are otherwise unable to participate in a course requirement due to religious observance, they must notify the professor and a certain dean in a timely manner (the definition of "a timely manner" is further specified in the policy)
* If said student follows the above requirement, they cannot be penalized for their religious observance and the professor must offer a fair alternative (e.g., makeup exam or assignment)
If your university has no policy on the matter, feel free to adopt mine, and specify it in your syllabus at the beginning of the semester.
However, I would not take a class where I know the main requirement of a class would violate my religious observance. Indeed, I know people who have refrained from pursuing a *career* because a non-negotiable required class for that field would require something that violates their religious observance. \*
So, if the course is Figure Drawing and someone registers knowing that he cannot draw the human figure... I don't think you are required to let him pass the class by doing still lifes instead. If the course is Introduction to Art for Non-Majors, it may be possible to offer an alternative to the portrait assignment.
This applies more generally as well. If a student in good faith (i.e., not to get out of doing work) considers an assignment
* illegal,
* unethical,
* compromising to his health/safety,
* etc.
it seems reasonable to offer an alternative assignment if it does not compromise the educational goals of the course.
\* See: [Can a Kohen become a doctor?](https://judaism.stackexchange.com/questions/33937/may-a-kohen-become-a-doctor)
Upvotes: 7 [selected_answer]<issue_comment>username_2: It's not the job of places of learning to give way to superstition. Indeed, quite the reverse: the whole Enlightenment Project was about bringing light into darkness, and all the Academia I'm familiar with puts itself broadly in the Enlightenment tradition.
So yes, this answer will read as uncompromising. Because, from experience, I've found that rigorous education is incompatible with compromising that rigour in favour of molly-coddling someone's religious beliefs.
There is no sane middle ground. If you're going to start compromising the quality of your teaching to avoid offending someone's belief, you'll quickly find yourself running out of space. Someone's going to get offended that you're teaching males and females at the same time, sat next to each other. Someone's going to get offended that *anyone's* drawing the human figure, let alone that they have to. Someone's going to get offended that you don't mention their pet crank theory alongside science as if they were somehow of equal merit.
If a particular course's actions are in contradiction to a student's religion, then there are two routes here. If the student is legally a child, then the student completes the actions - they are under the school's guardianship when in school. If the student is legally an adult, then they have the problem, and it's not fair on any of the other students that they should make their problem, the institution's problem. They can either fail that part of the course, or they can do the work.
If a student's beliefs contradict knowledge, science or art, that's not the problem of the place of learning. That's the problem of the student.
If this is about children, then the responsible adults are guilty of abuse, for bringing that state of affairs about, and the school should do as much as it reasonably can to make amends for that failure. Note that I am not saying that a religious upbringing is necessarily abuse. I am saying that teaching children nonsense such as creationism is abuse, because it can cripple that child's future opportunities.
If this is about adults, then they've taken responsibility for failing that part of their education, and should be marked down accordingly.
This has been something of a hot topic in Britain recently, where the teaching of [creationism](https://humanism.org.uk/2013/05/06/public-funds-being-spent-to-send-children-to-creationist-charedi-and-steiner-nurseries/) and other ignorances is on the rise, where state-funded schools have been breaking equality laws by selecting staff on the basis of [gender](https://humanism.org.uk/2014/03/04/three-muslim-state-schools-identified-discriminating-hiring-staff-basis-sex/), sexuality and religion, and where pressure has been put on educational establishments to subvert the teaching of several branches of knowledge, including [the censoring of some exam questions on evolution](https://humanism.org.uk/2014/03/02/ofqual-exam-boards-collude-faith-schools-censor-questions-evolution/), and [the censoring of two university atheist society's display of the Flying Spaghetti Monster](https://humanism.org.uk/2014/02/10/satirical-spaghetti-monster-image-banned-london-south-bank-university-religiously-offensive/) and of ["Jesus and Mo" t-shirts](http://www.lse.ac.uk/newsAndMedia/news/archives/2013/12/FreshersFairStatementDec.aspx), because these were inconsistent with some extremist religous interpretations.
Academia is the bulwark against ignorance and superstition.
I'm not saying that religion = ignorance and superstition. Creationism = ignorance and superstition. Refusing to draw the human figure = ignorance and superstition. Avoiding listening to or playing music = ignorance and superstition. Preventing females from being educated = ignorance and superstition.
Upvotes: 5 <issue_comment>username_3: You really cannot expect that the assignment should be counted as done just because of your religion. However, if you are a student,
* Try to ask the professor to adapt the assignment. If picturing humans
is not allowed, maybe picturing geometric figures is ok.
* Ask representative of your religion if the activity is really disallowed in your context. Most of religious restrictions are about actions, not about studying (may be exceptions of course).
* If you know you should drop studies but are too weak to do this, the representative of your religion may just forgive you.
If you are the person teaching, you may think about adapting course (is the disallowed activity essential?) and still check with representative of religion if the students do not interpret restrictions unnecessarily broadly. Additionally, you may discuss with your administration the possibility to suggest the alternative but equally serious and difficult course for such students. Some universities like Zurich ETH allow to choose between many alternative courses, with only small percent being mandatory.
Still, if there are many assignments contradicting the religion, this probably shows that it may be lots more problems at work later. If you are not allowed to kill, that is the point of attempting the carrier of the jet fighter pilot? Even if you can actually *study*, saying nobody is killed in flight simulator or during bombing tests, this may not make much sense.
Upvotes: 3 <issue_comment>username_4: As an educator, the most appropriate response is to immediately escalate the matter in a neutral way - present only the facts. The educational institution has staff and lawyers to interpret scenarios like this and provide recommendations to the teacher. I would not recommend making any immediate compromises or snap-judgements with the student. Educating students is stressful enough. Let those who specialize in this type of issue resolve it, and you can focus on the education of your students.
Upvotes: 3 <issue_comment>username_5: If the student chooses, or sucumbs to parental directives to refuse some components of education, then the student or their parents have to accept that they can't achieve so much in that realm. One day the student will have to make A PERSONAL CHOICE as to their direction. Offering them a free ride is not appropriate to that choice. If they persist with their choice, to not participate in some aspects of the multicultural society they live in, then they surely will be happy that they are not 'infected' with whatever perceived ill they deem to spring from the offensive activities.
Why do they want to be seen as masters (ie high grade scorers) of a system they partially or wholly reject? Do we really want to teach children to lie to themselves and others like this?
Make your choices (yes, even as a child) and take the consequences.
Upvotes: 3 <issue_comment>username_6: I think this depends on what level you are teaching at. Below college-level I think you may need to find an alternative (but just as difficult or more difficult) assignment.
It is a slippery-slope when people institute their private beliefs on a teacher's assignment. It is not like the assignment was for them to go to a Sunday mass. If the assignment was hitting a lot of religious notes, you as the teacher should have a plan. Have the students/parents sign-off on the topics/assignments or offer them another assignment to do.
If we are talking college level courses the assignments and tests should be on your syllabus. If they don't want to do them then they can drop out of your class or they can get an F on the assignment.
As a teacher you are trying to teach them a skillset. If that includes something that is against their beliefs they shouldn't get to pass the class because they don't have the knowledge/skill. There is just too much grey area here and obviously the students could tell you whatever they want and it allows for animosity from students that have to do the assignments.
Upvotes: 3 <issue_comment>username_7: The most important thing first: it is not your job to know every religion, belief or variation of it, it is up to the student or his/her parents to tell you beforehand what they cannot accept in class. Even between two of the same religion, no one can tell how direct one rule is taken by the one or the other. That said, your responsibility however is to give your students and their parents enough time in advance to notify you of potential issues.
If you work with a class where this can be an issue, a good idea would be to write down the topics for the upcoming course and hand it over to your students, and then explain why you are doing this. Depending on the age of your students, they might misunderstand this as cheating away from class. In addition you should give your students the chance for a fair alternative. If such alternative is not obvious, you should talk to the student or the parents in question to find one. If drawing an animated object is considered a sin, you could hand out a picture of someone to draw. Whether or not this complies to the religion, I however cannot tell.
Religion is nothing you can come by with logic, people believe the strangest things. For some of them compromises can be found, for others probably not. It is a good thing to try to find a compromise but in strange cases also valid to refuse them. If one's religion for example expects that boys and girls are taught in separate rooms, the only thing you can offer to the parents is for their child to change school.
Also you need to keep in mind how the other students will treat a kid, which is the reason why they can't do something they want, cannot watch a certain movie or must skip topics they would have been interested in. Some things they can accept if it is explained properly, others probably not. And in the later case it might be for the better of the kid in question to not give into the believes, as the "torture" following that would be much worse.
Upvotes: 0 <issue_comment>username_8: Educational institutions are not there to reaffirm religious beliefs. Anyone with a drivers license has already violated this belief. My heartless opinion on the matter is you do the assignment (or come up with the closest possible alternative) or you fail. How is not doing it for religious reasons different than just not doing it? Its also an insult to all those that did do it, particularly the ones that struggled through it.
Upvotes: 1 <issue_comment>username_9: I'm going to have to disagree with the majority on this post that you should adapt your assessment for the student **if this is a college level or university level course.**
Absolutely students **who miss exams/class/need extensions** due to religious observance should be accommodated. But at a college or university level, if a student disagrees or does not feel they can complete a particular assessment due to religious observance **they shouldn't take the class**. Assessments and topics are laid out in the beginning of the course, plenty of time for that student to switch classes. There's a difference regarding the style of assessment which can be altered (i.e. an exam to an essay) and the content being assessed, which is generally what would be the controversial subject.
I teach a number of controversial subjects because I'm situated in gender studies/sociology. A number of the courses I teach have controversial material and assessment tasks that are not well-suited for everyone. Many students find the material confronting, and yes, I have absolutely encountered students who find it too uncomfortable. My response? I am sympathetic to their issues, but their only option is to drop the class. A subject such as gender studies is a controversial subject, and when we delve into critical examinations of things like women and pornography, or men's aggression and violence, I cannot 'water it down' so it's accessible for those who find said material confronting. Otherwise, there's no point in teaching it.
Any student who does have a particular religious observance needs to take the time to review the course syllabus and get in contact with the unit coordinator before the start of the course. If the syllabus is not available prior to enrolling, they should still get in contact to discuss their concerns and determine whether or not the course is a valid option for them.
At some point, students have to take responsibility for their own choices regarding what classes they will take. They cannot expect to be accommodated to the point of completing a completely different assessment task to everyone else because the material is too confronting or is in direct violation to their religious observance. While the style of assessment can change (i.e. a student with a disability might prefer a take-home exam over a traditional sit-in exam) the content needs to remain the same.
Your example of an art class is a tricky one though. It would depend as to whether figure drawing is the main purpose of the course (in which perhaps the drawing of a naked figure makes up a huge portion of the assessment task/overall grade?) or if it's a relatively small component (like 5-10%?). If relatively small, they can skip it and forfeit the grade if they are able to complete everything else.
Upvotes: 3 <issue_comment>username_10: I think the solution depends on wether the method used to reach the education objective is problematic or if it's the actual education objective itself.
I have observed this kind of situation as fellow student. We have here a religious branch that forbids watching television. On a course about topic A, we had an exercise where we were supposed to watch several episodes from a TV series and observe topic A related things from characters. In this kind of case where the topic A itself has nothing to do with the problematic method, I think it's reasonable to accommodate student, if possible. In this case student was allowed to do the exercise from book instead of TV-series, and observe topic A related things from those characters.
Had the course topic been related to media or cinematography, and the method (watching TV-series) itself would have been important to reach the education objective (such as observing how lights or cuts or positioning was done in the TV-series), then I don't think accommodation has to be made. It is up to the student to realize that the course topic itself causes problems and either decide not to take it, or just do the exercises anyway.
Upvotes: 2 <issue_comment>username_11: I really do not believe religious beliefs should be THAT much considered. Islam also suggests that you should not be in the same classroom with opposite sex (also haram). Then why is that student studying in university?
Another question is, what if I believe in HurdyGurdyism and the letter F is very much sin in my religion? Then should you not give me an F?
Every belief is of course deserve respect, but I don't believe the purpose of conducting science is much more holier. All and all, the place is where you conduct science, not a sanctuary.
Upvotes: 2 |
2014/03/06 | 1,829 | 7,574 | <issue_start>username_0: Coming from engineering, when you write a paper, the goal is to be objective and analytical. We use references and try to define terms for a clear understanding. The goal at least is to provide facts that support a hypothesis in as unbiased a way as possible. I think there may be some difficulty in always remaining unbiased, but recently I came across an article that seems to be written like an opinion piece more than a piece of academic work. After going through it I looked up the author, who is a professor of women's studies. She writes about robotics being sexist. Does women's studies have different goals in academic writing than engineering?
To be specific, it is these types of sentences that confuse me about the intent of academic writing in women's studies:
>
> Enter HRP-4C, a new-generation gynoid that was unveiled in the spring of 2009 as a body double of and for (or to replace?) the average human female.
>
>
>
Why would a journal allow the publication of the unsupported rhetorical question "or to replace"?
>
> The android wears his maker's unfashionable beige shirt, dark trousers and black windbreaker jacket."
>
>
>
Is it professional to call a world-famous robotics researcher "unfashionable", and why is it necessary?
>
> . . . exact body consists of silver and black plastic molded to resemble a *Barbarella*-like custome, which accentuates her ample breasts and shapely, naturalistic buttocks."
>
>
>
There is no supporting information on how the square, minutely curved metal purposefully creates the visual the author interprets. And what I also don't understand is that instead of referring to the robot by its name, or as the paper does, the author continues to refer to it as "robo-Barbarella."
Did I just happen to come across a unique piece of writing, or are there very different styles of writing in academic disciplines?<issue_comment>username_1: Many academic fields work on the premise that you examine the evidence and present an unbiased analysis of that evidence. There are fields (e.g., creative writing and the arts) that take a very different approach. That said, many fields that take an objective unbiased approach to the analysis focus on the readability much more than in the sciences. For example, the use of "robo-Barbarella" seems much more informative than the clinical/objective HRP-4C. Had the original creators of HRP-4C called it robo-Barbarella, there would presumably be no issue. In terms of the fashion comment, I am not sure "maker" refers to the actual person who did the welding, soldering, and programming, but rather the stereotypical fashion sense (or lack thereof) of CS people in general.
Upvotes: 4 [selected_answer]<issue_comment>username_2: CS and WS are each concerned with different kinds of questions, and the language their respective practitioners use simply reflects this difference.
This is familiar to me, given that I've been on both sides of the divide (English Lit as an undergrad minor, cognitive science for my PhD). I got a glimpse of why it exists many years ago when I was in a course with a bunch of Lit Crit majors and we were discussing a certain poem by a certain contemporary poet. The Lit Crit people were pointing at a specific part of the poem and going "Is this an allegory for his lost love? Is this a metaphor for the senselessness of war?", and so on. Then, in a youthful display of naivety, I said "hey, the guy who wrote this poem is still alive, why don't we try to get in contact with him and ask him what he means?". The lecturer leading the discussion looked at me very sternly and said "that is *not* the point".
That was very illuminating, I think. I'm sure that the poet in question had something specific in mind when he wrote the poem (and we'll never know what it was because he's dead by now), but the Lit Crit guys don't care about that. The *intended* meaning of the poem is irrelevant to them, what they care about is the meaning(s) that *others* might extract from the poem. In the same way, in the passages you quoted, the author doesn't care about the android in and of itself, but rather about how others feel about the existence and characteristics of the android, and how these feelings affect other feelings and beliefs we might have about related issues. That is what allows him/her to make judgments about fashion and other things.
In comparison, people in the sciences care about "objective truth" (for lack of a better term), not about how other people feel about stuff. If I write in a paper "stimulus A caused neurological response B (p < 0.01)", asking about the Marxist/feminist/whatever interpretation of this finding is about as pointless as trying to get to the "objective truth" underlying a piece of poetry.
Upvotes: 4 <issue_comment>username_3: Yes, there are differences in writing style between Women's studies and Engineering, just as there are between Engineering and Science, Science and Mathematics, Law and History, and so on. The purpose and objectives of each department are different, as well as their intended audience. It would be more surprising if there were very similar writing styles between different departments.
You do not say where the article was published but this also affects the style. More narrowly focused journals will normally restrict themselves to papers written in a certain style. So technical engineering journals would normally publish technical papers, while such papers would not be accepted for a Women's studies journal, or a general science journal.
Upvotes: 1 <issue_comment>username_4: I'm going to write the answer that the OP is surely hinting at and that a substantial number of readers are thinking (minority or majority is hard to say).
Yes, engineering and women's studies have different writing styles because they have different aims. The goal of engineering is to build new technologies and thereby benefit mankind. Objectivity is highly valued because nature itself is objective. You can't trick nature. The goal of women's studies, on the other hand is about identity politics. Its aim is to further the power wielded by a subset of the population. Power is determined by people and people can be manipulated. As has been confirmed in numerous psychological studies, people respond to emotion far better than reason. Hence, the most efficient way to further its goals is naturally through subjective persuasion rather than objective analysis.
I find it ironic that someone would write about a roboticist being sexist while throwing out sentences like: "The android wears his maker's unfashionable beige shirt, dark trousers and black windbreaker jacket." But it isn't surprising if you keep in mind that the goal isn't objectivity, but rather to promulgate dogma, since the success of the field is entirely determined by how many people buy into its core edicts.
One of those core edicts, incidentally, seems to be that men and women have exactly the same distribution of innate intelligence (mean, variance, probably all higher moments). There is not any room for discussion on this point even though it seems self-evident to me that this should be a purely empirical fact, and that especially now with full genome sequencing, biology would have a lot to say about this issue. How you respond to this answer likely correlates heavily to your view of the following incident in the math community:
<https://terrytao.wordpress.com/2018/09/11/on-the-recently-removed-paper-from-the-new-york-journal-of-mathematics/>
Upvotes: 0 |
2014/03/06 | 1,543 | 5,667 | <issue_start>username_0: I got the following info from the [Chicago-Style Citation Quick Guide](http://www.chicagomanualofstyle.org/tools_citationguide.html) website in relation to citing Kindle books or ebooks.
>
> If no fixed page numbers are available, you can include a section title or a chapter or other number.
>
>
>
I am wondering whether there is a preferred method of citing in this case.
I am a history MLitt student and have only recently started to use a Kindle for research purposes. At the moment I use the loc (location) reference that is produced by the Kindle in the notes text file you can download from it, which shows all your highlights/bookmarks, but I am confused as to the proper citation method, as my supervisor queried whether this was the standard way of citing a Kindle. I have a copy of my history department style sheet, but it makes no reference to Kindle editions.<issue_comment>username_1: According to the [APA blog](http://blog.apastyle.org/apastyle/2011/06/how-do-you-cite-an-e-book.html), the location number is actually a bad idea because it has limited retrievability. The blog also mentions that since the Kindle's third generation, e-books have started to have real page numbers; you may try looking into that.
Another post on [apastyle.org](http://www.apastyle.org/learn/faqs/cite-website-material.aspx) suggests that for materials that are not paginated, you can cite a chapter number or a chapter heading plus paragraph numbers.
Upvotes: 4 [selected_answer]<issue_comment>username_2: The reason for citing a page number is so the exact quote can be found in context. In an electronic book, one can just do a search and find the exact location faster than just looking for the page number in a traditional book, so I would say it is not necessary.
To provide a better idea of the context, you can cite the chapter and section.
Upvotes: 3 <issue_comment>username_3: As far as I understand, Chicago frowns on Kindle. Depending on how rigorous the context (class paper, proposed article), I would look at the print book.
Upvotes: 1 <issue_comment>username_4: I got a look at a copy of the Chicago manual at my college and the guidance in that is a lot clearer than it is on the web site where the whole manual is not available without a subscription.
The reason given for citing an electronic edition of a book is quite clear.
>
> The majority of electronically published books offered for download
> from a library or bookseller will have a printed counterpart. Because
> of the potential for differences, however, authors must indicate that
> they have consulted a format other than print. This indication should
> be the last part of a full citation that follows the recommendations
> for citing printed books [...].
>
>
>
The manual goes on to state that:
>
> [...] electronic formats do not always carry stable page numbers (e.g., pagination may depend on text size), a factor that potentially limits their suitability as sources. In lieu of a page number, include an indication of chapter or section or other locator.
>
>
>
Further on in the section, it deals with unpaginated electronic sources in more detail.
>
> For such unpaginated works, it may be appropriate in a note to include
> a chapter or paragraph number (if available), a section heading, or a
> descriptive phrase that follows the organizational divisions of the
> work. In citations of shorter electronic works presented as a single,
> searchable document, such locators may be unnecessary.
>
>
>
It seems from reading the manual that the following are the preferred methods of referencing:
1. Page number (where stable ones exist; some new Kindle books match the
print edition)
2. Chapter or paragraph number
3. Section heading
4. Descriptive phrase that follows the organizational divisions
If you want to cite in MLA, [this blog](http://www.noodletools.com/helpdesk/kb/index.php?action=article&id=206) recommends the following.
>
> MLA 5.7.18 defines digital files as neither on the web or a published
> CD-ROM. MLA recommends citing a book on a digital device using the
> guidelines for citing a book but replacing the format type (Print)
> with the name of the digital file format, followed by the word "file.”
> For the “Digital file type” field on the form, enter a file format
> such as "EPUB file" (a non-proprietary file format used by Kobo, Nook,
> Sony and others). If an e-Book reader uses a proprietary format
> (e.g., Kindle), you may use the name of the file type ("AZW file") or,
> if this is not visible to you, the name of the device ("Kindle file").
>
>
> Example:
>
>
> Slawenski, <NAME>: A Life. New York: Random, 2011. N.
> pag. EPUB file.
>
>
> If you are only citing a section or chapter:
>
>
> * To cite a chapter or section written by the author of the book, cite
> the book and use an in-text reference to identify the specific section
> you're quoting or paraphrasing.
> * If the introduction or preface is
> written by another contributor, fill in the section of the form called
> Chapter or Section to cite the author, section title and page numbers.
>
>
>
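If you happen to manage your references with BibTeX/LaTeX rather than a citation form, the same guidance can be encoded there. The entry below is only a sketch of my own with hypothetical placeholder data (the author, title, publisher, and entry key are all made up); neither Chicago nor MLA formatting is built into plain BibTeX, so the point is simply that the format indication goes in the `note` field, in place of "Print":

```
% A minimal sketch, assuming plain BibTeX. All field values here are
% hypothetical placeholders, not a real reference. The "note" field
% carries the e-book format indication that Chicago and MLA ask for.
@book{doe2011example,
  author    = {Doe, Jane},
  title     = {An Example Book},
  publisher = {Example Press},
  address   = {New York},
  year      = {2011},
  note      = {Kindle edition}
}
```

In the text you would then point to a chapter or section rather than a page, e.g. `\cite[chap.~2]{doe2011example}`, matching the manual's advice to use a chapter, section heading, or other locator in place of a page number.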
Upvotes: 2 <issue_comment>username_5: I have a new Kindle and I purchased two books from Amazon. For one I am able to see real page numbers; for the other, only locations.
I was told for APA:
In-text citation:
For paraphrase: (Atkins, 2014).
For quotation: (Atkins, 2014, Location No. xxx)
For the reference list:
<NAME>., & <NAME>. (2002). Dual Disorders: Counseling Clients with Chemical Dependency and Mental Illness (3rd ed.) [Kindle version]. Retrieved from Amazon.com
Upvotes: 0 |
2014/03/06 | 5,095 | 21,055 | <issue_start>username_0: In one of my classes, I had a student who generally understood stuff faster than the others. In tutorials, he would ask a lot of questions, mostly of the kind
>
> "I tried this method instead of what you suggested, is it correct?"
>
>
>
Now, this probably sounds like the dream student, but I quickly realized that he was not really after my input, but rather seeking acknowledgement of his superiority (possibly showing off in front of his friends).
The exchange would often go like this. If what he suggested was correct, fine, I would say "*great!*" and move on. But quite often there would be flaws in his argument, which I would naturally point out. He would always assume that I misunderstood him and when I was (finally) able to show him that his argument did not stand, he would say something like
>
> "Oh yeah, that's what I meant to say, but I phrased it wrong"
>
>
>
Note that this was a math class, so "phrasing it wrong" really means "failing to prove". When he asked a genuine question and I started providing an explanation, he would cut me off halfway through with something like
>
> "Right, I get it, it's because this and that"
>
>
>
and convincing him there was actually more to it was yet another struggle.
I am concerned because I really feel he could be an amazing student if he would only accept that he does not know everything beforehand and therefore sometimes makes mistakes. Also, it seems like my time could be better used than in convincing a student that I'm worth listening to.
**How can one explain this to such a student without humiliating him?** Simple reasoning and proof by example (you'd think after the tenth time I pointed out his mistakes he would have learnt that he sometimes makes them!) apparently just bounces off of him.<issue_comment>username_1: I would suggest against [a deleted answer that suggested putting a lot of weight on the simplest textbook examples] (I consider "ha! you missed the example from chapter 2" to be highly unpedagogical, as it is "appeal to authority" and "you should read and remember rather than think").
I think two things may help:
* give him problems and require *written* solutions (it is harder to boast, or mask omissions, when one writes), solved "down to the last $\epsilon$", as you "want to see the proof, not just be convinced that he can prove it",
* give more advanced problems (still within his reach) for which the answer cannot be hand-waved (e.g. "what is the sum of...", "give an example of a set such that...") OR don't tell the answer in advance (so instead of "prove X given Y" ask "decide whether X holds for all Y"). A concrete illustration of this second point follows below.
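For instance (my own illustration, not part of the original suggestion), the same exercise can be posed in two forms:

```
Closed form:  "Prove that every bounded monotone real sequence converges."
Open form:    "Decide whether every bounded real sequence converges.
               If it does, prove it; if it does not, give an explicit
               counterexample and prove that it works."
```

In the open form a correct answer requires committing to "no" and producing something like $a_n = (-1)^n$ in writing, so "yes, I see why" hand-waving is not enough.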
In both cases, I would tell him (in person) that he is very smart and that you are giving him more advanced problems because of that. (And that he still needs to learn to be precise enough for mathematics.)
Maybe he does want to show you that he is smart, and as you do not acknowledge it, he, well, tries again.
In any case, I would not undermine his skills. In mathematics & theoretical physics I know a lot of people who underestimate their skills, but not many who overestimate them *for long* (such a subject makes one humble, sooner or later).
Upvotes: 7 [selected_answer]<issue_comment>username_2: I feel that generally your focus is misplaced. It appears to me that by focusing on how to deal with this one student, the learning environment of the other students is somewhat compromised.
First, I would strongly suggest that you need to put this dude in his place. Let's not ask "How can one explain this to such a student **without humiliating** him?" Let's think "How can one explain this to such a student **through humbling** him?" There are numerous indications that what he does is disruptive:
>
> *Example 1:* In tutorials, he would ask a lot of questions, mostly of the kind "I tried this method instead of what you suggested, is it
> correct?"
>
>
>
If the tutorial is about A and he decided to use method K to solve it: wonderful, but were the actual materials covered? If he seeks this kind of "Look at me! I am awesome!" feeling, I would deter that gratification by allocating the last 10 minutes to discussing alternative approaches to solving the problem set. Don't feed the troll; make him wait and train his patience.
>
> *Example 2:* "Oh yeah, that's what I meant to say, but I phrased it wrong". Note
> that this was a math class, so "phrasing it wrong" really means
> "failing to prove".
>
>
>
Did you actually tell him this thought? You do not need to say "You are wrong." You can, however, say that a certain attitude or a certain pattern is wrong: "While your attempt is well intended, I wish to stress that in mathematics, one wrong phrasing can buy you a direct ticket to failure."
>
> *Example 3:* When he asked a genuine question and I started providing an
> explanation, he would **cut me off halfway** "Right, I get it, it's because
> this and that"
>
>
>
That's unprofessional behavior, and you should have shut him down right there. Simply smile, slowly raise your palm to signal a stop, and calmly say "Please hold on and let me finish; there is more to it, and I want to present a full picture."
Overall, I think this student needs circumstantial challenges (in other words, to come to feel how the world actually runs) and he should turn out fine. I am more concerned for the rest of the class; if I were one of the students, seeing this alpha male's obnoxious behaviors go unchallenged, I would seriously wonder why I should be there, and whether I should even ask a question.
Upvotes: 5 <issue_comment>username_3: Your student was probably smarter than all of his teachers in high school, and has never experienced academic failure --I think a lot of college freshmen start out that way.
I would sit him down, and tell him plainly: "You aren't getting your money's worth from this class, because you aren't taking advantage of my teaching."
Chances are, nothing will get through to him but a F on his report card (maybe not in your class, but it will happen sooner or later). However, you will have done your best. If he doesn't respond, I would limit the amount of time you spend on him, and focus instead on students who might have less raw talent, but who are more open to learning.
Upvotes: 3 <issue_comment>username_4: I also think that he seeks recognition, from you and maybe from his peers. I also think giving him more advanced questions might humble him, but I think this wouldn't be the most effective way, because it wouldn't give him the recognition he seeks from his peers.
The following is how I have dealt with these kind of students, and how my teachers dealt with a much younger version of me :) And it proved to be very effective.
Whenever he offers his new version of a solution, don't argue with him, but tell him "this sounds like an interesting approach, I think you should share it with other students". And ask him to come and write his solution on the board and explain it to the other students. You should also go and stand or sit in the far end of the class, so basically you temporarily switch roles. This has three effects:
1- **He will ask fewer questions:** Even though he wants to show off his superiority he doesn't want to stand in front of all students and explain a solution EVERY TIME. Even if he likes to, his fellow students will be fed up with him, and they will communicate it in their own way, in or out of the class.
2- **It humbles him** When writing and explaining to other fellow students, they are going to ask him questions and point out his mistakes. If they don't, you can ask them to ask him questions if they have any, or to point out the mistakes. If they manage to do so, he will feel that he is not that smarter than his fellows after all. If they don't, you should give them hints to find his mistakes. Also, since this is in written form, and in front of many people whom he communicates with on a daily basis (much more than you), it is much difficult for him to hide his lack of understanding.
3- **He will think more about a solution before jumping in and offering a wrong one** Because he is explaining it to many people, and not only you. Thus, the cost of making mistakes will be higher, and he will think his new solution through before proposing it.
Now these effects will not happen over night, but after 2-3 class sessions I expect him to improve his attitude enormously.
Let us know if this, or any other suggested solution, helped.
Upvotes: 5 <issue_comment>username_5: I recognize this student's desperate need for approval. If you are going to try to work on him, try to nudge him in the direction of deriving satisfaction from the math itself, rather than impressing others with his abilities. You might try this line of approach--"I knew a guy once [Hi, my name is Mike, I'm the guy, btw, just so you don't have to feel you are being deceptive.] who had a light turn on in his head at one point that changed his life. He realized that he got much more out of loving the math itself than loving the attention he got from being good at math. He had a lot of raw talent, but he realized he was wasting effort trying to impress people when he could have spent that effort getting better at, and more out of, the math."
This is a really delicate thing for you to approach, however, because that need for approval is probably due to some severe self esteem issue which you don't know the source of (and possibly the student doesn't either). Seeking approval from other people is like an addiction--you can't change it over night, and you find yourself returning to it again and again even when you wish you could stop. [Eventually, you post on message boards all over the internet seeking upvotes... :) ]
Another approach you could take would be to be much more indirect about his psychological issue, but possibly accomplish something in the same direction, while overtly focusing only on the math.
"I think you know that you are better at this than the other students. But I think you're aiming too low--you can really learn to excel at this. Do you want to do that?"
If he answers in the affirmative, tell him what you think he needs to do. A critical skill in academia is learning from other people, so he needs to not cut people off in the middle of an explanation. A critical skill in mathematics in particular is precision. He needs to learn that "I meant to say x" is not going to cut it in math. If he meant it, he should have said it, otherwise, it's wrong. Tell him that up to now you have not wanted to be too hard on him, but if he wants to be really good, you're going to quit pulling your punches and not let him weasel out of a failure to make a complete argument.
Basically, ask him if he wants a coach, or a coddler. If he wants a coach, you're going to judge every statement like a mathematician reviewing a submitted paper would. If he doesn't want that, then it's his choice; you tried. At that point you have done what you can for him, and all that's left to decide is whether he's enough of a distraction in class that you need to address that aspect for the good of the other students.
Upvotes: 4 <issue_comment>username_6: I am going to write an answer based on the student's perspective. I'm graduating college now, but I feel like I was almost just like this kid in Freshman year of college / Senior in High School.
You mentioned the kid is smart, and quite talented. Too much for his own good. I hated mathematics and still got A on it while not even paying attention to the teacher. The kid needs to be challenged but not at the expense of the class. I later figured out I like challenges the most, and more hands on things than math even though I was super quick to learn it. I sat in the back, solved a rubik's cube, and waved the teacher off because I didn't need to do my homework. The previous week I corrected a question on the midterm exam and got extra credit because no one else saw the flaw.
Here is what my genius math teacher did to me that worked wonders looking back.
A) Instead of just numbers on the sheet, take what he is excited about learning and have him a problem on that. For me it was programming so he said can you graph this on a computer where it shows and calculates the area under the curve? Hah! I didn't even know what I was getting into. Yeah! sure no problem. By trying to do it in a way that made sense to me I willingly did more various types of problems than we would have in class because I was trying to graph areas under all sorts of various curves. He tricked me, and I liked it.
Just use what he likes, and find a real world problem :D. Even if it were surfing or something crazy ask him to calculate how high and fast he could go on a wave given a certain value.
B) Group project that is required to participate in and peers give a portion of your grade based on rating. I didn't really care about my grade honestly because I knew I could pass any test. I cared more about what others thought of my abilities. When we did the group project and some of it was extra credit we all got a's because I wanted to solve the extra credit and I let my group do the other 2 regular problems. It's a way of allowing your student to satisfy his show off ability and still learn.
we did not always do group projects, only a couple in the semester.
Upvotes: 4 <issue_comment>username_7: I am writing from a student's perspective, and I've got this to say:
Why try to humble this student? To me it seems that this student has his own pace at which he learns mathematics, and his pace and yours just don't happen to be perfectly synchronized. There are hard lessons in his future, to be sure, eventually something won't come as easily and will require deep and concentrated thinking on his part. But I think lessons like these are things that should be experienced with the body. Nothing teaches better than failure (or equivalently for well-motivated students, anything less than shining success), and these lessons you wish to impart to him will probably only be truly appreciated when he comes across his first real obstacle.
I am more concerned about the potential to hurt your student's youthful self-confidence. Let his energy drive him! Definitely stress (as my professors have) the need for practice and for getting your hands dirty! But it would be a great wrong to shoot him down just because you see him as overconfident. If he is, then he'll have to pay for it eventually, but that lesson is well earned and well learned, and should really be experienced for oneself.
Upvotes: 2 <issue_comment>username_8: I am currently taking part in an 8-week software development hack school. The class has about 14 students with varying levels of experience and knowledge. The informal format has been the best learning experience I've ever had, for a couple of reasons:
* The teachers meet each student where they are at. Each student is encouraged to excel at their own pace. This is great because it doesn't slow down the students with more development experience... This is HUGE! Fast learners shouldn't be forced to learn at a slow pace.
* All assignments are project-based (typically creating small apps) and are generally "graded" on effort rather than on getting the correct or best solution.
* Varying levels of experience creates a highly collaborative environment. I'm somewhere in the middle of the class and am not afraid to ask the more experienced students for help and they are not afraid to share their advice.
I don't believe it's a teacher's job to tell a student they have a lot to learn. Life generally teaches us all how much we have to learn :-)
Upvotes: 2 <issue_comment>username_9: You should not take this as an affront against you, but rather as a strange way of learning, and then work *with* the student to allow him to improve.
It seems like your student has a psychological need to be "right" and feels mental pain from being "wrong". As these things tend to be irrational, it doesn't need to make any sense to you or even to him. You can take this into account and, instead of trying to prove him wrong, teach him to take his mistakes not as failures but as starting points for improvement.
Phrases that could help your relationship:
* "I see that in its core your explanation is correct, but some words could be misleading to a reader, so it might for example be better to use *this* instead of *that*."
Concept: Accept that he got 60% right, help him on the other 40%.
* "I see that you are a very eager student. How about you formulate your findings into an essay so that the others can benefit from your knowledge as well? I will of course give you a bonus for it."
Concept: Accept that he has a strong need to improve outside the bounds of the class, give him the opportunity to do so, and a reward if he does right.
* "That is a great answer, but have you taken into account that *this*?"
Concept: Reward him for being a good student, then encourage him to refine the details without explicitly calling them wrong.
>
> “Problems are Only Opportunities in Work Clothes.”
>
> - <NAME>
>
>
>
Upvotes: 1 <issue_comment>username_10: Possibly he is more comfortable processing problems in written format, but a little bit tongue-tied in verbal expression? It seems quite a common combination - to be talented at abstract thought but less confident in oral communication. I've definitely had that experience, of being frustrated and impatient with stumbling over concepts in speech, when they are perfectly clear in my head. The fuzzy verbal arguments and the early interrupting could be signals "I'm embarrassed / not comfortable in this mode of communication". Maybe you could meet one of his non-standard solutions by saying "can you write it down step by step and I'll check it over to see if it's correct".
Upvotes: 0 <issue_comment>username_11: Sometimes people are so lacking in self-awareness that subtlety won't work. You may have to just directly tell him how he's coming off. It's possible to tell someone in a way that makes it clear that you're at least trying to give blunt but helpful criticism rather than just putting that person down. It's definitely possible that the student will still feel humiliated if you choose to do this, but I think it's also an important life skill to be able to maturely deal with valid (or even invalid, for that matter) criticism that you don't like.
Upvotes: 2 <issue_comment>username_12: I think you should feed his curiosity and let the rest of your class benefit from his way of thinking up alternate solutions to problems. One approach: when you have already established that there is a flaw in his solution, but are having a tough time explaining that to him and/or getting him to accept it, share a portion of the board with him during class while you continue to teach the next concept. This demonstrates that while his questions are encouraged, they are taking up too much of your time and the other students' time just to level-set with him. This will force him to think twice before defending a wrong solution. Who knows, he might also teach you something.
Another suggestion is to assign different levels of homework based on your lectures over the span of a day or a week, with increasing levels of difficulty for extra credit. The ability to do the more complex level should exempt students from doing the lower one. Finally, share the solutions after the submission deadline and dedicate a 15-minute Q&A session to the complex ones. The levels could be designed:
1. To demonstrate understanding of concepts
2. To use the concept in a real-life problem
3. To partially use multiple concepts over the span of the lectures or the level of understanding you expect your students to have at the point in the semester.
Note: The use of him implies him/her.
Upvotes: 1 <issue_comment>username_13: >
> Note that this was a math class, so "phrasing it wrong" really means "failing to prove".
>
>
>
That's something important to learn in a math class. If some of your students don't get it, it might be worth spending half an hour of your next class explaining the context so that everyone gets it.
I think teachers far too often spend too much time trying to teach techniques while not teaching fundamentals of their subject, like this idea.
Upvotes: 2 <issue_comment>username_14: I had a very similar situation, and my first thought was: "Great! I have a smart, active student." However, soon other students started to complain that he was confusing them, and that they lost focus when I explained material beyond the lecture to him. Therefore, I stopped answering his complicated, though good, questions in class, and said something along the lines of: "This is a very good point; however, it is beyond the scope of this lecture, but I would be happy to discuss it after class." We had some great discussions after class.
My main point: Your active student might be confusing others by bringing up new concepts.
Upvotes: 1 |
2014/03/06 | 1,320 | 4,968 | <issue_start>username_0: Is it possible at all to do a PhD without a Master's or a Bachelor's degree?
Every now and then I meet someone who claims he knows someone who knows someone who was able to do a PhD without previous degrees (maybe only with high-school).
Is that true, was it true in some specific cases?<issue_comment>username_1: My (former) thesis advisor, [<NAME>](http://en.wikipedia.org/wiki/Barry_Mazur), has only a PhD. In fact, according to <NAME>'s *Mathematical Apocrypha Redux*, he does not have a high school diploma either, having left Bronx High School of Science after his junior year to attend MIT.
The story is that he had not completed an ROTC requirement at MIT but had already been accepted for graduate school at Princeton. Princeton was not insistent that this requirement be completed, so Barry did not take it seriously. (I have heard more colorful stories about this, but not from him, so I won't repeat them here.)
You might say that this is a technicality. I would agree with that but still claim it to be an interesting (even slightly inspirational, in some weird way) case. Moreover, Barry was 22 when he attained his PhD, so some actual schooling must have been skipped (or highly abridged).
Upvotes: 5 <issue_comment>username_2: I have a BA and a PhD, but no MA/MSc. More common than this are people who get into graduate school with an undergraduate degree in a completely different area of study. Just in my household, my wife did Information Technology as her undergrad degree, and then got into a Chinese History graduate program.
Really, all it takes to get into grad school is convincing the admissions committee that you are a good enough student of their field. Completing a lower degree in the appropriate area of study is typically the easiest way to convince them, but if you have the necessary background knowledge and a lot of potential, nobody is going to turn you down just because you lack a diploma.
Upvotes: -1 <issue_comment>username_3: It **was possible** in some departments of German universities to start studies after a high-school diploma ("Abitur") directly with the PhD as the target degree. The German Wikipedia page about the [PhD degree](http://de.wikipedia.org/wiki/Doktor#Anforderungen) discusses that point. Unfortunately, the English version doesn't mention it. While that possibility was abolished about 25 years ago, there are still people around who got their PhD in that way.
One such person is the former German minister of research and education, [<NAME>](http://de.wikipedia.org/wiki/Annette_Schavan). She got her PhD as her first degree, with six years of study after the high-school diploma. But now that the university has revoked her degree due to plagiarism in the thesis, she is essentially left without any academic degree.
Upvotes: 4 <issue_comment>username_4: [<NAME>](http://en.wikipedia.org/wiki/Mortimer_J._Adler) is one case. [<NAME>](http://www.brainpickings.org/index.php/2014/03/05/buckminster-fuller-education-automation-1962/) got in, was kicked out then invited back. There are other ways of earning stripes.
Upvotes: 2 <issue_comment>username_5: In mathematics, it is indeed *possible* to be accepted to a PhD program without a bachelor's degree, but only in special cases.
First, the person (the candidate) has to be exceptionally precocious and gifted with mathematical aptitude.
Second, the person has to apply to a very strong PhD program - the kind where the math faculty might have enough sway to convince the university to accept the person. At non-elite schools, the graduate college is likely to veto anything like this. And extremely strong letters of recommendation will be needed.
Third, the person must have at least one strong faculty advocate at the destination university who is able to sway opinion to get the person accepted.
As you can guess, this is not something that happens very often.
And that is for the best. It is a serious risk for a school to accept someone to a PhD program who does not have a bachelor's degree - perhaps the person will fizzle out. Worse, perhaps the person would have been able to complete a PhD if they earned a bachelor's degree first, but they ended up not earning the PhD when they were accepted early to a PhD program. For these reasons, it takes a truly exceptional candidate - more than just "seems able to get a PhD" - to convince a school to accept them to a PhD program without a bachelors.
Upvotes: 2 <issue_comment>username_6: One example is <NAME>, professor at Columbia University, Economics Department. According to her April 2015 CV (which can be found [here](http://econ.columbia.edu/graciela-chichilnisky)),
>
> Education:
>
>
> High School: Instituto National de Lenguas Vivas, Buenos Aires, Argentina
>
>
> No undergraduate studies
>
>
>
She has two PhDs, both from Berkeley, one in mathematics and one in economics.
Upvotes: 3 |
2014/03/06 | 2,537 | 10,377 | <issue_start>username_0: I'm a few months into my PhD, but I'm finding that I'm not interested in my project. I joined my research group with a master's in electrical engineering. I wanted to do research that incorporates electrical engineering and biology, but somehow I ended up with a project that is exclusively biology. I'm worried that if I continue down this path, I'll end up stuck in a career I don't like. I've talked to my adviser about my concerns, but he made it clear that I must work on my current project. Here's what I see as my options:
1. Finish my PhD with the project I currently have and switch fields immediately after
2. Try to switch research groups
3. Quit school and get a job
Eventually I would like to work in industry developing medical devices. How feasible is it to reach this goal from where I am now? I'm also interested in hearing from people who were in a similar situation to mine and what they chose to do.<issue_comment>username_1: Before I try to answer, I think it's really difficult to advise on such a major decision without knowing you and your situation. Personally, the best advice I can think of is to seek out a senior researcher who is unbiased, who knows the area, who knows your context and who can talk you through your decision.
But based on what you've said, since you are early in the PhD process and since your reason for wanting to switch seems genuine and not a temporary disillusionment (i.e., your PhD project is in a field you are not interested in and that's not going to change), you should probably try to apply to other PhD positions, keeping your current position until you find something better (or at least see how difficult it would be to find something better).
>
> I've talked to my adviser about my concerns, but he made it clear that I must work on my current project.
>
>
>
Also, need it be said, you should take the advice of your supervisor with a pinch of salt: they are biased and will want you to stay in the PhD, even if it's not in your best interest. You don't *need* to work on the current project: you can always quit and go somewhere else.
Upvotes: 3 <issue_comment>username_2: First, and foremost, **this is your PhD**. Not your supervisor's.
The question that you choose to address in your PhD **needs to come from you**. You might take guidance on this from your supervisor(s), but if you don't own the problem wholly, then I doubt you will have the drive to survive the slings and arrows that will come your way during your PhD, including the general malaise that hits almost all PhD students during their thesis. **If you are working on a problem in which you are not fully invested, I doubt you will succeed.**
You've done the right thing in bringing your concerns to your supervisor. If he is resolute that you must continue on a path that you don't feel sufficiently in control over, or which is not addressing the question you seek to answer, then I suggest you switch research groups - or indeed find a suitable job for the time being.
Regarding the job you mention - medical devices - I suggest you make contacts with the recruiting agents for a number of firms and get their feedback on the general qualities that their successful hires have. Try and identify recent hires yourself and introduce yourself, asking them for a little about their training. This will give you a good idea of how to proceed with your own training.
You might find people on this website to offer advice, however I think that their answers might be off-topic for this question.
Best of luck.
Upvotes: 3 <issue_comment>username_3: Now this is just **my** opinion and it worked for *me*. Per <NAME> in the movie "Mixed Nuts", "In every POTHOLE, there is HOPE" (if you re-arrange the letters and so on).
I was faced with a similar situation. I sought out the challenges of the project I disliked and tried to, as dispassionately as possible, look at the aspects of the project however small that I would consider working on. I did exactly that. Yes, I had to construct a convincing argument for my adviser as to why focussing on a sub-aspect of my project whilst not losing track of the bigger chunk would be useful for me and the project. Yes, I did have a quasi-supportive adviser and an quasi-supportive department chair who wanted me to succeed since the "greater good" of my public university was at stake.
Since you have already "talked to your adviser", you may need to wait it out for a semester or so before you broach the subject matter again. i.e., if you find aspects of the project you like (time can change our perspective).
If you don't... perhaps you would need to cut and run to another research group or university since your professional life would be at stake.
**So in summary, those are the two options that *I* had:**
* Focus on a sub-project of the main project. Link that to the success of the main project and work in that direction. Worked out for me! I am happier with my contributions to the field.
* If things go from "bad to worse", cut and run to another group/univ.
**Subplot**: Yes, per Badroit's answer, you should generally heed your adviser's er... advice. Since they would generally look at the greater good and the bigger picture. But you would know best about his/her personality and you may need to use your gut feelings in such situation.
Good luck! Either way, it will be a character building exercise which will also provide you with interesting technical skills and temperament which are the subtle skills necessary for success in industry or academia (or so I am told).
Upvotes: 2 <issue_comment>username_4: >
> Eventually I would like to work in industry developing medical devices.
>
>
>
Did you tell your advisor this when you had your discussion? Did your advisor tell you why you need to stay on this project? It's possible that they had a better vision of how this might help you with your long-term goals, but it's also possible they aren't really thinking about your goals.
I've had students who had strong opinions about what they wanted to work on, and did NOT want to work on what I suggested. **They always won**, and that's how I think it should be. But it took time for me to understand their reluctance to work on my project, and it helped immensely when they came to me with their **own** project ideas.
**So I'd suggest you think about a project that you'd prefer doing, and go to your advisor with that idea**. Hopefully it's not too far removed from your advisor's expertise (otherwise they'll have a hard time - well - advising).
Upvotes: 4 <issue_comment>username_5: >
> I'm a few months into my PhD, but I'm finding that I'm not interested in the my project. I've talked to my adviser about my concerns, but he made it clear that I must work on my current project.
>
>
>
"A few months in" is still very early in a doctoral program, so it shouldn't be too early to switch research groups. If your adviser said that you "must" work on your current project, I assume that means, "If you're going to keep working with me, you'll have to work on this project." (It's not too unusual for an adviser to balk at the prospect of advising a student who wants to start a new project, particularly if they don't feel they have sufficient expertise or interest. Some research efforts take months or years to get underway, and students can't expect every faculty member to put aside what they've been working on just to accommodate a new student's whim.)
I would start looking around your school to see if there's another research group working on something more closely related to your interests. If not, you've left out a fourth possible option, which is to transfer to another school. If you're not even a year in, then it may not be too late to switch, particularly if you can find a faculty member who is working on exactly the kind of work you are hoping to do.
Upvotes: 2 <issue_comment>username_6: You present three options:
1. Finish my PhD with the project I currently have and switch fields immediately after
2. Try to switch research groups
3. Quit school and get a job
Let's take a closer look at each of them:
1. "Finish my PhD": This is what you are doing now. This is the default option, the base with which you have to compare everything else. Be aware this is not a particularly easy path.
2. "Switch research groups": Do a quick search, can you find something? Because this option only makes sense if there is some research group to switch to, where it's possible to switch (being admitted, it's not like saying hi and crossing the door...), where the advisor is not a jerk/moron/whatever, where it's possible to do the research you want to do (it could be the same or even worse). We don't know, you don't know, nobody knows. Search, get information. It's not easy, IMHO, but you may have skills for this, I've no idea.
3. "Get a job": what kind of job are we talking about here? How interested are you in doing a PhD? (the one in option 1 and the one in option 2), would it be a dream job in what you want or simply something to earn some money? how much money? Again, search.
Try not to invest/waste more time in searching than you need to make a decision. In any case, you have already decided (by default): you are in option 1, you do not have enough information to make a **specific** decision, and the clock is ticking. Any of those three options could be a life-saver or a death-in-life depending on *specific* details (but most probably it will be neither; there is a lot of room in between).
Good luck.
PS: I know people who have started a PhD on a different topic after 3 or 4 years of working on another one. People who have started a PhD after years in industry while paying the loan for the house (big deal). People who have quit a PhD after 2 or 3 years. People who have published the thesis as a (free public) book, so that it's there on the record, but have not done the defense (and probably never will, because there is a time limit for that).
In short, this ((academic?) life?) is not like the rails of a train, where you make specific decisions that cannot (ever) be changed; it is more like a sea you sail. Think about it in a more open way.
Also, shipwrecking is possible, as well as going adrift, getting lost, etc.
Upvotes: 1 |
2014/03/06 | 911 | 3,373 | <issue_start>username_0: I have a huge collection of PDFs of research papers. Many of these have valuable annotations. I also have a huge .bib file containing citations for these and many other works. Is there a reference manager software where I could import the .bib file and the collection of PDFs and somehow the entries in the .bib file could be magically linked to the corresponding PDFs? I would then like to use that tool to access my PDFs (of research papers).
I think this was a feature request for Mendeley a long time ago:
<http://feedback.mendeley.com/forums/4941-general/suggestions/80946-automatically-find-pdfs-link-them-to-imported-me> .
As of today, I don't think that it has been implemented.
I tried Qiqqa (<http://www.qiqqa.com/>), but had no luck.<issue_comment>username_1: One option is [BibDesk](http://bibdesk.sourceforge.net) (OS X), which can track links between files and associated citations.
Personally, I'm not a fan of what it does to the `.bib` file, but it could suit your purpose.
Upvotes: 2 <issue_comment>username_2: Try [Tellico](http://tellico-project.org/)
A collection manager for Linux which "provides default templates for books, bibliographies, videos, music, video games, coins, stamps, trading cards, comic books, and wines."
The [reference manual states](http://docs.kde.org/development/en/extragear-office/tellico/importing.html#importing-pdf) the following:
"If Tellico was compiled with exempi or poppler support, metadata from PDF files can be imported. Metadata may include title, author, and date information, as well as bibliographic identifiers which are then used to update other information."
Is that useful? If so, you can check the site for [reviews of Tellico](http://tellico-project.org/reviews). It works on the following:
* Debian
* Ubuntu
* Gentoo
* FreeBSD
* openSUSE
* PC-BSD
* Fink (Mac OS X)
* Fedora
* Linux Mint
* Pardus
* ArchLinux
Upvotes: 1 <issue_comment>username_3: [EndNote x7 has this feature, known as "PDF auto import."](http://endnote.com/training/mats/enuserguide/eng/endnote7/enguide-full.pdf#page=15)
I tried it and it got 0/3 of my sample PDFs correct, all from IEEE conferences initially downloaded from IEEE Xplore.
One of the three articles was *closer* to having a correct reference (the others were useless). But that article had PDF metadata visible in Acrobat Reader (Title, Author, Subject). EndNote got the page numbers right (somehow), and the DOI, but failed at the conference name and the reference type (EndNote mistakenly thought it was a journal article).
Upvotes: 1 <issue_comment>username_4: Here is one nearly automatic way to do it using Zotero (<https://www.zotero.org/>):
1) Import the PDFs into Zotero. One way is to select multiple PDFs and drag them into a collection (in the left-hand pane) of Zotero.
2) Select the PDF items (Ctrl-click in Windows for multiple selections), right-click, and select "Retrieve metadata from PDF". Note that this step searches online databases for missing information and seems fairly robust.
3) Import the .bib file into Zotero.
4) Go to the duplicates collection in the left-hand panel and merge all the duplicates.
Issues:
1) In step 4, there may be false negatives if the automatically retrieved metadata (from step 2) is too different from the corresponding entry in the .bib file (step 3); a rough pre-check script is sketched below.
2) Step 2 might fail on old scanned PDFs.
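If the duplicate merge in step 4 misses too many matches, you can pre-check how well your PDFs and .bib entries line up with a short script and rename the worst offenders before importing. The sketch below is not part of Zotero and makes several assumptions: the PDFs sit in a `papers/` folder with roughly title-like filenames, the bibliography is called `library.bib`, the third-party `bibtexparser` package (v1 API) is installed, and the 0.6 similarity threshold is an arbitrary starting point.

```
# Rough pre-check: match PDF filenames against .bib titles by string similarity.
import difflib
import pathlib
import re

import bibtexparser  # pip install bibtexparser (v1 API assumed)

def normalize(text):
    """Lowercase and keep only letters, digits, and spaces."""
    return re.sub(r"[^a-z0-9 ]+", " ", text.lower()).strip()

with open("library.bib") as f:
    entries = bibtexparser.load(f).entries  # one dict per BibTeX entry

# Map citation keys to normalized titles (assumes the file is non-empty).
titles = {e["ID"]: normalize(e.get("title", "")) for e in entries}

for pdf in sorted(pathlib.Path("papers").glob("*.pdf")):
    stem = normalize(pdf.stem)
    # Pick the bib entry whose title is most similar to the filename.
    key, score = max(
        ((k, difflib.SequenceMatcher(None, stem, t).ratio()) for k, t in titles.items()),
        key=lambda kv: kv[1],
    )
    if score >= 0.6:
        print(f"{pdf.name} -> {key} (similarity {score:.2f})")
    else:
        print(f"{pdf.name} -> no confident match")
```

Anything reported as "no confident match" is a candidate for manual renaming, or may be one of the old scanned PDFs from issue 2.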
Upvotes: 3 |
2014/03/07 | 2,578 | 10,945 | <issue_start>username_0: Okay, this is a problem I have really been scared of lately. I am always willing to learn something exciting whenever I do any coursework. However, this adventurous mind of mine is risky and can often put my grades in jeopardy. Now, the problem is, there are "unfortunate" times when I have to collaborate with other students. It then becomes a serious issue of ego clash, when my classmate wants an easy way out while I want to be an all-out conqueror (which I think should be the goal, since I am spending my time at college to learn something new, not to protect and prolong my comfort zone). So, it often happens that this internal squabble is very detrimental to the overall group performance, and at the end of the semester the overall outcome is a hodgepodge. Moreover, I end up having a bitter relationship with my classmates.
**So, my question is: how can I motivate my project partners to do something interesting and challenging, without scaring them into thinking that the project is too hard (which it ultimately can be)?**
*P.S.: Thanks for pointing out my mistakes; I will be more careful with my attitude.*<issue_comment>username_1: I think you are doing things the wrong way. Although university is a place to learn new things, it is also a place to learn how to allocate proper resources for solving particular problems within certain timeframes. This is a valuable lesson for all careers (inside academia or in industry). In that sense, your co-students are right (which makes you wrong). If a certain assignment requires 10 hours of work to deliver almost perfect results, allocating another 10 hours for a mere 1% improvement is (in most cases) a waste of time. Time is not free. You think it is (you are in university, after all) but it is not. These extra 10 hours may be better spent elsewhere: on another assignment, on preparation for an exam, and so on. This will follow you in your future life. If you work in a company that needs something done in a month, it is enough / encouraged to deliver "almost perfect" results within a month rather than perfect results in three months. After all, aiming for 100% perfection is a dead end. All things may be improved one way or another, and one must learn where to stop and move on to greater problems.
So, I think it is you who needs to change your mindset. You must always find the best way to be productive and deliver the requested results, but with minimum effort. Do not get me wrong. I do not mean slacking or doing the minimum to pass courses. I mean doing your best to conserve effort and still deliver almost perfect results. You can always strive for absolute perfection, but only in your free time (not at the expense of your co-students' time). If you still do not want to do that, then take on assignments on your own. We cannot change other people. Only ourselves.
Upvotes: 3 <issue_comment>username_2: >
> I am always willing to learn something exciting whenever I do any coursework
>
>
>
Great.
>
> However, this adventurous mind of mine is risky and can often put my grades in jeopardy.
>
>
>
Less great. I am concerned that you are setting up a false dichotomy. Being ambitious in coursework does not have to involve risk of poor grades. It is part of being a mature student and researcher to learn how to reach for the stars in such a way that not attaining your full expectations does not result in complete failure but in work which is itself still valuable. (This is admittedly an "advanced lesson": I have known people who have made it to the tenure-track without learning it...sometimes with grave consequences.)
It sounds like you are getting an assignment -- let's say a coding assignment since I see you are a computer science student -- and planning something much more ambitious than is actually asked of you, even to the degree that the chance that you will not be able to pull it off gives you a bit of a thrill. But you don't need to work in this way. With planning -- and applying some insight early on; it's not all grunt work -- you can design a project in which you first complete what is asked of you and then move on to the more ambitious aspects that you are (happily) more interested in. Coding work in particular is best done incrementally. If you work on code over a long period of time, it is much more useful for everyone if *some* of the code that you write can be used (and tested, responded to...) right away than if you write things in a way so that you have nothing that works until the very end.
So far this advice is just for your own work. In collaborative work I think you should be clear about (i) what you are definitely going to do -- i.e., what your collaborators can count on you doing -- and (ii) what you would like to aim for in your remaining time (which if you are talented and hardworking, you will almost certainly have). Also, if you are actually more talented/quicker/have more time to put in, then it is reasonable to use at least some of your leftover time to try to help your collaborators with their projects. That will certainly go a long way to getting your collaborators on your side and avoiding bitterness.
In general, when you are working with other people you should stop every so often and really try to view things from their perspective. This sounds almost condescending, but it is not meant to be and it is really a skill: some people are good at putting themselves in others' shoes and others just can't let go of their own perspective; the former are much more valuable team players. One tip here: would your classmates describe themselves as wanting "an easy way out"? Or do they have goals which are just different from yours in some way? Moreover, do they see your flirtation with failure that, in your own words, can often put your grades in jeopardy and think, "Gosh, I wonder whether whoknows is going to come through with what we agreed he would do or come back proud of the fact that he bit off more than he could chew?"
Upvotes: 5 [selected_answer]<issue_comment>username_3: If you want to learn something exciting about your coursework and do something risky, it does not have to be in the context of a graded project.
You can do the collaborative work in a "normal" way, so as to be considerate to your classmates and their needs. Then do your "risky" project related to the course material on your own, just for the fun (and for the sake of learning).
I think if you ask nicely, the professor would even be happy to look at your independent project and give you feedback on it :)
Upvotes: 2 <issue_comment>username_4: Some tips from personal experience:
* Ambition is good, and as a student, the consequences of failure at this point in your life are lower than they will ever be again for as long as you live. Take risks if you can learn from them: the most important thing is to really think hard about why a project failed or succeeded.
* If you have a drive to stretch the limits of your capabilities, take advantage of them now while you have a chance. Even though I have a fantastic employer, I wish every day that I could go back to high school and college so I could work on whatever projects strike my fancy.
* Be conservative when you hold someone else's livelihood in your hands. If what the other students want is so much smaller than what you want, deliver the "easy" version fast. Change the challenge from creating something big to creating something small, but *quickly*. It's important to learn the limits of how quickly you can do things. Programmers are notorious for pushing deadlines.
* It's one thing to have a personal failure, it's quite another to be responsible for the failure of an entire team. You want to be the guy who gets stuff done, the guy who everyone wants on their project, not the guy who only dreams and never ships.
* Force yourself to make solid commitments and then deliver on them. Cut out a portion of the project and take responsibility for it. Then you can handle that part as you please, so long as you get it done when you're supposed to and it is what you promised. This isn't optional when you have an employer, so it's a good idea to get used to it.
All of that said, the difference between a good developer and a great one is not their ability to write software, but their ability to manage *time*. Great developers are able to consistently deliver quality products in the established time-frames. Perhaps when you face a group that wants to "play it safe," you can use it as an opportunity to show that you have both programming skill as well as the pragmatism to manage your time effectively.
Upvotes: 3 <issue_comment>username_5: >
> Now, the problem is, there are "unfortunate" times where I have to collaborate with other students. It then becomes a serious issue of ego clash, when my classmate wants an easy way out, while I want to be all out conqueror
>
>
>
I have been on both sides of this situation. I believe that the seriousness of a member depends on how useful or interesting they find the course, and on their personality (apathetic or enthusiastic). Not everybody has to share your passion for a course.
For example, in an engineering program, there may be some business courses. People who simply want to be well-rounded or want to create their own company might be highly interested in those courses, which is fine. Then, some members might only be interested in technical courses. These guys might know the importance of business, but might not find it interesting. Also, there may be some courses which are truly fluff or of little value.
If you can justify the need for a course, then you can try to convince the others. If you cannot justify it, then perhaps you need to rethink why you took the course. If some are not interested in the course, then you don't have to penalize them for it. Try to see where they can be of help.
I suggest that you gauge the interest levels of the members at the very beginning and agree upon the bare minimum effort (within reason) that is expected of each member. If you like the course far more than the others, then you can put in the extra effort if need be. But make it clear that the scores should be distributed according to the work and results. Also, I suggest that you don't try to be perfect all the time. When your income is in the millions, why bother tracking the pennies?
Upvotes: 0 <issue_comment>username_6: Other students may not have the kind of free extra time you have. You need to, as a group, do what you are assigned to do, and it is up to you to figure out what that means, but creating extra work for others is certainly not the way to go. It is good that you have ambition to learn above and beyond, but you do that on your own time. You do not force your group mates to do this, because they may have other priorities (quite possibly including doing the same thing for another class!)
Upvotes: 0 |
2014/03/07 | 938 | 3,792 | <issue_start>username_0: **Downloading full-text PDFs is often too slow:** My university has a subscription to most journal articles. Thus, most of the time I have full-text access to journal articles. However, for various reasons, accessing an article still takes perhaps 30 seconds. Although sometimes it's quicker, it's often still a 7-step process (1) search and find the article on Google scholar (2) login to the university system (3) get to the list of sources at my university that provides full-text; (4) get to the journal article page; (5) get to a full-text page; (6) save the pdf; (7) open the pdf in my preferred viewer.
**Managing a library of PDFs is tedious:** That said, I find it tedious to have to manage my own library of PDFs. It often takes longer to work out whether I already have the article or not (Thus, I have to first search my hard drive and then search Google Scholar). I also have to enter the PDF into the library with no guaranty that I'll ever need it again.
In general, there are two kinds of PDFs. There are those that I'm accessing for the first time, and there are those that I come back to again.
Thus, I imagine a good system would be if some online system kept track of what I'd downloaded. If I did a search on Google Scholar and I'd already downloaded the PDF, it would just be a single click away (i.e., in a kind of cache).
**Is there a way to meet the following requirements?**
1. **Near immediate access to previously accessed PDFs**
2. **Almost no time to store a PDF (or ideally something that operates in the background)**
3. **Integrated search through Google Scholar that works both for new PDFs and previously accessed PDFs (i.e., for a previous PDF it pulls the article out of the cache; for new PDFs you go through the normal process).**<issue_comment>username_1: You can always try Zotero (<http://www.zotero.org/>). It is a browser plug-in and a stand-alone app. You can store PDFs in their "cloud" (free up to a limit, or for a price if you need larger space) or on a WebDAV server of your choice (<https://www.zotero.org/support/kb/webdav_services>). It syncs across devices (laptop, work PC) and also has several tools for web scraping popular sites like the ACM Digital Library... So, I think it should cover most of your needs.
I do not know about Google Scholar integration, though.
Upvotes: 2 <issue_comment>username_2: Google has brought a whole new experience to how we manage stuff. Searching is so fast and efficient that in some use cases it has become obsolete to maintain a rigorously structured personal library. The drawback, however, is that you have to complete the tedious task of retrieving the full-text PDF again and again.
I am a scientist myself, and I am also one of the developers of [Paperpile](https://paperpile.com/), where we tried to make exactly these use cases as simple as possible. Paperpile runs in the background, and **you do not have to leave Google Scholar or PubMed to quickly get the full-text PDF**.
Next to each item in the Google Scholar search results you will find the little Paperpile toolbar. Just click on the Paperpile logo and it will find the full-text PDF for you and add it to the library. That's **all done in the background**, and you do not have to open any user interface or go to another window.
Paperpile automatically screens all the items on the Google Scholar results page and shows a link to the PDF for those that are in your library. Since Paperpile is totally web-based, this **will also work if you log in on another machine**.
I have also attached a screenshot to give an impression of how it works. Articles that I already have in my library are marked by the green logo.

Upvotes: 3 |
2014/03/07 | 1,458 | 5,040 | <issue_start>username_0: I remember reading a journal article where a researcher in psychology analysed their entire manuscript submission and publication history. The way I remember it, the researcher had around 40 publications, many in top-tier journals. A large number of the publications were rejected 2,3,4 or more times before they found a home. As an early career researcher, I found it to be a really interesting read for highlighting just how much rejection is a normal part of the publication process.
* **Does anyone know the reference to this article?**
* **Equally, are there other equivalent documents written by other academics?**
**Update:** I found the article. [See answer below.](https://academia.stackexchange.com/a/84425/62)<issue_comment>username_1: I don't know if this is the article you're referring to, because the researcher is in biology, not psychology; and the journal article doesn't include the list of manuscripts, only the recommendation to keep one:
>
> <NAME>. "A CV of Failures." Nature 468.7322 (2010): 467. Doi:10.1038/nj7322-467a. Web. <http://www.nature.com/nature/journal/v468/n7322/full/nj7322-467a.html>.
>
>
>
I can't seem to find the link to the author's actual CV of Failures (though I feel like I've seen it before); I can only find the article explaining it. However, if you search "CV of Failures" you will find some publicly available examples from other researchers.
P.S. I keep an ongoing CV of failures as described in the Nature article and I **highly recommend** the exercise.
Upvotes: 5 <issue_comment>username_2: After popular request, I'm posting this as an answer. Hopefully it's interesting to see a different example (or inspiration) of a successful researcher sharing how much rejection is part of academic life.
The economist <NAME> keeps an [online paper diary](http://www.powdthavee.co.uk/diary.html) that displays the warts-and-all rejection trail for many of his (very interesting) papers. He also wrote some [musings about his most rejected paper, now in the OTEFA Newsletter from p.7](http://www.krannert.purdue.edu/faculty/knaknoi/otefa/OTEFA_Newsletter_June_2009.pdf), which was rejected 14 times, but I'm sure many can beat this.
Upvotes: 3 <issue_comment>username_3: <NAME> details his history of publishing, including the Paxos paper, on [his website](http://research.microsoft.com/en-us/um/people/lamport/pubs/pubs.html#lamport-paxos). It took eight or so years to publish the Paxos paper.
Lamport received the Turing Award in 2013, and Paxos and its derivatives are now at the core of almost all large-scale websites (Google, Microsoft, Amazon, Netflix, ...).
**From the comments:** Lamport's [paper on Buridan's Principle](http://research.microsoft.com/en-us/um/people/lamport/pubs/pubs.html#buridan) took 28 years to get published! He explains the long road to eventual publication, *ibid.*
Upvotes: 4 <issue_comment>username_4: I think you are talking about this article;
* [Overcoming Fear of Rejection, by <NAME>](http://www.martynemko.com/articles/overcoming-fear-rejection_id1346)
Even if it is not what you are looking for, the following article might definitely brighten up your day if you are just interested in knowing that rejection is not the end!
* [Nature rejects Krebs's paper, 1937, By <NAME>](http://www.the-scientist.com/?articles.view/articleNo/28819/title/Nature-rejects-Krebs-s-paper--1937/)
Upvotes: -1 <issue_comment>username_5: I finally found the article.
The arXiv [link](https://arxiv.org/abs/1205.1055) and [PDF version](https://arxiv.org/pdf/1205.1055v1): "Scientific Utopia: I. Opening scientific communication."
It reports the publication history of "all 62 unsolicited articles co-authored by <NAME>" at the time. <NAME> is a well-published researcher in social psychology and more recently has been a leading figure in the reproducibility and open science movement.
The mean time to publication for published articles was 1.7 years and longer for not yet published manuscripts. That said, I could not see the distinction between published online and published with page numbers in an issue. Presumably, for some purposes published online is sufficient (e.g., for appearing in database searches, and for showing research productivity).
The table is shown below, or otherwise, go to [this pdf](https://arxiv.org/pdf/1205.1055v1) where the tables are listed at the end of the document.
A few comments on the table:
* This is the track record of a very successful scientist.
* It's interesting to consider the relationship between this long publication lag and annual performance reviews. The submission is often the hard work (although revisions can be time consuming also), yet the recognition may come quite some time later.
[](https://i.stack.imgur.com/qimIA.png)
[](https://i.stack.imgur.com/jjUwy.png)
Upvotes: 3 [selected_answer] |
2014/03/07 | 445 | 1,888 | <issue_start>username_0: Although graduating from a low-ranked college, I was able to become a research associate at a well-known research institution. My work there provided me with three research papers.
Is there any possibility of getting a full scholarship for an MS program?<issue_comment>username_1: Yes, of course. I personally had a full scholarship for my MS. To be even more specific: for example, no one can be accepted to the MIT Media Lab WITHOUT a full scholarship (<http://www.media.mit.edu/admissions/faqs>). Your full scholarship is based on how much the university/professor/department needs you. If you have the ability to perform at the level of a researcher or PhD student, there is no reason they would not want to pay for you to be working there.
Of course, this depends on both school and country. Some places may have specific rules on how you can have a scholarship. It may be contingent on teaching assistant or research assistant positions.
Upvotes: 0 <issue_comment>username_2: I did an MS in math. My tuition was waived and I received a stipend that was enough for me to live on (albeit very cheaply). In return for this I taught one class of either business math or college algebra every semester I was there. I think everyone that was there with me at the same time was on the same deal.
It shouldn't be hard to get this information from the school or schools of your choice. If they have a program like this they probably recruit for it.
This might be different from what you are asking, since it more or less involves working for the money, but for anyone that needs a way to pay for a Master's degree, this is definitely a way to explore. I can't say how common an arrangement like this would be outside of mathematics, however.
You should contact the school(s) you are considering attending and find out what kind of arrangements are made there.
Upvotes: 1 |
2014/03/07 | 2,324 | 9,372 | <issue_start>username_0: I have a completely introverted personality. I am seriously struggling with interpersonal relationships with my colleagues, to a point that I am actually at the brink of quitting my job. If I go abroad for a research degree, would there be any problem that may jeopardize my endeavor? My family is constantly warning me against leaving the current job and going abroad for a higher degree; they say that I will surely return without completing my course as soon as I face any rudeness, animosity, or harshness, be it from the adviser or the environment.
A relative of mine in the USA has told me that she has been experiencing harsh behavior from her PhD adviser due to her religion. One of my cousins in France has complained that he can never get any attention from his instructors because of his ethnicity. He is in service with the UN.<issue_comment>username_1: It is not going to magically solve your problems. You will undoubtedly face similar issues. People outside often think that academia is the life of the mind and the [ivory tower](http://en.wikipedia.org/wiki/Ivory_tower). It is not. A successful career, including tenure, often depends on how well you interact with your fellow academicians, never mind your students.
Being an introvert can be very difficult. You can take this opportunity to re-invent yourself. You'll never not be an introvert but you can work on skills to minimize its impact on your life and career. Search out some books on the subject (there have been some popular ones published recently like [Quiet: The Power of Introverts in a World That Can't Stop Talking](http://www.thepowerofintroverts.com/about-the-book/)).
Very carefully consider who your advisor will be. You don't need the prima donna, but someone with whom you can work successfully.
Join Toastmasters or a similar organization where you are now or where you end up. Look for social, cultural, or technical organizations for your interests--and get involved. That may help you find connections and friends who can help you when the going gets tough.
You might also look into counseling, not to become extroverted, but to learn techniques to mitigate your introversion.
On a personal note: go for it! I'm an introvert who's fine with friends. I'm an SME (subject matter expert), so knowing my stuff makes it much easier for me to speak up in a meeting, raise my hand in class (when I was in class), and so on. Don't let imposter syndrome get you either.
Upvotes: 3 <issue_comment>username_2: It is just not true that your problems always follow you wherever you go. I have seen people who feel a lot better after they have changed positions. For instance, if the laboratory used to focus on a now-exhausted topic, or gets much less funding than before and is unable to continue existing projects, or the new supervisor is trying to push some weird concepts, it may be a very difficult climate that is not your fault at all.
From the question, it looks at least like you are self-critical enough. Try to be careful at the new job, remember the lessons you think you have learned, and do not repeat the mistakes. Take this into consideration and **go**.
Upvotes: 0 <issue_comment>username_3: I am an American who studied in Europe for several years. Here are some things I think you might want to consider.
* It is always difficult to move to a new cultural environment. If you
already find social situations difficult, then these difficulties are
likely to intensify in a new setting. You will have to communicate
primarily in a foreign language, you will have to learn a new set of
customs and manners, and you will have to make all new friends in
your host country. This can be hard, depending on the country. In my
experience Americans are usually quicker to befriend foreigners than
Europeans.
* Think about where you want to live. If you want to live in America,
then maybe doing a Ph.D. in America isn't a bad idea. In general
though it is hard for people who do their degree in one country to
get a job in another country, unless the degree is from somewhere
really famous like Harvard or Oxford. (See my post [here](https://academia.stackexchange.com/questions/17405/how-would-a-small-liberal-arts-college-view-a-phd-from-germany-or-the-uk-factor/17406#17406) for an
explanation why.)
* Dealing with academic advisors is really difficult, even for people
with excellent social skills. This is a subject that deserves its own
separate consideration. Your advisor will have a lot of control over
your future career and this fact can sometimes make one be overly
deferential. Some advisors use their power over their students in bad
ways. It is very common for advisors simply not to do their jobs. It
is also possible for your advisor to steal your research, although
this is less common. The proper way to deal with your advisor is as a
senior colleague who is paid to help you. You want to approach him or
her with reasonable questions, and you want to insist on frequent
meetings to discuss your work. You need to be comfortable making
these kind of demands, even if you aren't the person in charge,
otherwise you run the risk of a bad advisor taking advantage of you.
This is true outside the academy too, but it remains a real factor
even in universities.
* Do not go into debt to do a graduate degree. It's almost always a
horrible return on investment.
* Scrutinize your motives. Why do you really want to do this? Do you
really have a burning research question that you want to answer? It
doesn't sound like it. It sounds like you are in a bad situation and
want a lifeboat to take you somewhere else and try something new. If
that is your mindset, I strongly encourage you to reconsider graduate
education. Graduate school is a very difficult environment, and when
you get to the point of academic hiring it becomes absolutely
viciously competitive. Graduate school isn't a good way to find
yourself, or explore and adventure in a foreign country--it's
professional training for a very difficult, very stressful job that
doesn't tend to pay very well. If you don't love the material itself, you probably won't last.
I wish you the very best!
Upvotes: 3 <issue_comment>username_4: I looked at your other questions, which were quite enlightening. Given that you had already publicly revealed other relevant facts about yourself, e.g. [How will a "local" master's in CSE look when I apply for a Graduate funding in the USA?](https://academia.stackexchange.com/q/16438/285), I don't know why you didn't include them in your question. Any such details are highly pertinent, and would help people give better and more "customized" answers.
To summarize from your previous question, you are 32, live in Bangladesh, have an IT background, work in a bank, and are afraid your job is taking you nowhere.
I'm Indian, and left India to study in both the UK and the US, at different times. So my background is not so dissimilar, and I can relate to some extent. However, the fact remains that your question is so broad that it is difficult to give useful information without knowing you.
Given that you are from Bangladesh, I'm guessing that part of your interest in further studies abroad is to get away from Bangladesh. Also, there are presumably not that many routes out of Bangladesh besides being a student. If so, I sympathize. However, from your previous question it sounds like you were planning to get a degree locally before going abroad. If so, it sounds like your question might be premature. If you aren't currently associated with a university or do not already have a relevant higher degree, then going abroad as a student would be very difficult. Are you still planning to enroll in a local Master's program? Or have you decided to go for a Master's degree abroad?
The answer by shane, I think, covers some of the issues you will probably run into as an Asian student in the West. How good or bad a situation you find it depends on a complex set of factors including:
1. Your area of study
2. Your university and location
3. The local community from your area/country. (Assuming you get on well with people from your background)
4. How successful you are at your subject
5. What kind of advisor you end up with
6. How much you dislike your native country. If you really dislike it,
you may have an easier time adjusting to a foreign culture.
If you happen to be from a very "sheltered" background, which is not uncommon in traditional Asian cultures, then living away from your family and culture could be good for you. In many ways the West (which it seems you are contemplating) is much more open culturally than a place like Bangladesh. However, how you respond will be up to you. You may find it frightening rather than empowering.
In my case, I was from a sheltered background, and had a difficult time. Grad education is a rough business, and study abroad is not for the faint-hearted. However, I can say that I do not regret it at all. It was (I think) very good for me. I learned to be much more confident and independent. I am now quite a different person than the person I would have been if I never left India.
Having said that, as shane says, grad school isn't the ideal way to go about self-improvement, if you have a choice. Of course, as I have observed above, you may not have a choice.
Upvotes: 3 [selected_answer] |
2014/03/07 | 507 | 2,159 | <issue_start>username_0: How does one leave one collaborative group for another, prior to the start of the research? This relates to switching project teams for graduate course work, switching research labs, or basically dropping any collaboration. How does one do so politely?
My specific situation:
I have been asked today by one of the best teams to join their group for our major university project. However, I promised my friend two months ago that I would join his team.
How can I join the better team without jeopardizing my relationship with my friend?<issue_comment>username_1: First make sure that this is what you want. This is a very important thing to do. Being on a better team is an opportunity, but also a burden. They will expect you to contribute at a certain level, which means stress, lots of work, and being on time. If you think that's worth the result and that you are willing to dive into that, then do it.
If your friend is a good friend, he/she will understand. I'd suggest inviting your friend to an activity that both of you enjoy a lot, then bringing up the topic. Make it clear to your friend that you value the relationship and how difficult the decision is for you, but also that you can't pass up the chance. Ask him/her what he/she would do in your position, to help your friend see it through your eyes.
True friendship will survive that. And if not, then knowing that you made the right decision will help you get over it.
Upvotes: 2 <issue_comment>username_2: Make sure that if you are going to do this, it leaves him enough time to find a new group (assuming this is related to a class or an undergraduate/master's project and not a PhD collaboration). Going back on your word is one thing (subjectively unethical), but it would be extremely unethical to go back on your word if he does not have the time to adjust to your news accordingly.
If you two have already formed a project idea together, view every idea you have told him as his. Absolutely do not take anything from your current project idea to the new team (especially if he is unable, or too lazy, to find a new team that has its own idea).
Upvotes: 1 |
2014/03/08 | 832 | 3,639 | <issue_start>This is somewhat similar in broad topic to an earlier question I've asked, but not the same. I intend to apply to interdisciplinary mathematical/computational science/engineering programs like this: <https://icme.stanford.edu/> . As a result of the interdisciplinary nature, there are a lot of junior-and-senior-level courses that are relevant, and I want to take as many of these as possible so I have a good grounding for whatever more specific path I follow, within the field of computational science. However, I also know that graduate classes are looked upon favorably by many admission committees; so, should I drop some of the undergrad courses (that may be more relevant in subject matter) and take some grad ones? Because of scheduling and prerequisite issues, I would otherwise not take any graduate-level courses (or take at most one) until my senior year, by which point it may not even matter in terms of admissions, because some graduate programs don't look at your senior grades.
[As a follow up to this, would it be considered a bad thing to take an undergraduate course that may be considered important or even "crucial" in the final semester of senior year - when admissions decisions are already coming out - because of taking graduate classes earlier on instead?]<issue_comment>username_1: I just received an acceptance to a very computational cognitive science program. I had a single grad-level AI course, and I doubt very seriously if they even noticed it. I don't regret taking it; it was probably my favorite course, because it was very loosely structured and allowed me to develop research experience, but it was also harder than a typical undergrad class.
Basically, from my (admittedly limited) experience, I would say take as many as you can earn A's in if you want to challenge yourself, but don't do it solely for the grad apps.
Upvotes: 2 <issue_comment>username_2: Graduate-level courses are helpful, but there isn't any magic formula that "X courses with a grade of A or A- will guarantee admission." Every case is different, and good grades are by no means sufficient for admission, if your letters of recommendation are weak.
If you're applying for a terminal master's program that is primarily coursework, then the rules are different, but for anything relying on research, your research experience and letters of recommendation will carry significantly more weight than your classwork and test scores. What doing well in the graduate courses does is signify that you will be able to handle the coursework in your program, but it does not shed light on the rest of your abilities.
So my recommendation is to take graduate classes because you're interested in the subject and want to learn more about it, rather than just to impress an admissions committee.
Upvotes: 3 <issue_comment>username_3: You should be thinking about taking the "best" Masters courses, not the "most" courses.
So how does one define the "best" courses? 1) They are the courses closest to your current areas of interest. 2) They are the courses most relevant to your future (projected) areas of interest. 3) They are the courses in which you can get the best grades.
And I'd certainly take the "crucial" undergraduate course in preference to the graduate course if "timing" is an issue.
Don't take master's courses that are "irrelevant" just for the sake of taking master's courses. Take a master's course because it fits your long-term needs better than an equivalent bachelor's course. username_1's experience in another answer (taking "only" one master's course, simply because it interested him) is relevant here.
Upvotes: 2 |
2014/03/08 | 3,558 | 15,260 | <issue_start>username_0: I teach a writing course to different sections, across different days of the week. From the schedule, some students will take their mid-term exam early in the week, while others will take it later in the week. Recently, I found one student who was “just checking to see what the exam was like”, but was planning to go to another session to actually take the test. I realized there might be a variety of ways that students taking the exams on Friday would have an advantage over those taking it on a Monday.
I've taken the following steps in an attempt to address this problem:
* I build multiple tests, e.g. "A", "B", "C", etc., which are of similar difficulty, have the same types of problems, but different subjects. "A" is given to one section, "B" to another, etc.
* Provided a sample test that students can print and practice with and discuss with me in class.
Note that there are several challenges to administering this:
* The classes are large, and the school offers me no assistance in managing the exams.
* Many students regularly attend my lectures during sections that they did not register for, so I will not easily recognize who belongs in which session.
Are there any additional steps I should consider to make sure the test is administered fairly?<issue_comment>username_1: I have faced *all of the above situations* this semester alone! I teach two different sections spread over all 5 days of the week. How do I ensure heterogeneity of examinations? By administering different versions (different questions) of the examination.
Throw into this mix my need to "accommodate" students with "needs" and the different time frames that they require. The only way I managed this was to have different versions of the exam and to invigilate these examinations myself (no help from uni) at different times of the week and well into 6pm on a Friday. I ended up having 5 different versions with varying numbers of questions but *potentially* the same difficulty level.
Yes, some students cribbed at the end of the day about not solving the same problem set as their peers, but I had to (politely) ask them to deal with it. If they are thrust into a real-world situation, outside the safe confines of a university setting, they may have to face non-homogeneous conditions on a regular basis with difficult timelines.
I self-tested these different versions and all of them took the same amount of time within +/- 5 minutes for me. Yes, perhaps not the best metric, but fair enough, I suppose. However, a lot of teaching may be subjective. You can try to use a grading rubric that you publish after the examination so that the students feel that it was a fair metric/yard-stick.
This was for a first year engineering class on **Engineering Mechanics**.
Upvotes: 3 <issue_comment>username_2: There are various ways of preventing students from showing up on the wrong day or showing up more than once. The first step would just be to tell them straightforwardly that it's not allowed. You could also assign seats for the exam. Assigning seats has the added benefit of reducing opportunities for copying from a neighbor's paper; they can't just choose to sit next to a friend or sit next to someone they know is doing much better in the class.
>
> I teach a writing course [...] The classes are large, and the school offers me no assistance in managing the exams.
>
>
>
The enrollment of writing classes taught by a single instructor is usually limited because it's not practical for the instructor to grade large amounts of written work from a large number of students. At some schools, writing classes are large, but there are TAs who read papers. If you're a single instructor teaching a writing class to a large number of students, then that's a problem in and of itself; these issues with exam arrangements would then be a symptom of the more general problem.
Upvotes: 4 <issue_comment>username_3: Disclaimer: I have no idea what exams for writing courses even look like, so this is a general answer.
Students **will** talk to each other and thus information about the contents of the first exam is going to leak, no matter what you do. Therefore your goal should be that the information about the content of the first exam does not give any advantages on the second exam (keeping both exams at the same difficulty).
There are basically two ways to achieve this (and you do not need to consistently follow one throughout the exam):
* Make information that could be obtained from the first exam available to everybody beforehand, e.g., clearly state that there will be a question of a given type.
* Make information that could be obtained from the first exam useless in the second exam, e.g., by not asking a similar question.
The main difficulty is now to recognise useful information. One pedantic example to illustrate this:
Suppose, there is a topic which is not central to your course and which is considered difficult or unfit for exams by your students to the extent that students have a good reason to assume that it is not relevant for the exam. If you (do not) ask a question about this topic in the first exam, students taking the second exam have a better estimate of the probability that you ask questions about this subject and thus have an advantage if you (do not) ask a question about it in the second exam – even if it is entirely different. So, if there is such a subject, you either have to announce beforehand that there will be a question about it in each (none) of the exams or you have to ask about it only in one of the exams. The latter case is however problematic considering the difficulty of the exams, unless you have two such topics at hand.
Some further aspects to consider:
* Another not-so-obvious piece of information contained in the first exam is the general way you ask questions and what general aspects you consider important. Giving a sample test is a good way to provide this information beforehand.
* Neither somebody who focusses on the questions of the first exam nor somebody who focusses on everything that was not asked in the first exam, should have an advantage. Thus some deviation from the rule of not repeating anything without announcing it is ok in my opinion.
* No matter what you do, somebody will complain about it.
* In some cases, you can compile a huge catalogue of potential exam questions beforehand (part of this can be the exercises to your course) and give them to your students sufficiently long before both exams. Announce that you will pick questions randomly from this catalogue for each exam (under the constraint of equally difficult exams) and do so. This is very fair, but a lot of work and if your catalogue is too small (which is unavoidable for some subjects), it encourages learning without understanding.
Upvotes: 2 <issue_comment>username_4: >
> Many students regularly attend my lectures during sections that they did not register for, so I will not easily recognize who belongs in which session.
>
>
>
This is handled as follows at my university:
* Students entering the exam room go to the exam proctor before they sit down
* Exam proctor has a list of students enrolled in the class
* Student shows a photo ID to the proctor
* Proctor notes on the list that the student attended the exam on that date, gives the student an [exam book](http://ecx.images-amazon.com/images/I/61YOHQPHG-L._SL1500_.jpg), and tells him where to sit.
This way, students can attend only one exam (and if you want, you can enforce that they attend the section they're assigned to).
>
> I build multiple tests, e.g. "A", "B", "C", etc., which are of similar difficulty, have the same types of problems, but different subjects. "A" is given to one section, "B" to another, etc.
>
>
>
This is OK, although if students aren't sure what types of problems you're going to give they'll still gain an unfair advantage if they hear about the exam from earlier sections. However, you can avoid this if you...
>
> Provided a sample test that students can print and practice with and discuss with me in class.
>
>
>
Giving plenty of sample exams ensures that students thoroughly understand the type of questions to expect. Then, if each section is given a different exam of the same type, but with different problems, there is no advantage to hearing about the "type" of the exam from earlier sections.
Sometimes you can avoid having to build many exams by giving *everyone* about 80% of the exam "content" ahead of time. Then vary the remaining 20% between sections.
For example, in a writing exam where students have to read some text and write an essay in response to a "prompt": give everyone the text ahead of time, then give every section a different prompt to respond to.
(You can do something similar in problem-set type exams, such as those given in science and engineering, by giving everyone the problem scenarios ahead of time and then asking each section to solve something different about the scenario.)
Depending on the exam type, it may or may not be possible to design an exam of this type. However, if it is possible, it certainly reduces the likelihood that students will gain an advantage from hearing about an earlier section's exam.
Upvotes: 3 <issue_comment>username_5: A few of my math profs really REALLY didn't want their questions being posted online as they reused them. Their strategy to make sure no one left with a test was usually a variation of the following.
1) When the student enters the exam room, he or she goes up to the prof at the front and turns in everything to the prof. The student's name is marked off the list of students. Book bags are left at the front of the room where everyone can see.
2) The prof or TA hands the student the exam along with paper, pencil, calculator, etc. The student takes a seat and begins the exam. Each student receives the exact same thing, and is not allowed to bring outside paper, notes, calculators.
3) Leaving the room constitutes finishing the exam (i.e. no bathroom breaks). When the student is ready to leave the room he or she turns in ALL materials handed out at the beginning of the exam.
Upvotes: 2 <issue_comment>username_6: I have had this issue occasionally when a student is sick and sits an exam at a later date from others. Obviously you can write them a new exam, but that is quite onerous, and it is preferable to be able to use the same exam that other students have undertaken, so that there is consistency in the assessment. To deal with this, I always instruct students that if they want to sit an exam at a later date than other students, they need to avoid finding out any information on the content or structure of the exam. This puts the onus on them rather than the other students in their course. Additionally, whenever a student needs to sit an exam at a later date than other students, I have a (pre-prepared) [statutory declaration](https://en.wikipedia.org/wiki/Statutory_declaration) attesting to the following:
>
> 1. I am scheduled to sit an exam for \_\_\_\_\_\_\_\_\_\_\_\_ (course) at \_\_\_\_\_\_\_\_\_\_\_\_ (time) on \_\_\_\_\_\_\_\_\_\_\_\_ (date);
> 2. At the time I am scheduled to sit this exam, other students in this course have already undertaken the same exam;
> 3. Except as detailed in point 4 below, I have not received any information from any other student in this course about the content or
> structure of this exam, including information communicated to me
> indirectly through a third party.
> 4. I have received the following information communicated to me about this exam (use additional page if you run out of room):
>
> \_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_\_
> 5. I do not believe that I have obtained any information about the content or structure of this exam that would put me at an unfair advantage over other students in this course.
>
>
>
When the student applies to sit the exam late, I ask them if they would be willing to fill out this form, to satisfy the university that they are not at an unfair advantage over other students. So far, all the students I've dealt with have been happy to do this. Students fill out and sign this form, and as part of the form, they disclose any information they have heard about the exam. Some will say things like "I heard it was hard", etc. Based on their answers I make a decision on whether they have received any information that would disqualify them from taking the exam. Assuming everything is okay, I let them know that based on their answers I am satisfied that they are not at an unfair advantage and they then sit the exam.
So far as I can tell, students seem to take this document pretty seriously. I cannot say for sure whether any students have lied on this document to sit an exam, without disclosing information they have found out about it. Still, they take care when they're filling it out, and I make sure to advise them that it is a criminal offence to intentionally make a false statement in a statutory declaration. So far I have not had any student who did not want to make a declaration, and I have not had any occasion where I have discovered a student who has given a false declaration.
Upvotes: -1 <issue_comment>username_7: A relatively simple way of making sure that people take the exam only once is to tell them clearly that once a student receives the exam questions, they have to sign them with their name (and possibly an identifying number assigned by your institution, to avoid ambiguity), and that when leaving, they absolutely need to hand in the exam (even if they did not complete even a single task). In practice, even people who would cheat if given a chance won't write a fake name (especially given that writing a fake name could be considered forgery in many, if not most, countries).
However, you can be almost sure that if people take the exam on different dates, at least one of the students will take a picture of the questions and share with the later group, or at the very least tell them what was on the exam. If you can't have all of the students take the exam at once, this is something you need to take into account - so at the very least, avoid questions which are too similar, unless a variation also appeared on the practice exam.
Upvotes: 0 |
2014/03/08 | 3,713 | 15,931 | <issue_start>username_0: I applied to various US Pure Mathematics PhD programs for entry this fall and the responses from my top preferences were not favorable, though some of my lower preferences were. I also have an offer to do "Part III" at the University of Cambridge, which is a 1 year masters degree via coursework (although there is an essay component worth about 1/6th of the assessment). I'm considering the option of accepting this, and then re-applying to the US schools with the hope that my application has improved and I get accepted into one of my higher preferences.
>
> My question is: How do US Mathematics departments view the Part III
> program, and if I do it then will it have a good/bad/no effect on my
> subsequent application?
>
>
>
Some things to consider: The Part III program is intended as preparation for a PhD degree, unlike many Masters degrees that are intended as terminal degrees. Is this well known?
An essay is part of the course - it usually involves giving a unified exposition of several recent papers on a certain topic. Though the content of the essay can be quite advanced, it often does not contain much original research.
The course is from October '14 to June '15, while applications to US schools are due around December '14. This means I will not have had enough contact with Cambridge faculty to get a letter of reference from them. The marks for all the subjects taken are released around June '15, so I won't be able to include any of those marks in my application either. However in the time between now and October I will continue work with my undergraduate supervisor, which should improve my main letter of reference at least a little. My field is algebraic geometry, and my supervisor has said that it would be better for me to go through Hartshorne in the next ~6 months rather than forcing myself to research just yet. So I won't be getting any original research done by the time I apply again. |
2014/03/08 | 1,275 | 5,443 | <issue_start>I've been working in the "private sector" since my late teens. I've gathered enough web-development experience to be able to find work quite easily, thankfully. I was never really interested in getting a degree; I only have a high-school diploma. I was passionate enough to study on my own and learn new things from colleagues and friends.
Web-development is still my passion; however, recently I've been thinking about the actual work that I've been producing over the years for the companies I've worked for. It seems as though even if the work is fun and challenging, I still get very little recognition for it: once the work is produced, my name is barely ever mentioned. I've been asking myself if all these things I'm building are really going to change someone's life.
Then I look at academia and research. Here people are doing things they actually love and study those subjects deeply with passion and generate new knowledge to help others within a specific field.
Has anybody ever switched from an industry job to an academic job? I'm wondering if it's possible to do research without a degree. What suggestions might you have for someone in my situation who wants to get a glimpse of the life of an academic, where you can build and study something you actually love and not always do what "the company" wants?<issue_comment>username_1: If recognition is your key desire, academia will only be marginally better for you than your current situation. Just because your name is on a paper doesn't mean anyone is going to care about your work.
I feel like there are a couple of threads in your question outside of recognition, though. I'll make some comments where I can.
* To transition into academia and start working on your own ideas, you'll probably need to start a degree program. Masters and PhD programs, more than anything, are training programs on "how to work on stuff".
It seems obvious right now that you have hundreds of ideas and the skills to pursue them -- but the academic context is a bit different. Research typically fits into a larger context than a single project, and you'll need to be able to sell your ideas to people who are experts in the area; most research projects that aren't 'consumer' oriented produce papers, not projects, so you'll need to learn how to write papers. The requirement for evidence is (or at least ought to be) high, which means that you'll need to learn what kind of evidence you have for your hypothesis, how to gather it, how to present it. All of this, the politics, the nitty-gritty of putting together a paper, is what you ought to get from a degree program. [You can try to do some of this on your own](https://academia.stackexchange.com/a/15094/1165) to be sure, but it's not an easy ride.
* If you want to get a feeling for what working in academia is like without getting the degree, I'd seriously look into the possibility of becoming a programmer for a university. This helps you get a feel for what the work, environment, and people are like. It could be that if you find the right project, your influence will be sufficient that you can get the recognition you want, without having to get the degree.
* Reading your question, I get the feeling of a grass-is-greener illusion. One thing that might be worth considering is: how much of the problem is just your job? Could you find work at another company, in another niche, doing some other kind of programming that could be better for you? I feel like you could get 99% of your desired outcome not from academia, but from a job change. Maybe you need to go deeper into the stack; work with a company that builds the web technology you use. Maybe you need to go higher in the stack; start building client applications to the web technology you use. Maybe you need to get away from the web... perhaps start looking at transitioning into games, or hardware, or.... the list is endless.
Upvotes: 5 [selected_answer]<issue_comment>username_2: Like Matthew, I get the feeling reading your answer that you don't have an extremely clear idea of what switching into academia would entail, and a pretty starry eyed view of what life in academia is actually like. While it's true that at its best, being an academic lets you work on exactly what you want to, pushing forward the boundaries of knowledge. But it's a long slog to get to that point, and even when you're there, there are a lot of other issues at play.
Of course, if you're independently wealthy, you can do whatever you want, but for most of us, we need to have a job, and just like any other job, a lot of the parameters are dictated for us. You also often have to get outside funding. In both cases, it's true that you aren't literally assigned research projects from on high (though some days that would be a lot easier than coming up with them yourself), but you do have an eye constantly on impressing your colleagues (on various levels) who make or influence decisions about funding, promotion and hiring. Not to mention that one often has teaching or service duties which really are assigned from on high (though admittedly, I think these still have a lot more autonomy than jobs in most fields would). I'm sure you can find plenty of horror stories about how this can go awry just reading the archives on this site. And that assumes you can get a job, which is far from assured (especially if you want to stay in Italy!).
Upvotes: 4 |
2014/03/08 | 9,819 | 39,976 | <issue_start>username_0: When I am preparing slides for a course (as I have been today, weekend be damned), aside from questions of pacing and exposition, I often find myself asking various questions **about the layout of slides** such as:
* Should bullets generally be introduced one-by-one using animations or should the entire slide be displayed immediately?
* Is animation generally good or bad for lecture slides?
* What is a good approach to take for titling slides?
* Should I include "separation slides" to chunk content?
* What is the optimal amount of information per slide?
* Slides should be numbered, but is it better to give the total number of slides on each (e.g., `4/20`) or just the current slide (e.g., `4`)?
These are all small questions but I think they add up to something non-trivial in the overall didactic potential of the slides.
I have my own opinions on these questions and I feel that I have a good intuitive sense of how to structure slides, but my hunches are just hunches.
Hence this question is looking for either:
1. Pointers to scientific studies or other well-argued material on good slide design for teaching
2. First-hand answers to the above questions (and related ones) accompanied by solid argumentation/anecdotal experience (rather than just subjective preferences)
I'm looking for answers that specifically target teaching rather than research presentations. (For me, there is a significant difference.) I'm also looking for answers that target slide layout and design rather than talk structure.
And though I appreciate that the notion of good slide design varies between different subjects, I still think that there is a meaty intersection of good practices that one could follow across all disciplines. It is precisely this intersection that the question targets.
My students thank you in advance!
---
On a side note, [here's a nice slightly-related question on what to do with the last slide](https://academia.stackexchange.com/questions/11014/final-slide-when-teaching).<issue_comment>username_1: This morning I watched a video about [how powerpoint is killing our ability to teach properly](http://www.slate.com/articles/life/education/2014/03/powerpoint_in_higher_education_is_ruining_teaching.html). It can be seen as a lesson in bad slide design. One key message is *if you are reading your slides to the students, you are not teaching.* Ultimately, slides should just contain key information, and you should tell the rest of the story.
Regarding some of the other points:
* Animation of text should be left to Walt Disney.
* There should be one idea per slide (slides are free!).
* Separation slides are a good idea to let the listener know when topics have finished, which is especially helpful if the separation slides are blank (or even black!).
Less is more. If you want to give the students more, prepare a handout.
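If you happen to build your slides with LaTeX/beamer (an assumption on my part; a later answer in this thread uses it), a minimal sketch of producing such a handout from the very same source file:

```
% The "handout" class option collapses \pause and other overlays,
% so the same source yields a printable one-page-per-slide version.
\documentclass[handout]{beamer}
% \documentclass{beamer}   % use this line instead for the in-class version
\usepackage{pgfpages}
% Optional: put four slides on each printed A4 page
\pgfpagesuselayout{4 on 1}[a4paper, border shrink=5mm]
\begin{document}
\begin{frame}{Key points}
  \begin{itemize}
    \item One idea per slide \pause
    \item Less is more \pause
    \item Extra detail belongs in the handout, not on the slide
  \end{itemize}
\end{frame}
\end{document}
```

Compiling once with and once without the `handout` option gives matching in-class and printable versions from a single source.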
Upvotes: 6 [selected_answer]<issue_comment>username_2: I agree with @<NAME> and I think the thing that will be most beneficial, no matter how your slide show is arranged, is making your slides available ahead of time. There's 2 reasons I think this is helpful.
1) If I have the slides before class, I have a clearer picture of where we are going than that one phrase about today's class in the syllabus does.
2) When I don't have to worry about trying to write or type every word on your slide, it's easier for me to listen to what you are actually saying and ask questions, without being scared I'll run out of time to write everything down.
Upvotes: 3 <issue_comment>username_3: As a graduate student, I want to share my experience from a student's point of view. I am assuming that you somehow make your slides available to students as study material. So if that is the case, the amount of information you put on the slides becomes a crucial trade-off. And we have to accept that some students who are not paying much attention during the lecture use those slides to learn the subject. Given that, I like slides that somehow give me a good idea about the topic, independent of the lecture.
I know that you should keep the amount of text on the slides at a minimum level, but to increase the teaching strength of your slides, you can put in as many references to other course material or online sources as possible, so that your slides are actually helpful outside of your lecture, too.
As for the layout (again from a student's perspective):
* Bullets should be OK to support the flow of ideas, but definitely no animations for bullets.
* In general, I am not against animations but they should be subtle and not for text. Maybe you need to explain something on a figure, and a gentle animation helps students to better see what is actually changing from the previous figure.
* I would support titling the slides, as it helps students to understand what you are talking about at that moment, in the highly unlikely event of distraction :)
* Separation slides are also helpful for students to see what the main topics in the lecture are and how they are separated.
* Slide numbers: well, I do not think it matters much, but it doesn't hurt to put a page number like "4/20" (a one-line beamer snippet for this follows the list).
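If the deck is built with LaTeX/beamer (an assumption here; another answer below mentions it), the "4/20" style from the last bullet takes one line of preamble:

```
% Shows "current frame / total frames" at the bottom of every slide
\setbeamertemplate{footline}[frame number]
```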
This is what I think makes for ideal lecture slides. And, for the third time, this is a perspective from a student, and somewhat my personal preference, but I used to discuss these points with my peers a lot.
Upvotes: 4 <issue_comment>username_4: I teach mathematics, which is traditionally done using blackboards. I try to design my slides to recreate the best features of blackboards, as far as possible.
* I use LaTeX with beamer rather than Powerpoint
* I make the slides appear bit by bit, not even a full bullet point at a time. Each step in a calculation appears on its own, so I can talk about it before the next step appears. (A minimal beamer sketch of this mechanism follows the list.)
* I have occasional animated diagrams, in contexts where they actually add something to the explanation.
* Conventional wisdom says that you should have a small amount of information per slide. I don't know if that is good advice for other subjects, but it is very bad for mathematics. Traditionally you can use several large blackboards and thus have definitions, diagrams and all stages of a long calculation or proof visible at the same time. It is much easier to get lost if you do not have all those things in front of you. I spend a lot of time tweaking my slides to get as close to that ideal as possible. Often I have to put a bar across the middle of the slide, with reminders from previous slides above, and new content below.
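Here is the minimal sketch promised in the second bullet. It only illustrates the mechanism (each `\pause` reveals the next step); it is my own toy example, not an excerpt from the course slides linked below:

```
\documentclass{beamer}
\begin{document}
\begin{frame}{Completing the square, step by step}
  % Each \pause reveals the next step, leaving time to discuss it.
  \begin{itemize}
    \item $x^2 + 6x + 5 = 0$
    \pause
    \item $(x + 3)^2 - 4 = 0$
    \pause
    \item $x + 3 = \pm 2$, so $x = -1$ or $x = -5$
  \end{itemize}
\end{frame}
\end{document}
```

For revealing steps inside a single `align` calculation rather than an itemized list, overlay specifications such as `\onslide<2->` can play the same role.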
You can see examples from a number of different courses at
<http://neil-strickland.staff.shef.ac.uk/courses/>
Upvotes: 4 <issue_comment>username_5: Why do you need to use slides? They change the classroom dynamic in a subtle but very important way, making the students look at the slides rather than at you. This is subjective, but none of my best professors used slides, except to show pictures. I strongly recommend using the board whenever possible.
Upvotes: 4 <issue_comment>username_6: I also teach math, as username_4 said he does. I use technology in my classrooms daily -- these days, an Android tablet with screen mirrored to a projector via a Miracast device (similar to an iPad with AppleTV/Airplay). All that said, I consider PowerPoint and its ilk to be bad use of the technology for everyday classroom purposes. I use the tablet in order to have quick and easy access to software for graphing or other visual demonstrations. For example, in a geometry class, I use something like the Geogebra app to show constructions and other highly visual content. But for the basic display of classroom notes, I use the Lecture Notes app for Android to show my handwritten notes, written with a stylus on the tablet screen, like a whiteboard replacement. This addresses Neil's concern about possibly needing to keep whole boards of definitions showing: the app saves the whole, virtually unbounded sequence of pages of notes, so if I need to refer back to a definition, I just scroll back. I rarely if ever use slides in a class. Well, OK, on the first day of class, I put all of the syllabus and other administrative garbage that I have to recite onto slides, because I think my hand would fall asleep having to write it all over and over again for each class.
That said, I think there are times and places when slides are appropriate. If time is at a premium, such as in a 20-minute research presentation, I will use slides to keep things efficient and on track, but I will also always have the software for handwritten notes available in case audience questions take me in a direction not represented in the slides. I am also a longtime TeX/LaTeX user, so these days when I make slides, I usually use HTML-based slides with MathJax.
Upvotes: 3 <issue_comment>username_7: It's hard to answer this without a lot of context--namely the content you will be presenting on the slides.
In general, your slides should enhance and accompany your teaching, rather than attempt to repeat it. The biggest mistake with slide-based presentations is to present the exact same content you are talking about. This leads to an extremely bored audience once they realize they can just read what you have on the slide and then ignore anything you are saying.
I'd suggest two sources of inspiration for you.
The first: <NAME>. While he wasn't necessarily teaching, he was communicating in his keynote speeches and a big part of his show was his slides. [Google will show you a large collection of them.](https://www.google.com/search?q=steve%20jobs%20presentation) Again, he wasn't an educator, so don't literally go with what he did, but I do think his emphasis on 'one key point per slide' is a good baseline rule to go with. His slides were there to be a 'succinct conclusion' to what he was communicating verbally. You will also note that he pared down the content and rarely used bullets and other bits of noise.
The second: <NAME>. I'd of course encourage you to read all of his works, as he's one of the 'grandfathers' of information design. But specifically, his booklet [The Cognitive Style of Powerpoint: Pitching Out Corrupts Within](http://www.edwardtufte.com/tufte/powerpoint) which mentions every graphic/info designer's favorite parable on the evils of powerpoint: [How bad powerpoint design may be to blame for the *Columbia* explosion.](http://www.edwardtufte.com/bboard/q-and-a-fetch-msg?msg_id=0001yB&topic_id=1)
To answer your specific questions, albeit with a healthy dosage of opinion:
>
> Should bullets generally be introduced one-by-one using animations or should the entire slide be displayed immediately?
>
>
>
If you are introducing bullets one-by-one it sounds like you are using your slides as verbatim cue cards. This doesn't serve the attention of your audience very well. It's also tedious and repetitive and ultimately you are just adding more monotony to the visuals rather than getting the audience to focus on what you are saying/teaching. So no, don't do that. It's a distraction, at best.
>
> Is animation generally good or bad for lecture slides?
>
>
>
Depends on what you are communicating. If it's a diagram that can communicate more information via the animation, go for it. If the animation is merely for the sake of animating something, it is likely superfluous [chart junk](http://en.wikipedia.org/wiki/Chartjunk) (a term coined by the aforementioned Tufte).
>
> What is a good approach to take for titling slides?
>
>
>
A slide shouldn't have so much textual content that it needs its own title. :)
>
> Should I include "separation slides" to chunk content?
>
>
>
Not sure what a separation slide is. But, in general, the majority of your content should be coming from you...not your slides.
>
> What is the optimal amount of information per slide?
>
>
>
No more than you need. :)
>
> Slides should be numbered, but is it better to give the total number of slides on each (e.g., 4/20) or just the current slide (e.g., 4)?
>
>
>
The only reason to number slides is to give your audience a countdown as to when it will be over. So, I'd say no, there's no need to number the slides.
One last story: one memorable speaker I've seen is [<NAME>](http://en.wikipedia.org/wiki/John_Maeda) He had given a presentation on digital design. When it got to the point of his talk where he wanted to communicate via slides, there was no powerpoint, but instead an overhead projector and a marker. He'd talk and draw. I found that to be one of the more compelling ways to communicate a point and, if you think back to grade school (for anyone over 40) that's how teaching was done. And didn't we all find the overhead projector and the teacher's marker an engaging way to communicate? It was immediate, personal, and directly connected to what was being said at that very minute.
Upvotes: 2 <issue_comment>username_8: Best advice I have read, maybe in the whole world:
<http://www.garrreynolds.com/preso-tips/design/>
I have applied Garr's advice for over 7 years and it has always worked, to the point that businesses pay me to design and give presentations (and coach senior managers), so he knows what he is talking about.
Upvotes: 2 <issue_comment>username_9: In terms of pointers to *scientific material*, I am unaware of any that are specifically focused on teaching. That being said, in terms of *slide design* I do not share the opinion that teaching is fundamentally different from a research presentation - so good advice for research presentations is, in my opinion, easily portable to the classroom.
Some work I am familiar with are from pioneers in data visualization research, <NAME> and <NAME>.
* <NAME>'s [*Clear and to the Point: 8 Psychological Principles for Compelling PowerPoint Presentations*](http://rads.stackoverflow.com/amzn/click/0195320697). I also see he has a newer book named [*Better Powerpoint*](http://isites.harvard.edu/icb/icb.do?keyword=kosslynlab&pageid=icb.page250941).
* Edward Tufte's essay [The cognitive style of powerpoint: Pitching out corrupts within](http://www.edwardtufte.com/tufte/powerpoint). Note this is also a chapter in his book [Beautiful Evidence](http://www.edwardtufte.com/tufte/books_be). (As already mentioned by username_7)
Tufte's work is more a diatribe about what is wrong with powerpoint. Kosslyn's writing style is to give autocratic advice in text and then have an appendix that points to scientific literature to back up his opinion. So even if you don't agree with Kosslyn's advice it is a good start to a literature review (which would mostly be oriented towards data-visualization and not necessarily experiments specific to slides).
Upvotes: 2 <issue_comment>username_10: From my (student) perspective, there is no such thing as slides that are good both for lecture and self-study. Slides for lecture should contain as little information as possible (following general *good presentation rules*). This will never be enough material for somebody who did not attend the lecture to learn (if you intend to provide such material).
For example, I had a professor who made very good presentations. There was the concept of *business requirements* included in them (just the notion, with no definition, which is good for a presentation). I spent 10 minutes arguing with my colleagues who did not attend the lecture that it does not mean only how much money you have to spend on the project, but also the whole environment the project will be held in. If they had studied without me, they would have been misled. I knew what it was supposed to mean because I noted it down on my copy of the slides.
That brings us to taking notes. My favourite workflow for lectures was when I could print the slides before the lecture (from the Internet) and then take notes on them. This made me concentrate on the lecture very well, because I had to pick out, from what the lecturer was saying, the things that required additional notes, but I didn't have to write everything down (I would not have had time for that).
Remember that using slides makes *you* faster: you don't have to write notions/names/equations on the blackboard, while your students still have to copy them down. You must take this into consideration if you want them to have correct notes (and then learn all the facts properly).
If you **already** have slides that include definitions and **longer texts**, you can quickly make them more **lecture-friendly** by making the most **important** words/terms/ideas **bold**.
Upvotes: 4 <issue_comment>username_11: Besides <NAME> which was already recommended by username_9, I also like [Five ways to reduce Powerpoint overload](http://www.indezine.com/stuff/atkinsonmaye.pdf).
And of course the Gettysburg PowerPoint Presentation, which shows you how to do everything wrong: <http://norvig.com/Gettysburg/index.htm>
Upvotes: 2 <issue_comment>username_12: I'd really like to chime in with a big reply about what I consider really important - that it's you rather than your slides doing the teaching etc. - but I'll try and stick to answering the question. Also, I feel that the kind of teacher who takes the time to go on Stack Exchange to ask about good slide design is probably aware of the big picture already.
I'm a university research assistant with around five years' experience in various teaching assistant jobs, some of which have involved slides, and four years' university experience as a maths student during which I've seen everything from brilliant to atrocious lecturing both with and without slides. Based on everything on presentation design that I've ever googled there seem to be a few basic rules that match well with my experience of which slide-presented lectures I found most engaging and in which of my own classes I noticed the fewest students falling asleep.
**Animations for bullet points**:
As a rule, never. The one exception that comes to mind where this works well for me is highlighting the current point if you are displaying a number of points across a sequence of slides where you want to remind students which point you're currently on. Example: you're showing something's an equivalence relation which means it must satisfy three properties (reflexive, symmetrical, transitive). Here one could put these three keywords in one corner of the slides for all three properties, with the current one highlighted.
If you have a sequence of bullet points each one containing a chunk of text, you probably have a much bigger problem than animation: students will be reading your slides rather than listening to you!
**Animation in general**:
Animations such as spinning-in text, fading in text with "whoosh" sounds and the like I think we can safely dismiss as for five-year-olds and under, and even then only because they won't have seen it too many times before to get bored. At a college/university level it just feels unprofessional to me. That leaves the "stepwise" animation where the teacher shows a concept step-by-step with the help of slides.
Of all the presentations I can remember seeing or giving, it's overwhelmingly had a negative impact. Like all rules this is not without exception but I've seen so many bad animations that I tend to avoid them completely myself nowadays.
In maths/CS, some things that are generally taught using an animation are graph algorithms such as Dijkstra's ([1], complete with animated gifs). I saw this presented twice in an undergraduate course, once with slides, once presented by the TA on a whiteboard - the whiteboard one was better by miles even though it was the very same example! I don't have any hard evidence for this, but my feeling is that the teacher doing the steps themselves (whether blackboard, whiteboard or overhead projector) makes students pay much more attention to the teacher than if they're just clicking a mouse to advance the slides. Animated slides or gifs can work when the student's studying on their own, a two-entity interaction between the student and the material - which is not what's supposed to be happening when you as a teacher are in the classroom.
**Titling, separation slides**:
The big difference for me is between slides for presenting in class and handouts. Handouts need separation and titling because that's what gives them structure, in the same way as chapter/section titles in textbooks - in class, as long as I'm in the room I can do that myself, so I need much less support on the slides. As a rule, I'd say the better a slideshow is for self-study, the worse it is in class.
Something I've had a good experience with is separation slides that are simply blank - the students pick up my style quickly enough that when the slide goes blank, it means the next thing they need to pay attention to is me, as I'm about to tell them something, either by saying it or writing it on the board. If they wait for the next time I'm going to show a slide, they'll miss a lot in between.
My aim here is that I'm giving the class and slides come up every now and then to support me, not the other way round that the slides are driving the class with me mainly there as mouse-clicker and occasional commentator. With this style, I find that I need almost no "structuring" slides.
**Optimal amount of information**:
One concept. Unless you have to pay a charge per slide you show, a three-bullet-point slide is almost always better off as three slides. (Again, this does not apply in the same way to handouts, especially when there's a per-page printing cost.) If something can be presented with a single keyword, that's better than a whole sentence. Better still, replace words entirely:
* bad: "(bullet) Dijkstra's algorithm has running time O(m + n log n)." (This slide is essentially giving a parallel lecture to yours.)
* a bit less bad: "Running time: O(m + n log n)" (especially if you've just been discussing said algorithm)
* better: "O(m + n log n)" (in large font; and give the students the context by speaking to them)
* good: "(icon of stopwatch) O(m + n log n)" (Apparently the brain can process images and words in separate areas, i.e. in parallel, so an icon/picture actually reinforces what you're saying rather than contends with it. Useful tip taught me by a graphic designer.)
**Numbering**:
For handouts, something like "course title, date/number of lecture, slide x of y" - imagine someone drops their stack of paper with the printed slides, then the footer/numbering should be sufficient to help them reassemble it all. For slides in class, I'm in favour of "less is more". The more text there is on the slide the more it distracts, which is why a footer as above is terrible for a presentation. I'd stick with just the slide number - it conveys useful information (a student can at the end of class ask you "I have a question about slide 14"), I can't see how the total number of slides is useful during class itself, especially as some slides will take longer than others. (If someone really wants to know how long it will take until this class is finally over, they can always look at the clock.)
[1] <http://en.wikipedia.org/wiki/Dijkstra%27s_algorithm>
Upvotes: 2 <issue_comment>username_13: Honestly, the best slides are those that contribute to the lesson being taught, but don't reiterate exactly what you're saying. The worst slide design is bullet points that are, verbatim, what you are saying. A common misconception is that PowerPoints are designed to use bullets to present everything. While bullet points help to organize some information, generally you don't want enough information on each slide to warrant bullet points. Walls of text stop people from paying attention: at the very least to the PowerPoint, at worst to the presentation as a whole. Depending on your subject, you can use short two-line quotes for a slide, a mathematical equation, etc.
The best use of PowerPoint is for visual cues. Many top-tier presenters put no more than a single picture on each slide. Something simple that either helps drive home their key points or makes it easier for the viewers to remember what is being spoken. The key here, though, is to make sure the picture is simple, as oftentimes people end up using pictures that are distracting, drawing the viewers' attention away from the information being presented.
As far as animations go, they should be used sparingly at best, and it doesn't hurt to not use them at all. Again, this goes back to the distracting aspect of things. In addition, if you do use animations, be certain to not use too long or too many animations in a slide, as it does nothing other than slow down the presentation and break the fluidity of the presentation as a whole.
Upvotes: 0 <issue_comment>username_14: From 20+ years of Powerpoint use in instruction, here is what I have learned:
**Should bullets generally be introduced one-by-one using animations or should the entire slide be displayed immediately?**
It depends. If you really know the material and want students to focus on your elaborations rather than furiously write down what is on the slide and thus ignore you, then use a "slide build". If you are not concerned with that issue, put it all up there at once.
**Is animation generally good or bad for lecture slides?**
It depends. It can be very useful in adding emphasis to a key point or assisting with comprehension (humans learn best when you tell a story with a beginning, middle, and end). If you have to explain a sequential concept (timeline, multi-step process, etc.), animations can help students visualize and retain the sequence.
**What is a good approach to take for titling slides?**
Titles seem important to keep students on track. Multiple slides on the same topic labeled "Background Information (1 of 3)" seem to be the easiest to follow.
**Should I include "separation slides" to chunk content?**
Sure. As was said previously, slides are free. Any help with structure is good. The best use would be to include some type of mental "hook" on such slides to pique interest for the next section.
**What is the optimal amount of information per slide?**
There are a lot of opinions I've come across. The simplest is one topic with slide content adhering to the "6 x 6 rule" of no more than 6 bullets of 6 words each. Less is better. Slides provide a frame upon which the teacher elaborates with the details. You can always have duplicates of one slide, with a slide in between containing just a function or some other detail you want students to focus on; flip to it, then back to the 6 bullets.
**Slides should be numbered, but is it better to give the total number of slides on each (e.g., 4/20) or just the current slide (e.g., 4)?**
4/20 works well with more mature audiences. For younger students (K-12) it doesn't seem to matter much in my experience.
**Additional thoughts**
If you can break up instruction into segments of 5 minutes or less, alternating with some other activity that takes 15 minutes or so, people seem to learn a lot more and stay engaged. Alternate back and forth for the allotted time.
Walk around in a random pattern ("bumblebee technique," as one of my students called it). I can't find the link but supposedly research has shown that it helps an audience maintain attention.
Use 30 pt or larger font in sans serif. This will keep words to a minimum and make it easy to read. Once again—some research I read a long time ago.
Don't use color-gradient backgrounds; they often make text hard to read. Use high-contrast text and background.
Work in ways to check on comprehension throughout the instruction. Whether you poll students using their phones and texting apps, verbally call on students, encourage questions, etc. You have to think hard about your comprehension feedback loop.
Bottom line: remember it is an assistive technology, and only you, the instructor, can make it interesting and memorable!
Upvotes: 2 <issue_comment>username_15: Since a lot of the answers are perspective-driven rather than evidence-driven, I will give my own perspective and then accept the most popular answer.
First off though, I wanted to address this comment I made in the question:
>
> I'm looking for answers that specifically target teaching rather than research presentations. (For me, there is a significant difference.)
>
>
>
The difference is that for research talks, the meat of the content is in the paper already. The talk is an advertisement: a flavour of the content in the paper. When I prepare research talks, I'm quite happy to keep things at a high level, to avoid bullets almost entirely, and to let those few people in the audience who are interested in more detail refer to the paper.
For teaching, the meat of the content may be elsewhere: it may be in a book or it may be in handouts. The level of textual detail in the slides does depend on a balance of what other material is made available alongside the slides, what can be noted in the talk, etc. For example, I'm currently preparing a course on a new area that is without an authoritative text-book; I'm preparing the course as I teach it. Though I could provide separate handouts, I prefer to use the time I have to capture the content in the talk/slides. Currently my strategy is to write text-heavy notes and annotations into each slide (where needed) to give the context and some description for the slide. This is lower-effort for me, meaning I can keep the slides for introducing topics and keep everything together, and the students can refer to the slides as handouts without being hit over the head with text during the talk.
Also, the idea of using (only) the board is in principle good for courses that are, e.g., mathematical in nature, and where teaching the process is important. However, using the board is far too slow for subjects that wish to cover a lot of factual knowledge. For example, if I need to teach the table of elements, I'm not going to draw it on the board. (Furthermore, I remember many a board-heavy lecture as an undergrad where I concentrated solely on transcribing everything to read later ... I think a lecture where a student is *constantly* writing is a bad lecture.)
Anyways, back to answering my own questions (parts of my answers are indeed informed by/similar to other opinions):
>
> Should bullets generally be introduced one-by-one using animations or should the entire slide be displayed immediately?
>
>
>
Bullets should be used sparingly and should contain short material, but they are useful to present taxonomies and high-level summaries. They also save time in preparation (vs. preferable visual slides). Bullets that are logical consequences of other bullets ... that follow from what was discussed ... should be hidden until that conclusion is natural. Bullets that answer questions should be hidden until the answer has been arrived at (interactively).
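As an aside, for anyone composing their slides in LaTeX beamer (an assumption on my part; the question itself is tool-agnostic), this kind of stepwise reveal is done with overlay specifications. A minimal sketch, with illustrative content:

```
% Minimal beamer sketch of stepwise bullet reveals (content is illustrative).
\documentclass{beamer}
\begin{document}
\begin{frame}{Equivalence relations}
  \begin{itemize}
    \item<1-> reflexive    % shown from the first overlay onwards
    \item<2-> symmetric    % revealed on the second click
    \item<3-> transitive   % revealed on the third click
  \end{itemize}
  % The conclusion stays hidden until the discussion has arrived at it.
  \uncover<4->{All three hold, so the relation is an equivalence relation.}
\end{frame}
\end{document}
```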
>
> Is animation generally good or bad for lecture slides?
>
>
>
Animation for aesthetic purposes gives the feel of a board-room presentation in a marketing department. Animations should be as minimal as they can be ... for heaven's sake, no whooshing around. However, animation of graphics can work well when explaining, for example, the dynamics of a system. It needs to be done well, however, and in clear stages! For example, here's a self-explanatory animation to teach bubblesort (from [Wikipedia](http://en.wikipedia.org/wiki/File:Bubble-sort-example-300px.gif)):

... it's faster and cleaner than going to the board, and one can even *see* the worst-case performance.
People who say "*never use animation*" (or make similar blanket statements) I feel lack imagination.
>
> What is a good approach to take for titling slides?
>
>
>
Since the title of the slide is often the only substantial text on a slide, I use whatever title helps set a context for that slide in relation to others. I don't really believe in slide titles like "Sorting Algorithms I", "Sorting Algorithms II" ... instead ...
>
> Should I include "separation slides" to chunk content?
>
>
>
Yes. I find separation slides (with a short large title) to be very useful to mark the end of a theme, to give a chance for pause and recap, to emphasise the flow of the talk, and also to give a brief opportunity for questions before moving on.
>
> What is the optimal amount of information per slide?
>
>
>
That depends. Slides are generally free and content is easier to consume in sparse slides. Generally a picture or a diagram you can point to while talking is sufficient. But occasionally you might have a lot of content that is interdependent, such as for comparison:

([Taken from Wikipedia](http://en.wikipedia.org/wiki/Sorting_algorithm).)
This table is dense *but* it is best presented in its entirety (with the help of colour) for the purposes of comparison. If the complexity of each algorithm were split over different slides, the comparison would be difficult to see.
A good rule of thumb is to include a ***minimal*** amount of content that stands as a cohesive whole: put the question and the answer on the same slide (since one makes little sense without the other), put the comparative data on one slide (since the comparison cannot be drawn otherwise), etc.
>
> Slides should be numbered, but is it better to give the total number of slides on each (e.g., 4/20) or just the current slide (e.g., 4)?
>
>
>
I believe that parts should be numbered rather than slides. For me, `4/20` can sometimes be the psychological equivalent of making an audience watch a pot boil. Progress should instead be *emphasised* on the level of learning and topics (e.g., using separation slides) rather than an arbitrary slide numbering. "*We've looked at X, now we're going to look at Y and Z.* ... *Okay, now that we've covered Y, all that's left for today is Z.*"
But slides should certainly be numbered! The audience/students may wish to ask a question or make a note about a specific slide!
Upvotes: 2 <issue_comment>username_16: >
> Should bullets generally be introduced one-by-one using animations or should the entire slide be displayed immediately?
>
>
>
Teaching should be about telling a story. Whatever you are teaching about should have some sort of flow -- a beginning, a middle, and an end. I teach IT Networking for a living, and often my three-step "story" is: Define the problem, Define the solution, Define how the solution fixes the problem.
Assuming you are following something like username_14's 6 x 6 rule and only have a reasonable amount of text on each slide, it is *essential* to have the bullets appear one at a time.
If everything shows up at once, the students will sit there and read your slide, and ignore your story. But if there is only one bullet to read (of only ~6 words), they will have read it in the first ~10 seconds; then their attention is back on you, and on your story. Even the students who insist on copying everything on the slides into their notes will only be writing for ~20 seconds before they give you their attention back, instead of writing for the next 60-120 seconds and "coming back to" you when you are partway through your story, potentially having missed something important.
Remember, most of the information will come from your speaking and lecturing. The slides are just little bullet points, often mostly to keep you (the instructor) on track, but hopefully also to give the students "hints" for recreating your lecture when they are studying later.
>
> Is animation generally good or bad for lecture slides?
>
>
>
As a general rule, keep it minimalist. HOWEVER, do not take that to mean you should avoid it entirely. A little animation can create a bit of visual stimulation that simple text cannot.
For text, the only acceptable animations are "appear" and "fade", and if you use fade, it should *always* be set to "very fast" speed (0.5 seconds). The effect is barely noticeable when compared to simply appearing, but makes for a somewhat more 'graceful' appearance of the text. It gives the impression of a more fluid presentation... especially if you do a good job of introducing the bullet before it appears on screen.
For pictures, images, or abstract concepts represented visually, use animation *if it makes sense*. One of my classes discusses encryption between two parties. I actually have a message (a letter) move back and forth between my Sender and Receiver (represented as cartoons or faces) as I discuss what each party does with the message to encrypt/decrypt and verify the contents. The animations are simply the "move right" and "move left" animations, also set to very fast.
The vivid/blatant animations like whoosh, spinning text, bounce, spiral in/out, and the others pretty much *never make sense*, and really shouldn't exist in any professional or higher-learning teaching setting.
>
> What is a good approach to take for titling slides?
>
>
>
Text slides ought to be titled. When creating curriculum, most people start with an outline. Assuming an outline looks like this:
```
Course Name
* Chapter 1
- Section A
* Idea I
```
Each text slide will be titled with "Idea I", or "Idea II" (not Idea itself, but the actual content of that idea, of course).
If I have a slide which is pure infographics or sequence animations, then I might omit the slide title. This cues the students in to watch the screen, and the furious note takers to put down their pen for a second.
>
> Should I include "separation slides" to chunk content?
>
>
>
Yes, absolutely. There is no reason not to. It gives the students a visual break and the instructor a 'pause' slide to check in with the students, see if there are any questions, or check the time to ensure you are giving students breaks every 45-60 minutes.
Using the outline example above, I might not put a separation slide between each Idea, but I definitely would between each Chapter and Section.
>
> What is the optimal amount of information per slide?
>
>
>
This is often debated, but can be summarized as "less is more". I like what username_14 said about the "6 x 6 rule". That is a good *guide* to follow (but maybe not a hard-and-fast rule). Again, most of the content should come from the instructor's voice. For example, I might use these bullets to "teach" the entire plot of [Disney's Aladdin](https://en.wikipedia.org/wiki/Aladdin_(1992_Disney_film)#Plot):
* Aladdin, the street rat, unexpectedly acquires a magic lamp
* Aladdin wishes to become a prince to impress Jasmine
* Jafar thwarts Aladdin, and tries to marry the Princess
* Aladdin outsmarts and defeats Jafar
* Aladdin frees the Genie
Obviously, if all you read was the bullets above, you wouldn't see how [Aladdin was the most successful film of 1992](https://en.wikipedia.org/wiki/Aladdin_(1992_Disney_film)#Theatrical_run). But with the instructor providing the content, you might get an idea. And if a student was in the class and heard the story, they might be able to recreate it given just the bullets above.
>
> Slides should be numbered, but is it better to give the total number of slides on each (e.g., 4/20) or just the current slide (e.g., 4)?
>
>
>
I prefer "4/20" if the "20" is the number of slides in the current Chapter (see outline above). This allows adults/students to time their bathroom breaks, etc, knowing a break is coming in 16 more slides, assuming the teacher is nearing the 45-60 minutes of lecture mark.
If the "20" is total slides in the presentation, then using "4" or "4/20" produces the same effect.
But definitely, the slides should be numbered. It helps when students want to refer to a specific slide, or when they point out a spelling error: you can just jot down the slide number and fix it later.
Upvotes: 1 |
2014/03/09 | 833 | 3,736 | <issue_start>username_0: I submitted a manuscript last year to an IEEE journal, and the paper was accepted for publication. I recently submitted a new manuscript to the same journal, but, this time, the paper was rejected, and I was told that I should submit to another journal since my paper was not within the journal's scope. The two papers both target similar applications. Furthermore, the second paper uses some work in the first paper, making substantial improvements at solving a much more difficult problem. Based on the reviewer's comments (and my own personal assessment), I do not believe that the work presented in the second paper was "incremental" in nature. Far from it. Two of the three reviewers gave positive reviews. The third reviewer, however, mentioned the "outside of journal's scope" issue and the associate editor and editor-in-chief both sided with the 3rd reviewer, and the paper was stopped dead in its tracks.
I'm scratching my head trying to come to terms with how in the world my first paper could be accepted by this journal, while the second paper, which targets the same application space and achieves substantial improvements on a much more difficult problem (backed up with measured results), could be rejected. Any ideas?
1. It was really out of scope last time, but they either
* didn't notice and mistakenly accepted it, or
* didn't have enough in-scope papers to fill the issue, so they accepted yours even though it was slightly out of scope.
2. They made a mistake and your new paper is actually in-scope. However, you won't win any friends or improve your chances by trying to convince them of this.
3. The journal has changed scope slightly in the last year, or is trying to change scope slightly now, or different editors have different ideas of what the scope is (as suggested by NateEldredge)
4. There was some small detail about the first paper that made it in-scope which the second paper lacks, which we couldn't possibly identify for you.
Upvotes: 6 [selected_answer]<issue_comment>username_2: The question and most comments assume that the publication process is more rational than it really is. This happened to me very recently: journals A and B rejected my paper one after the other; in both cases the editor and/or referee said that I should submit it to journal C. So I did, and now journal C tells me (with reports from two referees) that my paper is interesting and technically correct, but not a good fit for their journal.
Upvotes: 3 <issue_comment>username_3: There could be any number of reasons. Ultimately, since your two papers were different, it's perfectly possible that one was within scope and the other wasn't.
Perhaps your first paper was on the borderline of the journal's scope. Even if it wasn't, perhaps the extra analysis you had to do for the second paper was outside the journal's scope. For example, the people at CERN had to develop some serious computer systems to process all the data coming from the Large Hadron Collider. Those systems were completely necessary to solve the harder problems resulting from experiments with the LHC, compared to those done before the LHC was built. However, a detailed description of those computer systems would, I assume, be off-topic in a physics journal.
Also, remember that there are some papers that everybody will agree are within the journal's scope, and there are many papers that everyone will agree are out of scope (e.g., political history in a biology journal). Everything else is subjective: some people will say it's in-scope, some people out. Some people will say it's in scope today but, if you ask them again in a year's time, they'll say it's out.
Upvotes: 2 |
2014/03/09 | 1,000 | 4,066 | <issue_start>username_0: I have three close friends who are ambitious and young junior lecturers at universities in Thailand, Indonesia and Singapore who are interested getting academic jobs in a top-tiered American university.
How might they go about achieving this? What can they do in the meantime to increase their chances, compared to academicians who are already based in the US?
Neither of them are American citizens, although one of them studied in the US and another in the UK. They all work in the humanities and social sciences. I know that entering the academic job market in the US can be expensive, with all the traveling. And my friends live more or less at the other end of the world.<issue_comment>username_1: To give some thoughts on my own question:
Foreign lecturers in the US are common, but from my anecdotal experience (I am not an academician), a lot of them began their academic careers in America, probably transitioning from being grad students there.
For academicians who used to be based at universities outside of the US and Europe, my impression is that they were mid-career and already well-known and respected in their fields before they made the move. They were usually specifically recruited by the American university.
Having some sort of visiting professorship stint in the US and presenting work at conferences in the US might help with the exposure, I guess, but I am not sure.
Upvotes: 2 <issue_comment>username_2: As discussed here ([Is it more difficult to score a Tenure Track position in the US when applying from outside?](https://academia.stackexchange.com/questions/17694/is-it-more-difficult-to-score-a-tenure-track-position-in-the-us-when-applying-fr)), getting a US tenure track position from e.g., Thailand might be tricky. JeffE pointed out that it is not necessarily impossible, however, your friends would likely need to be on the top of their fields in that case. Your friend from Singapore seems to be a different case, as the universities in Singapore (at least NTU and NUS) have an **excellent** reputation around here (something that can, unfortunately, not be said about universities in Thailand or Indonesia).
In any case, my tip is to **try to get a postdoc position in the US** first. Postdocs are often reasonably easy to get into (at least in CS, can't really say for social sciences), and can act as a step ladder of sorts to faculty positions. Anyway, your friends need to be aware that the academic job market is no piece of cake in the US. Hence, no matter what they do, they need to expect that getting a professorship at a top university may simply not work out, so they should have a fallback plan to account for this case.
Upvotes: 3 <issue_comment>username_3: 1. **Publishing.** If your field has "A" journals then publishing in them will add to your credibility.
2. **Co-authoring.** If you can manage to have co-authors in the U.S., you can come over during your sabbatical or visit their school for a few months. Make more connections, gain more visibility, present your work in informal or formal workshops at all the nearby research schools. Once people know who you are, they will think of you when a need arises or your application will carry more weight when there is a systematic search for a candidate.
3. **References.** Strong letters from reputed academics that are known to US schools.
4. **Networking and presenting at conferences.** Networking at top conferences. If you can manage to have your work be presented at top conferences, that would give you lots of visibility and also opportunities to network.
Staying and traveling in the US is expensive, but the harder part is reducing the information asymmetry (about your future publishing potential) between you and your potential employers, and also convincing them that you would join if you did get an offer. For example, some school in rural Virginia may want to hire you but probably won't try, because they don't think you will come to live in a rural culture that would be very alien to yours.
Upvotes: 1 |
2014/03/09 | 6,610 | 27,378 <issue_start>username_0: So I reviewed the "[What are the advantages or disadvantages of using LaTeX for writing scientific publications?](https://academia.stackexchange.com/questions/5414/what-are-the-advantages-or-disadvantages-of-using-latex-for-writing-scientific-p)" question on this forum and am sitting on the fence at the moment about whether or not to use LaTeX to write up my master's thesis.
I get the feeling that it is best suited for Scientific work but my MLitt is in History. I have searched my university website about LaTeX and most results come back from the maths department.
I am a part-time research student, so my thesis will be approx. 50,000 words. At the moment I am using LibreOffice (I'm a Linux user - Ubuntu distro) to write up each chapter as a separate document, which I was going to bring into a master thesis document. I am using Mendeley to manage all my footnotes and bibliography.
I'm going to be meeting my supervisor over the next couple of weeks and would like to discuss the matter with him as to if I should/can use LaTeX. I'm sure how familiarity or usage of LaTeX within the History department will impact on my decision but would also like to prepare my thesis in the best possible way.
>
> Edit 10/04/14: After a meeting with my supervisor, it appears that the History department has no preference on software for writing the thesis. The only requirement is that the final thesis for the defence is printed ring-bound and the (hopefully!) accepted thesis is a hardback bound copy. My last written piece for my supervisor was done in LaTeX, using Texmaker on Ubuntu and then exported to PDF, and other than some tweaking we need to do to the citation styling, he was quite happy with the output. His advice was to use whatever software I was comfortable with (although he had never heard of LaTeX).
>
>
>
I would be grateful for answers from people who have used LaTeX in the Humanities area so as to be best prepared for my own decision on whether or not I would like to use it.
>
> (Edit 10/03/2014) Just based on some of the answers, especially in
> relation to the learning curve with LaTeX, here is some more info that
> may be useful. Probably about 95% of my thesis will be text, but I
> shall also need to insert some images (maps and photos) and will
> probably be entering some tables with stats. As stated above, I use
> Mendeley as my reference manager and have read some blogs saying it
> is compatible with LaTeX, so I think I would continue to use it if I go
> the LaTeX route.
>
>
><issue_comment>username_1: It is quite likely your advisor has never heard of LaTeX, so don't expect too much from discussing it with him.
LaTeX has a fairly steep learning curve but it is good for working with references and *great* for working with math. Since you don't need the latter at all, it may not be worth the effort for you.
Upvotes: 3 <issue_comment>username_2: >
> I'm sure how familiarity or usage of LaTeX within the History department will impact on my decision but would also like to prepare my thesis in the best possible way.
>
>
>
I would say that it should impact your decision strongly. You should prefer a tool that is used by your colleagues and supervisor to one that is slightly better intrinsically. They are the people who know what you need to do, and how to accomplish it with the tools they know.
I personally prefer LaTeX to LibreOffice, but I would guess that the combination of LibreOffice and Mendeley have all the features you need. (One in particular that I'm going to call out is change-tracking. *Enable it as soon as you start, or you'll wish you had before long*.) But the advantage of knowing your tool and having colleagues who know your tool outweigh most of LaTeX's advantages.
Upvotes: 4 <issue_comment>username_3: To answer the question with a clear yes or no would be to oversimplify things. LaTeX is used for much more than just mathematics so I would not hesitate from that point of view. Learning something new is always useful so again, no argument there. What you need to ask yourself is if you are interested in learning LaTeX. this may depend on whether or not there are others using it in your neighbourhood. Being completely alone is tougher than having some other persons with whom to work when learning. Another question is to what extent LaTeX is used in your field for journal publications etc. So, try to assess how much use you will have in your field from learning it. Since LaTeX can be used for writing papers, books and reports as well as for making presentations (a la PowerPoint) and posters, it includes bibliography handling, it is a very useful tool for any field for scientific writing.
Upvotes: 3 <issue_comment>username_4: I'm finishing up a PhD in philosophy that I've written in LaTeX. Here's some suggestions:
* make sure your advisor is ok with leaving you comments in pdf. I suspect he or she will not understand the question and will not be able to give you any feedback unless you submit chapters in word format. This is a deal breaker. Don't make any more problems communicating with your advisor than absolutely necessary.
* lots of academic journals in the humanities still don't accept submissions in pdf or latex source form. If you are planning on submitting your stuff to a journal, you might save yourself time writing in word format.
* there are some tools available to convert LaTeX to RTF, HTML and other formats. tex4ht is the best.
* If you do decide to go LaTeX, don't get lost in the minutiae of learning how to tweak everything. It's easy to lose lots of time learning new packages and stuff when you should be writing, writing, writing. Use the wikibooks latex guide as your quick start guide when you need to learn how to do something fast.
* Especially if you're on Ubuntu, don't get the LaTeX distributed through Canonical's repositories. It's usually out of date (haven't checked in a while). Just go on and get the vanilla TexLive 2013 distribution from CTAN.
* The tex.SE site is really, really good. Like ridiculously helpful.
* If you are familiar with version control programs like git, mercurial, or svn you can actually keep a very precise idea of exactly how your thesis has grown over time. You can roll back changes, etc. This is kind of advanced stuff for LaTeX, so I wouldn't spend like a lot of time learning this stuff if you aren't already familiar with it, but if you are, it can be really helpful. EDIT: Per @henry's comment below, see the following [guide](http://rogerdudler.github.io/git-guide/index.html) by <NAME> to get started with git.
Upvotes: 7 [selected_answer]<issue_comment>username_4: When my husband did his master's, he used LaTeX to write his thesis even though his supervisor preferred Word and everyone else in his lab used Word. He encountered some pros and cons.
***Pros of using LaTeX:***
* Word gets very slow and buggy once your documents are past a certain length, say forty or fifty pages. One of my husband's friends spent a day renumbering all of the figures in his thesis after the numbers mysteriously disappeared. I'm not sure if this issue exists with LibreOffice, but it may.
* You mentioned breaking your thesis into smaller documents and combining them later. This is easily and commonly done with LaTeX; ~~it would be considerably harder with LibreOffice~~. [Edit: [derobert](https://academia.stackexchange.com/questions/17961/should-i-learn-to-use-latex-to-write-up-a-history-masters-thesis#comment36232_17967) pointed out that LibreOffice supports this through a "master document" feature. I wasn't familiar with it.]
* It's much easier to change the formatting of the document at the last minute if you discover that, e.g., your margins don't match your university's specifications or your references are formatted incorrectly. It's also easier to keep the formatting consistent.
* The results are more aesthetically pleasing, if you care about things like ligatures and kerning.
***Cons of using LaTeX:***
* LaTeX has a much steeper learning curve, as others have mentioned. If you don't need to use it after you're done your thesis, it may not be worth the time investment.
* LaTeX forces a slightly different editing workflow since you can't turn on Track Changes. Your supervisor will have to mark up the PDFs you produce or add comments to the tex file itself. This may make your supervisor less happy about your choice.
My recommendation: use LaTeX to write up something short that you need to write anyway to see how it works. Then play around with some of the features you'll need for a thesis: add a footnote, a reference, or a figure. Try the \include command, too. That should give you a sense of whether it's something you want to continue with.
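For the curious, a minimal sketch of such a master document (the file names are illustrative):

```
% thesis.tex -- master document; each chapter lives in its own .tex file.
\documentclass[12pt]{report}
\includeonly{chapter2}   % optional: compile only one chapter while drafting
\begin{document}
\include{chapter1}       % pulls in chapter1.tex
\include{chapter2}
\include{chapter3}
\bibliographystyle{plain}
\bibliography{thesis}    % references stored in thesis.bib
\end{document}
```

The `\includeonly` line is the payoff: you can recompile a single chapter quickly while drafting, and page numbers and cross-references to the other chapters are still resolved from the previous run.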
Upvotes: 5 <issue_comment>username_5: For some people, the crucial features of LaTeX are portability and permanence. With LibreOffice and (especially) MS Word, you are essentially at the mercy of whether developers decide to make all future versions backwards compatible (I have a couple of papers I wrote in grad school that are effectively inaccessible until I find a machine with MS Word 98 for Windows). In contrast, with LaTeX, the source file is just plain ASCII text, so it will be forever readable and editable in any computer, regardless of operating system.
If your advisor is not LaTeX-friendly, you can have a look at [this other question](https://stackoverflow.com/questions/829408/extract-text-from-tex-remove-latex-tags), the answers to which provide a number of tools to strip all LaTeX tags from a file, so you are left only with the plain text. From my experience, that is an acceptable compromise to most people.
Upvotes: 4 <issue_comment>username_6: Definitely no.
This may sound like a flippant response but it's actually grounded in experience and much thought.
I'm in a STEM discipline where LaTeX is considered by faculty to be the ONLY way to do word processing so I've had time to really formulate my viewpoint here.
Note: I use Word here but I assume LibreOffice is of the same quality/functionality.
1. LaTeX *used to be better* than Word in terms of controlling your document with precision and with things like formatting, bibliographies, and equations. Now that Word has evolved to its current state, there is no advantage to LaTeX unless, as someone has already mentioned, you are into kerning and typography-type stuff. You can easily adjust margins, bibliographies, styles, equations, etc. in Word these days. It's just a question of which you learned the idiosyncrasies of first and more thoroughly.
2. Word is easier and more universally accepted. Everyone knows what to do with a docx file. How would an older professor in a history dept, who may not be a tech person, know what to do with .tex, .cls, .bib, or .ttf files? If you wanted to get feedback you'd have to produce a PDF, and then getting comments would be annoying, as others have mentioned.
3. I just used LaTeX for the first time to write 2 papers in the past couple weeks. I used the ShareLaTeX site and a template from an academic journal in my area. Without ShareLaTeX I could not have done it in a timely fashion. It does automatic versioning the entire time (at least it did for the time I used it), so every single time you make a change, it keeps a copy of the document before the change. You can share it with others, as the name implies, and it will function like Google Docs in this regard. You can specify which compiler you want to use and some other options. It also lets you upload any and all files you might have associated with your document, including fonts, images, template class files, etc. The only issue is that when fine-tuning formatting, make sure you look at the PDF, because the little preview display isn't as high fidelity.
4. Inserting figures can be annoying, especially with various file types. I had to convert all my figures to JPEGs to avoid issues I had with PNGs. They don't always go where you put them; LaTeX puts them wherever it sees fit. Yes, there are settings to try to force LaTeX to put them where you want instead of where it wants, but do you really need to be having a battle of wills with your word processor?
Summary: There is just absolutely no reason I can see to use LaTeX for the average person who is not OCD about kerning, ligatures and pristine fonts. Yes, it's fun at first playing with all the options and features, but after a while you just yearn for the simplicity of Word. In past years, before the current state of mainstream word processors, maybe LaTeX had its advantages, but now I get better results using Word and its equation editor and formatting tools than with LaTeX. If you don't have equations and scientific things to display, as I suspect you might not, I would just write it in Word. If you want to play around with LaTeX for the typography aspects, then copy what you've written into a .tex file, experiment with the nitpicky font details, and see that when it's all said and done... the difference in output may be imperceptible.
Upvotes: 2 <issue_comment>username_7: I recently made the mistake of writing a novel with each chapter as a separate document in LibreOffice. When I needed to combine them all into a single document, it wasn't pleasant. So that might be a consideration, depending on how many chapters your thesis has. As Imi mentioned, LaTeX definitely has better support for combining documents.
You might be interested in [LyX](http://www.lyx.org/). It's a word processor that uses LaTeX as a backend. You can get almost all of the abilities of LaTeX, but the learning curve is a lot shallower since it has more of the standard word processor features. It's a big help in following shane's advice not to get caught up in the minutiae of LaTeX. I use it for linguistics, where it seems to be pretty widely known. (Linguists have some of the same typesetting challenges as the math people). Newer versions have change tracking, as landroni mentions in the comments. Also, if you do decide to use source control, LyX can help you manage it, although it might take some coercion to set it up. LyX uses its own file format by default, but you can export to LaTeX.
As shane also mentioned, there are tools to convert LaTeX into other formats. [Pandoc](http://johnmacfarlane.net/pandoc/index.html) can even convert LaTeX into Word or Open Office format. Both LyX and Pandoc are available for Ubuntu.
So what I would probably do in your situation is write in LyX or LaTeX, then use Pandoc to export that to docx or odt for your advisor. You can read your advisor's comments in Word, make changes to your original, and export again. It sounds convoluted, but I do think you'll gain a lot in flexibility and tool support over using LibreOffice.
**EDIT**: Yes, you probably will have to put in some extra work to make the docx files generated by Pandoc usable. On the other hand, LyX and LaTeX can also save you a ton of time that you might have used struggling with LibreOffice's primitive support for pictures and formatting. (And special characters, not relevant to the OP's case, but very much so if you're doing linguistics or math.) If you're not sure, try testing your workflow on a small document: write something in LyX or LaTeX, convert it to odt or docx with Pandoc, and do the work to get the odt or docx file into a usable state. See whether you think it's worth it for what you're doing.
Upvotes: 4 <issue_comment>username_8: If you're considering other software besides just Microsoft Word or LaTeX, then I would put a vote in for [Scrivener](http://literatureandlatte.com/scrivener.php), which I switched to halfway through my Masters (submitted in December).
Scrivener is a bit LaTeXy in that it separates the composition from the formatting, with the latter done as a single rendering process at the end, but it also allows a high degree of style and formatting while you write, however this should be viewed as a 'preview' rather than actual formatting.
It was originally designed for novelists, but there's a growing community of academics using it going on the posts in the forums. I found it great for handling things like numbered lists such as figures and tables, and, particularly important for linguistics, example sentences.
The basic workflow is that you compose in Scrivener and then 'compile', which will format the entire document and apply formatting in a single hit, then you can either print directly, or if you need further post-processing (like Endnote), then output as a useful format for you and take it to whatever program you need. My workflow was to go into Word and run Endnote, plus a couple of other small tasks, and then print. For a 50,000 word document, my time spent in Word was about 3 hours (including proofing). Scrivener also supports compiling to a LaTeX document using MultiMarkDown syntax.
Anyway, some dot-points:
**Pros:**
* Great hierarchical sectioning support, in fact this is key to its organisational structure
* Secure in terms of text data, which is stored in individual text files (one for each section/subsection)
* Much easier to learn than LaTeX
* Support available from the forum from the developers, and very quick response from them (I once suggested a feature and it was rolled-out two days later)
* Good support for multiple independent numbered lists (tables, figures)
* Good support for cross-referencing to chapters and sections
* You can output to .doc or .rtf and have track changes, but see last Con.
**Cons:**
* Not free; $40 (PC) $45 (Mac) once-off purchase (free 30-day trial, free for Linux)
* Originally designed for Mac, and other operating systems are still catching up on features. There is a [Linux](https://www.literatureandlatte.com/forum/viewforum.php?f=33) release but it is in beta (however the features that are missing only affect the compile process, so you could conceivably write in Ubuntu and compile on Mac)
* No support for Endnote apart from mapping ctrl/cmd-Y to open Endnote, that is, scrivener does not insert formatted bibliographic citations either in-line or as endnotes (but if you work using raw field codes, then pass them through to Word, it works great)
* Formatting presets in Scrivener do not map to 'styles' in Word (although again, there are workarounds)
* Not intended to be a complete typesetting program, but a writing program. You should expect to have to post-process your document in another program.
* Importing from Word is not very good. This will affect what you do with track changes and supervisor's comments (you'd have to incorporate them yourself manually).
Sorry about the plug – I do not receive any kind of support from the makers of Scrivener; I just think it's the best writing software I've used, and since you're trying to weigh up the benefits and costs of switching to LaTeX, I definitely think you should consider it since it is compatible with LaTeX anyway, and is much easier to work in.
Upvotes: 3 <issue_comment>username_9: You should definitely learn LaTeX.
MS Word is slow and unstable, and the resulting document is just, if you are not a guru at Word, ugly. You need a lot of expertise to control the blank spaces between words, to get line breaks right, to avoid orphan lines, etc., not to mention more advanced things related to typesetting.
The ease of use of MS Word is not real ease. It just makes any secretary think he/she can use it. For a two-page document this might be OK, but it actually requires a lot of experience to get a document right.
I don't agree that LaTeX has a steep learning curve. There are tools which make LaTeX pretty much WYSIWYG. Read the first two or three chapters of an introductory LaTeX book, install MiKTeX and the Sumatra PDF viewer on your Windows machine, and use EclipseTex for editing; it's pretty easy, and it does not require you to be a rocket scientist to typeset documents that are nice to look at.
You'll love LaTeX once you start using it. You'll wonder how it was ever possible to use Word any more.
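To illustrate the point about the learning curve, this is the complete boilerplate a first document needs (a minimal sketch):

```
% A complete, compilable LaTeX document -- nothing has been omitted.
\documentclass[12pt]{article}
\begin{document}
Chapter drafts are just plain text like this; \emph{emphasis} and
footnotes\footnote{Like this one.} are a single command each.
\end{document}
```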
Upvotes: 2 <issue_comment>username_10: I would say it is worth your time and effort, and the effort required would not be great. As others have said, first off talk to your professor and ask if he has any opinions on the matter.
I wrote the paper for my B.Sc. in LaTeX, and to do so I learned it mostly by myself, in the spare moments between actually writing the code my paper was about. I had some help from some postgrads, but that was for issues you will not hit when writing a historical paper.
Like yours, my paper didn't involve large swathes of mathematical formulas with lots of Greek letters, but it did require almost Swiss-like precision, and I felt that with my limited command of Word I would be better off just starting from scratch with LaTeX, knowing exactly the cause-effect relationship between my actions and the document's presentation (I was and still am wary of losing all my formatting in Word due to a slip of the mouse).
Some of LaTeX's strongest points are the ease with which it can handle citations for you, as well as index your pictures, give them captions, etc.
The fact that you already use Linux is also a factor, as setting up a good LaTeX editor on Ubuntu seems much easier to me than on Windows. I highly recommend Kile as the editor: it's not WYSIWYG, but it will keep you from making syntax errors thanks to the large number of LaTeX snippets available, and it also simplifies the "make a PDF from all this mark-up" process.
Another point in LaTex's favour is the very friendly and responsive community at [the Tex exchange](https://tex.stackexchange.com/ "the Tex Echange")
I have to be honest that it was very easy for me to pick up LaTeX because I already had experience with HTML, which is also a mark-up language. But in my opinion, if you understand what happens when you bold a word on a Stack Exchange site (see the **\* \*** and their effect), then you can understand LaTeX.
Upvotes: 3 <issue_comment>username_11: **Yes...unless...**
You will have references, many of them. Managing references is a complete pain without the facilities provided by LaTeX or one of the often expensive tools for Word (there may be something free for LibreOffice, and it may be good). It may be a little easier with the citation styles typical of the humanities/social sciences than it is for the physical sciences, but it is still not easy. So unless you have access to a good tool for managing your citations in your word processor, just go for LaTeX.
Cross-referencing between sections and to figures/tables is also much easier.
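To make that concrete, here is a compilable sketch (the labels and the source are invented for illustration). The numbers are regenerated on every compile, so reordering chapters, figures or sources never breaks them:

```
\documentclass{report}
\begin{document}
\chapter{The Famine}\label{chap:famine}
% \ref and \pageref resolve to the chapter number and page automatically.
As argued in Chapter~\ref{chap:famine} on page~\pageref{chap:famine},
the harvest failures were decisive~\cite{smith1990}.

\begin{thebibliography}{9}
\bibitem{smith1990} J.~Smith, \emph{An Invented Example}, 1990.
\end{thebibliography}
\end{document}
```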
You *may* find a tool like LyX suits you - or you may like the freedom of just getting on with it in a decent text editor. [This list](https://tex.stackexchange.com/questions/339/latex-editors-ides) may be helpful in that regard.
I can thoroughly support the view expressed above that you should start on a smaller document.
When it comes to commenting by your adviser, such as for a dissertation, pdf comments on your work, or handwritten comments are fine, but for collaborative writing of a paper with non-LaTeX users things become *a lot* harder unless you're in charge.
[Using LaTeX to Write a PhD Thesis](http://www.dickimaw-books.com/latex/thesis/) by <NAME> is well worth a read; it's from a CS background, but don't let that put you off: it's clear and well written.
On the other hand, if you start writing in a word processor, you will end up sticking with it. That is fine (plenty of theses get written that way), but nobody ever finds the time to switch from one system to the other.
Also note that if you use libreoffice and you work with people who use MSoffice (or even for different versions of MSoffice) change tracking is rather fragile.
Upvotes: 3 <issue_comment>username_12: **YES... and why not use both!**
The learning curve for *using* LaTeX (with a good template) is actually relatively small, and will save you time/headaches in a large document such as a thesis since the formatting is automatically taken care of, and probably looks much better too (especially equations). Just paste your text into a .tex template, and create a .bib file.
My process is to write the draft (unformatted text) in LibreOffice with each chapter as a separate document, referencing with tags only, and storing the figures in a separate folder. The reason is that I find it easier to think/edit in WYSIWYG. Once I am happy with the draft, it takes **minimal effort** to transfer it to LaTeX: I just go through and create a .bib file from copied-and-pasted EndNote entries while I am inserting reference tags.
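For anyone unfamiliar with the format, a .bib entry is just a small structured record; this one is invented for illustration:

```
% thesis.bib -- one record per source; cite it in the text with \cite{doe2010}.
@book{doe2010,
  author    = {Doe, Jane},
  title     = {A History of Something},
  publisher = {Somewhere University Press},
  year      = {2010}
}
```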
The end result looks perfect, and I have saved time, avoiding the formatting problems that plagued my MS Word undergraduate thesis.
Upvotes: 2 <issue_comment>username_13: All mentioned pros and cons are valid, but... LaTeX is written by smart people for use by smart people. Word is written by smart people for everyone, including dumb people. Take your pick. For me, 5 dissertation years spent with LaTeX as an everyday companion was thoroughly enjoyable.
Upvotes: 3 <issue_comment>username_14: I think using dynamic documents is the only way to ensure your work is reproducible. Latex is an important tool in my workflow that helps me achieve this aim, and I view it as a necessity. Word documents are simply not reproducible.
**You can read about the issues with Word [here](http://en.nothingisreal.com/wiki/Please_don't_send_me_Microsoft_Word_documents), [here](http://thewritelife.com/microsoft-word-just-say-no/), [here](http://factorgrad.blogspot.com/2010/07/why-latex-is-superior-to-ms-word.html) and [here](http://gribblelab.org/scicomp/07_Document_processing.html).**
If you adopt LaTeX you should also use [Pandoc](http://johnmacfarlane.net/pandoc/), which I don't think anyone has mentioned here. Pandoc allows you to convert a document to a range of other document types with a single command. I'm imagining the person who tries to cut and paste from the LaTeX-generated PDF into a Word document because that is what their advisor wants. Pandoc will make your life easier.
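As a sketch of what that looks like (file names are made up, and heavily customised LaTeX may not convert perfectly):

```
pandoc thesis.tex -o thesis.docx
```

The output format is inferred from the extension of the output file, so the same command pattern produces HTML, EPUB, and so on.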
Scrivener is another nice option that will allow you to write in Markdown; it converts to LaTeX on the back end and then converts the LaTeX to PDF. Scrivener is also a great companion for long-form writing.
To the person that said "You should prefer a tool that is used by your colleagues and supervisor to one that is slightly better intrinsically": I'd argue that you should use the tool that gets the work done most efficiently for you. LaTeX is going to save you a great deal of time in formatting the document. So please explain to me why anyone would expect me to use a tool that is going to require me to needlessly spend hours of my productive schedule tweaking the formatting of the document over and over again? Your whole statement is anti-progress.
Upvotes: 2 <issue_comment>username_15: First, I am a mathematician.
I see no reason whatsoever to use LaTeX for humanities work. I think the best tool to write regular texts or books without formulas is LibreOffice.
I write my non-scientific (and even some mathematical) works in LibreOffice.
This is despite my having rather expert knowledge of LaTeX; I don't like to use it unless strictly necessary.
LibreOffice handles images and tables well. Just learn to use it.
Upvotes: 2 |
2014/03/09 | 864 | 3,706 | <issue_start>username_0: Let us consider a quiz question whose answer is a proof. For instance, a question in mathematics, such as *prove that for every natural number $n$ the quantity $n^3-n$ is even*.
**How could such a question be implemented in a computer interface so that the computer can check the proof for correctness?**
A simple, yet unreasonable way would be to have a multiple choice format for the question, where every choice was one way of proving the relationship, but I guess the shortcomings of this are obvious...<issue_comment>username_1: Checking proofs automatically from free text is not a solved problem.
Even if you impose very strict rules on how answers must be typed, you cannot expect a program to check them.
Here is why: if your student were asked to write a computer program to solve a specific task, you could not expect another computer program to verify with 100% certainty that the algorithm is correct, in general.
This is not due to technical limitations; it has been proved that there is no computer program that can guarantee that the program it checks terminates (i.e., is not stuck in an infinite loop, a common error in computer science).
Hence, I suspect there will never be a generic solution to your question either.
Maybe a better form of evaluating the students is to give them fragments of a proof and ask them to order these in a way that makes sense.
Or even better, give the full proof, but instead ask questions about the details, with multiple choice or a simple free-form field that only has a finite number of correct answers.
Upvotes: -1 <issue_comment>username_2: I have experimented with a system that provides students with a collection of phrases/formulae from which they can drag and drop a selection to construct a proof. I think that this is a promising approach but there are still a large number of different ways in which students can get the answer wrong, and an even larger number of ways that they can construct an answer that cannot be parsed as something meaningful. If you simply reject such answers without comprehensible feedback then you will just make the students hate you. So you have to write a large amount of code that tries to analyse all possible answers and explain what (if anything) is wrong with them. The logic is quite complex and I am not sure how well the students would understand the explanations. I hope to return to these experiments at some point but at the moment I am not teaching anything for which they would be useful.
Upvotes: 3 <issue_comment>username_3: It is true that a computer program cannot check the correctness of an algorithm as explained in the answer by paxinum, but that is not what is required here.
It *is* possible in principle for a computer to check a proof. The problem is that the proof would have to be written in a very complete form, with each logical step spelt out. This is far beyond what would be required in an exam. Remember that Russell and Whitehead famously needed a whole book's worth of logical formalism before they could prove that 1+1=2.
In practice we write proofs with many details of the logical steps missed out and a computer would need a high level of artificial intelligence to fill in the gaps.
username_2's formulaic approach may be the best that can be done for now, but I think it would give too much away.
Upvotes: 1 <issue_comment>username_4: There is a rich area of proof assistants which deals with these problems. See
<http://coq.inria.fr/>
<http://wiki.portal.chalmers.se/agda/pmwiki.php>
<http://nuprl.org/>
There is also a "market" where people can offer bitcoins for proofs checked by Coq.
<https://proofmarket.org/>
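To give a flavour of what such systems accept, here is a tiny machine-checked proof in Lean 4 (the theorem names are made up; the point is that the kernel verifies every step, and an incorrect proof simply fails to compile):

```lean
-- Proved definitionally: n + 0 reduces to n.
theorem my_add_zero (n : Nat) : n + 0 = n := rfl

-- Proved by induction, with each step spelled out explicitly.
theorem my_zero_add (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl
  | succ k ih => rw [Nat.add_succ, ih]
```

Even for such trivial statements, the student must make the logical structure completely explicit, which is exactly the difficulty discussed in the answers above.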
Upvotes: 1 |
2014/03/10 | 4,702 | 19,739 | <issue_start>username_0: I have been attending a number of academic / professional conferences recently and one thing that really awed me was the flawless and natural way in which the speakers presented on their topics.
Sure, they had the occasional mispronunciation or awkward pause, but they spoke with authority and confidence.
Some of them used PowerPoint slides but were not reading from them; they were just talking to the points, and they had so much to say on each point that I felt they could not do it without notes, yet they were not even looking at their notes most of the time.
(The advantage of not looking at their notes too much was that they could maintain eye contact and could use very interesting slides that were not crammed with the dot points of their speeches.)
I hate to think that they had memorised their speech notes but some of the presentations went for more than an hour.
I know rehearsal is important, but I wonder how anyone could remember so much in a nervous situation. (It's one thing to know everything on your topic; it's another thing to present that 'everything'!)
**Question:**
What background / foreground things do presenters do that make their presentations flawless and natural?<issue_comment>username_1: A natural presentation comes from practice, and lots of it.
From practice comes confidence. Excellent speakers rarely have more than a few words bullet pointed on their slides. This means that the audience's attention is focused on the speaker. The speaker then tells the audience what the speaker wants them to hear, or directs the audience's attention to an image displayed on the screen.
Aside from not splitting the audience's attention between speaker and loads of text on-screen, having few words on your slides means that you are not tempted yourself to read your presentation to the audience. Such recitation is only suitable if you are analysing the text itself closely.
Further to that last point, having only single or few key words on your slides forces you to know your subject and what to say on each point. You don't have the slides to fall back upon, allowing you to lazily read them to your audience.
Which brings me back to practice. One way of getting familiar with what you want to say on each point is to write down a few detailed notes for yourself. When you practice your talk, you can refer to these detailed notes. Next time through, distil your notes down to only a few key words. Next time through - or when you are confident - your notes should be only the key words on the screen for the audience, and are therefore redundant. No notes, fluent delivery.
Upvotes: 6 [selected_answer]<issue_comment>username_2: >
> Question: What background / foreground things do presenters do that make their presentations flawless and natural?
>
>
>
I don't think I've ever seen a *flawless* presentation by the way ... much like I've never seen a *flawless* conversation.
In any case, I think if you do not have a lot of experience, as ff524 says (edit: and username_1), practice is important. In particular, if you are going to present at a conference, for example, try presenting in front of (and getting feedback from) your colleagues.
With more experience comes more confidence. With more confidence, presentations become less about "speeches" or memorised text and more about having a conversation with the audience.
At this stage, when you prepare a talk, you can imagine the flow of exposition, how the slides should be ordered, what the audience will understand at that point, what questions they might have and how they could be answered, how to order the points of conversation, how to clarify the "why" before the "how", how to ask the question and engage the audience's curiosity rather than just provide the answer. When you deliver the talk, you have the outline of the conversation you're about to have and you follow through with it, improvising the exact phrasing as feels right.
For me, it often helps to think about the audience as one person that I'm trying to engage with. An audience can be daunting -- a blur of faces -- but if you rather think of trying to engage directly with that guy/gal who came in late and is sitting right at the back of the room ... and make it almost more personal ... I think this is the attitude to have. (This is orthogonal to memorising exact phrasing, which I often find leads to unnatural talks. Having a few nice catchy phrases is nice, but they'll stick in your mind naturally when you prepare the talk.)
Upvotes: 4 <issue_comment>username_3: As someone who has been trained in creating and presenting presentations, let me tell you the secret: it's practice.
With practice and learning comes confidence, and with confidence you can make up for all the minor flaws, which most people might not even notice, with nothing but a simple smile.
There is one technical aspect to it, which has something to do with how the brain works. In stress or panic situations, the human brain has a mechanism to switch off all higher-level areas to focus on the situation at hand. While this is perfectly good and useful when, e.g., fighting lions, it is absolutely not helpful in a test situation. So the prime rule for good presentations is: keep calm and present on.
The second trick is to generate so-called island knowledge. Basically, you take all the required topics, then learn enough about each of them that you could hold a speech on any one without having to prepare. Once you have done that, you not only have the knowledge to speak more freely but also the confidence when it comes to questions. Because for every question asked, or for every mistake you make, you know which island to hop to in order to find an immediate solution that at least sounds professional.
And then there is still the good old "sorry, I don't know" answer. If you present something about a topic, it is perfectly fine if you do not know everything. Admitting that is a strong sign of confidence, and the audience will honor you for not talking "bullshit". Knowing that the audience will react positively if you openly lack some knowledge will help you to present that lack as perfectly fine.
Upvotes: 3 <issue_comment>username_4: I have no idea how to give a *flawless* presentation, but to give a good one, follow points from:
* <NAME>, [Advice on Giving a Good PowerPoint Presentation](http://www.jstor.org/stable/25678623), Math Horizons, April (2006), 25-27 [[pdf here](http://www.d.umn.edu/~jgallian/goodPPtalk.pdf)]
(Don't be afraid of the word "PowerPoint" here - there is nothing specific to this program. Moreover, many points from there work for any presentation, including one only with blackboard or even one without it.)
In any case, to get better and better there is no shortcut around preparation and practice!
Upvotes: 3 <issue_comment>username_5: Note that presenting is a teachable/learnable skill. You may want to consider classes, if you can find them, or doing your presentation for friends and asking for feedback. Some folks suggest investigating the Toastmasters organization as a way to learn public speaking skills; I don't know enough about them to have an opinion.
As others have said, practice makes better, and -- as with most arts and crafts -- this is more about being able to recover gracefully so nobody especially notices the mistake than about a perfect performance.
Afterthought: a flawless presentation is highly unnatural...
Upvotes: 2 <issue_comment>username_6: Practice, as has been mentioned.
Here's one concrete point with respect to practicing:
I like to write *exactly* what I want to say in the Speaker Notes section of the slide.
This point is a little contentious, because the first thing people usually learn about public speaking is "not to read". I used to subscribe to this. Now, I've gone back to writing really, really complete notes and reading for practice.
Before I get a whole community down my throat let me explain. When I was practicing in the past, I kept misstating things or adding 'um' or pausing and would have to retract my words, and then that led to even more awkwardness. So I thought, "Well, I'm just going to write down exactly what I want to say then" and not have to try to find the awesome phrasing by memory. So I did. What you then need to practice is *delivery*, not *content*.
The problem that people have is that they often associate reading with how you do reading out loud in school - you just kind of drone out the text in a monotone voice that's drab and boring. In reality, you can read and still make it dynamic and interesting and fluid and fun. Do practice that. Imagine your favorite speechgiver - a famous world leader, a CEO of a company, and so forth. A lot of them are reading their speeches. Think of your favorite newscaster or comedy anchorman. They are all reading off of the prompter and yet it feels like they're just talking to you.
It is pretty important, when developing a dynamic reading habit, to get a sense of how you sound. Record yourself, play it back, and see if it sounds like you're just talking to someone in conversation. That's how it should sound. It shouldn't sound like you're reading a research paper. Figure out whether it's the content that's doing this or the delivery.
Once you practice this a lot - just reading dynamically - then you're going to start to know your talk so well that you won't need the notes, and you won't need the slides, and you won't need to worry about interruptions or anything.
Another thing: I don't write notes for every section of my talk: just the tricky parts that I stumble on. I often do it for the very beginning of the talk (Yes, I actually write "Thank you very much for the introduction. My name is username_6 and I'm happy to be here today" in my note slides because when I'm at my most nervous moment, I need to be able to start off without any ums, ifs, or buts), technical portions, and places where I have big "A-has" and "punchlines" and so forth.
Hope that helps.
Upvotes: 2 <issue_comment>username_7: The first thing that you need to know is that it is not "natural". If you're lucky enough to watch the same person give the same talk more than once (as I have) you will discover that it is a lot less spontaneous than it appears to be. Giving a live talk draws on several different kinds of preparation at once:
* the talk itself is typically prepared and practiced over and over. There may be notes in the speaker notes section, or the bullets on the slide may be enough to remind the speaker what to say for each slide. The talk is organized in a way that makes it easy to remember all the points that need to be covered, to be able to drop some material if necessary, and so on.
* the speaker has a wide collection of stories and jokes that can be used to provide time to think, to lengthen a talk that is going too fast and will run short, or to relax an audience that isn't interacting enough
* the speaker knows a physical vocabulary: where to stand, how far and how fast to walk, what arm positions to use, whether to pause at the far edge of the stage or hidden behind the desk, and what effect all of these will have on the audience
* the speaker knows the overall length the talk must be and often knows some milestones within the talk (finish demo 1 by 12 minutes; should have 5 minutes left when we get to dog picture) so that subtle lengthenings or shortenings can keep the talk on schedule
* the speaker has learned to drop meta talk (oh, I see I covered these points earlier, hm, I guess there isn't time for this demo, ah, this is awkward I seem to have finished early) and to project tremendous confidence even while internally panicking over a demo that isn't working, a slide that has gone missing, or the sudden realization of the current time.
It's hilarious to watch a well done "spontaneous" demo that is exactly the same every time. I tell you what, the speaker says, let's throw some code together to let you see what I'm talking about. Closing the Powerpoint (or at least minimizing it) and bringing up a developer tool, the speaker goes on: I can do this in C# I guess, of course it works in other languages too. Let's make a .... pause .... look at the screen as though trying to decide ... Windows app, sure that can work, I'll put a button or two and a text box, yeah, that should work. ... the demo goes on and on to all intents and purposes just being made up on the fly, but I'm in the back of the room with the demo script and I know the speaker is doing exactly what we planned.
You need to know the material well in addition to practicing. If you forget to mention something, you'll need to spot a chance to work it in later. If you get a question from the audience, you'll need to be able to answer it. And if you get thrown by a technical glitch and need to speak really spontaneously, you will need to know where you were headed for sure.
All of this is something you can learn. If you think it is natural and flawless, you may think "I either have it or I don't." That's not true. You can learn the mechanics of structuring a talk, of laying out a slide so that it doesn't detract from the talking you're doing, of using your voice, your pauses, and your body to support your message. And you can practice over and over, and watch other people too, until you are good. Some people learn faster than others, but everyone can learn this if it's important to them.
Upvotes: 4 <issue_comment>username_8: There are a number of factors I have found to be important to giving a good presentation. I have not yet given an academic presentation, other than at undergrad level, but have attended some. In my professional career as an Accountant (I am also a part-time Post-Graduate) I have given some and attended many. The audiences I normally present to range between about 50 and 300 people.
* **Good knowledge of topic**
This may seem like a given, but you'd be amazed how many poor presentations can be put down to this fact. Good knowledge gives you the ability to expand on your presentation naturally if the need arises. It can also help if your preparation, for whatever reason, was not the best. Trust me, it can happen.
* **Preparation**
That said about knowledge, preparation is very important. I've found over the years that this can vary from presentation to presentation. At first (and I still do this) I would type out my whole talk. This would serve as a template for my talk and I would follow it pretty closely, but would go off script if or when I felt it would be of benefit. Sometimes, with time, you can sense a vibe off a room telling you that you may need to expand on a point you were talking about. On other occasions I would just make bullet points to keep me on track when I am comfortable with the topic. For me, preparation also includes being comfortable in the space in which you are giving your presentation. I have found it is good to get to the venue early to survey the room that you will be speaking in. Walk the room if you can; note how big it is. Is there equipment for you if you need to use slides, a podium or table for you to use, and water available? These may seem like small issues, but it's good practice to keep yourself in a good frame of mind before you speak. Nothing is worse than 30 minutes of messing with a laptop and projector to frustrate you beforehand.
* **Practice**
This has been dealt with in many of the answers so I won't dwell on it too long. It is important, so the more you can do it the better. As said already, if you can do a trial run with friends or colleagues, that is good. I know in my university there are some workshops available that help with public speaking, and I believe Toastmasters was also mentioned as a place where you could practice. You can also practice on your own. Two techniques I use are: 1. when out for a walk, I run over in my mind little short sections of my talk, see how well I can present them, and then check my notes when I get back; 2. (this may sound a bit vain) you can practice in front of a mirror, which I think helps you get comfortable with how you look when talking, and you may also notice bad habits you will wish to eradicate from your presenting style.
* **Confidence**
This is noted in most of the answers as well. You can be naturally confident, or it may be something you have to work on. If it comes naturally, then nerves play less of a role in your presentation style, and people can notice a nervous speaker. That said, all the confidence in the world will not help you if you don't research your topic, prepare and practice. Where confidence is of benefit, even if it is acquired over time, is that it helps ensure you are more relaxed when giving the talk, and when you are relaxed you are better able to concentrate on what is important: the material.
Upvotes: 2 <issue_comment>username_9: There are a lot of great answers here, and most of them say *practice*. Well, I agree, but I didn't see this particular point in any answer yet, so let me try to explain what usually helps me "keep the flow", and how.
Well, it's all about **practice**, but:
* when I write the slides, I always have a rough idea of what I would like to say and try out a few (different) phrasings in my head (only the key points / words end up on the slides)
* (ideally), I do multiple rehearsals, improvisation-upon-improvisation. At this point, it is not uncommon for the first rehearsal to last 4 or 5 times as much as the allotted time.
* at early-stage rehearsals, I will **try multiple phrasings** for the same slide. If I start saying "Um...", my sentences get lost in the middle, or something similar happens, I will just calmly stop at that point and try a new approach to what I want to say.
* I tend to do around 2 more rehearsals after I get the presentation down to the allotted time (for me, personally, going on much longer might unintentionally shorten the presentation too much)
* now, **what, concretely, I get from all these rehearsals** is multiple, different ways to handle every slide.
The reason presentations sound flawless is because **not just every sentence by itself is good, but the transitions between sentences, slides and sections are well done**.
And, after doing 4-5-6 rehearsals for the presentation, **you know multiple ways** to **say each thought**, and then multiple ways **to transition to the next** thought, and even if you "slip" and say something other than the "perfect, planned version", you still have a rehearsed back-up strategy.
* as for writing down the notes, I usually sit down after a rehearsal number 2 or 3, and focus only on **difficult transitions**.
If, in those first few rehearsals, I still didn't find a fluent way to say something, or if I did but I stumbled around it, I will try and **write down verbatim what I want to say**, sometimes even multiple versions.
Just writing it down usually helps, but if I'm going to go over any notes minutes before presenting, these are going to be it.
* finally, doing a rehearsal in front of a test audience helps. I dread anybody hearing me at rehearsal number one or two, but I like somebody to listen in at around the second-to-last rehearsal.
By this time, I usually "know" my presentation well enough so I can easily integrate suggestions in, but I still have a go to test if the suggestions fit fluently.
* this all helps the presentation sound more natural. Since you can handle multiple "lingual" situations, you do not sound like you're reciting by heart. On the other hand, you're sure that you have multiple "fallback" options which allow you flexibility and that all of them will deliver the same idea.
Upvotes: 4 |
2014/03/10 | 7,286 | 28,294 | <issue_start>username_0: My biggest challenge as a PhD student is best summarized by the following from PHD Comics:
**"Piled Higher and Deeper" by <NAME>
[www.phdcomics.com](http://www.phdcomics.com)**

A consequence of working in research is that the end is never in sight - unlike other jobs, *there is always more work for you to do*.
I am pretty good at making sure to take care of myself, because I know it's important. I can force myself to go for a run, get something to eat, participate in a regular activity that's not related to academia. But I can't turn off the voice in my head that keeps nagging me about the work that's waiting for me back at the office.
This is especially true when there are deadlines and people relying on me to meet them. On top of my research, I have mentees I should be spending more time with, students we won't be able to hire if I don't get my grant-writing act together, collaborators who keep asking when I'm going to write up that work we did together last summer. *If I don't do this, nobody else will*; it's not like a normal workplace, where your boss can reassign an important task if you are too overloaded to handle it.
So, my question is:
>
> **How do you avoid feeling guilty about all the unfinished (and unfinishable) work in academia?**
>
>
>
**I am looking for specific, practical techniques based on research and/or personal experience, not suggestions that you just thought of but have never tried.**
One technique I've tried with limited success is to make a daily to-do list that is limited to three items, and tell myself that I'm not allowed to feel guilty about not doing things that aren't on the list. It works when I'm not *terribly* busy... but most of the time it doesn't.
Related questions:
[How to avoid thinking about research in your free time](https://academia.stackexchange.com/questions/9200/how-to-avoid-thinking-about-research-in-your-free-time) is related, but I'm not trying to avoid thinking about research in my free time. I'm just trying to avoid *feeling guilty* about research in my free time.
Also related is [How should I deal with discouragement as a graduate student?](https://academia.stackexchange.com/questions/2219/how-should-i-deal-with-discouragement-as-a-graduate-student) but those answers seem to address how to convince yourself that your efforts are worthwhile. I (usually) realize that my efforts are worthwhile, I don't know how to convince myself that I'm putting in "enough" effort (whatever that means).<issue_comment>username_1: To answer your specific question:
>
> How do you avoid feeling guilty about all the unfinished (and unfinishable) work in academia?
>
>
>
You try to come to the understanding that there is always more work to be done, and that this is the way it is, not just in academia, but also in almost every other walk of professional life.
Disentangle the feelings of guilt and anxiety. There is work that you **should** have done/be doing (e.g. to test an idea fully, rather than assume the result; meeting deadlines) and there is work you **could** have done/be doing (e.g. new ideas/extensions).
Concentrate on completing all the work that you know *must* be done. Set yourself practical goals and list them, marking them off when achieved.
Set out time for the other tasks you know need to be completed, e.g. 2 hours a week for meeting student A, 1 hour for student B, 2 hours for grant writing. Stick to those arrangements. Now add in time for "fun" work stuff - perhaps not directly related to your main goals, but which interests you at the moment.
Keeping track of how much time you are spending on different types of task, and seeing how you are progressing in each activity, will allow you to fine-tune your time management.
Having time set aside for each activity type - and sticking to your timetable - allows you to feel less anxious about the work you *should* be doing, because you know that you've boxed off time in your schedule to set to work on them. It gives you the confidence to say, okay, I'm not doing mission-critical stuff right now, but it's the time of the week for reading/meeting people/setting up webpage and I know that I'll be back on that task when I've the time allocated for it.
Upvotes: 4 <issue_comment>username_2: >
> How do you avoid feeling guilty about all the unfinished (and unfinishable) work in academia?
>
>
>
First of all, recognize the difference between unfinished and unfinishable. Yes, you *can* always do more, but your duty is to do what you *promised*. This means that learning what your own capacities are and only committing to what you know you can do\* will eliminate much future guilt. The guilt comes mainly from the unfinished work that you promised to do, not from not doing other work beyond that. (If you actually feel guilty about not doing things you never committed to, I think you need to reevaluate your worldview. You can feel regret about those things, but there should be no guilt.)
Secondly, the "to-do list." It feels great when a to-do list is cleared, but, the guilt only increases when you fail to clear it. It's really just another form of the failure to fulfil a commitment, but privately. So there are a couple of variations on the to-do list that eliminate that problem.
* The "to-mostly-do list" this is a large list of small specific things you plan to do in a bit more than the next day, the point is to make it hard to actually clear the list in a day, but easy to progress through it. That way, since you know the list is more than a day's worth of work, you are more psychologically satisfied with your progress and less dissatisfied with the unfinished items. Also seeing at the end of one day some of the things you'll need to do the next day can help increase productivity the next day.
* The timetable: break up your day into small periods of time for each task. You're promising yourself to "work on X for an hour" rather than to "finish X," and you can be satisfied even if you ran into problems and didn't finish X.
Thirdly, I think you're not taking free time seriously enough. It really takes a shift in attitude to think of your free time as time in which you're *not supposed* to work as opposed to time in which you're *allowing* yourself not to work. I don't know if anyone can tell you *how* to make that shift, though.
Of course, all of this breaks when there's a close external deadline (grant/paper submission). Then you just work, eat, sleep, and work until you're done (but no time to feel guilty there).
---
\*Actually, it's more complicated than "only committing to do what you know you can do." Sometimes it pays off to take a risk and promise something that you aren't 100% sure about, but when you take a risk you need to know that it's a risk and be prepared to fail.
Upvotes: 2 <issue_comment>username_3: I am currently a sort-of-senior-ish postdoc, and hence in the somewhat awkward career phase where basically everything that does not have a clear owner seems to end up in my inbox. The process I use to not get overwhelmed by the sheer number of wacky tasks that end up in my inbox is rather similar to what Nicolas does. It works for me (most of the time); it may work for you as well.
Basically, every few days, I take an hour to sit down with my calendar and my TODO list and **plan**. For each task taking more than, say, 15 minutes to execute, I reserve a slot on my calendar. I try to arrange things so that each task meets its deadline, and keep some free space for incoming urgent things and some lump time for short administrative tasks not worth mentioning explicitly. When I see that there simply is no way to plan everything so that all tasks meet their deadlines, I make compromises, i.e., drop or delay tasks, and (this is the difficult part) **do not feel guilty at all about that**. It works for me, because I *know* from my calendar that there simply was no time to, say, write the paper for this medium-level conference and at the same time write this super-important grant application. The difference here is the certainty from your planning that everything simply could not be done at the same time. There is no point in feeling guilty about something that you ultimately know you could not change.
However, if that happens all the time, you have a different problem. Then your problem is not that you should not feel guilty about not getting your work done, but **that you should not bite off more than you can chew** (== commit to more than what you can deliver). You say in a comment that your *second-biggest challenge as a PhD student* is that you are over-committed. It seems like your "second-biggest" challenge causes the "biggest" challenge - you commit to things you cannot do, and hence feel guilty. The fix for that is, again, to plan, and not commit to things you know you cannot do in the first place.
Upvotes: 3 <issue_comment>username_4: What I found to work best for me is *separating* work from leisure time. I work in the office and I don't work at home (except for emails). If I have to work during the weekend or even until the morning for some deadline, I do that in the office (it's important that wherever you are this is possible, don't try to force your way in).
When I go home I don't feel guilty, because I don't have my computer (even though I have my files synchronized, just in case), and because I have learnt that if I don't rest properly (at some point, for some time) then I'm not productive and more time gets wasted, so it's better to simply forget about work and do anything else (or nothing at all).
Everything else is done with two basic principles:
1. Prioritization of tasks. If some people depend on me for some task, I give it a high priority. If something is unclear, it gets a low priority (it will probably become clear over time), etc. The closest deadlines get higher priority, etc. It's similar to the important-urgent matrix, but you probably have some subconscious algorithm to assign priority to those things; basically, try to imagine what would make you feel more anxious and do that first. Depending on your sources of anxiousness this method will work better or worse for your career, but it will reduce your anxiousness (if we don't consider how career progression would interfere with that).
2. Don't bite off (much) more than you can chew. Sometimes people feel work is just too much because it *is* too much. This really depends on how far you want to push your limits (of workaholism), but doing so for too long (or any amount of time) is usually a bad idea, unless you are willing to fully sacrifice everything for your work, reaching your top productivity for some time and then burning out as a disposable researcher. Consider that if you stay healthy and focused you will probably be more productive, and you will be able to provide more value and do more work in each of your hours. So your health and leisure time are not interfering with your work; they are *enabling* it.
If all of the above fails, there is a last thing you can try. Finish PhD asap (before it kills you), get subordinates (e.g. PhD students) and focus on reading and forwarding mails so that *they* do all the work. You can do that from anywhere with your mobile phone, like a boss.
Upvotes: 2 <issue_comment>username_5: **Stop saying "I have too much going on, and nothing will work."**
Stopping the self-defeating talk is probably the first step. Of course we have "too much going on"; that's what we say when we lose control. Do realize that "too busy" can be a cause, but most of the time, **"too busy" is more of a symptom.**
**Realizing that no one can drink up a whole river**
Be it work, research, or teaching, they all work like a giant wheel or river that keeps moving. No one can take the whole activity and "finish" it. Once I realized that, I saw that I am just moving things along from less to more refined shapes. It's noble to be a more responsible researcher/teacher, but the mindset has to be set correctly before one burns oneself out.
**Use an urgency and importance matrix**
Whenever I get a task, I mentally assign it to a quadrant of the [urgency vs. importance matrix](http://www.mindtools.com/pages/article/newHTE_91.htm). Then, when I plan my weeks, I make sure to distribute 2/3 of the available time to the high-impact activities. The remaining 1/3 I use to deal with urgent, low-impact items or emerging items.
**Avoid paper-based To-Do list**
A to-do list can be a confusing way to manage time because the whole process is high-maintenance (keeping a list, some sub-lists, and constant correction and updating) and frustrating (the list keeps growing, and yes, crossing out tasks feels great, but then you have a messy list).
I just use [EverNote](https://evernote.com) to document my projects and tasks. When they are done, I move the whole index thread to "Archive." Index cards are also a good alternative to a to-do list. When I am in a meeting or walking around, I put all stray thoughts into an inexpensive [composition book](http://en.wikipedia.org/wiki/Composition_book).
As a side note, *capturing stray thoughts* has a side effect on me as well. Most of the time, I kept mentally regurgitating work that I needed to do, and the long chain of tasks really bothered me. Once I spilled them out onto a piece of paper, I stopped thinking about them for a while. When I have access to a computer, I turn the thoughts into an EverNote page. This simple step frees up my mind to do some other, more useful thinking.
**Say no, say it a lot!**
What about the urgent and low-impact? I just say no. This includes, but is not limited to: grant proposal invitations sent to me when the grant is due in 5 days, meetings that really do not need me to be there (but I always attend the monthly staff meeting and faculty research meeting, just to be collegial), etc.
I used to suck at saying no; now I have a lot of elaborate ways to put it. And what's my elaborate way? Just say, "*Thanks for the invitation/thinking of me. I am sorry that I can't help this round.*" And leave it at that.
*One great trick* for those who cannot say no: you do not have to answer right there and right at that moment. Tell the people that you'll give it a good thought, and then say no afterwards.
**Block your time in your calendar way ahead**
Don't start filling in the calendar passively. Reserve your own time many weeks ahead. I fill it up with protected writing time slots. I write best in the morning and love to do coding in the afternoon, so I sprinkle all these little 30-, 60- or 90-minute slots across my calendar.
And I agree with @username_1's answer that this is a very useful technique. This is not something that just "sounds like the kind of thing that is nice in theory." I use it, and username_1 probably does too, and it really works. The harder you guard your time slots, the better it works.
**No more, just three tasks a day**
<NAME>'s [Zen to Done](http://rads.stackoverflow.com/amzn/click/1438258488) is an inspiring read, and I'd recommend it to people who think they have no time. I have adopted the idea of doing three Most Important Tasks per day. There are days when I barely get one done; there are days when I finish all three by 1:00 pm and then spend the rest reading or learning new stuff.
**Become a time freak**
I time my tasks with a kitchen timer. I don't strictly follow the [pomodoro technique](http://en.wikipedia.org/wiki/Pomodoro_Technique) but I adopted the spirit of it. The way I operate is that I dedicate a chunk of time to a project, move it forward as much as I can, and when time's up, I consider my job on that project done for the day. I do not binge work, because binge working is very prone to errors.
One very interesting side story: a colleague was chit-chatting in my office and suddenly the timer went off! The colleague jokingly asked me if her time was up. I explained my system to her. And oddly... since then, whenever she visits me, she adds the question "Can I have \_\_ minutes of your time?" before talking to me. Now everyone does that to me; and I do the same to everyone else.
From time to time, chaos and mess happen to us because we have developed an image of being easy-going and flexible, two major magnets for chaotic and messy people. In fact, we don't have to. A good dose of rigidity gets you out of a lot of ad hoc committees, "emergency" meetings, etc.
**Identify what manifests the guilt**
This is very important because the source of the guilt dictates how you resolve it. For me, the major source is the fear that I have upset my collaborators. I once dropped the ball on a secondary analysis and delayed it for half a year. Then this job gradually became low-impact/low-urgency. I decided that instead of feeling awkward, I would just go up to the collaborator after a meeting and apologize for not being able to finish the project. In fact, she didn't care as much as I expected; I felt a lot better having told her my thoughts.
Another fear is that people may think I am incompetent or chaotic. And for that, I have come to be very comfortable with myself. I resolved this issue with two simple facts: i) I am probably the person who cares the most about what I look like in other people's minds. And ii) all other people are also busy caring about how they look in others' minds.
Practicing being *mindful* has many positive impacts on how I deal with these negative emotions. Now, whenever I feel bad/good, I emotionally zoom out and look at the big picture, trace the connections, and examine the dynamics. I feel that having this slight detachment from the emotion allows me to better tackle (either avoid or exploit) these emotions. Don't just feel guilty; ask why, why, why, why, and why. Yes, ask five times. For me, three to four associations usually get me to the root cause, just like how they [can get Toyota to their problems' cause](http://www.toyota-global.com/company/toyota_traditions/quality/mar_apr_2006.html).
**Closing remark**
I guess none of what I said is new. When it comes to time management there isn't really a silver bullet. From my experience so far, I have boiled it down to only one truth: all time management techniques work if you use them regularly and seriously.
Upvotes: 3 <issue_comment>username_6: tl;dr: **Keep forgiving yourself and keep working.**
I am having the same problem, and it has only recently gotten better.
I have it only for open-ended work (scientific projects, other personal projects - everything of the type "I should have it done" *and* at the same time it is not closed; even worse when others are waiting for results). It seems to be very different from "normal" work (when someone gives me a particular task) and work with an expiry date.
The wisest (and most successful) piece of advice I found is this one (from [Smart Guy Productivity Pitfalls - Book of Hook](http://bookofhook.blogspot.de/2013/03/smart-guy-productivity-pitfalls.html), which has more good points and is definitely worth reading):
>
> **6. Do not overpromise to make up for poor productivity.** There's a tendency when we're falling behind to try to overcompensate with future promises. "When I'm done, it'll be AWESOME" or "I know I'm late, but I'm positive I'll be done by Monday". By doing those things we just build more debt we can't pay off, and that will eventually lead to a catastrophic melt down when the super final absolutely last deadline date shows up. Just get shit done, don't talk about how you're going to get shit done.
>
>
>
Also, somewhat related is [forgiving yourself for being not productive enough](http://bps-research-digest.blogspot.com.es/2010/05/cure-for-procrastination-forgive.html) (constantly feeling guilty does not help; not only for me, but it seems it does not work for most people):
>
> The key finding was that students who'd forgiven themselves for their initial bout of procrastination subsequently showed less negative affect in the intermediate period between exams and were less likely to procrastinate before the second round of exams. Crucially, self-forgiveness wasn't related to performance in the first set of exams but it did predict better performance in the second set.
>
>
>
And from a slightly different angle, from [<NAME>'s TED talk on genius](http://www.ted.com/talks/elizabeth_gilbert_on_genius)
(it's about treating inspiration, but it is similar for everything - no matter how good you are, you won't do everything; so why should you be bothered by missing a few things?):
>
> And [Tom Waits]'s speeding along, and all of a sudden he hears this little fragment of melody, that comes into his head as inspiration often comes, elusive and tantalizing, and he wants it, you know, it's gorgeous, and he longs for it, but he has no way to get it. He doesn't have a piece of paper, he doesn't have a pencil, he doesn't have a tape recorder.
>
>
> So he starts to feel all of that old anxiety start to rise in him like, "I'm going to lose this thing, and then I'm going to be haunted by this song forever. I'm not good enough, and I can't do it." And instead of panicking, he just stopped. He just stopped that whole mental process and he did something completely novel. He just looked up at the sky, and he said, "Excuse me, can you not see that I'm driving?" (Laughter) "Do I look like I can write down a song right now? If you really want to exist, come back at a more opportune moment when I can take care of you. Otherwise, go bother somebody else today. Go bother Leonard Cohen."
>
>
>
And from my personal stuff (I mean things that I found helpful):
* using the to-do list only for tasks (i.e. things I know I can do in a few hours max), not projects (it's depressing to have "finish this paper" on the same list for long months, cf. [relevant PhD Comics strip](http://www.phdcomics.com/comics/archive.php?comicid=1350)),
* *underpromising and overdelivering* to oneself; i.e. committing to fewer tasks each day than expected (this way, with the same results, it's "wow, I did the things from the list plus 2 extra" instead of "I only made it almost halfway through the first point out of 7"; extrapolating one's maximal efficiency does not work...).
Upvotes: 7 [selected_answer]<issue_comment>username_7: There are two threads in answers here that I'd like to respond to:
>
> Research is just like any other job. The tools needed to manage guilt are no different.
>
>
>
Yes. And no.
The nitty-gritty of work - deadlines, working in groups, answering to a 'boss' - is the same. That is indeed true. What's different about research work, and what I think ff524 is alluding to, is the "freedom trap". Because research work involves more freedom and more unstructured effort, and there's a direct correlation between output and success (not effort and success, of course), the anxiety is not external ("my boss needs this done", or "I can't let my team down") but extremely internal ("I am an inferior researcher if I'm not working all the time" or "someone else is getting ahead in their career while I'm slacking off").
And this is incessant. Every minute spent not working is tied up in internal accusations. And it's exhausting. And that's what we'd like to be free of.
Do I have an answer? Not really. It's a slow process of realizing that
* feeling guilty about work is a meta-worry that doesn't lead anywhere constructive (this realization only works in flashes :))
* all the other people racing ahead will also need to rest at some point.
* a guilt-free mind is clear and prepared for research (whether it's leisure time or not: as ff524 says, this is not necessarily about partitioning work and free time so much as not feeling guilty when not thinking about work). Indeed, one of the pleasures of being a researcher is that I can think about my work whenever I like, even when daydreaming on a bus to work (ahem).
In that respect, username_6's answer about
>
> forgiving yourself and working
>
>
>
is spot on. Guilt is rarely a constructive force, and it can lead you to make bad decisions to compensate. Blowing off that paper deadline ? it's ok. Dropping a fascinating research project because you're overcommitted ? that's ok too. Not spending enough time with students ? Hard to wave off, but it's ok.
*But forgiving yourself only works if you **trust** yourself,*
and again the Tom Waits analogy is brilliant. You have to **trust** that blowing off one paper deadline won't make you a lazy git who doesn't write any papers. That missing one student meeting doesn't make you an abusive advisor. That ignoring a collaboration doesn't make you a toxic personality. That if you can learn to trust in your own research instincts and drive that you'll be able to pick up and go full steam ahead, but this time with less guilt than before.
This is not a time-management answer, and you didn't want one! So all that I can say is that reducing guilt is a slow process (I haven't figured it out yet), and you have to keep reminding yourself to forgive and trust.
Upvotes: 5 <issue_comment>username_8: *Disclaimer*: I don't have a psychology background, I have only read the first two answers and skimmed the rest and the comments, and I have no affiliation with any product.
I think, psychologically, the only way to stop your guilt is to actually see that you have worked *productively*. If you are satisfied with your work, usually when you have a remarkable result, then you can take a rest for weeks feeling guilt-free. Of course, this is not always the case, so you have to find other remarkable points that you can rely on every day. Ask yourself: when was the last time you felt guilt-free about your unfinished work?
You mention the workplace, so how do employees not feel guilty about their unfinished work, even when they don't ask their boss? They simply stop working at 5. Can you make sure that your work always starts at 9 and stops at 5 every day? If you can stick to a plan, you won't feel guilty anymore.
But leisure time is when new ideas come, and having a flexible time plan is a gift. This noon I studied a book and felt tired and sleepy after two hours. Though I had only worked for two hours today, I know that *feeling tired = giving 100% concentration*. I rewarded myself with a snack, a nap, and an hour of distraction on Academia writing this answer. I know that *relaxing = producing*, so I'm happy to be making progress.
Once I had the urge to answer yours, I knew that if I didn't write it to the point of feeling satisfied, no one would (literally!). So, presuming that I will overspend time on unwanted activities today or this week, how can I compensate for that tomorrow or next week? To really do that, you need to track and analyze your working time. I find Manic Time (for Windows) and Smarter Time (for Android) are both good apps for this. The latter uses the wifi signal to track your room-level location and can improve its suggestions by learning your habits (though it is not always accurate).
Last word: you will always find yourself feeling guilty. *That feeling* is normal; don't feel guilty for feeling guilty. The point is to adjust your plan, and let it be.
Upvotes: 1 <issue_comment>username_9: To grow your research trajectory, you need to **prune your least-favorite projects**.
In research, you're never, ever going to be able to finish everything (no matter how many holidays you work). You need to figure out how to cut out some of your projects, so that the best ones have enough time, energy, and resources to grow.
In practice, this means re-framing your unfinished projects as successful prioritization decisions.
* You started a project, then thought of a better idea. You *choose* to leave the first project unfinished so you can dedicate your time to the more exciting idea.
* You started a project, then found that it was unexpectedly hard. You *choose* to leave it unfinished, freeing up time for multiple easier projects.
If you make these decisions consciously and deliberately, you can spend more time on exciting projects - and less time on projects that you're only doing out of guilt.
Upvotes: 1 |
2014/03/10 | 1,216 | 5,392 | <issue_start>username_0: I'm finishing my PhD and looking for postdoc jobs. I've published a decent number of papers (and also have a fair number of citations), and want to emphasize this on my CV.
Should I actually state my h-index on my CV? If so, is Google Scholar the easiest/best way to compute this?
**2014.03.12 EDIT:**
Thanks for the advice. I had originally planned to put my h-index at the head of my list of publications (as a clickable link to my Google Scholar profile). Based on the advice from many of you, I will just omit this altogether for now. My Scholar profile is easy to find (for potential PIs who care about such metrics), and not explicitly stating my h-index myself avoids any negative connotations among those who object to the h-index or to calculating it via Google Scholar (e.g. Google includes self-citations, arXiv/non-peer-reviewed papers, etc.).<issue_comment>username_1: I would say no, for two main reasons. The first is that the h-index will change rapidly with time, particularly for newly graduated PhD students with only a few years of publication history. The second is that the h-index provides only a little information; the only possible values are likely 3, 4 and 5, which can be increased with some luck.
I have read only a few dozen CVs of PhD students, but none of them actually listed their h-index. Probably a better way is to highlight the most important papers, those that you think best represent your research interests and your contribution. It might be better to provide a clickable link to your Google Scholar profile in the application email rather than in the CV.
As for the h-index, Google Scholar indeed provides the easiest way to obtain it. However, I have some doubts about it, as it also counts citations from unrefereed papers, such as those on arXiv, and even worse, from some journal articles that could have been written by anyone. It seems to me there are ways to game the system, in particular for small numbers of citations. But I think it is still somewhat representative.
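For what it's worth, once you have per-paper citation counts from any database, the h-index itself is trivial to compute. A sketch in Python (the citation numbers below are made up):

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    h = 0
    for rank, count in enumerate(sorted(citations, reverse=True), start=1):
        if count >= rank:
            h = rank  # the paper at this rank still has enough citations
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers with at least 4 citations
```

So the choice of database matters only through the citation counts it reports, not through the calculation itself.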
Upvotes: 2 <issue_comment>username_2: A range of statistics can be useful in providing a quick snapshot of your research productivity. Common statistics include: number of refereed journal publications; number of publications that meet criteria such as you being first author, or the journal being above a certain impact factor or on a discipline-specific list of quality journals; h-index; total amount of grant funding; etc. These would supplement a complete list of your publications.
With regard to the h-index, Google Scholar will give you the largest value, and it's arguably the implied database when an h-index is provided without qualification.
I know that some people on this site object to the bureaucratic reductionism that can result from using measures like h-indices, impact factors and the like. Nonetheless, the fact that they are imperfect does not prevent them from being useful. Such measures should not replace actually reading your work to assess its quality, but they can be useful in getting a rough handle on the impact and status of your research.
In terms of anecdotes, I have read several promotion applications that have successfully incorporated a range of such summary statistics to make the case for promotion. The h-index, total grant funding, total publications, and average teaching scores are all evidence, albeit imperfect, that decision makers (who are often outside your area) will use to decide how to allocate resources like jobs, promotions, etc.
Upvotes: 4 <issue_comment>username_3: It is better to put a list of your academic results and publications into your CV. Good index values are fine and could be included, but put your results first. An index value says something about the popularity of your publications, but nothing about their subject matter, which is what matters most in a CV.
Upvotes: 2 <issue_comment>username_4: I am not anywhere near the point where I'll be reviewing CVs, but people I know who do so tell me that whenever a CV lists things like h-index, number of citations for each paper, journal impact factors, and so on, the general feeling is that the person is either a) a show-off or b) trying to hide actual content or merit behind impressive metrics. That's also the impression I get. Remember that the people who are going to read your CV know the field: what is relevant, what the good journals are, what it means to have those citations, and whether or not you are trying to boost your achievements. Also, as others have mentioned, metrics are ever-changing, database-dependent, and very easy to find by whatever means the reviewer deems appropriate. So I think your decision is the wisest.
Upvotes: 2 <issue_comment>username_5: Since citation counts and h-index change over time (even for a fixed set of publications), it is probably best to omit this information from your CV and instead report it in the "response to selection criteria" for relevant positions you apply for. Academic jobs usually require some evidence of a research track record (or research potential at lower levels), so you can respond by reporting information on present citations and h-index. You should also bear in mind that it is usually simple for the selection panel to look you up in a citation database and get an updated report of these metrics in real time. For all these reasons, the CV is not a great place for this information.
Upvotes: -1 |
2014/03/10 | 1,351 | 6,002 | <issue_start>I am in my senior year of studying electrical engineering at an African university. I intend to apply to grad schools (PhD) a year after graduation, possibly to the top EE programs in the US (MIT, Berkeley, Stanford...). Although ground-breaking/stellar research is virtually impossible to come by for undergraduates in my country, I have managed to gather some. I have an accepted poster presentation (based on independent research) at the major national conference in my field, and my present senior-year thesis/project may yield three to five papers at IEEE conferences around Africa. I should also be able to get two publications (also independent research) in a continental journal which is popular only among African engineers in my field and not much outside.
My degree is a five-year bachelor's programme interspersed with three internships/co-op experiences. One was a summer spent in a local R&D electronics lab/company in which we completed several design projects, although all were adaptations/imitations of existing projects to solve problems facing developing societies like ours. Another was a long internship (~8 months) in the EE division of a world-renowned company with a huge presence in my country, during which I worked on a major design project which required a lot of technical knowledge and was evaluated. I have also done some remote/virtual research for a foreign, not well-known, research institute in my field and have completed a technical report (no peer review) of my research (not particularly breathtaking), which they publish in their report series.
My GPA, though not near-perfect, is in the 'first-class' band (we use the British system), so I am a high-ranking student at my university. We do not use the four-point system, and as much as I would not like to compare apples with oranges, if I convert my CGPA to the 4.0 scale it is just over 3.6, although our scale is much wider and high-end grades are harder to attain than in the US system. I am also a recipient of several national awards/recognitions for academic merit.
Given the aforesaid, and also assuming that I ace the GRE (math especially, since I'm in engineering), my question is this:
How do you think I can improve my chances of being a good fit for top U.S. graduate schools, noting the difficulty/impossibility of doing any ground-breaking research as an undergraduate here?<issue_comment>username_1: I'm going to assume that you are "qualified," that is, you have good grades from your university program, and decent marks on standardized tests such as the GRE.
You ought to understand that in your situation, your "research" potential is probably viewed differently from, say, an American's. You probably won't be evaluated so much for your abilities in "pure" research (adding to the existing body of (Western) knowledge) as for your ability in "applied" research; that is, taking the knowledge you will be taught in the United States and sending it back to your home country.
I'm going to assume that the U.S. admissions committee at a top university would likely see you as an "average" student, rather than a "top" student (relative to other top university students). In this case, their question will be: would we rather train another "average" student for, e.g., Silicon Valley, or a similar student for an African country? The answer is likely to be in favor of the latter, because even top U.S. universities have limited points of entry into most African countries.
The hottest ticket in American universities right now is an applicant from Kuwait, Qatar, or one of the other Gulf countries, with Africa not far behind. In this regard, your co-ops and internships, together with the national conference presentation, will be very helpful, because they suggest that you will rise high in your government's scientific hierarchy. American universities will be evaluating you as a potential Transportation Minister or head of the National Scientific Institute.
In this regard, your competition is not with "western" students, but with similarly connected and educated African students (perhaps others from your homeland). You seem to have the advantage in this group. Good luck.
Upvotes: 3 <issue_comment>username_2: OK, good question, but let's eliminate the phrase "noting the difficulty/impossibility in doing any ground-breaking research especially as an undergraduate here." This actually applies to almost the entire undergraduate population. Very few undergraduates work on ground-breaking research projects -- and, if they do, they are unlikely to have been the intellectual driver of the project. They might have done some lower-level work under the supervision of the principal investigator (PI).
So, your question is really about how to improve your chances of acceptance in general. That depends on the discipline, and it depends on the program to which you are applying. Every program has a unique admissions committee that will review applications and assign different weights to the factors they consider important in making admissions decisions. Thus, certain programs may place a lot more emphasis on the GRE than others. Some programs might consider letters of recommendation to be very important, while other programs may consider them to be of little importance (since most letters of recommendation are cherry-picked from the possible letter writers).
That said, your task is to tailor every application to the program receiving it. A general snowball approach where you submit the same application everywhere is an alternative strategy, but one that I advise against. Learn what those programs value most in their admissions decisions, and tailor the application accordingly. Of course, it is hard to figure out what they value, so you have to do a lot of research -- e.g., is there any published data on the current and previous cohorts?
Good luck!
Upvotes: 3 |
2014/03/10 | 442 | 2,066 | <issue_start>username_0: I have been asked to develop an extended version of an accepted conference paper. This extended version will be submitted to a journal for review.
Please tell me how to develop it. Is there any need to provide complete or partial results in this extended version?
Or is it insufficient to merely improve the idea further instead of presenting results?<issue_comment>username_1: The key is to have some added value in the journal paper with respect to your already accepted conference paper. Typically, extended versions take the results in the initial paper a few steps further. Ideally, they are more thorough with regard to theory, but they may alternatively include a new set of experiments to corroborate the original work shown at the conference.
Partial results are less commonly introduced in such papers (in my field). Conference papers tend to be brief or discuss early results of novel work, so often some relevant material is omitted in them, which can then be placed in extended versions. This depends heavily on your field of study, so YMMV.
Upvotes: 3 <issue_comment>username_2: This varies from journal to journal. For example, in some subfields of CS (where this comes up all the time), the informal rule is that the journal version should have at least 30% new content (including new experiments, new algorithms and so on), and this new content should be identified clearly in a cover letter. In other subfields (like in theory) the expectation is that all proofs will be presented in full detail (no sketches), but new results are not necessary.
So it depends greatly on the subfield and the journal. As always, when in doubt, **ask** the editor of the journal, or colleagues who've submitted there before.
Upvotes: 4 <issue_comment>username_3: I think one needs at least to rewrite the Title, Abstract, Introduction, and Related Work. The key is rewriting in new words, not merely adding new sentences. Think of it as revising your conference paper into a new, improved, and more detailed version.
Upvotes: 1 |
2014/03/10 | 1,318 | 5,453 | <issue_start>username_0: I'm in mathematics, just in case that matters.
I submitted a manuscript to a journal, and got an extensive referee report from referee X. After sending the revision, the paper got rejected, so I sent it to a second journal where it got accepted.
Later, I got a note from the editor of the first journal saying that referee X found a way to improve my results, and the editor gave me a pdf file from referee X outlining his/her ideas. Unfortunately, since my manuscript had already been accepted for publication I could not change it at this point.
Nevertheless, the improvement that referee X suggested is significant enough to merit another paper. I asked the editor to pass on an invitation to referee X to work on a joint paper with me, but the editor refused, saying that he didn't want to violate referee anonymity.
I think the paper needs to be written, but I feel it would be strange for me to write a single-author paper when the most significant idea does not originate with me. (Referee X only gave me a vague sketch of the idea; there are things that still have to be worked out. I still have to do a lot of work, but the most important insight would be referee X's.) I suppose I'm just going to write a few paragraphs in the introduction explaining the situation. I was wondering if there would be another way to handle the issue.<issue_comment>username_1: You can state something like "I am indebted to an anonymous reviewer of an earlier paper (give ref) for providing insightful comments and directions for additional work, which has resulted in this paper. Without the anonymous reviewer's supportive work this paper would not have been possible." The exact wording is of course up to you and whatever you feel best fits reality.
I think it is a pity the editor does not want to forward your invite (I assume the review system is not double blind?). Asking is not a breach. I can, however, see that an editor does not necessarily want to become a messenger.
With a clear statement in the acknowledgement you have done what you can and I am sure the reviewer will pick up on it sooner or later and maybe after your new paper get in touch. After all, there is really not much you can do about it.
Upvotes: 6 [selected_answer]<issue_comment>username_2: From my understanding of the details of the situation, the editor is not acting well in refusing to pass along your *invitation* to the referee. Doing so does not violate anonymity in any way (I am confident that the review process was not "double blind" -- i.e., the referee knows the author's identity -- in my experience, no mathematics papers are reviewed in this way.) Maybe what the editor is thinking is that in order to accept your offer the referee would have to violate anonymity.
However, is this an ethical issue? I have always held it to be the case that a referee can disclose her identity to an author at any time, and I have done this more than once as a referee. I can vaguely see some ethical problems which *might* arise if this process of referee-self-disclosure were very widespread, but it seems like a bit of a stretch. I would be very interested if someone can explain to me why this is a real concern.
Against the highly nebulous previous paragraph one must balance the ethical issue that **academic ideas are not gifts that one person can freely bestow upon another**. I wrote the previous sentence in full awareness of the fact that mathematics in practice does have some degree of *noblesse oblige*: one often encounters very eminent and senior mathematicians giving ideas away to younger / less experienced / less eminent mathematicians without wanting anything in return: in mathematics we are inculcated to have a view that certain contributions are "below our level" and thus not worth taking credit for. That is fine if "not taking credit" means not becoming an author on a paper. But if it means not disclosing your contribution at all -- with the consequence that the begifted junior mathematician gets "too much credit" for work that had a significant component that was not his own -- well, that is hardly a victimless crime in our current highly competitive job-market. In fact it seems to be a form of plagiarism.
[The situation brings to mind [<NAME>'s short story "Zilkowski's Theorem"](http://www.all-story.com/issues.cgi?action=show_story&story_id=118&part=all). This was anthologized in the Best American Short Stories of 2002. Remarkably, this was only one of two short stories in that anthology in which the main character was a practitioner of the mathematical sciences. The other is [<NAME>'s "Nachmann from Los Angeles"](http://www.newyorker.com/archive/2001/11/12/011112fi_fiction). Both were excellent!]
Perhaps you should write back to the editor to express these ethical concerns. Getting the editor-in-chief of the journal involved (if this is not already the editor you are dealing with) is also a good idea at this point.
If you really don't know the identity of the author, then you need to indicate clearly the circumstances in whatever paper you write. You may also want to make it known in your circles that you would very much like to know the identity of the mathematician who helped you write your next paper. Depending upon how small / tightly knit your particular subcommunity is, you may have more or less luck with that, but it's certainly worth a try.
Upvotes: 5 |
2014/03/11 | 1,034 | 4,305 | <issue_start>username_0: I'm very perplexed as to how the terms **College** and **University** are archaically used to denote associations of people, but are in modern times being used to denote buildings or realty.
What are the standard organizational structures, in a legal sense, for different Colleges in a US-based Research University?
For example, if the **corporation** of the University Trustees makes the policies providing for faculty appointments in a College of Arts and also in a College of Education, will these different Colleges typically be like **departments** in the same corporation, or do they have separate **Legal Personalities**?
Are the Colleges associations or corporations, or merely administrative units defined by the internal policy of the Trustees to distinguish different collections of offices?
Do the Colleges have members, and if so, who are the members? Are all who have **matriculated** members, or just current students? Are the faculty members of the College?<issue_comment>username_1: As I understand it, in a typical private U.S. research university the different colleges or schools that make up the university have no legal independence; they are simply administrative units within the university. (On the other hand, part of the endowment generally consists of restricted gifts, which can only be used in certain ways or by certain departments. This can give the corresponding parts of the university more power or independence in practice than one might otherwise suppose.)
>
> Do the Colleges have members, and if so, who are the members? Are all who have matriculated members, or just current students? Are the faculty members of the College?
>
>
>
This is entirely a matter of university policy. Current students, staff, and faculty would usually be considered members of their corresponding colleges, but this can vary (and some universities just aren't organized this way in the first place). In practice, this generally doesn't mean very much: it may determine some requirements for students in addition to departmental requirements, it could be listed as an affiliation on your publications (although departments are more common), it might give a few privileges such as building or library access, and it tells you how to fill out university forms, but it's otherwise not a big deal.
Upvotes: 4 [selected_answer]<issue_comment>username_2: Oxford University ([**constitution**](http://www.admin.ox.ac.uk/statutes/375-092.shtml)), it seems, was able to get the Crown either to recognize each of the Oxford Colleges as corporations or to raise them into corporations. Additionally, the members of these Colleges (example [**charter**](http://www.spc.ox.ac.uk/uploads/Royal%20Charter.pdf)), including students and faculty, also appear as individuals in the membership of the corporation of Oxford University. Oxford University as a corporate person seems to be a charter party exercising control over the Colleges, but is not itself a member of those Colleges.
When we get to USA state universities, there seem to be some oddities where the **[Uniform Unincorporated Nonprofit Association Act](http://www.uniformlaws.org/Act.aspx?title=Unincorporated%20Nonprofit%20Association%20Act%20%282008%29)** ([Ohio's UUNAA](http://codes.ohio.gov/orc/1745)) might implicitly turn administratively created Colleges into **[William Blackstone Corporations](http://www.lonang.com/exlibris/blackstone/bla-118.htm)**. This is because the UUNAA seems to assert that almost any group of people that has a voting membership and a succession continuity plan, and that isn't forbidden to acquire property, will in principle be able to hold that property in **mortmain** perpetually, outside of probate, just like any regular business corporation. In the USA, these entities would likely be deemed **Quasi-Corporations**, in part because they probably don't have the same constitutional Due Process rights that a generic corporation would have. Any governance rights which the University Trustees have vested in the faculty via a union contract are possibly relevant for determining the legal status of the Colleges.
Since any legal personality for colleges seems unintentional in the University Policies I have read, I am going to assume until further notice that username_1 deserves the Check Mark.
Upvotes: 2 |
2014/03/11 | 2,704 | 11,482 | <issue_start>I, an electrical engineering undergraduate, am currently involved in a research project, wherein I help write all the code, run all the simulations, and plot all the graphs.
The paper is about to be submitted. I am hesitating whether to mention my desire to be listed as a co-author to my advisor.
Admittedly, I have made no contributions to the idea and the theoretical analysis, as those are a bit too complicated for an undergraduate, or at least for me.
However, I have done all the implementations of the ideas and plotted all the resulting graphs, which, I believe, will be put into the paper.
In such a case, do I deserve authorship (maybe 3rd or 4th)? Or is it reasonable to just acknowledge me in the Acknowledgements paragraph at the end of the paper?
If I deserve one, should I bring this up now or wait until I have done all my work?<issue_comment>username_1: >
> I am hesitating whether to mention my desire to be listed as a co-author to my advisor.
>
>
>
Please mention it to your advisor **right away**.
They may say, "You're right, you should be a co-author." Or, "Given your specific contributions, it wouldn't be appropriate to list you as a co-author, but you will be listed in the acknowledgements." Or they may say, "Given your current contributions you can't be a co-author, but if you also do XYZ for the paper before it is finished, then it would be appropriate for you to be 3rd author."
You may agree or disagree with their answer.
We can't really tell you definitely whether or not you should be a co-author. (We don't know enough about the content of the paper, or your contribution, and it wouldn't be appropriate for you to give that level of detail here.)
But, you should **definitely** talk to your advisor about it (preferably right away, and definitely *before* the paper is submitted).
**Edit: The OP made the following comment -**
>
> After digging out some info about the previous undergraduate students, I find all of them are not listed as authors. But they are indeed acknowledged in the paper.
>
>
>
If you think for some reason your advisor will be reluctant to include you as an author, you can do some preparation for your talk with them as follows. Then, if your advisor's first reaction is "I've never given authorship to an undergrad before," you are prepared to politely and non-combatively make a case *to your advisor* for why you think you merit authorship.
* Check if your university or department has any formal policy on authorship. For example, here are [three](http://wustl.edu/policies/authorship.html) [different](http://www.adelaide.edu.au/policies/3503) [policies](http://skirball.med.nyu.edu/resources/facilities/protein-analysis-facility/nyu-protein-analysis-facility-policies-user-fees-auth).
* If you have no university-specific guidelines, you can refer to the [IEEE rules](http://www.ieee.org/publications_standards/publications/rights/Section821.html).
* In either case, identify to yourself things you have done that you think qualify you for authorship according to the guidelines.
Then, if your advisor's first response is "No" and you disagree (after listening to the reasoning), you can present your side (which your advisor may or may not agree with).
Also, if your advisor's assessment is "No, you haven't done enough to warrant authorship," you can ask "What else I can do before paper submission that will 'bump' my contribution up to authorship level?" I have had students in the past whose contribution to a paper didn't warrant authorship. In those cases, I told them: "Your current contributions do not merit authorship (only acknowledgement), but if you also do X, Y, Z, you will be an author."
(Note: I'm *not* advising any kind of escalation at this point - you don't even know yet what your advisor will say! But you may feel more comfortable with some kind of plan in mind for how to explain your contributions (as you understand them) to your advisor *if* advisor says "No" and you don't think advisor fully understands your side.)
In any event, **talk to your advisor** as soon as you can.
Upvotes: 7 [selected_answer]<issue_comment>username_2: You do not have to have formed a solid opinion about whether your contributions merit coauthorship in order to ask the question of your advisor. In fact I think that every young person doing work which is being used in any way in a published academic paper would do well to ask this question. If you like, you can frame it not as your trying to suss out whether you have been unfairly denied coauthorship but that you are looking to learn more about the research process itself and inquiring into what sort of research contributions merit coauthorship. (Even if you are seeking to do such a sussing out, framing it as a teachable moment is probably a better way to get a helpful, unguarded response.)
In my experience, things go smoothly if the issue of coauthorship is raised earlier in the collaborative process rather than later. It is hard for me to think of a situation in which it is "too soon" to ask this question, although if you ask it early enough the answer may not be definitive. It is not good to be working on something and wondering whether one will wind up as a coauthor or not. That's needlessly stressful.
Also in my experience, undergraduates often have unrealistic expectations about what sort of work merits publication and/or coauthorship. It seems, especially on this site, that a lot of undergraduates are hoping that their undergraduate research can be published. In some cases it can, and how feasible this is must be highly field-dependent, but here is an inherent feature of undergraduate research: it is research done by someone who has much less subject-area knowledge and insight than *she will herself have later on*, assuming she continues in the discipline.
(If you don't continue in the discipline then you should ask yourself seriously why you want to be published in it. Getting anything published anywhere sounds neat to any suitably bookish young person -- and it sounds neat to me too, and I am a published author -- but the reality of it is that academic publication takes a lot of time and work over and beyond the work that was done to write the paper. If you are a 20-year old who has written an electrical engineering paper and then decided to go on to some other career, please think seriously about just putting that paper on your webpage and spending the time that it would take to get your paper published learning to play the guitar, or watching <NAME> films, or finding a cute boyfriend/girlfriend, or...almost anything else, really. If you don't continue on in the discipline, then having a published academic paper is worth essentially nothing beyond the neat feeling you get by being published. It does not convey the real life advantages that, say, being able to talk knowledgeably about [Dogme95](http://en.wikipedia.org/wiki/Dogme_95) would.)
Note well: I am not saying that undergraduates cannot do good research. They can: in rare but extant circumstances, undergraduates have done research which is better than what most other people in the field can do. What I am saying is that almost every undergraduate who does research will do significantly better research a little later on, to the extent that I cannot think of a situation in which someone's undergraduate research became a big part of their professional profile.
**Added**: I did not mean to create the impression that I feel that in this particular case the OP's contributions do not merit coauthorship. I can't know without knowing the details of the situation, and I wouldn't be a good judge anyway because I work in a field -- mathematics -- where the standards for what constitutes coauthorship are very different from in EE. Let me reiterate the points that I did make:
(i) The advisor is much better equipped to understand whether the student's work merits coauthorship than the student. So statistically speaking, if a student is unsure about this, asks the advisor, and the advisor says the student has not done enough for coauthorship, the advisor is probably right. Of course this unequal expertise and authority sets things up perfectly for a predatory advisor to exploit his undergraduate workers. My answer is mostly directed towards the higher probability event that the advisor is acting honorably. The main thing I advised the OP to do was to talk to his advisor about the standards for coauthorship. I feel very strongly that this is the correct first step (and should have been taken before, in fact, since the OP has some anxiety about the situation). If what the OP hears in this conversation is very unsatisfactory to him, then we can further discuss the situation. I don't want to assume that will happen.
(ii) The line between getting acknowledged and getting coauthorship is certainly a gray area, even among adult academics. For example, in the last few months alone I was invited to be a coauthor on two different papers. I had to think about each one. In the end, I turned down one request -- which came from two graduate students in my own department, including one of my own advisees -- and accepted the other *after doing more work* so that I felt like my coauthorship was justified. Both of these could have gone either way. When I think about whether to include myself or someone else as a coauthor, first I weight intellectual contributions (in a way which may not generalize so well outside of mathematics, which is something I have recently learned by discussion with other members of the site) and second I think in terms of the professional implications of putting on or leaving off a certain person's name. The paper that I left my name off of was the third paper coming out of a graduate research seminar organized and led by me [I was a coauthor on the first two], and the results obtained were ones that I specifically asked for. Nevertheless the work done on this paper was by the students and not by me\* -- my own direct contribution to this work was positive but indubitably of a smaller order than theirs -- and the advantage to each of these students of having a paper which does not have a faculty coauthor is considerable. For me, having one more publication of a similar sort as the other two is not a big advantage to me. In fact, by not putting my name on the paper I am in fact claiming a sort of academic seniority: I'm showing that I've reached a certain point in my career where I can successfully inspire and direct projects that I am not directly involved with. So my answers come from the perspective of a faculty advisor who thinks carefully about which names to put on or leave off a paper...including his own. This behavior does not make me an unusually virtuous member of my academic community: it seems like business as usual. So I would like to extend the benefit of the doubt to the OP's academic advisor and assume that he is acting honorably until specific information comes up to the contrary.
\*: In fact this third work involved computations that were so substantial that although I am, I suppose, capable of having done them, in practice I would not have been willing to devote the time and enthusiasm that my students did, and it would certainly have taken me longer to write a much kludgier version of the code than what my student came up with fairly quickly.
Upvotes: 4 |
2014/03/11 | 1,074 | 4,408 | <issue_start>I was fortunate enough to get a position as a researcher in the Mayo Clinic's SURF Program this year. My PI's lab focuses on the immune system's role in CNS axonal and neuronal injury, specifically through the lens of how innate and adaptive immune effectors interact with infected neurons.
Although I do research under a professor at my college and I volunteered for a state university lab during the previous summer, this is my first REU/SURF opportunity, and I REALLY want to make a good impression. Here are my questions:
What are the dos and don'ts in terms of being a skilled and efficient researcher?
Since I am still an undergrad, I know that I will be a less useful asset to the lab than a grad student or post-doc, but what can I do as an undergrad to not burden my colleagues and PI?
Thank you all for your help! Wishing you all the very best!<issue_comment>username_1: To me, the quality that makes a student *not* a burden is the following:
>
> A willingness to learn for themselves and good judgement about when to stop and ask for feedback.\*
>
>
>
If a student isn't willing to try things out and learn independently, then it creates a burden on the supervisor. I really don't appreciate when a student asks me how to do something *before* they do a basic Google search.
Similarly, if a student doesn't know how to judge when he/she is "stuck" and needs help, this also creates a burden on the supervisor (because the supervisor has to keep checking on the student to make sure they're progressing).
The dos and don'ts (to avoid being a "burden") that come to mind are:
* **Do** ask your supervisor this question at the very beginning, to find out what he/she expects from you.
* **Do** take detailed notes when you have a meeting with your supervisor or somebody teaches you how to do something, so you can refer to them later
* **Do** keep a written record of your own attempts and progress (such as a lab notebook) to show your supervisor during meetings
* **Do** ask a question if you don't understand an instruction or something that is said, because it will be much better for everyone involved if you clear things up sooner rather than later.
* **Do** mention your own ideas to your supervisor, if you have some that you think will make your research better.
* **Don't** think that just because you are an undergraduate, you can't make much of a contribution. Obviously experience helps, but it's really only a small piece of what makes someone a skilled researcher. Willingness to learn is a much bigger piece, IMO. I've had high-school summer interns who were better than any of the M.S. students in the lab, simply because they put in more effort to *learn*.
\* Source of the quote: *The Unwritten Rules of PhD Research*, by <NAME> & <NAME>
Upvotes: 5 <issue_comment>username_2: The purpose of REU/SURF programs is to educate undergraduates about research, and to encourage them to make good decisions about graduate school. Secondary benefits include advancing science and the careers of the participants. Note that these are not the same goals as those of a PhD program (such as in username_1's answer).
The very natural desire to be a "skilled and efficient researcher" is orthogonal to these goals; in some circumstances it will be counterproductive to be skilled and efficient. Your supervisor sees the whole project while you only see what's been set in front of you.
You should set your goals as:
1. To learn as many details of the lab dynamic as possible.
2. To participate in the lab dynamic in the manner you are expected to.
3. To do the tasks you are assigned in a skilled and efficient manner.
4. To noticeably improve in your abilities and understanding, over the course of the program.
5. (stretch goal) To have a good and creative idea that transcends the tasks you were assigned. Share this idea with your supervisor; do not just implement it. Do this at most once during the summer.
To expand a bit on (2): Listen carefully, including to body language. Your supervisor has a role in mind for you. Different faculty have different expectations with regards to issues such as creativity, asking questions, frequency of meetings, quality control, etc. You need to meet these expectations as well as you can. Exceeding expectations may not be a good thing; that's why I recommend doing it at most once.
Upvotes: 3 |
2014/03/12 | 926 | 4,010 | <issue_start>I am an MA student who frequently works with undergraduates on projects. One student has asked me if I can write her a letter of recommendation for graduate school. Are there any risks to me submitting a letter, as opposed to her finding a faculty member (who likely would not be in the same field as her interests)? For graduate school applications, how much does the position of the person writing a recommendation letter matter, as opposed to his/her academic familiarity with the student? I have already advised her to check with the programs she is applying to in case they have requirements.
Edit: I realize I left this unclear before - she has two tenured professors willing to write letters. If I agreed, mine would be the third letter. The department is small, so there might not be too many options for an additional letter.<issue_comment>username_1: >
> [H]ow much does the position of the person writing a recommendation letter matter, as opposed to his/her academic familiarity with the student?
>
>
>
It matters very highly. For most graduate programs it would be better to have a letter from a very eminent and trusted person which simply says "Student X's performance in my class convinces me that she will be successful in a top master's/PhD program. I highly recommend that you admit her." than a more personally insightful letter from a less well known faculty member, let alone someone who has not even completed the degree that the student is applying for. If you have not yourself completed a master's degree, how can you certify that the student will be able to do so successfully? (Well, of course it may well be that you probably can, but what degree of trust can the reader put into your letter? Not very much.)
In general, I would recommend that even postdocs and temporary faculty should defer to more senior faculty, if possible, when writing letters, and in any case the student should make sure to get at least one letter from a senior person. If someone who has a PhD (let's say) but is otherwise very junior can say something about the student that other faculty cannot, it could be a good idea to send along a letter, but it would be better to have that be an *additional* letter beyond the number required. However, for someone like you who has not even completed the degree the student is applying for, I would simply say that you should not write a letter for the student. If you want to help, I would recommend that you find a faculty member who is senior enough but doesn't know the student very well and give them the information that you wanted to convey in your own letter. (Don't write the letter for them! Just give them the information.) It helps of course to find a faculty member that you are comfortable with.
By the way, if you are in the habit of mentoring grad-school bound undergraduates, you
would be doing them a favor if you let them know as early as possible that it is in their best interest to make significant contacts with senior faculty as well as with you.
Upvotes: 6 [selected_answer]<issue_comment>username_2: I agree with Pete above - it is important to have PhDs, especially senior faculty members, as referees. If the student needs three letters, then yours will add good specifics to her case. One common problem with letters from senior profs is that they are often too generic. I am a tenured prof and review applications with reference letters all the time; most of them are awfully generic.
That being said, one of my former students, from when I was a TA as a PhD candidate years ago, has asked me for several reference letters over the years: two for teaching jobs, and one to get into grad school. It might have helped that my letter was stamped as faculty - albeit very junior faculty at a small university - at that stage, but all my comments were about my observations as a TA. The student (who was very good) keeps getting into whatever she applies for.
hope this helps.
S.
Upvotes: 3 |
2014/03/12 | 425 | 1,731 | <issue_start>A PhD student has been working with Prof A. However, after a year, the student decided that he doesn't like the field and wants to change his advisor to Prof B, who works in the same department as Prof A.
Would there be any potential adverse effects on relationships if the student changes PhD advisor from Prof A to Prof B within the same department, especially when the student has strong previous connections with Prof B?
Can Prof A advise Prof B not to take the student in such a case? Has anybody experienced cases like these, and what were the results?
Assume Prof A is tenured and Prof B is not, and that Prof A was against co-advising from the beginning.<issue_comment>username_1: It depends on the personalities of the professors involved and on how the student handles the switch. If they really are working in two distinct fields, and if that is the student's real motivation for switching, most likely there will be no issue.
Upvotes: 2 <issue_comment>username_2: It really depends on the reason behind the switch.
1. **Changing fields**: If changing fields is the only reason for the switch, then there should be no particular adverse effect.
2. **Discomfort with old supervisor**: This would be a different story. If Professors A and B are close, then you might see opinions carrying over directly from one to the other. It is up to you to gauge this when making the switch. This post should help you in this case: [How to find a new PhD advisor if relationship with current advisor is not working out?](https://academia.stackexchange.com/questions/71925/how-to-find-a-new-phd-advisor-if-relationship-with-current-advisor-is-not-workin/75130#75130)
Upvotes: 0 |
2014/03/12 | 2,021 | 8,414 | <issue_start>username_0: It's interview season. I've noticed that many of the applicants in this round are already assistant professors at other institutions.
**Is there any known number or record on how frequently (let's say, across North America) assistant professors who don't have tenure yet switch institutions?**
The market is already incredibly competitive. The academic institution has spent thousands of dollars on hiring a candidate and wants to retain them. If an assistant professor goes to another school and gets hired, then what you might end up with is a cycle of highly ranked candidates swirling around and leaving empty positions in their wake. Alternatively, one might presume that the hiring system must be inefficient if, in a competitive environment, a university is unable to retain its hires.
This [article (and associated comments)](http://scientopia.org/blogs/science-professor/2011/01/19/faculty-movers/), for example, illustrates how many faculty have changed institutions for various reasons; I also personally know at least two assistant professors who left their original institution to move to a new one within their first few years of being hired.
Depending on the answer to this question, I would have a number of follow-ups (Do schools hate it if you apply elsewhere, and would they fire you? How does a department react when a recent hire leaves? Why do people switch? Are applicants often successful? Do people switch more than once?) but I'll start with this first.<issue_comment>username_1: My several decades of observation suggest that this does not happen often at all, in all sorts of economic climates. Only the rare person whose work is "hot" and who *wants* to play job-change games will do this.
Yes, applying for other jobs does tend to alienate many (not all) faculty at one's current institution. But if one can "get away with it", then it may be profitable, even while people resent it.
I have been acquainted with situations in which some prejudicial nonsense was anticipated at tenure-decision time, so people took a year's leave and found other (potentially permanent, but nominally temporary) jobs as backup. This is only sensible. Yes, people at the "home" institution used this as leverage to disparage the candidate... but one imagines that these people wanted to disparage the candidate *anyway*, so the fact that a person acted reasonably and in self-defense was irrelevant.
Indeed, I think the most common, and most sensible, case of such job applications is exactly when there's some anticipation that the tenure case will not go well. In some very rare cases a person has out-of-the-blue done something surprisingly good, and can launch themselves into a higher stratum of academe. :)
In most cases, such job apps would be a pointlessly antagonistic thing to do...
Upvotes: 3 <issue_comment>username_2: It does happen. But it's not that common. I've been on faculty search committees for a number of years now, and out of the hundreds of applications I've seen, it's very rare to see pre-tenure faculty from other institutions apply. If and when they do, it's because
* they're worried about their tenure case
* there's a two-body problem (this is in fact the most "reasonable" reason to move)
* they have other reasons for wanting to move (location, opportunity)
But it's not a frequent thing. Because it's hard to do: clocks have to be negotiated, research programs have to be uprooted, students have to be moved (and let me not even start on the logistics of moving a family that might just be getting settled in the first place)
Upvotes: 4 <issue_comment>username_3: Well, first of all, I'm living proof that it does happen (twice!) from time to time. My sense (which is purely personal, and thus can only speak to the situation in mathematics) is that applying for other jobs before tenure is very common, and even people who aren't especially serious about moving will do it "just to test the waters" or "because it's the thing to do" especially right as they are coming up to tenure. Having been in a position to read files for job searches over the past couple of years, a lot come from people who already have tenure track positions. It's considerably less common for offers to actually get made (in part because of a vague sense that people may not be especially serious about moving or that they may just want it as a bargaining chip), and less common still for people to move, though I'm far from the last person I know who's done so, and I know of several other instances of people having outside job offers that they ended up turning down.
I think username_2 has the reasons down; for me, the issue has been the two-body problem. I think generally people are quite understanding about the issue, and I haven't (directly at least) encountered hostility from my former colleagues about moving (and indeed I've cowritten papers with 3 people at institutions I've left since leaving), though I have gotten occasional jokes.
My experience is that the general reaction of schools is to negotiate to convince you to stay (or come back after you've left); I've never heard of anyone being fired because of applications elsewhere.
Upvotes: 4 <issue_comment>username_4: This is becoming more common. Younger workers are savvy about good opportunities and moving is not that big a deal to us. I think the baby boomers look at relocating as a far more disruptive thing than we do. I changed jobs and moved this last year and it was no big deal. I'm single and don't own a home so I can take advantage of good opportunities. Plus, if you want an actual pay increase, you have to move. That's the reality of it.
Upvotes: 2 <issue_comment>username_5: At the R1 where I got my social science PhD, I'd estimate that 75% of senior faculty in my department switched jobs at least once before gaining tenure. Usually these moves were from one R1 to another, although typically from institutions considered less prestigious than this one. A few moved from R2/R3 institutions or SLACs to this R1. That being said, maybe my field and department are different from the norm:
1. This field has a "pretty good" job market; there are more new jobs than PhDs each year (caveat: we do hire from outside the field and some professional practitioners). Still, a healthy job market probably facilitates mobility.
2. My department is by some metrics the most research productive in the field, so the dynamics that lead one to land there may be exceptional.
Also, my field does not generally do post-docs so a large proportion of those who get a T-T position had a T-T position of some sort as their first job post-PhD. Maybe fields with an expectation of post-doctoral work have more efficient markets that place you into a better fit once you go on the T-T market. We also aren't really a lab-based field so there are fewer "anchors" so to speak that would hold us in one place since we don't have big infrastructure needs.
The advice I received before going on the job market was that your first T-T position was unlikely to be your last. Again, this coming from people whose experience was advising graduates of this department and institution. The logic is that it's not that likely that the market the year you were on the market had the right spot for you and even if it did, you needed some good fortune to be the one chosen.
This answer is of course referring to the pre-pandemic world.
Upvotes: 2 <issue_comment>username_6: As a social science PhD I was on the tenure track 4 times. I moved from a community college in the northeast to a comprehensive school in the south, to an R1 in the west, and then to a private liberal arts college in the northeast again, which paid more; but this was just before the meltdown of 2008/2009, and I don't think I'd be able to pull it off again in today's much more competitive job market. Having served on search committees for about a decade since getting tenure, I can say we do get some tenured applicants, but they are rarely interviewed. Applications from tenure-track faculty are more common, and those applicants often get shortlisted, as do postdocs and ABDs; but the former have to come up with a pretty good explanation of why they want to leave where they are and why they want to join our institution. The onus is on them to prove there are as many pull factors as push factors…
Upvotes: 2 |
2014/03/12 | 4,621 | 18,177 | <issue_start>username_0: This isn't the best moment to ask the question, since I should be writing an overdue document and presentation.
However, is there any place where a person with fear of writing can work? I am studying for a PhD in Europe in a STEM field. I was always quite good at school and university. I was on schedule with my studies, but the preparation of my master's thesis was quite painful, and I have the impression that I should have completed it in 60% of the time it took me. After a short experience in industry in a field close to my own (but for which I didn't have all the skills), I took the opportunity of a PhD in my field. Of course, before that I had been interviewing for jobs in my field, but never got anything. Being extremely introverted doesn't help. I already feel embarrassed writing my CV and adding details such as hobbies or strengths (actually I ended up skipping those parts).
It has now been more than a year that I'm in this PhD, and every writing task leaves me desperate. The initial proposal, internal reports, and presentations torment me for weeks. This emotional distress often impairs both my writing and my research work. I have ended up crying at home at night in front of a blank page (once even in the morning at work, but I managed to hide in the bathroom in time).
I'm lucky that my advisor doesn't put too much pressure on me, but I feel so bad not being able to hand in what is required of me, sometimes even after a deadline. The problem is that even if a deadline is reasonable, sometimes I cannot make it because it takes me 2 days to write 1 paragraph (no kidding). Guess how many papers I have submitted?
Still, I enjoy my time here much more than my time in industry. I enjoy academic reading, I have a thirst for knowledge, and so on, but I see that communication is an essential part of being a PhD student. So far I am willing to continue with the PhD program, but I am starting to doubt that I will ever be confident in writing, and I fear this will cripple my academic future.
Any suggestions on how to change career paths? A suggestion on how to overcome fear of writing would also be good, but I am afraid I've already read all the good advice here on SE and in many blogs about procrastination, perfectionism, impostor syndrome...
In case it is of interest, writing this question took me more or less 40 minutes.<issue_comment>username_1: It is your PhD. You have to write: papers, reports, thesis. It cannot be done otherwise. On the other hand, you must know that all of us have collapsed (or are constantly collapsing) under pressure, one way or another. Some have stayed in bed not wanting to get up. Others froze and could not perform. The "stronger" ones sucked it all in (externally) and developed serious health issues. So, you are not alone.
You should fight it. Get a girlfriend / boyfriend. Talk to your family and friends. Exercise, do sports. As wiser people have said "*Whatever it takes to keep your hands free*". But even then, you might not be able to fight it alone and might need to seek some professional help. It is not a shame. It is actually a brave action to admit you are not always strong in a world full of hypocrites and wannabes.
In terms of practical advice: select some successful authors in your research area whose writing style you really like. In the beginning, partially emulate their style. See how they describe the challenges and the related work. Rephrase the related paragraphs in your own unique style. Then add in what you are trying to do. That will be enough to make it through the introduction and the related work. The beginning is always the hardest part. Once those 1-3 pages are out of the way and you start to write "your" contribution and "your" experiments, you will see that things get easier. So, in the beginning, emulate and adapt (but do not copy) from the authors you like. After 1-2 written papers, you will see that you do not need to do that anymore and that you have developed your own style. Most of all, enjoy the process. Everyone has gone through your phase, and you CAN and WILL get through it.
Upvotes: 4 <issue_comment>username_2: The only thing you should do is to seek professional advice. Writer's block (in all its forms) is not uncommon, and it is clearly possible to remedy. I cannot and will not get into possible reasons for why this occurs, but clearly in a PhD, particularly early on, it may seem like a daunting affair. Avoiding one's problems is not the solution. If you see yourself in academic jobs (in or outside academia), writing will be one of your main tools. Trying to find a job where this is not necessary will be difficult unless you see yourself switching direction completely. So the advice is: seek help now. You will not be the first to do so, nor the first who has been able to continue.
Upvotes: 4 <issue_comment>username_3: I resonate with your experience because I was scared of writing (English is not my mother tongue) and I am also an introvert. I have been reading a lot about writing and networking, and here are some methods I have tried and found to work, which I hope will also help you. In the case of real fear, I agree that talking to a professional is also advisable.
**Don't "write," instead, draft then edit**
My biggest "A-ha!" moment was learning that most writers don't really write a perfect sentence at the first go; instead, they *draft* and *edit*. My mind eased up so much once I learned this distinction and decided to split my writing tasks into drafting and editing. In drafting, I just freely interpret the data and plug in some discussion here and there, occasionally with some anecdotes, etc. I never edit while drafting. Then, after a couple of days, I return to edit the piece. On a good day, I get to keep about 40-60% of it; on a bad day, I may have to slash 80-90%, but I also often have a good laugh at what I wrote.
This finding really liberated me. I'd suggest you give this a try as well.
**Write to think instead of think to write**
Instead of writing to tell people what you think, I found it easier to write as if I were having a discussion with myself. I wrote down the research question, then gave some answers, and then went back to question the answers further. I was often surprised how many times I read the scribbles and figured out, "Oh, this is what it is!"
I would advise against crafting a perfect sentence in your head and only then putting it on paper. This is futile because the mind works much faster, and less linearly, than the typing/writing process. By the time a "perfect" sentence is written, a lot of useful thoughts may have been suppressed or forgotten.
**Dedicate time to write**
Silvia in [How to Write a Lot](http://rads.stackoverflow.com/amzn/click/1591477433) introduces a method that involves making a time and a space to write. I adopted the method this way: I block out time a few weeks ahead as writing/analysis time, then I guard that time. I cleared the wall and the desk I face when I work on the computer so that I can only see the computer (and other books/articles I use) when I write. There are no pictures, stationery, picture frames, etc. in the 120 degrees in front of my eyes. I also close down e-mail, silence my phone, and close the door.
Boice in [Professors as Writers: A Self-Help Guide to Productive Writing](http://rads.stackoverflow.com/amzn/click/091350713X) even goes so far as to suggest coupling an important daily ritual (such as a shower) with the writing hour. No writing? No shower. I found this kind of stressful, but it may work for people with a different personality.
**Document time as well as words**
Productivity aside, it's more fulfilling to document my writing effort in both time and output. High output is, of course, great. But I have also come to realize that reminding myself that I have been dedicating a fixed time regularly to writing, however slow and low the output, makes me feel more confident.
**Free writing**
Goodson in [Becoming an Academic Writer](http://rads.stackoverflow.com/amzn/click/1452203865) explains some pretty nice workshops to foster a healthy writing habit. The starting one is to "free write." Free writing is not new, and some people use it to warm up or tune into their internal writer channel. It's really simple: just sit down, open a blank document, *minimize the visible area so that you cannot see what you type,* set a time limit, and pour out whatever thoughts go through your mind. Anything, just type.
Usually, after the time limit (I started with 5 minutes, now 10), you'll have a pretty clutter-free mind and a clearer idea of what that day's writing is going to be.
**Use proxy of writing**
There are many, many ways one can put down an idea without initially writing it out. Drawing a [mind map](http://rads.stackoverflow.com/amzn/click/0452273226), popularized by Buzan, can help form a bird's-eye view of an article. Paper and colored pens are good enough; online tools such as [VUE](http://vue.tufts.edu/) are abundant.
Doodling, such as the [Sketchnote](http://rads.stackoverflow.com/amzn/click/0321857895) approach recommended by Rohde, can also be a fun way to "play" with words and ideas.
For scientific writing, [a paper](http://www.ncbi.nlm.nih.gov/pubmed/21567769) by <NAME> Holmquist in 2009 suggests a pretty interesting method: basing your writing on a key output, such as a graph of the results or a table of an analysis, and then building from there. For most experienced researchers this may be nearly common sense, but for those who think a paper should be written linearly from Abstract to Conclusion, this paper would help re-orient them. My former supervisor, knowing my hesitation about writing, actually used this very method when advising me.
Using a recorder to document your thoughts, or using [speech recognition software](http://en.wikipedia.org/wiki/List_of_speech_recognition_software) to dictate your spoken words into text, may also help you break away from a deep-rooted fear of writing.
**Harness creativity**
When my writing output falls, I feel tired. After some trial and error, I figured out that I feel tired not because I lack energy, but because too many creative thoughts didn't get expressed. So, I play music, pick a challenging recipe and make a potentially horrible dish (and eat it; butter + pepper + red wine always save the day), play LEGO, knit and crochet, come here to answer questions... etc. I actually picked up a lot of habits through my doctoral study.
**Introvert vs. extrovert**
Being an introvert is actually not bad! In most modern cultures extroverts get a lot more praise and attention, but if you cast a critical look at it, this world needs both types to run properly. Two books I read last year were quite inspiring: Zack's [Networking for People Who Hate Networking](http://rads.stackoverflow.com/amzn/click/1605095222) provides advice on how to focus your energy and properly carry out quality networking; Laney's [The Introvert Advantage](http://rads.stackoverflow.com/amzn/click/0761123695) provides a more holistic look at how extroverts and introverts work, and at how to cope and thrive as an introvert.
**Closing remark**
Best of luck and, really, enjoy the ride even if it's scary. After the PhD thesis, there will be more collaboration and you will not feel as lonely. When I am down and want some healing reading, I often flip through Lamott's [Bird by Bird](http://rads.stackoverflow.com/amzn/click/0385480016) and Zinsser's [On Writing Well](http://rads.stackoverflow.com/amzn/click/0060891548). They are also a good choice before reading the more serious and pragmatic how-to guides on writing.
Whenever I feel cognitively/academically inferior/inappropriate, I think about a remark of [Florence Foster Jenkins](http://en.wikipedia.org/wiki/Florence_Foster_Jenkins), who slaughtered Mozart's Queen of the Night aria (try listening to [Edda Moser's version](http://www.youtube.com/watch?v=ZNEOl4bcfkc) first, and then [Jenkins's version](http://www.youtube.com/watch?v=6h4f77T-LoM)): "People may say I can't sing, but no one can ever say I didn't sing." Before worrying about doing it well, do it first, and you will improve.
The fact is, I have never met one person in my life who genuinely embraces writing as if they "love" to write. Most successful writers, when interviewed, often just say things like "What tips? I just write," or "I draft, and then I write, rewrite, and then I rewrite again." Betty's Drawing on the Right Side of the Brain has made a lot of people realize they can draw; I wish there were a book called Writing on the Right Side of the Brain that would make people realize that we all can write, but we probably have to wait until the neuroscientists get over their fear of writing.
Upvotes: 7 [selected_answer]<issue_comment>username_4: There are opportunities in research that don't require writing, usually in engineering: building things that other people will then write about. I don't know about your area; STEM is for stem cells?
First point: you may not need to write.
However, if you happen to need to write, or you would like to overcome these issues (which I would encourage, even if it may be a bad idea), then I don't know how to help you; but since I started writing, I now need to put an end to this. There are a few things you could consider (I'm not an expert in this topic; this is just off the top of my head):
1. As you see, you don't need to worry [that] much about writing in a place like this one. You are anonymous, and users here can get to be as moronic as I am (I think I'm the supremum of that ordered set), so no need to worry about sorting the ideas [that] much.
2. try to get into the habit of writing: forums, blogs, whatever. Don't struggle to write properly (people clearly don't); simply try to be natural, build the habit, have fun. Then, when you are confident (and fast), try to get better at it; you will improve faster (with your newly gained speed).
3. think in words. I always thought everybody did this; I certainly do, but I guess some people may not (maybe deaf people think in different ways). If you are commuting, waiting in bed to fall asleep, or in any other spare time with nothing better to do, think about anything you may be interested in, but try to consciously structure your thoughts into sentences and a discourse (no big effort; try to do this naturally). Writing is simply typing this down. Typing may be hard for handicapped people (I don't want to make any assumptions here), but if <NAME> can manage to write stuff, I think you should be able to as well.
4. get a template. Don't write papers; fill in the gaps of a template. You can even use some other paper or set of papers (taking bits and pieces) to create one. All papers of the same type (experimental, formal, theoretical, whatever) have the same structure:

```
In this paper we address the problem of $topic when there is some
$novelty. We extend previous approaches by including $novelty. This
provides several advantages like $advantage1, $advantage2, $advantage3
in the context of $limitations. We have performed an
[empirical/formal/user-based/whatever] evaluation showing promising
results about this possibility.
```

Note: this is not a universal template. Sometimes you will add some novelty that is good; sometimes you will be able to handle something that makes things harder and, despite that, do something cool, etc. There are different tacit templates for different kinds of papers. When you get used to this, you will do it automatically; the template will be engraved in your brain.

Don't worry about plagiarizing if you are starting out: your supervisor will probably make many changes, and plagiarism is (should be!) about what people do, not how they write it. As long as you are doing new things, it should not matter much if someone wrote other things in a similar style. Or simply draw inspiration from several papers; researchers seem to like collages.
5. submit your papers to journals. I'm an introvert and I hate conferences.
PS: listicles are easy! BTW: I wrote an answer that is very badly written, mostly stupid, and fairly useless, but it's still better than nothing, isn't it? Some people say: "Better to remain silent and be thought a fool than to speak and to remove all doubt." But this is not true in academia; it's more like "What's important is that people talk about you, even if they only say *bad* things". It's all about impact!
Upvotes: 3 <issue_comment>username_5: The best way around writer's block I've found is to have several parallel tasks, i.e., do experiments, analyse results, read up on (and summarize, if useful) the literature, re-read and proofread earlier chapters, write up new stuff, and so on. That way, when you get bored or otherwise stuck doing one task, you can switch to another one. One task is blocked, but the others move along nicely ;-)
Another tip is to write down what you do (even as a rough draft, but in your final format) as soon as possible. If you don't, you can easily waste a week reconstructing some earlier derivation. It is also easier to edit a draft into (or nearer to) final form than to write from scratch, and in my experience neither the draft-writing nor the draft-polishing is prone to blocking me (at least much, much less than "writing").
Upvotes: 1 <issue_comment>username_6: Two books by <NAME> helped me a lot to have a feeling about academic writing: Scientific English: A Guide for Scientists and Other Professionals, 3rd Edition and How to Write and Publish a Scientific Paper. But the next important thing, like the other answers suggested, is to practice thinking and writing regularly.
Upvotes: 2 <issue_comment>username_7: My advice: By the time one is working on a dissertation, one has read dozens of articles, books, etc., or more. So just start writing. Hint: You already have an 'emergency' plan in your mind of the minimal acceptable dissertation to write. So, write that out.
If you are in math or the sciences (or any other field where figures and tables are applicable), make the figures and tables, put them in a coherent order, and describe them. This will get much of your goal accomplished.
Upvotes: 0 |
2014/03/13 | 889 | 3,709 <issue_start>username_0: As a 19-year-old I had to withdraw from a university because of depression. However, at the time, one professor refused to let me withdraw and gave me an F. On the basis of that grade I was dismissed from the university.
I got myself together and went to DePaul and graduated Cum Laude. Now I have received an offer for an assistantship (full ride) and a stipend from the university I had to leave as a young student.
I want to teach at the university level and will someday be sending transcripts on to a PhD program from my MA. Will the dismissal show on my transcript? Will it matter? I appreciate your help and input.<issue_comment>username_1: Undergrad and graduate transcripts are normally separate things. If you want to make sure whether the undergrad grade would show up on your graduate transcript, that's a question you should ask the school's admissions and records department.
If the F is going to show up on your apps, you could choose to explain it in your statement of purpose, or you could choose not to explain it. A single F in a single course is probably not going to cause anyone any big concerns if all your later work from other schools looks good. If you choose to explain it, you run the risk that there will be people on the admissions committee who have medieval attitudes about mental illness.
Upvotes: 4 <issue_comment>username_2: Ph.D. applications typically ask for transcripts from *all* undergraduate institutions you attended, so the question of whether the withdrawal will affect you is generally independent of whether or not you go back to that school.
That said, it's extremely unlikely that it will impact you negatively; your subsequent good work more than makes up for it. If you absolutely feel that you need to explain it in your statement, I would not spend more than a few words and would be vague ("after taking some time off to deal with a health issue, I transferred to DePaul, where..."). However, assuming your subsequent work is good enough to stand on its own (and it surely is if you were accepted to the master's programme with a scholarship), I would advise not mentioning it at all.
Upvotes: 3 <issue_comment>username_3: Usually, from what I have seen, as long as you can explain yourself and be honest, you should be good. A lot of good places look at the drive you have towards your goals, and fortunately professors these days are getting out of the "only-grades-matter" attitude (albeit slowly).
Please do what you want to do instead of second-guessing what may happen. I wish you all the best.
Upvotes: 1 <issue_comment>username_4: I'm going to answer your question for the general case (i.e., not the specific school you are applying to, but what happens in most cases).
A drastically bad mark in your curriculum is much better than a mark which is merely bad. To make my point clear, allow me to give an example. In my country the maximum you can get on an exam is 20, and anything below 10 is a fail. I was somehow absent during an exam and, as a result, I got 0.25 as my final mark (0 was not accepted by the software system).
When I was applying for post-graduate studies, that 0.25 was a concern. But the fact is, the committee easily understood that this mark was a special case; they simply asked me about it, and it was dealt with within 5 minutes.
The point I'm trying to make is that an inhomogeneous result in your record automatically signals a special case that does not necessarily represent you. Moreover, post-graduate committees are more interested in your recent career than in a dark point from your teenage years. Proceed with self-confidence and, above all, with honesty, and you shall prevail.
Upvotes: 2 |
2014/03/13 | 1,388 | 5,608 <issue_start>username_0: I already have a BSc degree from an unknown school outside the US. I have convinced a professor at a top US school to let me join his lab to work on a project that he will propose, and to conduct experiments in his lab. In return for learning, having access to the lab, and working on a project, I am supposed to help the lab with programming their machines. However, there is no pay; that is, my title would be "Volunteer". Therefore, I would have to work part-time or night jobs while working there (I have a work permit).
Is this common in the US, that is, to work in a lab without getting paid while working a part-time job outside the lab to pay for living expenses?<issue_comment>username_1: Your description of being "allowed" to do research work in exchange for programming work sounds off to me. Learning and running experiments for a research project proposed by a supervisor is basically the *job description* of a research assistant. It's work in its own right that people are typically compensated for in some way, not a reward for doing other (programming) work.
The arrangement you describe is *not* common, and it might also violate U.S. labor law. **Under U.S. law, it's illegal to let someone work for you for free unless they meet specific legal requirements to be considered a "volunteer" or "intern."**
"Volunteers" according to U.S. labor law are individuals
>
> who volunteer their time, freely and without anticipation of compensation for religious, charitable, civic, or humanitarian purposes to non-profit organizations.
>
>
>
Your intent is clearly *not* religious, charitable, civic, or humanitarian in nature, so you do not legally qualify as a volunteer.
And to be classified as an "intern" you must meet the requirement that
>
> The employer that provides the training derives no immediate advantage from the activities of the intern; and on occasion its operations may actually be impeded.
>
>
>
(among [other requirements](http://www.dol.gov/whd/regs/compliance/whdfs71.htm)). That is, the employer cannot expect to be dependent on your work for normal operations. I don't think you meet the requirements for an unpaid intern, though it's not possible to be 100% sure from your description.
The usual interpretation of U.S. labor law is that an internship has to be part of a *formal educational program* (e.g., you are enrolled as a student and get credits for the internship, or write a report which you submit to your home institution) or a *formal apprenticeship* for it to be legally unpaid. In fact, if you search for unpaid internships in the U.S., you'll find that most listings say that only current students who can earn college credit are eligible. It doesn't say in your post that you are currently enrolled as a student somewhere.
This is not to say that there is no legal scenario in which a U.S. lab can allow you to participate in research there without paying you. (If the entire experience was supposed to be for your educational benefit - including the "help the lab with programming their machines" part - then my answer might be different.) But from your description, I don't think the scenario you describe is acceptable or normal.
I personally do not allow anybody to do work for my lab unless they are paid or doing a personal project (like a thesis) for which they earn academic credit. I've been told it would be legally problematic. For example: suppose I have an M.S. student working with me for academic credit. He graduates in May and has a job starting in September. I'm not allowed to let him keep working in the lab from May-September unless I can pay him (according to my university lawyers).
(Disclaimer: I am not a lawyer)
Upvotes: 5 [selected_answer]<issue_comment>username_2: I think that one of the main points is: do they **need** you? If they do, well, username_1's answer is true and honest. But **if they do not**, if their experiments already have all the staff needed, if they gain nothing they need by letting you in, and if they will have to spend some man-hours to teach you and make sure you don't cause trouble, then it looks fair. You say you had to convince the professor. So it seems likely that he already had all the research assistants he thought he needed, and he was probably also afraid you would use their time to learn (that's why you do it, isn't it?), so he wanted something in return.
It would be safer for the professor, and better overall for you, if you could find a research assistant job where you are needed, rather than convincing someone to let you in where they do not seem to need you.
Upvotes: 2 <issue_comment>username_3: IMHO the real question is whether it is an exploitative offer or a good opportunity. My opinion is that it is probably the latter; although a little bit of exploitation can't be ruled out, a research job in the U.S. can come at this price.
It would be an exploitative offer if you had (or will have) alternative opportunities to get a U.S. job. IMHO this isn't the current situation. If you want to get a real, paid U.S. job, you should first already be there. From outside the U.S. it is much harder (nearly hopeless), even if you have a work permit.
Your boss (the professor) probably knows this, and there is a good chance that you will get a much better (i.e., paid) offer from him, or from somebody else, once you are already in the U.S. In that case it isn't an exploitative offer but an opportunity, and you can see this volunteer time as a trial period.
If you are sure that you will be able to get better offers, then you should reject it, but I don't think this is the case.
Upvotes: 2 |
2014/03/13 | 557 | 2,283 <issue_start>username_0: I am proofreading a grant for another researcher. It is going to be submitted to the NIH, which follows AMA guidelines. One particular long report provides several of the researcher's citations. I asked for page references for some of her statistics, since I couldn't read the whole thing and needed to verify that she had interpreted them correctly. This document is hundreds of pages long, and she cites 3 different sections of it.
1) Is it necessary for her to indicate the actual pages in the citations?
2) If so, how should she cite the same document at several locations?<issue_comment>username_1: When citing a specific page or pages of a longer document (not generally necessary for published journal articles, although it may be mandatory in some fields), one adds the page number to the reference. In the Harvard system one would write "Smith (1969, p. 24)" or "(Smith, 1969, p. 25)". There are, as far as I know, no hard rules about when such references should be made, but an author should consider the readers of the work they produce and make the cited information as easy to find as possible. In a short article of 10-15 pages this would not be difficult, but in longer works one should consider providing pages. The point of references is, after all, to provide sources for the information used in the article, and one should not have to read an entire book to find it.
If I understand the second question correctly, citing different pages of a specific document is done as above, and the entry in the reference list will be just the document itself; there is no need to refer to page numbers there. The reference list provides the literature to find, and the in-text references will provide information on where in that literature you need to look.
Upvotes: 3 <issue_comment>username_2: The AMA citation guide does not accept *ibid*, *op cit*, or *loc cit* references. Their protocol is listed here:
<http://jeffline.jefferson.edu/Ask/Help/Handouts/Citation_AMA_style.pdf>
and the basic recommendation is that endnote references should include page numbers when necessary, e.g.,
>
> In his early work, Smith found a significant difference between smokers and non-smokers.8(p23) Age was also a factor.8(pp64,66)
>
>
>
Upvotes: 3 [selected_answer] |
2014/03/13 | 2,500 | 10,358 <issue_start>username_0: BACKGROUND: I'm beginning a computer science MS program this summer. My undergrad background is a 2.9 GPA in an unrelated sociology sub-field. The last time I took an actual math class was AP Calculus in my junior year of high school *cough* 8 years ago, because I did well enough on the AP exam to get credit for everything my major required.
GOAL: Ultimately I plan to pursue a PhD in the Computer Science/Computer Engineering/Electrical Engineering with strong leanings toward AI/Machine Learning/Robotics. "Dream School" is Caltech but any well-respected school with connections to the aerospace industry would be stellar (couldn't resist the pun, sorry.)
Assuming I have solid grades in my MS program, 90th percentile general GRE scores, and a few publications, do you think it is worth it for me to take the Math subject GRE in order to show that I have taken it upon myself to fill the gaps in my undergraduate math education?
EDIT: The program I am going to has a provisional entry option where you take 2 accelerated courses at the beginning which catch you up to speed on programming, data structures, and foundational computer science. The minimum GPA is 2.75 in any BS/BA program, and as long as you get at least a B in both classes they let you continue. I've worked as a developer/IT person for the last 3 years, my general GRE score was 27 points above their suggested minimum, and I took a few random graduate computing courses at SPSU a few years ago and got As in all of them, so I think those were contributing factors to my admission.<issue_comment>username_1: It sounds like you need an answer from someone at the school to which you will apply. I took part of an online machine learning course more than a decade after graduate school in mathematics. The amount of linear algebra and multivariable calculus used there was minimal (it was a MOOC meant to be inclusive), and was more than I recall being on the GRE in an earlier millennium. I don't see how I could have done the course without a solid foundation in linear algebra and multivariable calculus, if for no other reason than to be comfortable with the notation and basic concepts, much less the applications.
I think it makes sense to check out some of the online offerings related to your eventual program, see how the math strikes you, and confirm by talking to members of the departments. My guess is that the GRE will be inconsequential and that you will have to deal with the math at some point in your trajectory. You and a face-to-face advisor are best suited to determine where in that trajectory you should do it.
(Of course, take the Math GRE if the admission requirements recommend it. Don't fool yourself into thinking the GRE will serve as an indicator for how easy it will be to do multivariable calculus or linear algebra.)
Upvotes: 0 <issue_comment>username_2: Either a particular subject GRE is required for a particular program or it is not. There is no middle ground. If it is required, you will not be considered without it. If it is not required, *don't do it*: if you do badly, it can still harm you a lot; if you do well, it cannot help you very much.
Upvotes: 1 <issue_comment>username_3: Okay, here's some advice:
I like that you have an ambitious, but clear, long-term goal: to work at NASA's JPL. That puts you far ahead of many PhD program applicants in a very important aspect.
Getting into Caltech's PhD program seems like a less realistic goal to me, honestly. They have one of the US's very best programs in CS/EE. Very top programs have their pick of applicants. They are likely going to not seriously consider applications with a low undergrad GPA just because that's one of the easier of many cuts they need to make in order to get the applicant pool down to a reasonable size. Or another way to put it is: imagine that your application is very strong except for this one noticeable flaw. They will be looking at plenty of applications which are similar to yours *except* that this flaw is not present. What do you expect them to do?
Fortunately you don't need to go to Caltech to get a job at JPL (although it wouldn't hurt, obviously). You need to go to a serious, reputable university (think UGA) and acquire and demonstrate the talents and skills that will get you hired at JPL. One's academic pedigree is important but it is certainly not all-important, even in academia and still less in industry. (I got my PhD in mathematics from Harvard, which afforded me the largest possible head-start on the job market. It was quite an awakening to discover that my Harvard degree was much closer to a foot in the door than a golden ticket, and the amount of work that I had to do *after* getting my PhD from an absolutely top program in order to land a job at a serious, reputable -- but not at the very top -- university is still remarkable to me.) You should be shooting for more like a top 50 program, I think, although you should not take my word for it but rather apply to a range of programs when the time comes.
Your undergrad GPA does look like a bit of a problem to me. Since it happens that you got your undergrad degree at my university, I know that your GPA is [significantly below the average](http://www.redandblack.com/news/greeks-maintain-above-average-gpa/article_e171ac95-4567-5d7b-a7a7-b247987bba33.html). In fact having a GPA of below 3.0 is significant at UGA for reasons that you understand and thus I don't need to get into.
You mention that your undergrad major is in sociology, and the big question is how much that will be discounted given that you want to study CS. I'll be honest with you: it still doesn't look that great. The best case scenario is that your undergrad GPA will be viewed as something which does not have much to do with your future graduate performance in a different area. So "irrelevant" is the best case scenario. As a member of an admissions committee, a student's undergraduate GPA in whatever subject is not irrelevant to me. Succeeding at coursework is a skill unto itself, and while it is not the highest or most important skill that is necessary for success in a PhD program, it's definitely on the table.
As an aside, I see that in this and another question you describe your major as "irrelevant sociology". I don't think you should say it in quite that way. How relevant your undergraduate studies in a different discipline are to your candidacy as a CS PhD student is for the admissions committee to decide. I detect a hint of disdain for sociology in your language, which to me is also not great: you are the one who chose your undergraduate major. Just because it is very far from CS doesn't mean that you shouldn't have done well at it. Candidates like you have a fine line to walk, e.g. in your personal statement: you need to show but not tell that there is every expectation that you will be an excellent CS graduate student notwithstanding the fact that you were a not-quite-mediocre sociology undergraduate student. Your task is to frame your academic journey in a positive light. Faculty reading graduate applications love to see *improvements* in students' academic records: among other things, this shows not fully realized potential, and that's really important because almost no one comes into a PhD program fully equipped with the skills they need to graduate from it: it is a very dynamic process, over a period of years. Especially we love to see that students are doing better in the graduate level material than the undergraduate level material.
Back to your specific question: should you take the GRE math subject exam? Other answers have already addressed this question, and I agree with them: if it's required for a CS PhD program (unlikely), then of course take it. If it isn't, then don't: it's a tough exam, to the extent that much above the 50th percentile for an American aspiring *math PhD student at UGA* looks good enough for us. You could really know undergraduate mathematics perfectly well for all your future needs, take the math GRE, and get a score in the lower quartiles. That won't look good for you.
What should you do instead? I recommend **more undergraduate level coursework**. You want to launder your GPA by showing that you can nowadays get excellent grades in the undergraduate courses that count for CS PhD program. In most master's programs you can also take undergraduate courses. You should look into this in your program. If it is not possible, you should look into separately enrolling in some kind of "post-bac" program concurrently at the same institution. If you can get mostly A's in undergraduate level math courses, then while your relatively weak undergraduate performance in sociology will not be completely forgotten, it will probably be forgiven by some top 50 program.
[You might ask: isn't graduate level coursework better than undergraduate level coursework? Not necessarily, no. First of all taking graduate level CS courses doesn't necessarily show the mastery of mathematical foundations that you're rightly concerned about. Second of all, GPAs in graduate programs are an entirely different species from GPAs in undergraduate programs. I am unimpressed by a high graduate GPA. I am only negatively impressed by a low one.]
I really do wish you the best of luck and hope you are working for NASA someday. I encounter lots of un-stellar students teaching at UGA and think "Well gosh, they're so young. If they got their act together, they could probably be much more successful. I wonder what lies in their future?" If you succeed in turning your career around like this, I'd really like to know. Somehow your story will be inspirational to me and help me put more effort into reaching students who don't yet have the academia thing all figured out: we all have so much potential lying beneath the surface.
Upvotes: 5 [selected_answer]<issue_comment>username_4: The math GRE would essentially be useless. Please don't take it. I'd also temper your hopes on Caltech, since they're a top 10 school in engineering and top 5 in CS. Methinks it would be rather unlikely that someone with a sociology undergrad (and no math prerequisites) would get in. I had more background in math than you even while I was a senior in high school (just to put things in perspective). Good luck though.
Upvotes: 0 |
2014/03/13 | 2,245 | 9,314 <issue_start>username_0: If I get A+ grades then it will raise my GPA at my undergrad institution, as well as for Columbia, which is where I want to go to graduate school for statistics. **Should I try hard to get A+ grades, or should I be content with As?** Trying for A+ grades would not cut into any of my important academic activities, such as research or my other classes, because I already make all As, but it might slightly affect sleep and socialization.
**Edit** (from answer): To be clear, I am an undergraduate right now. |
2014/03/13 | 5,549 | 21,629 | <issue_start>username_0: There is a [story on Inside Higher Ed](http://www.insidehighered.com/news/2014/03/13/lost-faculty-job-offer-raises-questions-about-negotiation-strategy) (based on this [blog post](http://philosophysmoker.blogspot.com/2014/03/a-new-kind-of-pfo-mid-negotiating-post.html)) about a candidate who received a tenure-track offer from a U.S. philosophy department.
She emailed the search committee with some requests related to: salary, maternity leave, sabbatical, teaching load, and start date. Her email ended with "I know that some of these might be easier to grant than others. Let me know what you think." In response, the institution withdrew the offer, saying that her requests revealed that she wasn't a good fit for a teaching institution.
Many of the comments on the IHE story or blog post are in support of the school, saying such things as:
* The candidate asked for too much and came across as "entitled"
* The candidate shouldn't have requested a light teaching load initially, when the school in question is a teaching institution
* The candidate shouldn't have asked about a pre-tenure sabbatical, which is apparently unheard of at teaching institutions
In particular, one comment says this:
>
> indicates how important it is to do your best to understand the culture and needs of the hiring institution, both before and during negotiations
>
>
>
and another that it is
>
> an example of knowing the difference between negotiating with a research school and with a small teaching school
>
>
>
There have been [quite](https://academia.stackexchange.com/questions/17968/evaluating-and-negotiating-a-start-up-package) [a few](https://academia.stackexchange.com/questions/1336/what-items-should-i-ask-for-in-my-startup-package) [questions](https://academia.stackexchange.com/questions/4807/how-should-an-academic-negotiate-his-her-salary) on this site about negotiating a startup package, but these mostly describe the various things you can ask for, and which are likely to have more "wiggle room." Most of the U.S. faculty who answered those are at research institutions, and they suggest that
>
> It's perfectly reasonable to ask for anything
>
>
>
and
>
> Definitely ask for all that you need, and let them whittle you down.
>
>
>
Apparently, that advice may be more or less applicable depending on the type and culture of the institution. My questions are:
**Are there really different norms with respect to negotiation in a teaching vs. research institution? What are they?**
and, more generally,
**What can I do pre-offer to get a sense for what the institution's culture is, and what I can reasonably ask for?**
Possibly, the candidate in the story came from a research institution, got advice from her advisor there on negotiation, and never realized that her requests would be perceived poorly at a teaching institution. What could she have done differently?
I am especially interested in a response from anyone who's been on a search committee at both kinds of institutions (though I don't know if we have anyone like that on this site).<issue_comment>username_1: I have never been on a search committee.
A teaching institution hires people primarily to teach. A research institution hires people primarily to conduct research. A teaching institution will not make negotiating concessions that prioritize research ahead of teaching. A research institution will not make negotiating concessions that prioritize teaching ahead of research. That said, many institutions that are widely viewed as teaching institutions wish to be research institutions and will negotiate accordingly.
There is no need to be secretive when staking out a negotiating position. I suggest you outright ask which aspects of the contract are negotiable. You should also ask junior faculty at the institution if they will give you advice based on their experiences. I also suggest asking the departmental secretary, who probably knows everything.
Union institutions may not be able to negotiate with individuals at all.
Upvotes: 2 <issue_comment>username_2: I have only been a member of hiring committees at my present institution, a state research university. However I have watched people go on the job market at liberal arts colleges and one of my oldest friends is the department chair at a liberal arts college. I will try to ask him about this when I get the chance.
My impression is that the difference between quality and wealth of the university is playing more of a role here than the difference between research and liberal arts college. I know someone who got a job at a top liberal arts college right out of her PhD. Nevertheless she still did a one year postdoc before taking the position. And getting eased into the new job in a way which affords her all the opportunities to keep her successful research program, um, successful is definitely a big part of the mutual understanding between her and her department and college.
In the story at hand, it seems that Nazareth College felt that the person it had offered the job to was making a slew of demands that could not be met by an institution of their caliber. I am very eager to discuss the specifics of that strange situation, but that was not the question, so I'll just restrict myself to one thing: the applicant mentioned that she wanted a higher salary, closer to the going rate. The salary that she wanted ($65K) is the going rate in her field at some places (and probably less than the going rate in others; in STEM fields $65K would not be competitive at a reasonably research-intensive university) but is not the going rate at small, less-than-wealthy institutions. If you want the salary that some other place is going to offer you, you really need an offer from some other place which has that salary. That was the big outlier of a negotiation mistake that I saw in the story (it does not justify rescinding the offer, though! To me, that is a truly bush-league way to do business). By asking for a benchmark salary you are calling attention to the fact that the institution under consideration is not as good as some others, to which a fitting response might be to have them call attention to the fact that you are not as good as some others.
There was a time when a top fifty liberal arts college really didn't need or want the majority of its faculty to do significant research, especially if it took time away from their teaching and meeting with students. That time has passed: my friend at the liberal arts college publishes at least a paper a year. That is a higher publication rate than some of my senior colleagues who are full professors at my top fifty research university. And I care about teaching more than the stereotypical "research mathematician" but not particularly more than the average in my department: almost all of us care about teaching, and are at least solid teachers across a wide range of levels and courses; more so than I would have expected, in fact. When I visit a liberal arts college it is a very enlightening experience because the amount of emphasis and priority given to the basic academic goals are palpably different, even upon arrival. But it's also enlightening because the basic academic goals are recognizably the same, just weighted somewhat differently. I think that the difference between a good liberal arts college and a good research university should not be exaggerated...and I certainly think that the things that desirable job candidates are looking to secure in order to take jobs a these institutions are very similar. Nothing that is reasonable to ask for in negotiations at a research university ought to get your offer rescinded at a good liberal arts college, that's for sure!
Upvotes: 4 <issue_comment>username_3: This is based on my experience as a student on the job market and on the experience of my cohort, on what faculty told us, and on the stories we heard from the junior faculty that our school hired. My school is a research school.
In our field, rookies generally do not negotiate.
Only the star rookies do. If you're a rising star who is being simultaneously pursued by Harvard, Princeton, Chicago, etc., and you're tipped to win a MacArthur grant or something, then you have *Market Power*. You get to negotiate, and universities will throw money and perks at you to be able to hire you.
Our school hired a rookie faculty member over whom two universities got into a bidding war, and she eventually got a sweet deal: from what I recall, a good 30K above the standard contract.
For the rest of us mere mortals, with no market power, there are no negotiations. The schools make it absolutely clear up front what they can do for you. The contracts are standard for a given university: a research fund, base pay and performance pay, teaching loads, preps, administrative burden, possibility of sabbatical, maternity leave etc. These features are usually identical for all junior faculty in a given department in that University. They're also usually trying to do the best they can and have no interest in short-changing anyone.
Research schools usually try to encourage rookies and give them a light teaching and administrative load for the first few years, but after that, as the low man (person) on the totem pole, it is understood that you have to have some flexibility in terms of helping the department when some unforeseen need arises.
Also, in smaller departments, hiring a colleague is a lot like getting a family member. You are going to be with them all the time, share a lot of decision-making power, and you can't get rid of them. Therefore, how well you get along with them is very, very important. That's why, when you have some special request about your contract, the conversation needs to happen in person or on the phone, so you can gauge the other person's response and react to it. If you come across as tone-deaf and difficult, then that's not the sort of colleague anybody wants, especially if you're not some star researcher.
Upvotes: 2 <issue_comment>username_4: Although I agree that you should seek some specific information on the expectations of a research vs. teaching environment, I would suggest that this negotiation was doomed for a different reason...
I know <NAME> (the gender/negotiations researcher who is interviewed in the story) personally, and she said that the story is true. Remember that one key issue is that the negotiation was attempted via email. Women are already disadvantaged in negotiation (see the enormous body of research in this area), so they need to use specific tactics to moderate the gender effects on outcomes. These tactics are quite difficult via email.
In fact, I know several researchers who have excelled at doing research at "teaching schools" because they're the only ones bringing in the publications and research grants. Although interest in research may not be valued at Nazareth specifically, it is not a universal axiom for teaching-focused schools.
The research doesn't support the position that negotiation is inappropriate for "rookies" or "non-stars". New faculty get idiosyncratic deals (from lab space to maternity leave) all the time.
Upvotes: 4 <issue_comment>username_5: In the time between asking this question and getting some answers here, I read many comments on the original stories and blog posts dissecting the situation, many of which offered useful answers. I am compiling some of them here for the sake of others. As with most information found on the Internet, YMMV.
First, a question I didn't ask but will answer anyway (since xLeitix expressed some doubt in a comment):
Does this really happen?!?!?!?
------------------------------
Unfortunately, though rare, it does happen. Here are a few examples I found on [Academic Job Wiki](http://academicjobs.wikia.com/wiki/Universities_to_fear):
>
> * When I told them I needed some time to think about the unofficial offer (and also noted how early in the hiring season the offer was coming), the interviewing professor emailed me back that it was clear I was only considering them as a "last resort" and that they were therefore rescinding the offer.
> * They thought my request for a revised offer letter that included the new terms agreed upon throughout negotiation was demonstrative of a "lack of faith" which they seemed to take personally. They rescinded the offer and then blamed it on my need for this letter, and suggested that I be more careful in asking for this in future negotiations with other institutions (they claimed this advice was a show of "mentorship" on their part).
> * The dean rescinded the offer. By email. And then refused to take my calls. This is exactly what she wrote: "After having read your stated requests, specifically the number of years to tenure and the MWF teaching schedule, I am sorry to say that we are not able to sustain our job offer to you. At this time I am rescinding the job offer."
> * Two weeks after the job talk, I was offered the position and given five days to consider it... Upon sending my questions and considerations two days later, about 90 minutes passed when I received an email wishing me well in my further job pursuits.
>
>
>
Yikes. Of course, given these stories, you could argue that these candidates probably dodged a bullet. But let's assume that, while you aren't necessarily afraid of a rescinded offer, you do want to avoid making a bad impression. In that case, on to my *actual* questions:
Are there really different norms with respect to negotiation in a teaching vs. research institution? What are they?
-------------------------------------------------------------------------------------------------------------------
Yes. Also, per various sources, there are differences in negotiation culture between elite institutions and not-so-elite institutions, community colleges and four-year colleges, public and private institutions, those with current or former religious affiliations and those without, locations where a union is involved and locations where it isn't, locations with large adjunct pools and locations without, etc.
Here's one thing I hadn't thought of: institutions that are seen as "less desirable" places to work for some reason are said to be more sensitive to aggressive negotiation. In these cases,
>
> Negotiating demands can come across as "I’ll only agree to work at a lowly school like yours if you give me all this extra stuff." ([The Professor is In](http://theprofessorisin.com/2014/03/14/the-rescinded-offer-who-is-in-the-wrong/))
>
>
>
However, there were also comments that said things like:
>
> I was a candidate *just like* W, I asked for very much of the same things (more, in fact), and still go the job. ([PhilosophySmoker](http://philosophysmoker.blogspot.com/2014/03/a-new-kind-of-pfo-mid-negotiating-post.html))
>
>
>
The candidate in question herself said
>
> that her request for a starting salary of $65,000 equaled a less than 20 percent increase in proposed pay -- a request she says another college offering her a job had met.
> ([InsideHigherEd](http://www.insidehighered.com/news/2014/03/17/candidate-negotiated-out-job-responds-critics))
>
>
>
So, it seems like finding out the culture of a particular institution (rather than generalizing based on institution "type") is more effective. Which brings us to:
What can I do pre-offer to get a sense for what the institution's culture is, and what I can reasonably ask for?
----------------------------------------------------------------------------------------------------------------
### Start by asking: is it negotiable?
>
> A good strategy for starting a negotiation is to simply ask whether there is any flexibility in the terms of the offer... If you get a firm "no", you don't have to risk upsetting anyone with specific requests. ([PhilosophySmoker](http://philosophysmoker.blogspot.com/2014/03/a-new-kind-of-pfo-mid-negotiating-post.html))
>
>
>
and this can even have some nice side effects! The same comment continues,
>
> At my institution... I began by asking this very general question, and the chair immediately responded with a better offer-without my mentioning any specifics and without consulting the dean.
>
>
>
### Pre-negotiate by phone
Both username_3 and username_4 pointed out here how difficult it is to read a situation by email, and dozens of commenters out there said the same. For example:
>
> You negotiate over the phone (aiming for the sweet spot between being enthusiastic about the job - you want them to still want you - and asserting your own needs). Then when that's done, you get the agreement in writing. That lets you feel out some things and get feedback on what's doable or not, and on how the requests are being taken. ([PhilosophySmoker](http://philosophysmoker.blogspot.com/2014/03/a-new-kind-of-pfo-mid-negotiating-post.html))
>
>
>
But, know yourself. As [The Professor is In](http://theprofessorisin.com/2014/03/14/the-rescinded-offer-who-is-in-the-wrong/) puts it:
>
> I NEVER want to see inexperienced candidates negotiate on the phone. Particularly women. People panic and get codependent and agree to all kinds of things too quickly on the phone.
>
>
>
I don't really know what "codependent" means in this context, but I can definitely see how some people might *not* be able to negotiate as well over the phone. Those people might be better served using a phone conversation as a "feeler" and then proceeding to negotiate over email.
### Listen for subtle differences in responses to your requests
One commenter said:
>
> The chair indicated that this was the ceiling for the starting salary... but since he didn't make the same claim regarding the startup, I detected further flexibility, and was able to get it bumped up to 200% of the original offer, by providing an explanation of how I would use the additional funds. ([PhilosophySmoker](http://philosophysmoker.blogspot.com/2014/03/a-new-kind-of-pfo-mid-negotiating-post.html))
>
>
>
### Identify and use your ally
If you get an offer, there's a good chance *someone* really likes you and wants you to be there. Identify that person, and ask them for advice on what to negotiate. They may even tell you about things you hadn't even *thought* to negotiate for! Specifically, several commenters said that the department chair is often an ally. As one commenter said:
>
> I had a really great Chair. She told me that I was very unlikely to get a salary increase but that there was room to move the start-up. I asked for both anyway, and sure enough, was denied a salary increase but got significantly higher start-up (which I can use for summer salary). The kicker? The Chair spotted something I never would have thought of, which qualified me for thousands of dollars extra. ([PhilosophySmoker](http://philosophysmoker.blogspot.com/2014/03/a-new-kind-of-pfo-mid-negotiating-post.html))
>
>
>
Your ally in the department knows much more than you, and much more than any mentor you might have, what requests can be granted easily and what should not be mentioned at all. Use this!
### Ask about "perks" at the interview stage
Many commenters agree that asking straightforwardly about the "perks" (non-salary) at the interview stage helps you gauge supply and demand at the institution in question. Otherwise, you run the risk of
>
> telling somebody who's put in a full 7 or 8 years for a sabbatical that you want one in your first 2 or 3 years? ([PhilosophySmoker](http://philosophysmoker.blogspot.com/2014/03/a-new-kind-of-pfo-mid-negotiating-post.html))
>
>
>
Yeah, that does sound bad! Same goes for things like teaching load and lab space - you can find out during the interview stage what is in especially short supply at the particular institution.
### Place requests in the context of the institution's specific mission
This is a valuable piece of advice. As a commenter said,
>
> For example, if the reasoning behind fewer teaching preps was to do a better job teaching or to have an appropriate research program (whatever that may be), then frame it that way. ([FemaleScienceProfessor](http://science-professor.blogspot.com/2014/03/ask-not.html))
>
>
>
### Visit the library
Here's an interesting one that I hadn't heard before (though it only applies to a subset of institutions):
>
> I’m betting that W or other candidates often get their data from the CHE’s salary survey, which is near useless for this purpose because it is such an aggregate number... If you’re interviewing at a public school, ask for about an hour to yourself in the library. When you get it, go straight to the Reference desk and ask to see the salaries in the department that’s hiring (it’s usually public). Write these down, and when you get the initial offer you’ll know if there’s room to negotiate on this point. ([The Professor is In](http://theprofessorisin.com/2014/03/14/the-rescinded-offer-who-is-in-the-wrong/))
>
>
>
While public salary information is also available online in most cases, unlike the internal data this is usually
>
> total annual compensation, a figure that includes summer school, stipends for additional administrative duties, and the like, rendering the number useless (if not damaging) to such negotiations. ([The Professor is In](http://theprofessorisin.com/2014/03/14/the-rescinded-offer-who-is-in-the-wrong/))
>
>
>
Some commenters pointed out another, more-complete-than-CHE (though not department-specific) source of salary data: [IPEDS](http://nces.ed.gov/ipeds/datacenter)
Upvotes: 6 [selected_answer] |
2014/03/13 | 903 | 3,929 | <issue_start>username_0: As the title says.
My background is in Economics/Finance (mostly); many topics in those fields (and, I am sure, in other fields) require fairly complicated programming, enough that one can easily screw something up. How do academic journals defend against results that are generated by bugs?
As far as I understand, nobody ever sees my code. It could be hundreds or even thousands of lines of garbage code without a single function in it (not to mention no unit tests), laced with bugs, where I kept "fixing" things until my results "made sense" and then happily reported them. How could a journal tell that my results are trash?<issue_comment>username_1: Journals never make any guarantees regarding the validity of the content published in them, though this may seem to be implied. Ideally, errors are caught during the review process. Note that this problem is not specific to programming bugs; subtle errors can occur in all kinds of settings, including physical experiments. It is important to always be critical about any results in any article, even highly cited articles in top journals (though these are less likely to be wrong, they still might be).
Letters to the editor are not uncommon in my domain when questionable results are published. In a worst case scenario, published papers can get retracted after the validity of their results has been formally rejected. Retractions for this reason seem to be fairly rare, though.
This is one of the reasons why reproducible results are so important. If several independent researchers seem to reach similar conclusions, they are likely correct.
Upvotes: 6 [selected_answer]<issue_comment>username_2: In addition to username_1's answer:
* There exist a few journals that require the code to be submitted and where the review explicitly includes that code, e.g. the [Journal of Statistical Software](http://www.jstatsoft.org/)
* In one of my last papers, one reviewer asked exactly this question. Here's our reply in the text:
>
> The checks include unit tests to ensure
> calculational correctness, which consist of ca. twice
> as many lines of code as the actual function definitions.
>
>
>
I include this because even if the journal does not have an explicit policy regarding the code, both reviewers and authors can already start with better coding and testing practices: I'm encouraged by this experience to include such statements also in future as author, and I will ask as reviewer.
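To make this concrete, here is a minimal sketch (my own illustration, not taken from the paper quoted above) of what such a unit test for calculational correctness might look like in Python, checking a hand-written statistic against a case worked out by hand:

```
# test_stats.py -- a minimal illustration; names are hypothetical.
import math

def sample_variance(xs):
    """Unbiased sample variance (the function under test)."""
    n = len(xs)
    mean = sum(xs) / n
    return sum((x - mean) ** 2 for x in xs) / (n - 1)

def test_sample_variance_hand_worked_case():
    # For [1, 2, 3, 4]: mean = 2.5, squared deviations sum to 5.0,
    # so the unbiased variance is 5.0 / 3.
    assert math.isclose(sample_variance([1, 2, 3, 4]), 5.0 / 3)

def test_sample_variance_is_translation_invariant():
    # Shifting every observation by a constant must not change the variance.
    xs = [0.3, 1.7, 2.9, 4.1]
    shifted = [x + 100.0 for x in xs]
    assert math.isclose(sample_variance(xs), sample_variance(shifted))
```

Run with `pytest test_stats.py`. Property-style checks like the second test are often the cheapest way to catch the "kept fixing until it made sense" failure mode, because they do not depend on already knowing the right answer.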
Upvotes: 4 <issue_comment>username_3: This is a problem that is beginning to be recognised, and has been described by some as a "[crisis of reproducibility](http://phys.org/news/2013-09-science-crisis.html)". There have been examples of papers in prominent journals being withdrawn after bugs were found in the researchers' code. [This article](http://www.climateknowledge.org/figures/Rood_Climate_Change_AOSS480_Documents/Model_validation/Post_Computational_Science_Demands_PhysToday_2005.pdf) describes some of the problems in more detail.
In my view there are three primary paths to addressing it,
1. Teach scientific programmers good software development practice
2. Make source code and datasets available and citable, with DOIs and with confidence that they will be available and unchanged for the long term.
3. Get journals to *require* that source and data are available (although compromises must probably be made where data is commercially confidential), and that reviewers conduct code review. This may not be straightforward since (as noted in the Post & Votta article linked above) it will not always be possible to find one reviewer who is qualified to review the code and sufficiently expert on the science involved. (And that's before we even consider how long code review might take!)
The [Software Carpentry](http://software-carpentry.org/) project is aimed at addressing points 1 and 2 above, and may be of interest.
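As a small, hypothetical illustration of point 2, the following sketch stamps a run of an analysis with the information someone would need to re-run it; the file name and manifest fields are invented for this example, not a standard:

```
# provenance.py -- hypothetical sketch: record what a result was produced with.
import json
import platform
import random
import sys
import time

SEED = 20140313  # fix the RNG seed so the run is repeatable

def run_analysis(seed):
    """Stand-in for the real computation; returns a 'result'."""
    rng = random.Random(seed)
    data = [rng.gauss(0.0, 1.0) for _ in range(1000)]
    return sum(data) / len(data)

if __name__ == "__main__":
    result = run_analysis(SEED)
    manifest = {
        "result": result,
        "seed": SEED,
        "python": sys.version,
        "platform": platform.platform(),
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
        # In a real project: also record the VCS commit hash and the
        # exact versions of every third-party library used.
    }
    with open("run_manifest.json", "w") as fh:
        json.dump(manifest, fh, indent=2)
    print(json.dumps(manifest, indent=2))
```

Archiving such a manifest alongside the code and data (e.g. in a DOI-issuing repository) is part of what turns "the code is available" into "the result is reproducible".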
Upvotes: 2 |
2014/03/14 | 913 | 3,638 | <issue_start>username_0: This has been a plague upon my performance for what seems like all my life (or all my graded life).
No, it's not a 100, it's a 98 because of calling bromine a gas carelessly.
No, it's not a 6/6, it's a 5 out of 6 because you didn't realize that such a simple question had a minor twist.
No, it's not a 100, it's a 98 because you had all the right work but you mispunched the calculation.
I'm sick of making careless errors that can truly mean the difference between obtaining one grade and the other. I need to make sure that from now on, I don't, because it is killing me inside.
If it is of consequence, I don't have much time for sleep due to 6 hours minimum of work per night. I get around 6-7 hours a night as opposed to the recommended 9 for a teenager.
Any and all tips would be greatly appreciated.<issue_comment>username_1: I had a student come to me trying to get back "lost points" on an exam, because the lost points dropped him below what was needed to get an A, and he got an A- instead. This was in spite of the fact that his answers were demonstrably wrong.
When I spoke with him, he said that he needed to have the top grades in the class, because he didn't know how else he could demonstrate that he was a "good student." I had to explain to him that the degree to which he was insisting on a regrade (effectively demanding that he be awarded the points, even though they hadn't been earned) actually could hurt him in the long run, because it turns people "off"—no one wants to work with an inveterate complainer. And, ultimately, if he wanted to go further in either the academic world or in a professional career, what he needs are good letters of recommendation from people who are willing to vouch for him and support his career.
My reason for including such a long-winded anecdote here is that the underlying issues are the same: don't worry about little issues here and there. (That doesn't mean don't complain if there's a *big* problem—clerical errors and mistakes do happen!) Small mistakes are a part of life, **and we learn more from those mistakes than from successes.**
Nobody is going to think poorly of you because you make small mistakes. Just keep doing what you're doing, and try not to stress out about it when taking exams and doing your work. Worrying about perfection is a good way not to achieve it.
Upvotes: 3 <issue_comment>username_2: While it is a good thing to be able to move on from small mistakes (everyone makes them, and there have been studies about rates of error), it is also good that you realize that you make "careless" mistakes. You will never be able to do everything 100% all the time, and you might drive yourself crazy (and make more mistakes) if you try too hard, but there are some things you can do to decrease your risks and catch yourself when you make mistakes.
* Prepare yourself mentally and physically. This means being fed, watered, having enough sleep and not stressing yourself out. It isn't always easy to do, especially when you put a lot of pressure on yourself to do well, but it will help you make fewer errors and catch them when you do.
* For exams, always read questions twice and look over your answers. If you have time look everything over at the end.
* If you have papers, proof read and have someone else look things over. It can also help to read what you wrote out loud.
* Don't rush through things if you can avoid it and still complete your task.
* When you do make errors, keep track of what kinds they are. Do you misread questions more often, or do you mix up words? This will help you be more aware next time.
Upvotes: 1 |
2014/03/14 | 1,762 | 7,315 | <issue_start>username_0: It seems that most universities in Europe require an outline of the planned dissertation at the application stage. I think even choosing the title of a dissertation needs a lot of dialogue between the student and his supervisor. It also requires a thorough investigation of the state of the art in the targeted area.
Is such a proposal definitive, or may it be changed completely after admission? Can you provide some tips for writing such a proposal draft? How much time do you think I should devote to such a plan (at least)?<issue_comment>username_1: I once had to write a thesis proposal for admission to a UK university. It was explained to me that this is more of an entrance exam than an actual proposal. It is also used to gauge whether your interests lie somewhere in the vicinity of what is generally done at the department.
I don't know how other countries or universities work, but I can't imagine that anybody would hold you strictly to a proposal you wrote before becoming a graduate student. It's normal to expect that your research should be adapted along the way based on your findings, even for an experienced researcher.
The proposal I wrote (which was successful) had to be short, so I went with the following format:
* Theory so and so implies that A is true
* But this other theory suggests that the converse, B, would be true
* These could be pitted against each other in an experiment involving so-and-so (details details details)
The time you need entirely depends on your knowledge of the field. It is good to invest quite some time in these things though, as they can really improve a lot the more you think about them. I'd say that it's best to try to finish it a good month before the deadline, and then take a look at it at biweekly intervals to make improvements.
Upvotes: 3 <issue_comment>username_2: At least at the European university where I work, we do not require such a proposal.
However, in general, the thesis proposal is a *planning* document, and therefore its contents are not considered binding. Especially given the nature of research, committing someone to a particular course of action before it even begins seems counterproductive.
The proposal should be allowed to evolve over time, and possibly be changed completely if found to be unworkable or unmanageable.
Upvotes: 3 <issue_comment>username_3: First, to answer your titular question:
>
> Is it possible?
>
>
>
Likely yes. Otherwise, all PhD students at this university would fail, wouldn't they? :) Let me go over the rest of your questions one by one:
>
> It seems that most universities in Europe require an outline of the planned dissertation at the application stage.
>
>
>
*Most* seems a bit extreme. I know that this is how it works in some universities, but it certainly did not work like that in all the places I have worked.
>
> I think even choosing the title of a dissertation needs a lot of dialogue between the student and his supervisor. It also requires a thorough investigation of the state of the art in the targeted area.
>
>
>
Correct. At my current university, people hand in their proposals during their second year usually.
>
> Is such a proposal definitive, or may it be changed completely after admission?
>
>
>
It is almost certainly not a very definite plan, but whether it can **totally** change I am not sure. For instance, if it changed so much that it started to fall outside the area of expertise of your advisor, I would imagine things would get tricky.
>
> Can you provide some tips for writing such a proposal draft? How much time do you think I should devote to such a plan (at least)?
>
>
>
Research the state of the art in the field you are interested in. Take a few days minimum to browse over the keywords of the papers of the top conferences in the field. Find out which professor at your university publishes in these top conferences (if there is nobody, this university may be a bad match for your field of interest), and see what the typical keywords and style of his work are.
Think about ~3 coarse-grained research questions that you think are not answered yet by existing work. You probably already needed to define a research question for e.g., your master's thesis. Make sure that the scope is a bit broader now for a PhD - you don't want research questions that are basically answerable within one paper in a few months of work (Bad: "Q1: is it possible to apply algorithm A to problem B?"). On the other hand, you do not want to be too general either (Bad: "Q2: how can security be introduced in service-oriented systems?" - this one is a real-life example).
Upvotes: 3 [selected_answer]<issue_comment>username_4: >
> Is such a proposal definitive, or may it be changed completely after admission?
>
>
>
I had to do a similar task to get into a PhD program here in Australia. It is mostly a formality, and people's actual topics can vary widely. Actually, it would be strange if you did not change your topic slightly. After 6 months to 1 year we are expected to give a seminar and a much more detailed proposal; this was the real one. Even then, your topic can still change after that.
>
> Can you provide some tips for writing such a proposal draft?
>
>
>
I would ask your supervisor for tips. Maybe they can provide you with a copy of one from a previous student. Usually the university provides a general outline of what you should discuss.
Upvotes: 3 <issue_comment>username_5: I had to write a proposal as part of the admission process at a university in Ireland for an MLitt in History. Here are some thoughts based on what you have asked and my experience.
>
> I think even choosing the title of a dissertation needs a lot of dialogue between the student and his supervisor.
>
>
>
I had two meetings with my supervisor before the proposal was handed in. These meetings did not just entail discussion of the title; they were also partly about whether this professor was the best person in the department to supervise the masters. We had a working title quite early, though.
>
> Is such a proposal definitive, or may it be changed completely after admission?
>
>
>
I have found in my case that the title may be refined after admission. It has not been the case that we have made a major change; rather, we have refined the project as the research is completed, while keeping it within the overall framework of the original idea.
>
> Can you provide some tips for writing such a proposal draft? How much time do you think I should devote to such a plan (at least)?
>
>
>
In my case the proposal did not have to be a long document. I think the instructions were to keep it under 1,500 words. I used the following headings for my proposal.
* What I'm going to research
* Research Methodology
* What has been researched about the topic already
* What will this thesis add to existing knowledge.
Finally, I also put together a draft reading list of publications that I thought would form part of my research. This was not required, but I felt it was a good exercise for myself, and my supervisor appreciated a copy of it as well.
Thinking back, I believe I had my proposal document completed in about 2-3 weeks (this includes drafting and amending).
Upvotes: 1 |
2014/03/14 | 2,004 | 7,843 | <issue_start>username_0: A style I'm required to follow, unfortunately unnamed:
>
> [1] <NAME>., <NAME>., <NAME>., “My wonderful title” Int. J.
> Heat Mass Transf., 32(19-21), pp. 234-245, (2001).
>
>
> [2] <NAME>. and <NAME>., Book about pink ponies, New York:
> Wonderful publisher, pp. 34–41, (2003).
>
>
>
Features....
* numeric numbering in []s
* *Last, F.M.* name style (surname first, then initials)
* article title in quotes
* year in parentheses at the end
The closest I could find is MLA Seventh edition, but it never does numeric references.
---
To answer some suggestions below: I didn't find a style file with the publisher. And no clear style description exists, other than the examples provided. (There are a few more, but the features I listed are the main things they have in common.)
---
Again, more information: <http://spie.org/x14101.xml#Word> gives the instructions to follow. They provide an example MS Word file with manually typed citations. While I could use the LaTeX file, I already have the paper in .DOC and need a citation style for that. So this is a question about the style's name.
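For concreteness, here is a small sketch (my own, purely illustrative; the helper names and data layout are invented) of how the features listed above could be rendered from structured reference data, which can help when checking a candidate style name against the examples:

```
# render_refs.py -- hypothetical sketch of the reference style described above.

def render_article(num, authors, title, journal, volume, issue, pages, year):
    """Numeric label in [], "Last, F.M." authors, quoted title, (year) last."""
    author_str = ", ".join(authors)
    return (f"[{num}] {author_str}, \u201c{title}\u201d {journal}, "
            f"{volume}({issue}), pp. {pages}, ({year}).")

def render_book(num, authors, title, place, publisher, pages, year):
    """Books: no quotes around the title; place and publisher instead of a journal."""
    author_str = " and ".join(authors)
    return (f"[{num}] {author_str}, {title}, {place}: "
            f"{publisher}, pp. {pages}, ({year}).")

if __name__ == "__main__":
    # Reproduces the two example entries from the question.
    print(render_article(1, ["Prince, E.W.", "Gruber, H.A.", "Fitz, W.S."],
                         "My wonderful title", "Int. J. Heat Mass Transf.",
                         32, "19-21", "234-245", 2001))
    print(render_book(2, ["Prince, E.W.", "Gruber, H.A."],
                      "Book about pink ponies", "New York",
                      "Wonderful publisher", "34\u201341", 2003))
```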