23 December 2011

Does Not Compute

Title: How Will Astronomy Archives Survive the Data Tsunami?
Authors: G.B. Berriman & S.L. Groom

Astronomy abounds with data. The scope of currently archived data ranges from large telescope projects to the personal data sets astronomers have accumulated over years of long nights at ground-based telescopes. Storing and accessing this data has so far been managed fairly successfully, but the volume is increasing rapidly (~0.5 PB of new data per year) and is set to explode in the near future (a projected 60 PB of total archived data by 2020). With this rapid increase comes the need for more storage space as well as more bandwidth to support large and numerous database queries and downloads.
Figure 1: Growth of data downloads and data queries at IRSA from 2005 - 2011

The primary focus of this paper is how to address data archival issues from both the server-side and client-side perspectives. LSST, ALMA, and the SKA are projected to generate petabytes of data, and it is possible that our current infrastructure is insufficient to support the needs of those observatories. For instance, even if a database is large enough to store and archive the data, searching it for a particular image or set of images would require parsing through all of that data. We might also consider the bandwidth strain if many people are downloading data remotely; for LSST, with a projected 10 GB per image, data rates would suffer greatly.
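
To get a feel for the bandwidth problem, here is a back-of-the-envelope sketch (the 10 GB image size is the projection quoted above; the sustained 100 Mbps link is just an illustrative assumption):

```python
def transfer_time_hours(file_size_gb, bandwidth_mbps):
    """Naive transfer-time estimate for one file: size in gigabytes, sustained
    bandwidth in megabits per second. Ignores protocol overhead and server
    load, so real transfers will be slower."""
    file_bits = file_size_gb * 8e9
    return file_bits / (bandwidth_mbps * 1e6) / 3600.0

# A hypothetical 10 GB LSST image over a sustained 100 Mbps connection:
print(transfer_time_hours(10, 100))   # ~0.22 hours, i.e. roughly 13 minutes per image
```

Multiply that by hundreds of simultaneous users and the strain on both the archive and the network becomes obvious.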

Berriman and Groom study potential issues that are already cropping up in small archival data sets and attempt to provide paths forward. These include developing innovative means to memory-map the stored data in order to lessen the load on the server and allow for more rapid data discovery, providing server-side reduction and analysis procedures, utilizing cloud computing to out-source the data storage problem, and implementing GPU computing. The technology is not yet in place to make all of these paths immediately beneficial, which means the astronomical community should be looking to promote and partner with cyber-infrastructure initiatives and also to educate its own members so they can contribute more effectively to overall computational efficiency.

This last suggestion from the authors is what stirred up the most conversation. In particular, the authors recommend that all graduate students in astronomy be required to take a long list of computational courses (e.g., software engineering). A quick analysis of their learning requirements suggests a typical graduate student would need to take an additional 3-6 courses - about an extra year of course work. While adding an extra year of graduate school doesn't seem very attractive, it was suggested that summer workshops would be extremely helpful and advantageous. A 1-2 week program could provide an intensive introduction to many of the highlighted skills astronomers might soon be expected to have (parallel programming, scripting, code development, database technology). One comment even threw out the idea that Dartmouth hold such a school - quite possible, so keep an eye out!

What do you think about the future of computing in astronomy? Do we need to up the computational coursework for students or just hire computer scientists? Are there any tools or technologies you believe might be beneficial for astronomers to implement?

14 November 2011

Tut-tut, it looks like rain.

Title: Measuring NIR Atmospheric Extinction Using a Global Positioning System Receiver
Authors: C. H. Blake & M. M. Shaw

Ground-based astronomical observing has one major obstacle it must overcome in order to produce science-quality data: Earth's atmosphere. Methods to correct for atmospheric attenuation are familiar to anyone who has taken data at a ground-based telescope, or who has at least studied observational astronomy. For example, observations of "standard" stars are required not only to calibrate the detector, but also to differentially correct for atmospheric effects on a given night.

Figure 1: Components of atmospheric absorption presented in Blake & Shaw (2011)
Atmospheric absorption and attenuation are particularly noticeable in the near infrared (NIR), where absorption bands due to molecular species in the atmosphere efficiently absorb much of the incoming flux. Typically, narrow-band filters with transmission peaks between these molecular bands are used to skirt around the difficult procedure of correcting for molecular absorption.

In this paper, however, Blake and Shaw propose a unique and interesting method for correcting astronomical images affected by absorption due to water molecules. They propose using signals from the Global Positioning System (GPS) to infer the water content of the atmosphere, allowing for more accurate atmospheric transmission modeling. Relying on the fact that GPS signals must themselves be corrected for atmospheric attenuation, the authors propose that those corrections can be turned around and applied to astronomical studies to correct for the attenuation of light from astrophysical sources.

Of greatest interest to the authors is the derivation of the precipitable water vapor (PWV) in the atmosphere. What is PWV? It is conceptually very simple - PWV is the column-integrated depth of water vapor if all of the water were to precipitate out of the atmosphere instantaneously. As such, it is measured in units of length (typically mm): basically, what your rain gauge would measure if all of the water vapor in the atmosphere directly above the gauge were to condense and precipitate to the ground. In the 1990s, it was shown that multi-wavelength GPS signals combined with a highly accurate barometer could yield a very accurate derivation of PWV.
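
For the record, the textbook definition (not a formula from the paper) is just a column integral of the water-vapor density, normalized by the density of liquid water so that the answer comes out as a length:

```latex
\mathrm{PWV} \;=\; \frac{1}{\rho_{\mathrm{liquid}}} \int_0^{\infty} \rho_{\mathrm{vapor}}(z)\,\mathrm{d}z ,
\qquad \rho_{\mathrm{liquid}} \simeq 1\ \mathrm{g\,cm^{-3}} .
```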

Figure 2: An empirically derived fit for the correlation between PWV + airmass and the atmospheric optical depth of water.
With numerous GPS stations set up across the United States to measure PWV, the authors were easily able to obtain PWV measurements near their location (Apache Point Observatory in NM). An empirical relation was then derived relating PWV (and the airmass) to the optical depth of water in the atmosphere, assuming the optical depth is directly related to the amount of water along the line of sight. As you can see on the left, the figure betrays a fairly obvious correlation (even in the absence of error bars). The empirically derived optical depth can then be fed into the atmospheric transmission models, allowing for a more accurate estimate of the correction needed to remove the effects of water absorption.
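
As a concrete sketch, the correction boils down to fitting a simple empirical relation and then exponentiating it into a transmission. The parameterization below (optical depth linear in PWV times airmass) and any coefficients derived from it are assumptions for illustration, not the actual fit from Blake & Shaw:

```python
import numpy as np

def fit_water_optical_depth(pwv_times_airmass, tau_measured):
    """Least-squares fit of tau = a + b * (PWV * airmass), an assumed linear
    parameterization of the empirical relation described above. Returns (a, b)."""
    b, a = np.polyfit(pwv_times_airmass, tau_measured, 1)
    return a, b

def water_transmission(pwv_mm, airmass, a, b):
    """Predicted atmospheric transmission due to water for one observation;
    the corrected flux is flux_observed / transmission."""
    tau = a + b * (pwv_mm * airmass)
    return np.exp(-tau)
```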

Figure 3: Differential colors of over 6,000 M stars binned according to PWV at the time of observation.
An example correction is presented for numerous stars, but it is most evident in the correction of M star colors. From a sample of 6,177 mid-M stars, the authors binned the data - the deviation of each star's color from an assumed stellar color locus - as a function of PWV. Their corrections were then applied, and the result is illustrated to the right.

Overall the technique is unique and holds a lot of promise, but it depends strongly on the atmospheric transmission models. While these provide a very good estimate of the atmospheric transmission, atmosphere models suffer from many uncertainties. Still, the authors have demonstrated that their corrections appear to do very well - assuming M stars not lying near the color locus are unaffected by other systematics (metallicity, etc.). The plan is to implement such corrections in large surveys, such as SDSS and LSST, to allow for more accurate characterization of M stars in the hunt for exoplanets - not to mention the corrections needed to accurately characterize the transmission spectra of exoplanets, supposing the exoplanets are in a position for this method to be possible (i.e., transiting).


10 November 2011

Chemical Evolution of TN J0924-2201 at z = 5.19

Title: Chemical properties in the most distant radio galaxy
Authors: Matsuoka, K. et al.

Measuring the chemical evolution of galaxies can give clues to their star formation histories. This is often done by measuring the metallicity of galaxies at various redshifts. However, as for all things astronomical, this becomes more difficult at high redshift. To alleviate this, studies have used active galactic nuclei (AGN) to measure the metallicities of high-redshift radio galaxies (HzRGs) because of their high luminosities. Specifically, gas clouds photoionized by the active nucleus emit lines in the ultraviolet (among other wavelengths) that, at these redshifts, are shifted into the optical. Quite convenient.

The currently accepted model of a typical AGN is as follows. Material from an accretion disk falls onto the central black hole, which is encircled by a large torus of gas and dust. Thus, if we observe an AGN from a pole, we see all the way down into the center, where the velocity dispersions of the gas create broad emission lines (this is known as the broad line region, or BLR). However, if we are fortuitous enough to view an AGN edge-on, the torus obscures the BLR, and instead we see emission lines resulting from the slower-moving gas clouds farther away from the black hole. This, naturally, is known as the narrow line region (NLR).

Figure 1: Line ratios showing possible metallicity evolution.
Studies using AGN have found no metallicity evolution up to z ~ 6. However, this may be because many of these studies focused on the BLR, which could have evolved faster than the rest of the galaxy. Matsuoka et al. (2011), then, concentrate on the NLR of the most distant radio galaxy, TN J0924-2201 at z = 5.19 (catchy name). Using the Faint Object Camera and Spectrograph (FOCAS), they detect Lyα and CIV lines, the first time CIV has been detected from a galaxy at z > 5. This indicates that a significant amount of carbon exists even in this high-z galaxy. Additionally, the Lyα/CIV ratio is slightly lower than that of lower-z HzRGs (Figure 1), suggesting possible metallicity evolution via a higher amount of carbon. However, this could also be attributed to weaker star-formation activity or Lyα absorption. Upper limits on NV/CIV and CIV/HeII were also measured, but these agree with lower-z HzRG measurements.

The authors also investigate the [C/O] abundance ratio by comparing the observational limits on NV/CIV and CIV/HeII to photoionization models, the results of which are shown in Figure 2. Carbon enrichment in these galaxies is delayed compared to the α elements, because much of the carbon is produced by intermediate-mass stars (which live longer than the stars that create α elements). Thus, [C/O] is a good clock for the star formation history. The analysis finds a lower limit on [C/O] of -0.5, suggesting that this galaxy has already experienced some chemical evolution. Comparison of this limit to previous models suggests an age for TN J0924-2201 of a few hundred million years.

Figure 2: Lower limit of [C/O] abundance from photoionization models.

01 November 2011

JWST Passes Senate

Title: JWST Bill
Authors: CJS Subcommittee Chairwoman Mikulski

The James Webb Space Telescope (JWST), the designated successor to Hubble, has been in the news a lot over the past year. As the project's budget increased once again, the will of the US Government to actually finish the project was called into question. Well, after a long battle, a bill that explicitly supports and funds a 2018 launch of JWST was passed by the Senate. If you are interested in reading the Appropriations Committee's press release, see the title link above. The bill must still be approved by the House of Representatives, which might prove to be the more difficult task.

An artist's conception of JWST. Credit: NASA/ESA

However, it is interesting to note that not everyone - politicians, astronomers, scientists, the public - is in favor of JWST. There is a divide, since the money to fund JWST must come from somewhere, possibly other NASA projects (e.g., smaller satellites, research grants). Others are adamantly in favor of it because the scientific questions it will answer (or plausibly answer) are important.

This topic came up at astro lunch this afternoon, before we knew about the bill passing. Now, I propose the question to the readers. Is JWST a worthwhile investment? I hope to hear all of your comments and opinions.

Magically Spotted

Title: No magnetic field in the spotted HgMn star μ Leporis
Authors: O. Kochukhov et al.

Chemically peculiar stars are, well, peculiar. What makes them so peculiar? When analyzing the abundances of stars (say, via spectroscopy), we generally have a good idea of the relative proportions of all the elements. There is no single rule that applies, as various scenarios can lead to stars with different relative abundances of particular elements. For instance, stars that are the progeny of the first generation of supernovae that exploded in the Universe will typically be metal poor but have a higher abundance of the so-called α-capture elements (O, Ne, Mg, Si, S, Ar, Ca, and Ti). One can easily convince oneself that this is consistent with our view of Type II supernovae (the death of a massive star) when considering the by-products of a runaway nuclear reaction chain. However, this is but one of a few scenarios we can use to generalize the relative abundance patterns of stars. When these generalizations are ignored by nature, we get chemically peculiar stars - stars that show a chemical signature (relative abundance pattern) drastically different from what we have come to expect.

HgMn stars are a particular class of chemically peculiar stars. It turns out these stars happen to be late-B stars, likely a key to understanding the nature of the peculiarities. Specifically, as one might guess, they show anomalous amounts of Hg and Mn (singly ionized, to be precise) - these two chemical species are overabundant. Even more interesting is that they show signs of spot modulation (variations in the light output of the star due to temperature variations across the stellar photosphere). Spot activity is nothing new for anyone familiar with starspots. However, the spots on these stars are not necessarily magnetic in origin (more on this later).

Most chemically peculiar stars are thought to be fairly well understood. The physics involved in creating such stars is related to diffusion. Plasma is constantly on the move within stars - there is no reason for stellar plasma to remain static. Gravity, temperature gradients, and concentration gradients all force particles to reorganize into the most likely configuration (maximization of entropy!), so plasma particles are constantly shifting around in search of this state. One other process that can occur is called radiative acceleration (or levitation), and it turns out to solve a few problems.

B stars are hot. Super hot. About 12,000 K hot. This means that photons have a lot of energy and are capable of transferring that energy to anything in their way (say, some gas particle). Depending on the opacity of radiation to a particular chemical species at a certain point in a star, photons can actually "push" gas particles towards the surface of the star, aiding in the particle diffusion process. This substantial side-effect, strong radiation pressure, is what is thought to "correct" our standard approach to particle diffusion in order to remedy the discrepancies with chemically peculiar stars.

Unfortunately, this theory doesn't seem to do a great job of reproducing the properties of HgMn stars. Magnetic fields have also been shown to help produce chemical inhomogeneities in Ap stars (the "classic" chemically peculiar stars). We then have to wonder whether magnetic fields are at work in HgMn stars. To find out, one just has to look for the presence of a magnetic field around the star. The clue that betrays the presence of a magnetic field is polarized radiation.

I am by no means an expert in E&M, so I cannot provide an intuitive description of the physical process of photon polarization. Instead, I point you to a trustworthy source. To probe a stellar environment for polarized radiation and the presence of a magnetic field, observers use a technique called spectropolarimetry. Essentially, a polarization filter feeds a spectrograph. From this, one can analyze the profiles of particular lines that are sensitive to magnetic fields (typically lines formed by ionized atoms). The figure on the right displays what the reduced data look like. Note the squiggly lines on the right: in the presence of a magnetic field, the amplitude of those variations is far larger - thus, the authors conclude there is no sign of a strong field around this star.

Interestingly, though, when they look at the line profiles, while there is no clear indication of a magnetic field stronger than 3 G, there is an odd shape to some of the spectral lines. These odd shapes do not appear to be an artifact of the reduction procedures, as the authors apply the same procedures to a set of synthetic spectra in order to derive theoretical line profiles. Here is what they find:


Figure 2: From Kochukhov et al. (2011), TiII and YII line profiles from both synthetic spectra (dashed) and observed, time averaged spectra (solid).

The dashed line is the theoretical line profile, while the solid line is the one derived from observations. Indentations can be seen near the trough of the line. The authors suggest the shape arises from the abundance distributions being axisymmetric; this, coupled with rotation, they claim, provides the explanation. However, no theoretical line profiles were computed under this hypothesis.

No matter how you look at it, though, there is no large-scale magnetic field with a strength greater than 3 G. As such, magnetic processes are an unlikely explanation for the peculiar abundance pattern on the surface of these stars. To date, no good hypothesis exists to explain the anomalies we observe. The next lines of attack are to look at more rigorous radiative diffusion scenarios (time dependence) and at the effects of hydrodynamical instabilities and circulation, such as meridional circulation.



25 October 2011

NSF Budget Cuts

Title: Dire Budget Projections from NSF AST
Authors: NOAO

This morning, Julie brought to my attention a recent development in the NSF AST budget drama: Kitt Peak National Observatory (KPNO) might be closed. Due to the enormous impact this would have on the astronomical community, it was decided that this needed to be discussed at today's Astro Lunch before we got to the planned paper. The closure of KPNO hits close to home for Dartmouth astronomers, as MDM Observatory is located on Kitt Peak, about 2 miles from KPNO. Currently, much of the infrastructure at MDM is sub-leased through KPNO, so one can imagine how much strain this would put on MDM. Just imagine, if you will, renting an apartment and having your landlord walk away from the mortgage, leaving you with very few options.

It would be selfish, however, to analyze this situation only in terms of its effects on MDM. The amount of science that would be lost with the closure of KPNO is unimaginable. Small telescopes are crucial on multiple levels, even with a few large survey facilities (LSST, ALMA) on the way. I won't discuss the financial details here, as they are available online; I encourage everyone to head to the website linked in the title above. NOAO gives a solid overview of the current situation and just how dire it may be. There is also an NOAO discussion forum where people can leave their thoughts and comments on the potential closure of KPNO. We must also be careful NOT to panic: this is not set in stone and hasn't been formally approved, but clearly people are talking about it and the idea has been raised.

Civil discourse is always ideal and there is no reason to overreact. Join the discussion, follow the debates, and stay informed. While we must not panic, we must also work to save KPNO if it is endangered. Let us know your thoughts, we're interested to hear all your perspectives.

24 October 2011

An Expensive Planet

Title: Transformation of a Star into a Planet in a Millisecond Pulsar Binary
Authors: M. Bailes et al.

This past week's article comes our way via Science Magazine instead of the usual astro-ph listing. The results made headlines in a wide variety of popular science outlets as the "diamond planet". Normally, pop-sci articles are filled with exaggerated claims and marred by shaky conclusions. So how did the science writers do in reporting these results? Quite well, actually.

After a star dies via a supernova explosion, it will form a neutron star (if it has less than ~20 solar masses), a super-compact object which sustains itself against gravity via neutron degeneracy pressure. If you are familiar with electron degeneracy pressure, for instance in the case of a white dwarf, then neutron degeneracy pressure is already known to you - just replace the electrons with neutrons. It turns out that when electron degeneracy pressure is not strong enough to halt the gravitational collapse of matter, neutron degeneracy pressure is the next line of defense. Anyway, I digress. Post-supernova, as stellar material collapses down onto the core of the stellar remnant, it spins up the core due to angular momentum conservation (think of a figure skater pulling his or her arms in to speed up a spin). This produces a neutron star that is rapidly spinning (~1000 rotations per second) and that has a very strong magnetic field (~10^12 G) which spews material out in the form of bipolar jets: a (radio) millisecond pulsar.
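
To see how the figure-skater argument produces millisecond spin periods, here is a toy calculation (the radii and the initial spin period are purely illustrative, not values from the paper):

```python
def spun_up_period(initial_period_s, initial_radius_km, final_radius_km):
    """Angular momentum conservation for a uniformly collapsing sphere:
    I*omega is constant and I ~ M*R**2, so P_final = P_initial * (R_f/R_i)**2."""
    return initial_period_s * (final_radius_km / initial_radius_km) ** 2

# A core of ~5000 km collapsing to a ~10 km neutron star, initially rotating
# once every ~1000 seconds, ends up spinning in a few milliseconds:
print(spun_up_period(1000.0, 5000.0, 10.0))   # 0.004 s
```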

Credit: European Space Agency & Francesco Ferraro (Bologna Astronomical Observatory)

Large magnetic fields, however, quickly spin down the neutron star through magnetic braking - a process by which angular momentum is transferred from the star to its surroundings via magnetic processes. The spin-down eventually causes the pulsar to stop emitting at radio wavelengths. Interestingly, a "dead" radio pulsar can be revived if it is in a binary system with a stellar companion. The processes by which pulsars are revived are not fully understood, but we will describe a couple of the possible scenarios later.

Now to the results of the paper. Bailes et al. have discovered one of these millisecond pulsars with a binary companion. Not too uncommon (~70% are in binary systems), but based on fairly straightforward geometrical and dynamical arguments (think Kepler's laws), the authors were able to determine the radius and the average density of the pulsar's companion. They find that the radius is actually smaller than Jupiter's! Fantastic - this means there is a planet-sized object orbiting the pulsar. Then, using some clever techniques, the authors derive a lower limit on the average density of the object. If the object were a gas planet, one would expect densities of around 2 g per cm^3; instead, they find a lower limit of 23 g per cm^3! This is a HUGE density!
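
For the curious, here is a minimal sketch of what I understand to be the heart of that density argument: a companion that does not overflow its Roche lobe has a minimum mean density set almost entirely by the orbital period. Using Paczyński's approximation for the Roche-lobe radius, the masses and the separation cancel out; the ~2.2 hour orbital period below is the published value for this system (PSR J1719-1438).

```python
import math

G = 6.674e-8   # gravitational constant [cm^3 g^-1 s^-2]

def min_mean_density(p_orb_seconds):
    """Minimum mean density [g/cm^3] of a companion that fits inside its
    Roche lobe. Paczynski's approximation R_L ~ 0.462 * a * (m2/M_tot)**(1/3)
    combined with Kepler's third law a**3 = G*M_tot*P**2/(4*pi**2) cancels the
    masses and separation, leaving rho_min = 3*pi / (0.462**3 * G * P**2)."""
    return 3.0 * math.pi / (0.462 ** 3 * G * p_orb_seconds ** 2)

# The companion orbits the pulsar every ~2.2 hours:
print(min_mean_density(2.2 * 3600.0))   # ~23 g/cm^3, matching the quoted lower limit
```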

After running a few more tests to check the robustness of their findings and to further constrain the parameters of the binary companion, the team concluded that they had indeed found a pulsar with a planet-sized white dwarf companion, supported by the electron degeneracy pressure mentioned earlier. What is most interesting is that the white dwarf did not form in the typical fashion of a low-mass star evolving to its eventual death. This is evidenced by the low mass (only slightly more massive than Jupiter) and tiny radius (smaller than Jupiter) of the white dwarf. A typical evolutionary scenario would be very, very unlikely to produce a remnant of these proportions. Instead, the white dwarf must have been shaped by interactions with the primary (i.e., the pulsar).

Here there are a couple of different scenarios which could lead to the observed configuration. The first is that the companion white dwarf, once a helium-burning star, was converting helium to carbon in its core. It may then have transferred mass to the neutron star - a process known as accretion, in which material is gravitationally stripped from the star by the neutron star. This material would then either be accreted onto the neutron star or expelled in the jets generated by the magnetic field. Either way, this would act to spin up the neutron star and bring it back to life (i.e., emitting at radio wavelengths).

This process of accretion by the neutron star can happen in a couple of ways, it is thought. The first is if the companion star is fairly far from the pulsar but is itself very large; material can then stream away from the companion. However, this tends to create white dwarfs of varying size, depending on the separation. If, instead, the star starts out close to the pulsar, matter can be transferred more readily, making it easier to create a smaller white dwarf. In either case, for this particular planet to be observed, the mass transfer would have to be rapid enough to remove sufficient mass from the companion BEFORE it had time to develop a typical carbon core. The close-in binary scenario also raises questions about possible interactions during the primary star's life, before it became a neutron star: it presumably would have been rather large, possibly engulfing the secondary, or at least allowing for mass transfer at that stage as well. Complicated physics, to say the least.

Credit: An artist whose name is not associated with this image on several prominent websites.

In the end, the planet-sized object that remains in this system is a very dense carbon object. Very dense carbon inevitably leads to the image of diamonds, so the popular science articles were not too far off in their analysis and spin on the results. We have here a planet that would fetch a very high price on the market - one that would make the Hope Diamond seem like a pebble.

14 October 2011

The Furthest Distance Measure in the World?

Title: A New Cosmological Distance Measure Using AGN
Authors: D. Watson et al.

Measuring actual distances to celestial bodies has been a difficult task since the beginning of astronomy. Each new distance measure has led to fundamental changes in our understanding of the Universe. In particular, the most recent Nobel Prize in Physics was awarded to astronomers who took advantage of a powerful distance measure, Type Ia supernovae, to show that the expansion of the Universe is actually accelerating.


While Type Ia supernovae serve as a good distance measure out to high redshift, they are still limited to z ~ 2. AGN (active galactic nuclei) reverberation, the argument of this paper goes, provides an alternative way to measure distances at high redshift. The idea of AGN reverberation was proposed by Blandford & McKee in the early 80s: for a standard-model AGN, the radiation from the broad line region (BLR) consists essentially of recombination lines from gas clumps ionized by the continuum of the central accretion disk. The causal relation between the central continuum and the BLR emission lines implies that if there is any change in the central continuum, then after a time lag the recombination lines in the BLR should change accordingly. This time delay can be measured with the reverberation technique by continuously monitoring the AGN spectrum. The distance from the BLR to the AGN center - effectively the ionizing radius of the central continuum - can be measured from the time lag. Since the ionizing radius is a function of the continuum luminosity, there is a direct connection between the time delay and the AGN flux measured on Earth, and this relation can be used as a measurement of the luminosity distance. This paper shows that the combination of time lag and AGN flux (essentially the ratio of the lag to the square root of the flux) is consistent with predictions up to z ~ 1.
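
To make the chain of reasoning concrete, here is a minimal sketch of the scaling (my own summary, not code from the paper): the lag gives the BLR radius R = cτ, the radius-luminosity relation R ∝ L^(1/2) turns that into a luminosity, and comparing with the observed flux gives a luminosity distance up to an overall calibration.

```python
import math

C_CM_S = 2.998e10   # speed of light [cm/s]
DAY_S = 86400.0

def reverberation_distance_indicator(tau_days, flux_cgs):
    """Returns c*tau / sqrt(4*pi*F), which is proportional to the luminosity
    distance: R = c*tau and R ~ L**0.5 imply L ~ (c*tau)**2, and combining
    with F = L / (4*pi*D_L**2) gives D_L ~ c*tau / sqrt(F). An absolute
    calibration of the radius-luminosity relation (anchored with low-z AGN)
    is needed to convert this indicator into a distance in cm."""
    return C_CM_S * tau_days * DAY_S / math.sqrt(4.0 * math.pi * flux_cgs)
```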

The downside of this measure is that reverberation requires a lot of observing time. Even though it is possible to apply the method to AGN at z > 2, it would require more observing time on more powerful telescopes. Uncertainties from AGN selection effects or from cosmological effects at high z could either hinder the accuracy of the measure or increase the difficulty of the measurement. However, the authors do address some of these problems and conclude that high-z reverberation will soon become possible.

Indeed, if this distance measure can be applied where no one has been able to measure before, it will be extremely exciting to see how it changes our understanding of the Universe.

06 October 2011

Making Non-constants Constant

Title: The call to adopt a nominal set of astrophysical parameters and constants to improve the accuracy of fundamental physical properties of stars
Authors: Petr Harmanec & Andrej Prša

Astrophysical parameters and constants are, admittedly, not the most exciting topic for a paper. Readily available in the back of most textbooks and compiled in every edition of Allen's Astrophysical Quantities, astrophysical parameters are of little concern, right? Well, it turns out this is not necessarily the case (if it were, we wouldn't be reading a paper about it). There are systems for which observational uncertainties are pushing below 1%, especially with the instruments aboard MOST, CoRoT, and Kepler. As detector noise is reduced, the precision of observational measurements approaches the point where systematic errors in the data analysis models begin to play a role.

Of primary interest to the authors are the definitions of the solar mass, solar radius, and solar luminosity as the natural units of stellar work. Better instruments, data reduction procedures, and observational techniques have led to more refined values for these solar parameters. Tables 1 and 2 in the paper display the evolution of the adopted solar mass, radius, and effective temperature (and hence luminosity) since 1976, when the IAU defined a set of parameters. The fluctuations are on the order of half a percent - insignificant in most practical applications. However, for systems where observational uncertainties are quoted at less than one percent, the difference in the adopted solar parameters becomes a substantial source of uncertainty.
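
As a toy illustration of why this matters (the numbers below are made up, chosen only to mimic the ~0.5% spread quoted above), consider converting a stellar radius measured in solar units into kilometres:

```python
def physical_radius_km(r_in_solar_units, adopted_r_sun_km):
    """Convert a radius quoted in solar units to kilometres; the result
    inherits whatever solar radius the modeller happened to adopt."""
    return r_in_solar_units * adopted_r_sun_km

# Two adopted solar radii differing by 0.5% (illustrative, not measured values):
r_sun_a = 696_000.0
r_sun_b = r_sun_a * 1.005

# A hypothetical eclipsing-binary component quoted as 1.234 R_sun with a 0.5% error bar:
r_star = 1.234
print(physical_radius_km(r_star, r_sun_a))   # ~858,864 km
print(physical_radius_km(r_star, r_sun_b))   # ~863,158 km
# The two conversions disagree by as much as the quoted observational error,
# which is exactly the ambiguity a nominal (defined) solar radius would remove.
```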

How, then, do we combat this? For one thing, the solar mass, radius, and luminosity are not strictly constant. The solar mass may as well be considered constant: mass loss due to the solar wind is a puny ~10^-14 solar masses per year - not a concern. Stellar evolution theory tells us that the solar radius and luminosity change significantly over the Sun's lifetime, but the time scales for these changes are far longer than anything we need to worry about - at least for the next 100 million years or so. There is also the question of the solar cycle, which may alter the solar radius and luminosity depending on whether the Sun is in an active or quiescent period. Results of such studies are contradictory: one group finds the radius and luminosity increase during active periods, while another finds exactly the opposite. Either way, if we average observations over a solar cycle, or simply measure the Sun in the middle of a cycle, we can come to an acceptable solution - that is, if we are careful. Observers need to be meticulous and perform very precise observations, especially considering that defining the "radius" is difficult when looking at an image of the Sun: the boundary of the surface is rather hazy.

Either way, assuming the solar observers have made good observations over the years (which they have), the values still differ slightly. In order to set all models on a common basis, the authors suggest adopting a nominal set of values for each of these parameters. This way, even if the actual values change as a result of more precise measurements, models (and thus other observations) will be unaffected. The authors propose a set of nominal values, although their selection process and value determination method are not described in detail. It does get the conversation started, though. Unfortunately, to make this happen on a large scale, the IAU would have to set up a committee (or several) to discuss and debate which set of values should be adopted as nominal. Do we use values determined by one observer or one research group? What about an average over several observations? These issues would have to be addressed in order to define the parameters in a rigorous manner.

As a final note, how many astronomers are actually concerned about this? Would the call to adopt nominal parameters be supported strongly enough to force the IAU to invest substantial resources into sorting out this issue? If only a small sample of astronomers is truly concerned, then the discussion is moot. We can instead imagine a situation where particular groups of researchers adopt standard parameters (e.g., eclipsing binary observers adopt their own set for the sake of consistency), or where researchers are simply required to quote in their papers which values they use, essentially defining, in physical units, the results of their work.

03 October 2011

Welcome!

Welcome to the blog for Dartmouth's Astronomy Lunch.  We hope that this will be a way to catch up on what was talked about at astro lunch for those who can't make it, as well as a way to spur more discussion outside of our hour meetings.  If you have any comments or suggestions, please let Greg or me know.  Thanks!