Does anyone know network theory? I need to apply it to the universe.
I must take exception to this. 100% efficiency would be possible only if there were zero feedback — if the stars did not act back on the gas from which they formed — and zero angular momentum. This can never happen: stars give off light, there are magnetic fields, angular momentum transfer, etc. And I will stand by the word never, because 100% efficiency means that ALL of the gas goes into stars, which won’t happen even with zero feedback.
We can, however, change what is meant by “efficiency.”
I took the course “Astrophysics II,” which is just an amalgam of three or four classes mushed into one semester. We didn’t really have problem sets; rather, our homework mostly consisted of doing small projects (I wrote a crappy absolute photometry code for one) or preparing presentations.
Our final project is to write a faux proposal requesting time on a telescope. This requires choosing a project and a telescope, and determining when/where/how long/with which instrument/why to look with the telescope. Those are a lot of questions!
I got the scientific justification out of the way pretty quickly; I chose a topic I have been thinking about for months now, so I didn’t have to go searching for references and junk. I chose to write a proposal for the FIFI-LS instrument on the SOFIA platform — a 2.5 meter infrared telescope mounted in the back of a modified 747. I want to get a map of ionized carbon, C+, of a certain cloud. If you look through my post archive, I’ve talked about this cloud a lot: MBM 12. Now this is a big region to map, and doing it at full resolution would take something like 80 hours (without the calibration overhead!!), so the map will have to be very sparse. But there have only been 15 measurements of C+ taken so far, and in 3 hours I can get almost 300 in. I could probably squeeze in perhaps 50% more points if I varied the observation time for each point, going by the linear relationship between the far infrared emission and the carbon emission (Ingalls 2002). But that might not be the best thing to do in a mapping project.
So after 10 seconds on target, I should be able to detect the C+ line throughout the whole cloud (sensitive to a flux of ~2.8×10^-16 W m^-2). But we also need nod and chop positions, so the total observing time per point is up to 4 times the time on target: 40 seconds. The telescope looks at a nearby empty part of the sky in order to measure the background emission from the sky so that it can be removed, leaving only the carbon line. Then the telescope slews to a different position and repeats — for the same observation.
The FIFI-LS can take 25 spectra at once, arranged in a 5x5 grid with each spatial resolution element 12″ per side: 1′ per side in total. The area I’d like to map is about 2°x4°. Ideally, the map would be bigger, in order to satisfactorily combine it with some Fermi gamma-ray data. I could probably request as much as 4 hours, which would give me 360 pointings to work with.
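As a sanity check on the time budget, here is a quick back-of-the-envelope script. The 10 second integration, the 4x nod/chop overhead, and the 3–4 hour totals are the numbers quoted above; everything else is just arithmetic, not the real SOFIA exposure-time calculator.

```python
# Back-of-the-envelope observing-time budget for the proposed C+ map.
# Numbers are the ones quoted in the post, not official instrument values.

T_ON_TARGET = 10        # seconds on target needed to detect the C+ line
OVERHEAD_FACTOR = 4     # nod/chop positions multiply the time per point

time_per_point = T_ON_TARGET * OVERHEAD_FACTOR  # seconds per pointing

def pointings(hours):
    """How many 5x5 FIFI-LS footprints fit in the given number of hours."""
    return int(hours * 3600 // time_per_point)

print(time_per_point)  # 40 seconds per pointing
print(pointings(3))    # 270 pointings in 3 hours ("almost 300")
print(pointings(4))    # 360 pointings in 4 hours
```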
Of course, this proposal is NOT going to be actually submitted, though with some touch-ups, it might be able to be. Perhaps in cycle 3…
This will be interesting, as I am going to do this from memory. This is the undergraduate list.
I can’t remember any more. There were, of course, the (unpublished) lab manuals. The three intro classes (mechanics, E&M, waves) had associated labs as separate classes. The waves lab doubled as an optics “lecture.” Additionally, there was an introductory lab course as well as an advanced lab class.
Beyond this, I only took one additional physics course: string theory, taught by S. James Gates. He was, unfortunately, only able to teach about 3/4 of the time and his post-doc taught the other lectures. Gates had to go off to advise President Obama on matters of science. He used his own notes and sometimes pointed us to the video lectures he gave as part of “The Great Courses” by the teaching company.
I really wish there was a bigger emphasis on programming, since the vast majority of all physics and astronomy students NEED it in research and it is a very employable skill. It also helps to develop the skill of breaking down a problem into manageable parts. I told the undergraduate adviser this on more than a few occasions and, apparently, UMD used to offer such a course. I took the computational astrophysics course instead and was very happy with it.
I was NOT ready for quantum mechanics when I first took it in the spring of my sophomore year. My brain just wasn’t up to the task, I barely scraped by with a B-, and that was a generous grade. I waited a year to do the second semester and did extremely well. By then I had learned how to study more effectively and my mind was much more agile.
I won’t do a list of the math books I used. I did take Calc 3, linear algebra, ordinary differential equations, (semi-applied) partial differential equations, complex variables, and differential geometry.
Time for a list of some of the textbooks I used and for which class they were used.
I can’t remember which books were used for my theoretical astrophysics and solar system classes (the latter was basically an orbital dynamics course).
The following are regarded as good intro books:
This only takes care of the astronomy books. It doesn’t mention the physics/math books! Nor anything graduate level.
Hi! I really enjoy your blog. I was just wondering if you have any suggestions for astrophysics textbooks for a high school student with knowledge of classical mechanics and calculus II? Thanks!
You’re more advanced than the intro astronomy class I teach (for non-science majors). I’ll dig through my undergraduate stuff and see if I can’t find my old intro textbook for physics and astronomy majors.
The intro stuff is basically a big survey of astronomy. Before you can really start applying your calculus and physics to astronomy, you should get used to the language spoken. There are, unfortunately, a lot of terms and concepts to get used to. But they’re fun!
Hell, maybe I’ll just include a list of books for all the astronomy and physics classes I took. It will look daunting, but remember I took 4 years to go through all of them, and that was with the help of instructors. So stay tuned, and if you don’t hear from me soon, feel free to badger me with more messages.
The next project will be interesting, especially considering it was my idea. MBM 12 turned out to be really hard because of a coincident active galaxy. I did a quick check of the Fermi gamma-ray point sources against a catalog of high (galactic) latitude molecular clouds, and fewer than a dozen clouds have point sources even within 1°. So they’re easier. I checked, for instance, MBM 20, and the field was pretty clean. I’ll provide images later. A survey like this has not been done because there was little to no carbon monoxide data at these locations. I’ll have to use Planck’s all-sky CO map because Dame and collaborators have yet to publish their high latitude CO survey — but the Planck collaboration had access to the unfinished survey!
For now, I have to finish writing this proposal to request telescope time for the FIFI-LS instrument on the SOFIA platform. It is for a class final project, not to actually submit. But the project is to pretend that we actually want to submit it, so it has to be of a certain quality. And who knows, maybe my adviser and I will submit a proposal next year for cycle 3.
I also have to write up a short 10-15 minute talk about my research for the Fermi summer school (starts in a week!). But that’s not the hardest thing; I’ve only been thinking about the material for more than 8 months now.
Just one quick word on this subject before we finish up. First, we DO expect the decay of the inflation-causing field — i.e. the stopping of inflation at different parts of space — to happen exponentially. According to wiki, the waiting time between events in a Poisson process follows the exponential distribution. Ok, so the number of decay events follows a Poisson distribution if the field decay is a random process like radioactive decay. Really, we want the cumulative distribution, as that gets at the probability summed up over time. The large-time behavior is simply that of a decaying exponential. It is modified by a polynomial, but the exponential will dominate at large times. So the process is characterized by the timescale of decay, and our argument in the first post on eternal inflation mostly holds; we merely considered the simpler case.
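As a toy illustration of that claim (a quick Monte Carlo with an arbitrary τ = 1, nothing specific to inflation): if each region decays independently after an exponentially distributed waiting time, the fraction of regions still inflating at time t tracks exp(-t/τ).

```python
import math
import random

# Toy Monte Carlo: each of N regions stops inflating after an
# exponentially distributed waiting time with mean lifetime tau.
# The surviving fraction at time t should track exp(-t/tau).
random.seed(42)
N = 100_000
tau = 1.0
decay_times = [random.expovariate(1.0 / tau) for _ in range(N)]

t = 1.0
surviving = sum(1 for d in decay_times if d > t) / N
print(surviving, math.exp(-t / tau))  # the two values nearly agree
```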
So, to conclude: we can say that any inflationary theory which can possibly give rise to our universe will result in eternal inflation, unless there is an arbitrary condition that the whole universe must stop inflating all at once. Which is silly. If inflation is caused by a quantum field, then inflation should freeze out like anything else. There will be phase transitions and such. Think of how ice forms: a whole chunk of water doesn’t all freeze at once. Crystals form at nucleation sites and spread from there. This analogy doesn’t go too far, but it demonstrates the fact that the substrate should not be expected to stop inflating all at once.
In order to explain the various problems encountered in standard big bang cosmology (the flatness, relic, etc. problems), the universe had to expand by something like a factor of 10^30 at around 10^-34 seconds, with a doubling time of something around 10^-37 seconds generically (give or take some orders of magnitude). But the change in scale factor is necessary — it could be bigger, but probably not much smaller.
Therefore the timescale for expansion should be smaller than the timescale for the stopping of inflation, or else the universe could not have gotten so big. I glossed over a lot of issues and some of my assumptions are silly. But the conclusion that inflation generically results in eternal inflation is being taken seriously by some in the field. Not that I’m a fan of it at all.
Remember, take nothing I say as truth.
We can also review why the expansion of the universe is exponential as well. This requires a lightning review of cosmology in a homogeneous and isotropic universe. For now, we can ignore matter. The addition of matter will add some correction. In the case of inflation, it can be assumed that the energy density of the inflation-causing field is significantly larger than the energy density in matter, therefore the correction due to the addition of matter will be a small perturbation on top of the inflation solution.
The scale factor of the universe obeys the Friedmann equation (derivable from Einstein’s field equations for a homogeneous/isotropic universe):
(da/dt)^2 = (Λc^2/3) a^2
Where Λ is the energy density of the inflation-causing field (cosmological constant, here). Λ cannot be position dependent due to the cosmological principle. So of course, the scale factor will be exponential. But what if Λ is time dependent?
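(To back up the “of course” in the constant-Λ case: taking the square root of the Friedmann equation above and keeping the expanding branch gives

da/dt = √(Λc^2/3) a  ⟹  a(t) = a(0) exp(√(Λc^2/3) t)

so the e-folding timescale is √(3/(Λc^2)). The smaller Λ, the slower the exponential growth.)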
Well then, all bets are off. Still, we can do certain things. What if Λ changes very slowly? “Slowly” has an almost-precise meaning here. Space expands exponentially at a rate set by the value of Λ, so Λ determines the timescale of inflation, τ. If Λ/(dΛ/dt) ≫ τ (that is, if the time it takes Λ to change appreciably is much longer than τ), then Λ is changing much more slowly than space is expanding.
In this case, we will still have exponential expansion. At each time, we can roughly consider Λ to be constant. So space expands exponentially, but the expansion rate is allowed to change through time. This is the adiabatic approximation I think. We can think about other possibilities later. Recall, however, that the Friedmann equation holds for the classical gravitational field. What corrections will arise from quantum gravity?
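Here is a quick numerical sketch of that adiabatic claim, in toy units with c = 1 and a made-up, slowly decaying Λ(t) (none of these numbers come from a real model): integrate the Friedmann equation with a forward-Euler step and compare against the adiabatic answer a(t) = exp(∫H dt) with H(t) = √(Λ(t)/3).

```python
import math

# Toy check of the adiabatic approximation: integrate
# da/dt = sqrt(Lambda(t)/3) * a for a slowly varying Lambda(t),
# then compare with the adiabatic solution a = exp(integral of H dt).
# Units: c = 1; Lambda(0) = 3 so that H(0) = 1. All values invented.

T_LAMBDA = 100.0  # Lambda varies on this (long) timescale

def Lam(t):
    return 3.0 * math.exp(-t / T_LAMBDA)

def H(t):
    return math.sqrt(Lam(t) / 3.0)  # instantaneous expansion rate

dt, t_end = 1e-3, 5.0
a, t = 1.0, 0.0
while t < t_end - 1e-12:
    a += H(t) * a * dt  # forward-Euler step of the Friedmann equation
    t += dt

# Adiabatic prediction: ln a = integral of H dt
#                            = 2*T_LAMBDA*(1 - exp(-t/(2*T_LAMBDA)))
ln_a_adiabatic = 2.0 * T_LAMBDA * (1.0 - math.exp(-t_end / (2.0 * T_LAMBDA)))
print(math.log(a), ln_a_adiabatic)  # the two agree closely
```

The expansion stays essentially exponential at each instant even though the rate drifts, which is the whole point of the adiabatic approximation.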
In conclusion, spatial expansion is generically exponential using less handwaving arguments. A next step would be to show that the decay should behave exponentially.
Is eternal inflation a generic consequence of any generic theory describing an inflating universe? The short answer is yes — in the case where spacetime is a continuous manifold. I believe Alan Guth glossed over this point by stating what I state in the third paragraph (3rd sentence). But inflation is a semi-classical theory right now, quantum field theory done on a classical (Einsteinian) spacetime. Let’s review the concept of inflation again (so we don’t have to read another boring blog post).
Inflation = exponential expansion of space. Perhaps it is caused by some field (google “inflaton field”) which is generally unstable — exponentially unstable, just as a radioactive nucleus decays. The decay does not happen everywhere at once, so isolated bits of the substrate stop their exponential expansion.
Assume the characteristic time for space to stop inflating is τ seconds. Roughly, the number of spatial points still inflating goes like exp(-t/τ). It will take an infinite amount of time for the whole substrate to stop inflating, as there are an infinite number of points which need to stop. However, let’s move to certain ideas of quantum gravity where space is discrete. If space is finite in extent (let’s assume S3 for simplicity, but it really doesn’t matter), then there are a finite number of spatial “atoms.” The number of atoms still inflating, then, will exponentially decrease.
We do have to remember, however, that this substrate is simultaneously exponentially expanding while the inflation-causing field is exponentially decaying. The scale factor of the substrate is increasing exponentially, a(t) ∝ exp(t/T), where T is the timescale for inflation, basically how long it takes for the universe to double in size. This means that new space is being created. If we wish to view the quantum space as a spin network (loop quantum gravity, spin foams, etc), then this expansion might look something like the first Pachner move given below:
Figure reproduced without permission (sorry!) from Backreaction; it can be found in numerous reviews on spin networks and loop quantum gravity. The top move, where a point opens into a triangle (one node becomes three), can represent expansion (the opposite move represents contraction). The number of these new nodes, new points in space if you will, increases exponentially. So there are two competing effects determining the amount of space still inflating: the stopping of inflation, and the creation of new space by the still-inflating substrate. Assume the number of nodes still experiencing inflation is labeled N.
N ∝ exp(-t/τ)exp(t/T) = exp[ t(τ-T)/(Tτ) ]
This says that the number of nodes experiencing inflation will run out if the timescale for inflation-decay is shorter than the timescale for the exponential expansion of space (τ < T). This makes sense: if the field decays too fast, then space can’t expand and create new space in time to compensate. Conversely, if the timescale for expansion is smaller than the timescale for inflation-decay (T < τ), we will have eternal inflation. Inflation will continue forever and never run out, even with a finite number of spatial points.
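A toy Monte Carlo of this competition (all the numbers here are invented for illustration): per time step, each inflating node stops with probability dt/τ, and each surviving node spawns one new node with probability dt/T. With τ > T the population of inflating nodes explodes (eternal inflation); with τ < T it dies away.

```python
import random

def simulate(tau, T, n0=500, dt=0.05, t_end=4.0, seed=1):
    """Toy model of the competition above: per step of length dt, each
    inflating node stops with probability dt/tau (field decay) and each
    survivor spawns a new node with probability dt/T (space creation).
    Returns the final number of inflating nodes."""
    random.seed(seed)
    n, t = n0, 0.0
    while t < t_end and n > 0:
        survivors = sum(1 for _ in range(n) if random.random() > dt / tau)
        births = sum(1 for _ in range(survivors) if random.random() < dt / T)
        n = survivors + births
        t += dt
    return n

print(simulate(tau=2.0, T=1.0))  # tau > T: grows (eternal inflation)
print(simulate(tau=1.0, T=2.0))  # tau < T: inflation dies out
```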
This, of course, assumes the decay and expansion rates are both exponential in their behaviors. Geometric behavior (such as A·m^t, with A, m ∈ ℝ) will provide a correction factor related to the natural logarithm, ln(m). Other behaviors will uniquely determine whether inflation is eternal or not. However, a geometric/exponential behavior is the model which makes the most sense, so we won’t cover other behaviors. We’ll see why.
Inflation is essentially the creation of new space, an increase in the universe’s scale factor. We also need to maintain homogeneity and isotropy. The universe must look the same in every direction from every point. Unless we want to do different cosmology — this is the cosmological principle (and is extended to the laws of physics). In order to reconcile this with the creation of new space, each point in space must spawn new space in every direction. In an infinite space, the positive/negative direction can be ignored and new space may be created in only, say, the positive direction.
By definition, the number of new points of space created is proportional to the current number of points in space. Again, ignoring some difficulties presented by the continuum. Literally we have something of the form:
dN/dt = AN
Where the solution is naturally an exponential or can very easily be written as a sum of exponentials.
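Explicitly, separating variables gives dN/N = A dt, so

N(t) = N(0) exp(At)

with 1/A playing the role of the expansion timescale T from before: the more space there is, the more new space gets created, which is exactly what makes the growth exponential.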
Yes, he was there. It was a good talk. This was a series called “great thinkers of our time” or something like that. He talked about inflationary cosmology — he calls it a prequel. Inflation sets the stage for conventional cosmology, explaining why the big bang started hot, flat, etc. Many questions which the standard big bang theory took as assumptions are explained fairly naturally via inflation. In short, the universe expanded exponentially fast for a period of time. In the early universe, it took something like 10^-37 seconds to double in size, then another 10^-37 seconds to double in size again, etc. Cool.
But what stops it? Does it stop at all points of the universe at the same time? The answer to the second question is no. This will give us eternal inflation in time. As for what stops it … well, inflation is caused by an unstable field. It is analogous to a radioactive nucleus. We know it is going to decay, to metamorphose, after some time. The probability of this decay increases asymptotically to one over time, and this increase is exponential. This inflation-causing field behaves the same. There is a probability that it decays and stops inflation. This probability increases with time.
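To make “the probability increases asymptotically to one” concrete: for an exponential decay with mean lifetime τ, the probability of having decayed by time t is the cumulative distribution 1 − exp(−t/τ). A tiny sketch (τ = 1 is arbitrary here):

```python
import math

def p_decayed(t, tau):
    """Probability that an exponentially decaying field (mean
    lifetime tau) has decayed by time t: the exponential CDF."""
    return 1.0 - math.exp(-t / tau)

for t in (0.0, 1.0, 3.0, 10.0):
    print(t, p_decayed(t, tau=1.0))  # climbs from 0 toward 1
```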
And as points in spacetime are individual degrees of freedom, the stopping of inflation occurs almost stochastically from point to point. One point may stop inflation and that might nucleate the stoppage around it; I don’t really know how this works. But the decay of the inflation-causing field is what creates the matter (imagine it almost like the spontaneous symmetry breaking of the Higgs mechanism giving particles mass).
So different parts of the universe stop inflation at different times. We’ll call the parts of spacetime which are still inflating the substrate. The places where inflation has stopped may have different constants of nature (electron mass, fine structure constant, etc.) related to exactly how the inflation-causing field decayed to the true vacuum. Each region with a coherent set of physical laws and constants will be called a universe, and the substrate PLUS the collection of all universes will be called the multiverse.
So there was that. He also went over the anthropic principle, which posits that we occupy a very special universe merely because we are here to observe it. The requirement that galaxies be able to form predicts the cosmological constant (dark energy strength) to within a factor of 5. But this calculation assumes that the universe’s properties are chosen completely randomly (a uniform probability distribution function) from the landscape of 10^500 universes. And the probability of getting a universe with our cosmological constant is roughly 10^-120. Which is small by most standards. Is this natural? Is it satisfying? I have questions remaining after his talk, but I’ve blabbed on long enough for now. Those questions are for later.
Alan Guth, the theoretical cosmologist credited with coming up with inflationary cosmology, visited Hunter College in Manhattan today. I’ll go into details later, but there is work to do and I’m tired.