Tuesday, October 29, 2013

the relation between dust extinction and SFR changes with stellar mass

Paper here:   http://arxiv.org/pdf/1211.7062v2.pdf



There is a sharp transition in the relation at a stellar mass of 10^10 M_solar

Useful Python scripts for astronomy

Besides numpy, scipy and astropy, which you have probably already used, there are many small utility scripts that are handy for daily work. I have written some myself, for example to read a catalog or make a simple plot from one, but their use is quite limited. Here I introduce some good scripts from the website below; they are more powerful and can be used in a much wider range of situations. I hope this will give you more time to focus on your actual work!

http://www-int.stsci.edu/~ferguson/software/pygoodsdist/
http://www.stsci.edu/~ferguson/software/pygoodsdist/doc/index.html

pygoods.tar

Full package of the utilities listed below (a minimal stand-alone sketch of the angular-separation and matching utilities follows the list):
angsep.py -- Angular separation between two celestial sources
coords.py -- Utilities for parsing and converting coordinates
parseconfig.py -- Utilities for reading SExtractor-style parameter files
numprint.py -- Utilities for printing columns of numpy one-dimensional arrays
readcol.py -- Utilities for reading columns of numbers from ascii files
sextutils.py -- Utilities for reading SExtractor catalogs
match.py -- Utilities for coordinate matching
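
If you only need the angular separation and a quick coordinate match and do not want to install the full package, a minimal sketch along these lines works (plain numpy; the function names here are my own, not the pygoods API):

import numpy as np

def angsep(ra1, dec1, ra2, dec2):
    """Angular separation in arcsec between two positions given in degrees.

    Uses the Vincenty formula, which stays accurate for both very small
    and very large separations.
    """
    ra1, dec1, ra2, dec2 = map(np.radians, (ra1, dec1, ra2, dec2))
    dra = ra2 - ra1
    num = np.hypot(np.cos(dec2) * np.sin(dra),
                   np.cos(dec1) * np.sin(dec2) - np.sin(dec1) * np.cos(dec2) * np.cos(dra))
    den = np.sin(dec1) * np.sin(dec2) + np.cos(dec1) * np.cos(dec2) * np.cos(dra)
    return np.degrees(np.arctan2(num, den)) * 3600.0

def simple_match(ra_a, dec_a, ra_b, dec_b, radius=1.0):
    """Brute-force nearest-neighbour match of catalog A against catalog B.

    Returns, for each source in A, the index of the closest source in B
    (or -1 if nothing lies within `radius` arcsec).
    """
    idx = np.full(len(ra_a), -1, dtype=int)
    for i, (ra, dec) in enumerate(zip(ra_a, dec_a)):
        sep = angsep(ra, dec, np.asarray(ra_b), np.asarray(dec_b))
        j = np.argmin(sep)
        if sep[j] <= radius:
            idx[i] = j
    return idx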

Monday, October 28, 2013

[Oct 28] The choice of slit width: a method used in practice

About the slit width:

You know that the slit width affects the spectrum you obtain, but do you know what width you should choose?

For an observation that needs both high resolution and a reliable flux normalization, you can proceed as follows. Observe with a narrow slit and enough exposure time to obtain a spectrum at the resolution you need; then take a short exposure with a slit wide enough to contain all of the flux of the source, which gives the true flux level. The wide-slit spectrum is only used to determine the normalization of the flux: rescale the high-resolution spectrum to match it and you get a spectrum with both high resolution and the correct flux level.
You can have a simulation here: http://terpconnect.umd.edu/~toh/models/AbsSlitWidth.html
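
Here is a minimal sketch of the normalization step (a toy example of my own, not a real reduction pipeline): integrate both spectra over a wavelength window they share and rescale the narrow-slit spectrum so its integrated flux matches the wide-slit one.

import numpy as np

def rescale_to_wide_slit(wl_hi, flux_hi, wl_wide, flux_wide, wmin, wmax):
    """Scale a high-resolution narrow-slit spectrum to the flux level
    measured through a wide slit.

    wl_* are wavelength arrays, flux_* the corresponding flux densities;
    (wmin, wmax) is a wavelength window covered by both spectra.
    """
    # Integrated flux of each spectrum over the common window
    m_hi = (wl_hi >= wmin) & (wl_hi <= wmax)
    m_wd = (wl_wide >= wmin) & (wl_wide <= wmax)
    f_hi = np.trapz(flux_hi[m_hi], wl_hi[m_hi])
    f_wd = np.trapz(flux_wide[m_wd], wl_wide[m_wd])

    # The wide slit contains all of the source flux, so its integral sets
    # the true normalization; the spectral shape comes from the narrow slit.
    scale = f_wd / f_hi
    return flux_hi * scale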


About study in general:
    Check by comparison
Usually you write a code or build a simulation similar to someone else's, but you do not know whether it is correct. Compare the two sets of results: this lets you verify your work and can also help you improve it.

Friday, October 25, 2013

BPZ, photometric redshift and some stuff


A short History: (see e.g. Yee 1998 for a review)
   http://arxiv.org/pdf/astro-ph/9809347.pdf
Baum (1962)
Colors of early-type galaxies, measured in 9 bands with a photometer, were turned into a low-resolution SED to determine distances of galaxy clusters relative to other clusters of galaxies.



Koo (1985)
Colors (from photographic plate material) were compared to colors expected for synthetic Bruzual-Charlot SEDs. Redshifts were estimated from iso-z lines in color-color diagrams.

Loh & Spillar (1986) used χ2-minimization for redshift estimates.
Pello and others developed a method of `permitted’ redshifts: the intersection of the permitted redshift intervals for all measured galaxy colors defines `the’ redshift of a galaxy.
Photometric redshifts have become very popular since the middle of the 1990s:
-- well calibrated, deep multi-waveband data (HDF, other deep fields, SDSS)
-- representative spectroscopic data sets available to test the methods (Keck, VLT, SDSS...)
-- better cost efficiency if only an approximate redshift is needed



Photometric Redshifts: Methods
Template based:
color-space tessellation, χ2-minimization, maximum likelihood, Bayesian...
uses physical information (SEDs, sizes, compactness...) and is therefore biased, but extrapolates reasonably well into unknown territory.
Learning based:
Nearest Neighbour, Kd-tree, direct fitting, Neural Networks, Support Vector Machines, Kernel Regression, Regression Trees & Random Forests...
ignores physical information and is therefore unbiased; can uncover unknown dependencies,
but requires a large training set and extrapolates badly.


Direct Fitting
developed by Connolly et al. 1995; applied to z = 0-0.6 galaxies with limiting magnitudes of 23, 22, 21 and 20 in the U, B, R and I photographic plate bands.
The redshift is described as a linear or quadratic function of the magnitudes of the galaxies in several bands. Coefficients are determined with a spectroscopic training set by linear regression.


`advantage’: no physical assumptions need to be made beyond the fact that the training set and the data set are statistically very similar.
`disadvantage’: the coefficients do not carry over to data sets that reach fainter magnitudes or higher redshifts, or that contain modestly different types of galaxies.
The method has been applied in three-dimensional color space to HDF data by Wang et al. 1998.
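
As an illustration of the idea (my own sketch, not the Connolly et al. code), fitting z as a quadratic function of the magnitudes by linear least squares only takes a few lines; `mags_train` and `z_train` stand for an assumed spectroscopic training set.

import numpy as np
from itertools import combinations_with_replacement

def design_matrix(mags, order=2):
    """Build the [1, m_i, m_i*m_j, ...] terms up to the given order."""
    cols = [np.ones(len(mags))]
    for deg in range(1, order + 1):
        for idx in combinations_with_replacement(range(mags.shape[1]), deg):
            cols.append(np.prod(mags[:, idx], axis=1))
    return np.column_stack(cols)

def fit_direct(mags_train, z_train, order=2):
    """Least-squares coefficients mapping magnitudes to redshift."""
    A = design_matrix(mags_train, order)
    coeffs, *_ = np.linalg.lstsq(A, z_train, rcond=None)
    return coeffs

def predict_z(mags, coeffs, order=2):
    """Apply the fitted coefficients to a photometric sample."""
    return design_matrix(mags, order) @ coeffs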


Template methods
Measured colors (or fluxes) are compared to colors (or fluxes) predicted for various template SEDs and redshifts; the best-fitting redshift, SED type and object type (star, galaxy, QSO) are derived. Methods: BPZ (Benitez), Hyperz (Bolzonella/Pello), LePhare (Arnouts), COSMOS (Mobasher), ZEBRA (Feldmann et al.), PHOTO-z (Bender), ...
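
To make this concrete, here is a hedged χ2-minimization sketch (my own illustration, not any of the codes listed above). It assumes you already have observed fluxes with errors and a grid of template fluxes precomputed on a redshift grid, and it fits the overall normalization analytically for each (z, template) pair.

import numpy as np

def chi2_photoz(flux_obs, flux_err, model_grid, z_grid):
    """model_grid has shape (n_z, n_templates, n_bands): template fluxes
    already redshifted and integrated through the filter curves.

    Returns the best-fit redshift, template index, and the chi^2 surface.
    """
    w = 1.0 / flux_err**2
    # Best-fit normalization a = sum(w f F) / sum(w F^2) for each (z, T)
    a = (w * flux_obs * model_grid).sum(axis=-1) / (w * model_grid**2).sum(axis=-1)
    chi2 = (w * (flux_obs - a[..., None] * model_grid)**2).sum(axis=-1)
    iz, it = np.unravel_index(np.argmin(chi2), chi2.shape)
    return z_grid[iz], it, chi2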
Which templates?
-- Coleman, Wu & Weedman (1990): empirical spectro-photometric SEDs from low-z galaxies
-- templates derived from stellar population models, e.g. BC templates
-- self-calibrated, optimized or semi-empirical templates are preferred
-- difficulties:
--- rest-frame UV extension of the galaxy SEDs (use synthetic spectra or broad-band photometry)
--- finding a sufficiently representative SED set
How many templates?
-- depends on the science question...; too many may hurt!
-- eigenspectra can provide a continuous set (method: Connolly et al 1995, applied by Yip et al 2004 to 170,000 SDSS spectra with r < 18 and a median redshift of 0.1); see the sketch below
--> 3 eigenspectra are sufficient to describe the variance of the SEDs to within 2%
--> 5 are more appropriate for a large redshift range
--> ideally, make the eigenspectra dependent on redshift
--> other option: fit a combination of SSPs or CSPs + dust (old + medium + young + dust)
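
A rough sketch of how such eigenspectra can be derived with a plain PCA (a toy version of the Connolly et al. 1995 / Yip et al. 2004 approach; it assumes the spectra are already normalized and resampled onto a common rest-frame wavelength grid):

import numpy as np

def eigenspectra(spectra, n_components=5):
    """spectra: array of shape (n_galaxies, n_wavelengths), all normalized
    and resampled onto the same rest-frame wavelength grid.

    Returns the mean spectrum, the first n_components eigenspectra, and the
    fraction of the total variance each one captures.
    """
    mean = spectra.mean(axis=0)
    # SVD of the mean-subtracted spectra; the rows of vt are the eigenspectra
    u, s, vt = np.linalg.svd(spectra - mean, full_matrices=False)
    var_frac = s**2 / np.sum(s**2)
    return mean, vt[:n_components], var_frac[:n_components]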



Bayesian photometric redshift estimates:
prob(A and B) = prob(B and A) = prob(A|B)*prob(B) = prob(B|A)*prob(A)
=> Bayes' theorem:
     prob(A|B) = prob(B|A) * prob(A) / prob(B)
now, translate: A = Model, B = Data:
     prob(model|data) = prob(data|model) * prob(model) / prob(data)
prob(model) is called the prior probability for the model (parameters); prob(data) is just a number and thus simply a normalization constant.
prob(model) is usually ignored in χ2-minimization and maximum likelihood, but it can be used to include our prior knowledge/prejudice: e.g. no red ellipticals at z > 1, no low-metallicity objects at low z, no galaxies with M_B < -26; a low Sersic index n indicates a late-type SED; a large apparent size means low z. All of this helps to improve photometric redshifts.
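
A minimal sketch of how such a prior enters in practice (my own toy illustration of the Bayesian idea, not the BPZ implementation): turn the χ2(z, T) grid from the template fit into a likelihood, multiply by a magnitude-dependent prior p(z, T | m), and marginalize over the templates.

import numpy as np

def bayesian_pz(chi2, prior):
    """chi2 and prior both have shape (n_z, n_templates); prior encodes
    p(z, T | m), e.g. suppressing intrinsically over-bright solutions.

    Returns the posterior p(z | data, m), marginalized over templates.
    """
    like = np.exp(-0.5 * (chi2 - chi2.min()))   # subtract the min for numerical stability
    post = (like * prior).sum(axis=1)           # marginalize over the template set
    return post / post.sum()                    # normalize over the redshift grid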

ABOVE IS FROM: http://www.mpe.mpg.de/opinas/talks/photoz_rb.pdf

A PPT you may be interested in:
  It discusses the influence of the photometric bands on the redshift results.

https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=9&ved=0CHAQFjAI&url=http%3A%2F%2Fwww.astro.caltech.edu%2Ftwiki_phat%2Fpub%2FMain%2FPHATMeetingJPL%2FCoe.ppt&ei=hdPlUvniOs7hsATsu4L4DQ&usg=AFQjCNGXO4AQf9UqcLPXmLCeRI2GOORo3A&sig2=SqgvOU7wVamxblQKW3Qq1Q&bvm=bv.59930103,d.cWc&cad=rjt





Websites collecting useful codes



Photometric redshift (photo-z) codes


EAZY -- Brammer et al. 2008
BPZ -- also see Narciso Benitez's BPZ page, including BPZ v1.98b
ZEBRA -- can also get age, max, etc.
Hyperz -- Bolzonella et al. 2000
LePhare -- 2006

The SED-fitting code FAST

The main difference with HYPERZ is that (1) FAST fits fluxes instead of magnitudes, (2) you can completely define your own grid of input stellar population parameters, (3) you can easily input photometric redshifts and their confidence intervals, and (4) FAST calculates calibrated confidence intervals for all parameters. However, note that, although it can be used as one, FAST is not a photometric redshift code.

Monday, October 21, 2013

Flux conversion from Jy to erg s^-1 cm^-2 A^-1

The mks units of flux density, W m^-2 Hz^-1, are much too big for practical astronomical use, so we define smaller ones:

1 Jansky = 1 Jy = 10^-26 W m^-2 Hz^-1 = 10^-23 erg s^-1 cm^-2 Hz^-1

and 1 millijansky = 1 mJy = 10^-3 Jy, 1 microjansky = 1 uJy = 10^-6 Jy.



PS:
F_lambda = F_nu * |dnu/dlambda| = F_nu * c / lambda^2

so 1 erg s^-1 cm^-2 Hz^-1 corresponds to c/lambda^2 erg s^-1 cm^-2 A^-1 (with c = 3x10^18 A/s and lambda in A), and

1 Jy = 3x10^-5 / [lambda(A)]^2 erg s^-1 cm^-2 A^-1
     ~ 1.2x10^-12 erg s^-1 cm^-2 A^-1 at 5000 A
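
A small helper function along these lines (my own convenience function, just encoding the factors above) keeps the conversion straight:

def jy_to_flam(flux_jy, wavelength_angstrom):
    """Convert a flux density from Jy to erg s^-1 cm^-2 A^-1.

    F_lambda = F_nu * c / lambda^2, with c = 2.998e18 A/s and
    1 Jy = 1e-23 erg s^-1 cm^-2 Hz^-1.
    """
    c_angstrom_per_s = 2.998e18
    f_nu = flux_jy * 1e-23                      # erg s^-1 cm^-2 Hz^-1
    return f_nu * c_angstrom_per_s / wavelength_angstrom**2

# Example: 1 Jy at 5000 A  ->  ~1.2e-12 erg s^-1 cm^-2 A^-1
print(jy_to_flam(1.0, 5000.0))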


http://www.cv.nrao.edu/course/astr534/Brightness.html

Sunday, October 13, 2013

Mac: how to copy and paste under xterm

First, here is how to take a screenshot on a Mac:

Taking a screenshot on a Mac is actually quite simple, but many people only know Command-Shift-3 and Command-Shift-4 and are unaware that the Mac screenshot shortcuts have several more variants:
1) Command-Shift-3: capture the whole screen and save it to the desktop.
2) Command-Shift-Control-3: capture the whole screen and save it to the clipboard; you can then paste it with Command-V directly into software such as Photoshop for editing.
3) Command-Shift-4: capture a part of the screen and save it to the desktop. After pressing this combination the cursor turns into a crosshair, and you can drag to select the capture region.
4) Command-Shift-Control-4: capture a part of the screen and save it to the clipboard.
5) Command-Shift-4, then press Space: the cursor turns into a camera icon; click to capture the current window, a menu, the Dock, an icon, etc. Just move the camera icon over the region you want (the active region is highlighted in light blue) and click.
6) Command-Shift-Control-4, then press Space: save the snapshot of the selected window or other region to the clipboard.
from: http://hi.baidu.com/ricestudio/item/5a42d614cdbb254fe75e065b

Then let's talk about how to copy and paste under xterm, which puzzled me for a month:

copy:    Command + C as usual
paste:
(1) with a touch pad:
   Option + click
(2) with a mouse:
   Option + click
It is easy! However, you need to enable the three-button mouse emulation first. The option can be found in the preferences of X11 or XQuartz; see the first checkbox below:



Thursday, October 10, 2013

Be careful while writing a new paper [1]

Writing a paper draws on a collection of your skills, including writing, programming and organization, but all of them rest on your data analysis. While writing my first paper I made many mistakes, and it cost a lot of time to spot and correct them. Here I just want to share some tips about the data analysis I went through before writing the paper.

(1) Always try to get the data right the first time. In the very beginning I selected my sample with a simple color-color diagram and then had to check the images and spectra by eye. Every time I found a mistake I had to redo the eye check, which cost me a day or more. So be ready to select the correct data the first time; you can check what others have done with the same or similar data.

When using a color-color selection, you can add an S/N cut to obtain a more reliable sample.
Think about what kind of sample you expect to select before making the selection.
Run a test selection before the serious one; you will find ways to improve it.
Make sure your data have been processed with the correct pipeline before the selection.

(2) Keep your programs well organized. Never think "I will only use this program once" -- never! You will revise it, improve it, and build other programs on top of the simple one. Be prepared for future work, and assume your program will also be used by other astronomers.
   It is also important to make the output convenient to use and check, beyond good figures and a good catalog. Usually you may think you only need the magnitudes of the objects, so you just print the magnitudes out. You are wrong! Save all the information about the selected sample in one file and make it readable. It is more complicated but really worth it: you can then look up whatever information you need in this reorganized file.
   Last thing: make sure the program is correct!

(3) Do not trust your eyes. You may think a relation looks good or a distribution looks different, but such impressions rest on your eyes, which means they rest on your current experience, and they can be misleading. So run some tests or experiments to see what is really happening; the truth will be inside the figures!




Wednesday, October 2, 2013

Orc02

Daily tips:
I have received the draft of the paper and should start to deal with the problems.
Complete the selection of Lyman break galaxies at z ~ 1-3 as in xinwen's email.


Today, Genevieve Graves (Princeton University) gave us a talk about two of her works. The first one is quite interesting: "They have recently developed a new method for weak lensing using background source magnification instead of using gravitational shear." In the traditional method, people measure the shear signal by comparing the lensed morphologies of a group of galaxies with those of a random sample.
However, "Traditional magnification methods have struggled to match the signal-to-noise (S/N) per background source achieved by shear because the intrinsic dispersion of galaxy luminosities and radii are much larger than the intrinsic dispersion of ellipticities." 

We have solved this problem using knowledge about the galaxy population to predict the intrinsic radii to within ∼ 40%. Our new magnification method thus yields a signal that is nearly comparable to shear. Moreover, the dominant sources of systematic error are different from those in shear-based measurements. This means that combining shear and magnification can alleviate the worst biases in each method and produce a substantially more robust, higher S/N measurement of the dark matter distribution for a given survey than can be achieved with shear alone. As a proof-of-concept, we have used this technique to make a galaxy-galaxy lensing measurement using SDSS imaging and spectroscopic data. This talk will describe the new method, and present our first lensing measurements.