Sunday, December 29, 2013

Five Linux search commands: find, locate, whereis, which, type

1. find
find is the most common and most powerful search command; you can use it to find any file you want.
The usage format of find is:
  $ find <directory> <criteria> <action>
  - <directory>: the directory to search, including all of its subdirectories. Defaults to the current directory.
  - <criteria>: the characteristics of the files to search for.
  - <action>: what to do with the search results.
With no arguments at all, find searches the current directory and its subdirectories, applies no filter (that is, it matches every file), and prints everything to the screen.
Examples of find:
  $ find . -name "my*"
Search the current directory (including subdirectories, same below) for all files whose names start with my.
  $ find . -name "my*" -ls
Search the current directory for all files whose names start with my, and list their details.
  $ find . -type f -mmin -10
Search the current directory for all regular files modified within the last 10 minutes. Without -type f, the search also returns special files and directories.
2. locate
locate is essentially another way of writing "find -name", but it is much faster, because it does not walk the directories; instead it searches a database (/var/lib/locatedb) that holds information about all local files. The system builds this database automatically and updates it once a day, so locate cannot find files that have changed very recently. To avoid this, run updatedb manually before using locate to refresh the database.
Examples of locate:
  $ locate /etc/sh
Search the /etc directory for all files whose names start with sh.
  $ locate ~/m
Search the user's home directory for all files whose names start with m.
  $ locate -i ~/m
Search the user's home directory for all files whose names start with m, ignoring case.
3. whereis
whereis only searches for program names, and it only looks for the binary (-b), the man page (-m), and the source file (-s). With no options it returns all of these.
Example of whereis:
  $ whereis grep
4. which
which searches the directories listed in the PATH variable for a given command and returns the first match. In other words, which tells you whether a command exists and exactly which copy of it will be executed.
Example of which:
  $ which grep
5. type
type is not really a search command; it tells you whether a command is a shell built-in or provided by an external binary. For an external command, the -p option prints the path of the binary, equivalent to which.
Examples of type:
  $ type cd
The shell reports that cd is a shell built-in command.
  $ type grep
The shell reports that grep is an external command and shows its path.
  $ type -p grep
With -p, the output is equivalent to that of which.
From: http://www.kuqin.com/linux/20091009/70532.html

Linux commands: text manipulation commands cat, more, less, head, tail

【Text file operation commands】
cat        view the contents of a file
more      view file contents one screen at a time
less       view file contents with forward and backward scrolling
head      show the beginning of a file
tail        show the end of a file

tail -n 20 -f /var/log/messages      -f keeps following the file as new content is appended (typically a log file); -n 20 sets how many lines are shown. A frequently used and important command in operations work.

cut -d: -f1 /etc/passwd      use ":" as the delimiter and extract the first field of each line

sort      sort lines
e.g. du | sort -n -r        -n sorts numerically, -r reverses the order
-t:  use a colon as the field separator
+2  start sorting from the second field (old-style syntax; the modern equivalent is -k)

wc      count how many lines, words, and characters a file has
[root@localhost ddd]# wc /etc/passwd
  36   54 1637 /etc/passwd
As above: 36 lines, 54 words, 1637 characters.
Options:
 -l     lines
 -w    words
 -c     characters

uniq      remove adjacent duplicate lines
e.g.:
[root@localhost ddd]# cut -d: -f7 /etc/passwd |uniq
/bin/bash
/sbin/nologin
/bin/sync
/sbin/shutdown
/sbin/halt
/sbin/nologin

/sbin/nologin
/bin/bash

diff fileA fileB      compare fileA and fileB and show the differences


【Regular expressions】
The difference between echo * and echo "*":
[root@localhost ~]# echo *
aaaall.sql anaconda-ks.cfg bastest case Desktop install.log install.log.syslog xunhuan
[root@localhost ~]# echo "*"
*
In echo *, bash expands * to mean any characters in any position, i.e. every filename in the current directory.
In echo "*", the double quotes turn the * into a literal string.

.        any single character
*        any number of any characters
\        escape character
^        starts with ...
$        ends with ...
\<  \>   matches the start / end of a word
a\{18\}    a repeated 18 times

Slimming down a config file
#grep '.\{10\}' /usr/share/dict/words
Find the words in which . repeats 10 times (i.e. words of 10 characters).
#grep '.\{10,\}' /usr/share/dict/words
Find the words in this file in which . repeats 10 or more times (i.e. words longer than 10 characters).
grep -v '^#' /etc/httpd.conf | grep -v '^$'
Keep everything except lines that start with # and empty lines (lines that end right where they begin).

[abc] means a single position that holds a, b, or c
#grep '^[abc]' /etc/passwd    lines starting with a, b, or c
#grep '^[^1-9]' /etc/passwd   lines not starting with 1-9



Monday, December 9, 2013

Making good astronomical images or RGB images (dealing with the large dynamic range problem)

Usually, the images we get from a CCD are just greyscale. Would you like to make some amazing color images?

There is an easy way to do that: check out Trilogy by Dan Coe. The introduction on its website is quite enough for a first try.

Another way I found is the method described in this paper about creating RGB images (http://arxiv.org/pdf/astro-ph/0312483v1.pdf). Some examples are here (http://www.astro.princeton.edu/~rhl/PrettyPictures/).


The thing you should think about is that these images have large dynamic ranges as shown below.

Human Eye 10,000:1
CRT 100:1
Real-life Scenes up to 500,000:1


In real life, images always span a wide range of flux: some regions are very bright while others are dark. With a linear stretch, many details are not visible, so you should think about rescaling the image. Unlike ordinary photography, where you take a longer exposure indoors, in astronomy a longer exposure is generally better, because it lets you detect fainter sources. But if you want to show those faint sources in a plot, the bright objects become far too bright and the figure looks ugly, right? The question is how to get good contrast while keeping enough information. Now let us compare different scaling methods.
Look at the panels above: the same observation looks quite different under different stretches. If you are familiar with DS9, you can try these scales on any FITS file; I usually use the log scale. There is also a comparison of the stretch functions.


In order to show more detail in faint objects, I suggest using a log scale or even a log(log) scale. In the Trilogy software, Coe uses the scaling:
y = log10( k * (x - x0) + 1 ) / r
# Current settings:
# x0: 0 (0 in the input yields black in the output)
# x1: mean + std (1-sigma above the noise)
# x2: set so only some small fraction of pixels saturate (with output = 1)
x1 and x2 are determined by the two parameters satpercent and noiselum.
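
Below is a minimal numpy sketch of this kind of log stretch, just to illustrate the formula. It is my own illustration, not Trilogy's actual code: the fixed k and the clipping choices are assumptions, whereas Trilogy solves for its parameters from noiselum and satpercent.

    import numpy as np

    def log_stretch(x, x0, x2, k=10.0):
        """Map pixel values to [0, 1] with y = log10(k*(x - x0) + 1) / r.

        x0 maps to black and x2 to white; r is chosen so that y(x2) = 1.
        k is just a free knob here (an assumption for this sketch).
        """
        r = np.log10(k * (x2 - x0) + 1)                       # normalize so y(x2) = 1
        y = np.log10(np.clip(k * (x - x0), 0, None) + 1) / r  # clip negatives to black
        return np.clip(y, 0.0, 1.0)

    # toy usage: a noisy background plus one bright "star"
    img = np.random.normal(10.0, 2.0, (100, 100))
    img[50, 50] = 5000.0
    scaled = log_stretch(img, x0=0.0, x2=img.max())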


Most of these images are from the ppt: chandra.harvard.edu/graphics/talks/christensen_sixth.ppt

【IRAF】The commands in IRAF you may want to use

I want to collect here some commands that I have used or would like to learn, just to make a collection. I hope it helps you too. Most of these commands come up in my own work because I am dealing with HST images these days.

To be continued...

imcopy      copy a region of an image to another file. It keeps the coordinate (WCS) information!
    iraf.imcopy(infile + "[1:10,1:10]", outfile, verbose=0)   # the image section is passed as part of the filename string

imarith    useful when you want to divide an image by the exposure time or divide two images.
 cl> imarith exp1[10:90,10:90] * 1.2 temp1
 cl> imarith exp2[10:90,10:90] * 0.9 temp2
 cl> imarith temp1 / temp2 final title='Ratio of exp1 and exp 2'
 cl> imdelete temp1,temp2

mosaic_display  display a list of images. 
    mosaic_display image.* ncols=4 nrows=2 


wregister  register a list of images to a reference image using the WCS information. I really like this command. It is much faster and easier than my own program. Just as important, it uses an interpolation algorithm to smooth the images, so the results look quite pretty.
       wregister input reference output
One important use in my work: register Spitzer/IRAC images onto HST images with wregister, then use a python script to read the images and plot nice stamp images. This image comes from Dan Coe's paper about the redshift z~10 galaxy.
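
To make that concrete, here is a minimal Python sketch of the stamp-plotting step (my own, not the script used in the paper); the file names, pixel center and cutout size are made up for illustration.

    import numpy as np
    import matplotlib.pyplot as plt
    from astropy.io import fits

    def stamp(filename, x, y, half=25):
        """Return a square cutout of half-size `half` centered on pixel (x, y)."""
        data = fits.getdata(filename)
        return data[y - half:y + half, x - half:x + half]

    # hypothetical file names: an HST image and an IRAC image already put on the same grid by wregister
    images = ["hst_f160w_drz.fits", "irac_ch1_registered.fits"]
    fig, axes = plt.subplots(1, len(images), figsize=(3 * len(images), 3))
    for ax, fname in zip(axes, images):
        cut = stamp(fname, x=512, y=512)
        ax.imshow(cut, origin="lower", cmap="gray",
                  vmin=np.percentile(cut, 5), vmax=np.percentile(cut, 99))
        ax.set_title(fname.split("_")[0])
        ax.axis("off")
    plt.savefig("stamps.png", dpi=150)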


artdata    Make artificial data with IRAF
--> apropos noao.artdata   
gallist - Make an artificial galaxies list (noao.artdata) 
mk1dspec - Make/add artificial 1D spectra (noao.artdata) 
mk2dspec - Make/add artificial 2D spectra using 1D spectra templates (noao.artdata)
mkechelle - Make artificial 1D and 2D echelle spectra (noao.artdata)
mkexamples - Make artificial data examples (noao.artdata)
mkheader - Append/replace header parameters (noao.artdata)   
mknoise - Make/add noise and cosmic rays to 1D/2D images  (noao.artdata)
mkobjects - Make/add artificial stars and galaxies to 2D images  (noao.artdata) 
mkpattern - Make/add patterns to images (noao.artdata) 
starlist - Make an artificial star list (noao.artdata) 
imcombine
combine several images into one FITS file. You can define the grid and the blank region between them. It's useful when you want to plot stamp images for checking.
    imcopy rejmask[*,*,1] mask1
    grid [n1] [s1] [n2] [s2]
where ni is the number of images in dimension i and si is the step in dimension i. For example "grid 5 100 5 100" specifies a 5x5 grid with origins offset by 100 pixels.

The purpose is quite similar to mosaic_display. The difference is that here you can easily rescale the combined FITS file.


imexam


Others I have not used so far:
imtranspose
imexpr  
        imexpr a?1:b a='.fits'

ellipse 
    Fit elliptical isophotes to galaxy images.

mkobjects

mknoise

imstat

SPECTRUM
wspectext
 wspectext -- convert 1D image spectra to an ascii text spectra
dopcor, dispcor
Apply doppler correction, Dispersion correct and resample spectra 
cl> dopcor qso001.ms qso001rest.ms 3.2 flux+
ir> dispcor spec dcspec 9,10,447-448

See all tasks in IRAF here.





Monday, November 11, 2013

Installing SExtractor on Mac

Many thanks to the author of this post. You should read the article here: http://okomestudio.net/biboroku/?p=824

The easiest way to do it is to use MacPorts! It is easy and safe.
Another way is to install from source, which I failed to do. The problem is exactly the one described in that blog post: "IMPORTANT: I need FFTW and ATLAS installed already, and assume that FFTW was installed at /usr/local/fftw, and ATLAS at /usr/local/atlas following the source install procedures described in those notes."


Thursday, November 7, 2013

Black holes: finding them, binaries, and other interesting stuff

Today I heard an interesting talk by Avi Loeb about black holes. A fantastic topic! I have seen many works led by or involving the lecturer, and every topic is interesting to me; I think you will feel the same.


He started with an attractive image of what a BH should look like: black, of course. But we can see it through the emission of its disk, which tells us about the spin and about what is really happening around it, dying out or bursting!

Then the real science came, topic by topic (just the parts I can recall):
(1) a look at the horizons of Sgr A* and M87
(2) black hole binaries and so on
(3) black hole recoil
(4) tidal disruption
(5) small black holes, BH seeds?
(6) the gas around Sgr A* in the next few months

At last, for the first time, I heard the term "primordial black holes" (PBHs). These form at the beginning of the Big Bang and span a wide range of masses, 10^-5 g < M < 10^5 M_sun. Being collisionless and nonrelativistic, they are natural dark matter (DM) candidates, which is amazing!
But observations have constrained this possibility, and negative results have been obtained for massive PBHs.
The next generation of surveys will push the constraints down to masses around 10^-5 M_sun. Here is a recent paper on this topic:
http://arxiv.org/pdf/1307.5176v2.pdf






STScI FALL COLLOQUIUM SERIES

Wednesday, November 6, 2013
3:30 p.m. -- Bahcall Auditorium  Preceded by light refreshments at 3:15 p.m.

Avi Loeb      Harvard  University

Title
A Closer Look at Black Holes

Abstract


Several new techniques are currently being employed to probe the strong gravitational field in the vicinity of supermassive black holes. Long baseline interferometry at sub-millimeter wavelengths sets constraints on the silhouette of the black holes in the Galactic center (SgrA*) and M87. Stars which get tidally disrupted as they orbit too close to a single black hole are being discovered at cosmological distances. Electromagnetic counterparts of black hole binaries in galaxy mergers are being identified, and can be used to calibrate the rate of gravitational wave sources.  Most interestingly, the recoil induced by the anisotropic emission of gravitational waves in the final plunge of binaries leaves unusual imprints on their host galaxies.

Monday, November 4, 2013

Advice for future astronomers

What advice do you think advisors should be giving students regarding their career path?


If students want to stay in astronomy, it’s important to do great research and to make sure others know about that research through publications, but also through attending professional meetings, particularly those topical meetings in the most relevant research areas where they can meet the individuals who may have funds for fellowships in the future.

For faculty positions, becoming an engaging teacher is important and this takes practice giving talks. Advisors should give students many opportunities to present their research and advise them on how to present it more clearly and for different audiences.

Since about one-third of astronomers work in academic positions, one-third at observatories and national labs and one-third in industry, it is also very important that students broadly consider their future options. While there are currently many post-doctoral positions each year, there are generally fewer job openings for more senior positions.

check the entire topic here:
http://www.astrobetter.com/career-profiles-astronomer-to-research-scientist-at-the-smithsonian-astrophysical-observatory/



Tuesday, October 29, 2013

the relation between dust extinction and SFR changes with stellar mass

Paper here:   http://arxiv.org/pdf/1211.7062v2.pdf



There is a sharp transition in the relation at a stellar mass of 10^10 M_solar

Useful python scripts in astronomy

Beyond numpy, scipy and astropy, which you have probably already used, there are smaller, more specific scripts you may want for daily work. I have written some myself, e.g. to read a catalog or to make a simple plot from one, but their use is quite limited. Here I introduce some good scripts from the website below. They are more powerful and can be used in a wide range of situations. I hope this helps you spend more time focusing on your own work!

http://www-int.stsci.edu/~ferguson/software/pygoodsdist/
http://www.stsci.edu/~ferguson/software/pygoodsdist/doc/index.html

pygoods.tar

Full package of the utilities listed below.
angsep.py - Angular separation between two celestial sources (see the sketch after this list)
coords.py - Utilities for parsing and converting coordinates
parseconfig.py - Utilities for reading SExtractor-style parameter files
numprint.py - Utilities for printing columns of numpy one-dimensional arrays
readcol.py - Utilities for reading columns of numbers from ascii files
sextutils.py - Utilities for reading SExtractor catalogs
match.py - Utilities for coordinate matching
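
As an illustration of what the first of these does, here is a minimal angular-separation function of my own (not the package's code); it uses the Vincenty formula, which behaves well for both tiny and large separations.

    import numpy as np

    def angsep(ra1, dec1, ra2, dec2):
        """Angular separation in degrees between two points given in degrees."""
        ra1, dec1, ra2, dec2 = map(np.radians, (ra1, dec1, ra2, dec2))
        dra = ra2 - ra1
        num = np.hypot(np.cos(dec2) * np.sin(dra),
                       np.cos(dec1) * np.sin(dec2) - np.sin(dec1) * np.cos(dec2) * np.cos(dra))
        den = np.sin(dec1) * np.sin(dec2) + np.cos(dec1) * np.cos(dec2) * np.cos(dra)
        return np.degrees(np.arctan2(num, den))

    print(angsep(150.0, 2.2, 150.1, 2.3))   # ~0.14 deg between two nearby sources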

Monday, October 28, 2013

[Oct28] The choice of slit size: one method used in practice

About the slit size:

You know that the slit width affects the spectroscopy. But do you know what width you should choose?

For an observation in which you need both high resolution and accurate normalization, you can do the following: observe with a narrow slit and enough exposure time to get a spectrum at sufficient resolution, then take a much shorter exposure with a slit wide enough to contain all the flux of the source, to measure the true flux level. The wide-slit spectrum is only used to determine the normalization of the flux. Just rescale the high-resolution spectrum to it, and you get a spectrum with both high resolution and the real flux level.
You can play with a simulation here: http://terpconnect.umd.edu/~toh/models/AbsSlitWidth.html


About study in general:
    Check by comparison.
Often you write a code or a simulation similar to someone else's, but you do not know whether it is correct. Compare the two sets of results to make sure yours is correct; the comparison can also help you improve the results.

Friday, October 25, 2013

BPZ, photometric redshift and some stuff


A short History: (see e.g. Yee 1998 for a review)
   http://arxiv.org/pdf/astro-ph/9809347.pdf
Baum (1962)
Colors of early type galaxies measured from 9 bands with a photometer were turned into a low resolution SED to determine distances of galaxy clusters relative to other clusters of galaxies.



Koo (1985)
Colors (from photographic plate material) were compared to colors expected for synthetic Bruzual-Charlot SEDs. Redshifts were estimated from iso-z lines in color-color diagrams.

Loh & Spillar (1986) used χ2-minimization for redshift estimates
Pello and others developed a method of `permitted' redshifts; the intersection of the permitted redshift intervals for all galaxy colors measured defines `the' redshift of a galaxy.
Photometric redshifts have become very popular since the middle of the 1990s 
--well calibrated, deep multi-waveband data (HDF, other deep fields, SDSS) 
--representative spectroscopic data sets available to test method (Keck, VLT,SDSS...)
--better cost efficiency if only approximate redshift is needed 



Photometric Redshifts: Methods
Template based:
color-space tessellation, χ2-minimization, maximum likelihood, Bayesian ...
uses physical information (SEDs, sizes, compactness, ...) and is therefore biased
extrapolates reasonably well into unknown territory
Learning based:
Nearest Neighbour, Kd-tree, Direct fitting, Neural Networks, Support Vector Machines, Kernel Regression, Regression Trees & Random Forests...
ignores physical information and is therefore unbiased
can uncover unknown dependencies
requires a large training set, bad at extrapolation


Direct Fitting
developed by Connolly et al 1995, applied to z=0-0.6 galaxies with limiting magnitudes in U-, B-, R- and I-photographic plate-bands of 23, 22, 21, 20.
The redshift is described as a linear or quadratic function of the magnitudes of the galaxies in several bands. Coefficients are determined with a spectroscopic training set by linear regression.
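
A toy sketch of the linear version of this idea (my own illustration with fake numbers, not Connolly et al.'s code): fit z ≈ a0 + sum_i a_i * m_i on a spectroscopic training set by least squares, then apply the coefficients to new photometry.

    import numpy as np

    rng = np.random.default_rng(0)

    # fake "training set": magnitudes in 4 bands plus spectroscopic redshifts
    n = 500
    mags = rng.uniform(18.0, 23.0, size=(n, 4))            # U, B, R, I (hypothetical)
    z_spec = 0.05 * mags.sum(axis=1) - 3.8 + rng.normal(0.0, 0.02, n)

    # design matrix [1, m_U, m_B, m_R, m_I]; coefficients from linear regression
    A = np.column_stack([np.ones(n), mags])
    coeffs, *_ = np.linalg.lstsq(A, z_spec, rcond=None)

    # photometric redshift estimate for a new object
    m_new = np.array([21.0, 20.5, 20.0, 19.8])
    z_phot = coeffs[0] + m_new @ coeffs[1:]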


`advantage’: no physical assumptions to be made beyond the fact that the training set and data set are statistically very similar.
`disadvantage': coefficients do not apply to data sets that go fainter or to higher redshift, or to modestly different types of galaxies.
Method has been applied in three-dimensional color-space to HDF data by Wang et al. 1998


Template methods
Measured colors (or fluxes) are compared to colors (or fluxes) predicted for various template SEDs and redshifts; the best-fitting redshift, SED type and object type (star, galaxy, QSO) are derived. Methods: BPZ (Benitez), Hyperz (Bolzanella/Pello), LePhare (Arnouts), COSMOS (Mobasher), ZEBRA (Feldmann et al.), PHOTO-z (Bender), ...
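
As a toy illustration of the χ2 step (my own sketch, not any of the codes listed above): for each template redshifted onto a grid, scale it to the observed fluxes with the closed-form best-fit amplitude and record the χ2. The arrays below are random stand-ins for real photometry and template libraries.

    import numpy as np

    def chi2_photoz(f_obs, f_err, template_grid, z_grid):
        """template_grid[i, j, :] = model fluxes of template j at redshift z_grid[i]."""
        chi2 = np.empty(template_grid.shape[:2])
        for i in range(template_grid.shape[0]):
            for j in range(template_grid.shape[1]):
                f_mod = template_grid[i, j]
                # best-fit amplitude (Gaussian errors) has a closed form
                a = np.sum(f_obs * f_mod / f_err**2) / np.sum(f_mod**2 / f_err**2)
                chi2[i, j] = np.sum(((f_obs - a * f_mod) / f_err) ** 2)
        i_best, j_best = np.unravel_index(np.argmin(chi2), chi2.shape)
        return z_grid[i_best], j_best, chi2

    # toy usage: 8 fake SED types, 5 bands, with the "truth" planted at z = 2
    z_grid = np.linspace(0.0, 6.0, 121)
    templates = np.random.rand(z_grid.size, 8, 5)
    f_obs = 2.0 * templates[40, 3]
    f_err = np.full(5, 0.05)
    z_best, t_best, chi2 = chi2_photoz(f_obs, f_err, templates, z_grid)
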
Which templates ?
-- Coleman, Wu & Weedman (1990), empirical spectro-photometric SEDs from low-z galaxies
-- templates derived from stellar population models, e.g. BC templates
-- self-calibrated, optimized or semi-empirical templates preferred
-- difficulties:
--- restframe UV-extension of the galaxy SEDs (use synthetic spectra or broad band photometry)
--- finding a sufficiently representative SED set
How many templates?
--depends on science question ..., too many may hurt!
--eigenspectra can provide continuous set (method: Connolly et al 1995, applied by Yip et al 2004 to 170000 SDSS-spectra with r<18 and median redshift of 0.1),
--> 3 eigenspectra are sufficient to describe the variance of the SEDs up to 2%
--> 5 are more appropriate for a large redshift range
--> ideally, make eigenspectra dependent on redshift
--> other option: fit combination of SSPs or CSP + dust (old+medium+young+dust) 



Baysian photometric redshift estimates:
prob(B and A) = prob(A and B) = prob(A|B)*prob(B) = prob(B|A)*prob(A)
=> Bayes' theorem:
     prob(A|B) = prob(B|A) * prob(A) / prob(B)
now, translate: A = model, B = data:
     prob(model|data) = prob(data|model) * prob(model) / prob(data)
prob(model) is called the prior probability for the model (parameters), prob(data) is a number and thus simply a normalization parameter.
prob(model) is usually ignored in χ2-minimization and maximum likelihood; it can be used to include our prior knowledge/prejudice: e.g. no red ellipticals at z>1, no low metallicity objects at low z, no galaxies with MB < -26, low Sersic n indicates a late-type SED, a large apparent size means low z. This helps to improve photometric redshifts.
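
In the photo-z context this is roughly the scheme of BPZ (Benitez 2000); in that paper's notation (z = redshift, T = template type, C = observed colors, m0 = apparent magnitude used for the prior) it reads, up to normalization:

     prob(z, T | C, m0) ∝ prob(C | z, T) * prob(z, T | m0)
     prob(z | C, m0) = sum over T of prob(z, T | C, m0)

with the likelihood prob(C | z, T) ∝ exp(-χ2(z,T)/2) coming from the template fit, and the prior prob(z, T | m0) encoding exactly the kind of knowledge listed above.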

ABOVE IS FROM: http://www.mpe.mpg.de/opinas/talks/photoz_rb.pdf

A PPT you may be interested in:
  It talks about the influence of the choice of bands on the photometric redshift results.

http://www.astro.caltech.edu/twiki_phat/pub/Main/PHATMeetingJPL/Coe.ppt





Websites that collect software



Photo-z codes


eazy 
Brammer 2008 
BPZ   
Also see Narciso Benitez's BPZ page, including BPZ v1.98b.

ZEBRA
           can also give age, mass, etc.

hyperz Bolzonella 2000
LePhare 2006

The spectral-fitting code FAST

The main difference with HYPERZ is that (1) FAST fits fluxes instead of magnitudes, (2) you can completely define your own grid of input stellar population parameters, (3) you can easily input photometric redshifts and their confidence intervals, and (4) FAST calculates calibrated confidence intervals for all parameters. However, note that, although it can be used as one, FAST is not a photometric redshift code

Monday, October 21, 2013

Converting flux from Jy to erg s^-1 cm^-2 A^-1

The mks units of flux density, W m^-2 Hz^-1, are much too big for practical astronomical use, so we define smaller ones:

1 Jansky = 1 Jy = 10^-26 W m^-2 Hz^-1 = 10^-23 erg s^-1 cm^-2 Hz^-1

and 1 milliJansky = 1 mJy = 10^-3 Jy, 1 microJansky = 1 μJy = 10^-6 Jy.



PS:
To convert to flux per unit wavelength, f_lambda = f_nu * |dnu/dlambda| = f_nu * c / lambda^2, so
1 erg s^-1 cm^-2 Hz^-1 = (c / lambda^2) erg s^-1 cm^-2 A^-1   (with c = 3x10^18 A/s and lambda in A)

1 Jy = 3x10^-5 / lambda(A)^2  erg s^-1 cm^-2 A^-1
     ~ 1.2x10^-12  erg s^-1 cm^-2 A^-1  at 5000 A
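
A quick sanity check of that number in Python (plain arithmetic, nothing package-specific):

    # convert a flux density from Jy to erg s^-1 cm^-2 A^-1 at a given wavelength
    C_ANG = 2.998e18          # speed of light in Angstrom/s
    JY_CGS = 1.0e-23          # 1 Jy in erg s^-1 cm^-2 Hz^-1

    def jy_to_flam(f_nu_jy, wavelength_ang):
        """f_lambda = f_nu * c / lambda^2, with f_nu in Jy and lambda in Angstrom."""
        return f_nu_jy * JY_CGS * C_ANG / wavelength_ang ** 2

    print(jy_to_flam(1.0, 5000.0))   # ~1.2e-12 erg s^-1 cm^-2 A^-1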


http://www.cv.nrao.edu/course/astr534/Brightness.html

Sunday, October 13, 2013

Mac: how to copy and paste under xterm

First, how to take a screenshot on the Mac:

Taking screenshots on the Mac is actually very simple, but many people only know Command-Shift-3 and Command-Shift-4. There are in fact several more variants of the screenshot shortcuts:
1) Command-Shift-3: capture the whole screen and save it to the desktop.
2) Command-Shift-Control-3: capture the whole screen and save it to the clipboard; you can then paste it with Command-V directly into software such as Photoshop.
3) Command-Shift-4: capture part of the screen and save it to the desktop. After pressing this combination the cursor becomes a crosshair, and you can drag to select the capture area.
4) Command-Shift-Control-4: capture part of the screen and save it to the clipboard.
5) Command-Shift-4, then Space: the cursor becomes a camera icon; click to capture the current window, a menu, the Dock, an icon, etc. Just move the camera icon over the target (the active area is highlighted in light blue) and click.
6) Command-Shift-Control-4, then Space: save the snapshot of the selected window or other area to the clipboard.
from: http://hi.baidu.com/ricestudio/item/5a42d614cdbb254fe75e065b

Then let's talk about how to copy and paste under xterm, which puzzled me for a month:

copy    command + C as usual
paste
(1) with a trackpad:
   Option + click
(2) with a mouse:
   Option + click
It is easy! However, you need to enable three-button mouse emulation first. The option can be found in the preferences of X11 or XQuartz. See the first tick box below:



Thursday, October 10, 2013

Be careful while writing a new paper [1]

Writing a paper draws on a collection of your skills, including writing, programming and organization, but all of them rest on your data analysis. While writing my first paper I made many mistakes, and it cost a lot of time to notice and correct them. Here are some tips about the data analysis I had to face before writing the paper.

(1) Always try to get the data right the first time. At the very beginning I selected my sample with a simple color-color diagram, and then had to check the images and spectra by eye. Every time I found a mistake, I had to redo the eye check, which cost me a day or more. So be ready to select the correct data the first time; you can check what others have done with the same or similar data.

When using the color-color selection, you can apply an S/N cut to obtain a more reliable sample.
Think about what kind of sample you want to select before doing the selection.
Run a test selection before the serious one; you will find ways to improve it.
Make sure your data has been processed with the correct pipeline before the selection.

(2) Keep your programs well organized. Never think "I will only use this program once." Never! You will revise it, improve it, and build other programs on top of it, so be prepared for future work. Assume that your program will also be used by other astronomers.
   It is also important to make the output convenient to use and check, beyond good figures and a good catalog. You may think you only need the magnitudes of the objects, so you just print the magnitudes out. You are wrong! Save all the information about the selected sample in one file and make it readable. It is more complicated, but it is really worth it: you can then look up whatever you need in this reorganized file.
   Last thing: make sure the program is correct!

(3) Do not trust your eyes. You may think a relation looks good or a distribution looks different, but these impressions rely on your eyes, which means they rely on your current experience; they could be spurious. So run some tests or experiments to see what is really happening. The truth is inside the figures!