VOSA Documentation
Version 7.5, July 2022

Table of contents

1. Introduction
2. Input files
2.1. Upload files
2.2. VOSA file format
2.2.1. Required format
2.2.2. Examples
2.5. Single object
2.6. Manage files
2.7. Archiving
2.8. Filters
3. Objects
3.1. Coordinates
3.2. Distances
3.2.1. Errors
3.2.2. VO Search
3.2.3. Example
3.6. Extinction
3.6.1. Extinction law
3.6.2. VO Search
3.6.3. Example
4. Build SEDs
4.1. VO photometry
4.1.1. VO Search
4.1.2. Outlier detection
4.4. SED
4.5. Excess
4.5.1. Infrared
4.5.2. IR Algorithm
4.5.3. Fit Refine
4.5.4. UV/Blue
4.5.5. Example
5. Analysis
5.1. Model Fit
5.1.1. Fit
5.1.2. Best Fit
5.1.3. Extinction Fit
5.1.4. Chi-square
5.1.5. Errors
5.1.6. Excess
5.1.7. Syn.Phot.
5.1.8. Bol.Lum.
5.1.9. Param. Errors
5.1.10. Radius, Mass
5.1.11. Partial Refit
5.1.12. Example
5.14. Bayes analysis
5.15. Template Fit
5.16. Templates Bayes
5.17. Binary Fit
5.18. HR diagram
5.18.1. Interpolation
5.18.2. Flags
5.18.3. Example
5.22. Upper Limits
5.23. Statistics
6. Save results
6.1. Download
6.2. SAMP
6.3. References
6.4. Log file
6.5. Plots
7. VOSA Architecture
8. Phys. Constants
9. FAQ
10. Use Case
11. Quality
11.1. Stellar libraries
11.2. VO photometry
11.3. Binary Fit Quality
12. Credits
12.1. VOSA
12.2. Th. Spectra
12.3. Templates
12.4. Isochrones
12.5. VO Photometry
12.6. Coordinates
12.7. Distances
12.8. Dereddening
12.9. Extinction
13. Helpdesk
14. About
  
  Appendix
A. Excess calculation
B. Total flux calculation
C. VOphot quality info

 

Introduction

VOSA (VO Sed Analyzer) is a tool designed to perform the following tasks in an automatic manner:

See this documentation in a single page.

This can be useful for printing or for searching text, but take into account that it is a large page and may be slow to load in your browser.

 

Input files

There are two main ways to start working with VOSA: uploading a VOSA input file with data for your objects, or using the Single Object Search to work with a single target.

And, at any time, you can select a previously uploaded file and continue with it at the point where you left the work. Below you can see details about these three options.

 

Upload data files

Whenever you click on the "Files" tab, you have the option of uploading a new file.

In order to do that you have to:

To select a file, click the button and browse for the file on your computer:

When you click the "Upload" button, your file is transferred to the VOSA server and then it starts being analyzed. This can take a while if the file is large.

If everything is OK, you will get a message saying so. Please click "Continue" to go ahead.

You will go back to the "Files" page. Now you can see the details of the file you just uploaded, which is already available to work with.

Even if no errors were detected by VOSA, it is a very good idea to check that the format of your file has been correctly understood. So, please, whenever you upload a new file, click the "Show Objects" button to see the information that VOSA has saved for each object.

For each object in the file you should see its properties (name, position, extinction, distance...) and its photometric points. See if this is what you expected. If not, delete this file, check your input file and upload it again.

(While you are viewing the object details, the "Show Objects" button changes to "Hide Objects": you can use it to hide the details.)

Once the file is uploaded and you have checked that everything is ok, you can go to any of the other tabs in the index above and start working.

 

VOSA file format

VOSA is mainly designed to work with several objects at the same time so that the same or equivalent operations are performed on all the objects. In order to do this, we have defined a format so that the user can upload the info about these objects together with user photometric data.

Thus, the main way to use VOSA is to upload a VOSA input file in this format (or to select a previously uploaded one).

Nevertheless, we have added the Single Object Search so that you can directly search for a single object using its coordinates. See more information below.

Required input file format

The uploaded file must be an ASCII document with a line for each photometric point.

Each line should contain 10 columns:

 ----------------------------------------------------------------------------
| object  | RA  | DEC | dis | Av | filter | flux | error | pntopts | objopts |
| ---     | --- | --- | --- | ---| ---    | ---  | ---   | ---     | ---     |
| ---     | --- | --- | --- | ---| ---    | ---  | ---   | ---     | ---     |

Take into account that:

Please check in advance that your file conforms to these requirements. Then, after uploading it, check the analyzed contents of the file in "Upload files → Show". If what you see does not correspond to what you expect, it probably means that there is something wrong in your data file. Delete it from the system, correct the mistake and upload it again.

Examples of valid files

1.- A complete file

Obj1 19.5  23.2 80 1.2 DENIS/DENIS_I     5.374863e-16 4.950433e-19 ---    Av:0.5/5.5
Obj1 19.5  23.2 80 1.2 CAHA/Omega2000_Ks 2.121015e-16 1.953527e-19 ---    Av:0.5/5.5
Obj1 19.5  23.2 80 1.2 Spitzer/MIPS_M1   6.861148e-15 1.390352e-16 nofit  Av:0.5/5.5
Obj2 18.1 -13.2 80 1.2 WHT/INGRID_H      1.082924e-14 2.194453e-16 ---    ---
Obj2 18.1 -13.2 80 1.2 2MASS/2MASS_J     2.483698e-17 2.287603e-19 ---    ---

In this file we have two different objects, their positions (RA and DEC), the distance to the objects, the Av parameter and some photometric values (three for Obj1 and two for Obj2). For the first object, the MIPS_M1 point will not be used for the fit, and Av will be considered as a fit parameter with values from 0.5 to 5.5.

2.- Only object names

BD+292091 --- --- --- --- --- --- --- --- ---
HD000693  --- --- --- --- --- --- --- --- ---
HD001835  --- --- --- --- --- --- --- --- ---

This file is also correct, and although it contains little information, VOSA can try to find more data about these objects so that the analysis can be performed. Assuming that the names of the three objects are the real ones, we can try to find these objects' coordinates. Then, using those coordinates, some observed photometry could be retrieved from VO catalogues, and so on.

3.- A mixed case

#objname  RA   DEC     DIS Av  Filter          Flux               Error             PntOpts ObjOpts
#=======  ===  ======= === === =============== ================== ================= ======= =======
BD+292091 ---  ---     --- --- 2MASS/2MASS_J   7.14724167946E-14  5.14601400921E-16 ---     ---
BD+292091 ---  ---     --- --- 2MASS/2MASS_H   3.69142119547E-14  2.3625095651E-16  ---     ---
Obj2      18.1 -13.2   80  1.2 DENIS/DENIS_I   1.082924e-14       2.194453e-16 	    ---     ---
Obj2      18.1 -13.2   80  1.2 2MASS/2MASS_J   2.483698e-17       2.287603e-19 	    ---     ---
HD000693  2.81 -15.467 --- --- ---             ---  		  --- 		    ---     ---
HD001835  ---  ---     --- 1.4 ---             --- 		  --- 		    ---     ---
Obj3      19.5 23.2    80  1.2 Omega2000_Ks    2.121015e-16 	  1.953527e-19 	    ---     ---
Obj3      19.5 23.2    80  1.2 Spitzer/MIPS_M1 6.861148e-15 	  1.390352e-16 	    ---     ---
HD003567  ---  ---     --- --- ---             ---           	  --- 		    ---     ---

You can combine in the same file objects with different types of information. Just keep in mind that each line must have 10 columns and, when you want to leave a field blank, you must write it as '---'.

And remember that the different columns can be separated by blanks or tabs or any combination of them. For instance, this next example would be completely equivalent to the previous one:

BD+292091 --- --- --- --- 2MASS/2MASS_J 7.14724167946E-14 5.14601400921E-16 --- ---
BD+292091 --- --- --- --- 2MASS/2MASS_H 3.69142119547E-14 2.3625095651E-16 --- ---
Obj2 18.1 -13.2 80 1.2 DENIS/DENIS_I 1.082924e-14 2.194453e-16 --- ---
Obj2 18.1 -13.2 80 1.2 2MASS/2MASS_J 2.483698e-17 2.287603e-19 --- ---
HD000693 2.81 -15.467 --- --- --- --- --- --- ---
HD001835 --- --- --- 1.4 --- --- --- --- ---
Obj3 19.5 23.2 80 1.2 Omega2000_Ks 2.121015e-16 1.953527e-19 --- ---
Obj3 19.5 23.2 80 1.2 Spitzer/MIPS_M1 6.861148e-15 1.390352e-16 --- ---
HD003567 --- --- --- --- --- --- --- --- ---
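If you generate your input files with a script, the two constraints worth enforcing are that every line has exactly 10 whitespace-separated columns and that any blank field is written as '---'. The following minimal Python sketch (the object data and the output file name are purely illustrative) writes a file in this format:

# Minimal sketch: write a VOSA input file (10 whitespace-separated columns,
# '---' for empty fields). Object names, fluxes and the output path are
# purely illustrative.
rows = [
    # name, RA, DEC, dist(pc), Av, filter, flux, error, pntopts, objopts
    ("Obj1", 19.5, 23.2, 80, 1.2, "DENIS/DENIS_I", 5.374863e-16, 4.950433e-19, "---", "Av:0.5/5.5"),
    ("HD000693", None, None, None, None, None, None, None, None, None),
]

def fmt(value):
    """Return the VOSA representation of a field ('---' when empty)."""
    return "---" if value is None else str(value)

with open("myobjects.vosa.txt", "w") as out:
    for row in rows:
        assert len(row) == 10          # every line must have 10 columns
        out.write(" ".join(fmt(v) for v in row) + "\n")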

 

Single object search

In the case that you only want to work with a single object (or you just want to test how VOSA works) you don't need to build an input file.

You only need to specify the RA and DEC (in decimal degrees) of your object. The object name and description are optional (if you leave any of them blank VOSA will fill them using the information in the other fields).

With those coordinates VOSA builds a very simple input file that is saved in your Default folder. You can then work with it, use VO catalogues to find information or photometry for that object, and then try to fit the observed SED with theoretical models.

Example


With this information VOSA will make a very simple "VOSA input file" and it will be loaded automatically.

From then on, you will work with this file as with any other VOSA file.

Just remember that the only information that we have for this object now is its coordinates. You will need, at least, to search for photometric data in VO catalogues using the "VO Phot." tab.

 

Managing your files

All the files that you upload to VOSA will be shown in the "Files" page.

You can organize them using folders. In the form at the bottom you can create folders as you like (or rename them).

To start working with VOSA you need to select one of the files.

For the selected file you can also:

In order to do that, you just need to edit that information in the form and click the "Save" button.

Click the "VOSA Input" button to recover the VOSA Input file that you first uploaded (you will get the same ascii file).

Click the "Show Objects" button to see the info about the objects in the file. Remember to do this after uploading the file to check that all the info has been understood by VOSA properly.

Click the "Delete" button to delete the file from VOSA (all the information about it will be lost). You will be asked for confirmation.

 

Archived files/Restore

Every file that you upload into VOSA is kept on our server together with all the information related to every action that you perform on the objects in that file (photometry, fit results, plots, etc.). You can come back later and continue your work on any of your files at the point where you left it.

But if you haven't performed any action on a file for 3 months we understand that you are not actively working on it and you do not really need it to be so easily accessible.

Thus, we archive files that have not been used in the last 3 months to save disk space and maintenance effort on the VOSA server.

Those files will be displayed in a different style in VOSA and you will not be able to select them directly.

But if you really want to use that file again, you can click the "Restore" link. VOSA will recover all the content so that you can work with it again.

The process will be almost immediate for small files but could take a while if your file is big.

When everything is ready you will see a message.

And when you click the "Continue" link, the content of your file will be available again.

In any case, please, whenever you are done with a file and you do not need us to keep archiving it, we would appreciate it if you could delete it. VOSA space is large but it has its limits!

 

Available Filters

Most of the filters from the SVO Filter Profile Service are available to be used in VOSA using the FilterID as name.

Please, check the Filter Profile Service for details. The link will open in a different window.

The filter properties are used by VOSA in a number of ways.

The link above shows a summary on how VOSA will use the filter properties. You can click on any filter name to see more details and you can also use the table column titles to sort the table using that field.

Besides that, you can access the full information in the Filter Profile Service using the "Browse" or "Search" links in the top menu. You can see a summary of all the filters in a given "family" (instrument, mission, survey, generic...) or click any filter to see more details on the filter properties and how they are calculated by the service or where they were found in the literature.

 

Objects

There are some object properties that are important in order to use the full potential of VOSA and get reliable results.

You can provide all this information in your input file if you know it, but VOSA can also help you find values for these object properties by searching VO catalogs.

 

Object coordinates

VOSA offers the possibility of finding the coordinates of the objects in your user file.

Having the right coordinates for each object is necessary if you want to be able to search in VO services for object properties (distance, extinction) or photometry.

In order to do this, the object name is used to query the Sesame VO service.

Then you can choose to incorporate the found coordinates (if any) into your final data or not.

Take into account that this will only give proper results if the object name given in the user file is the real one. Otherwise, either you will find nothing or the obtained coordinates will have nothing to do with the real ones and, if they are used to search for catalogue photometry, the obtained values (if any) will not really correspond to the object under consideration.

Two examples

We upload a very simple file with some object names and no coordinates.

So the first thing we do is click the "Search for Obj. Coordinates" button.

When you click the search button, VOSA starts the operation of querying Sesame for coordinates.

This search is performed asynchronously so that you don't need to stay in front of the computer waiting for the search results. You can close your browser and come back later. If the search is not finished, VOSA will give you some estimation of the status of the operation and the remaining time.

When the search is finished VOSA shows you the data obtained from Sesame, but these coordinates are not incorporated to the final data yet.

You have two different forms available. The one on the left allows you to save data for all the objects in the file with a single click. The one on the right is useful to mark/save data corresponding ONLY to those objects that are displayed in the current page (not doing anything to objects in other pages, when there are many objects).

In this example we are going to use the form on the right.

First we click the "Sesame" button so that all the values coming from Sesame are selected.

Then, we click the "Save Obj. Coordinates" so that the marked values get saved.

But we still see the warning saying that there are some objects without coordinates!

If we use the pagination form to go to that page we see that we haven't saved the coordinates for those objects yet.

In this case we just mark those two Sesame values by hand and click the "Save Obj. Coordinates" button again.

Now we have the coordinates for all the objects in the file.


As a second example, we upload a file with the same objects but including RA and DEC values.

We can see the user values already selected and saved as final values.

We could stop here, but we want to check these coordinates comparing them with what we find in Sesame.

Thus, we click the "Search for Obj. Coordinates" button, we wait for the process to finish and we see, side by side, both user coordinates and Sesame values.

An extra column shows the difference, in arcsec, between the user coordinates and the Sesame ones. This difference is shown in red when it is larger than 1'' so that it is easier for you to spot suspicious cases.

In this example we use the form on the left directly. We select the option to use Sesame values when available and to use them always. We click the "Make all changes" button and the Sesame values are directly saved as final for all the objects in the file.

 

Object distance

The distances to the objects are used by VOSA to transform the total fluxes given by the 'model fit' into bolometric luminosities as:

$$L_{bol} = 4\pi D^2 F_{tot}$$

If you don't give a value for the distance, VOSA will assume it to be 10 pc to calculate the luminosity.

If you don't care about the final luminosities and you don't intend to make an HR diagram, you can forget about distances and write them as "---" in your input file.

Distance errors

You can also provide a value for the error in the distance in your input file. In order to do that write D+-ΔD (for instance: 100+-20), without spaces, in the fourth column of your input file. See below for an example. (Remember to write both symbols, + and -, together, not a ± symbol or something else; otherwise vosa will not understand the value).

ΔD will be propagated as a component of ΔLbol as follows:

$$\Delta L_{bol}\ ({\rm from}\ D) = L_{bol} \cdot \frac{2\,\Delta D}{D}$$

If you don't give a value for ΔD and none is found in the VO, it will be zero. This will lead to an underestimated ΔLbol, as only errors coming from the observed fluxes will be considered.

VO search

VOSA offers the possibility of searching for the distance of the objects in VO catalogs.

In order to do this, the object coordinates are used to query some VO services (like the Hipparcos catalog) to find observed parallaxes. Thus, object coordinates must be known (either provided in your input file or obtained in the Objects:Coordinates tab) if you want to search the VO for information about distances.

Take into account that the tool queries VO services using the object coordinates and returns, for each catalog, the object closest to those coordinates within the search radius. It could happen that the obtained information corresponds to a different object if the desired one is not in the catalog. In that case, the obtained distance could be erroneous because it corresponds to a different object. So, please, check the coordinates given by the catalog for each object to see if they seem to be the appropriate ones (within the catalog precision) before using the obtained values.

VOSA marks as "doubtful" those values found in catalogs so that the observed error is bigger than 10% of the parallax. It has been shown that for bigger errors the estimation of the distance from the parallax is biased (See Brown et al 1997). These values will be shown in red so that you are easily aware of large uncertainties.

The user can choose to incorporate the found distance (if any) into the final data or not. This decision can be taken in two different ways:

Take a look at the corresponding Credits Page for more information about the VO catalogs used by VOSA.

An example

We have uploaded a file with information about the distance to some of the objects (in some cases we have included errors for the distance too). As you can see, we have values for distance and error for 4 objects, only the distance for HD004307, and no information for 7 objects.

We want to check the VO to search for more information, so we enter the Objects:Distances subtab to try to find something.

At this stage, we see three main functionalities:

In this last form there are several groups of columns:

The first thing you can do is edit the User values as you wish. For instance, you can give a value of 350±50 pc for HD002665. You just need to write those values in the User column, mark the "tick" to its right and click the "Save Obj. Distances" button.

And you see that the final value for this object has been changed accordingly. If you leave this tab now, whenever a distance value is needed, VOSA will use a distance of 350±50 pc for this object.

The next natural step is searching the VO for distance values. In order to do this you can just click the "Search for Obj. Distances" button.

When you click this button, VOSA starts the operation of querying VO catalogs.

This search is performed asynchronously so that you don't need to stay in front of the computer waiting for the search results. You can close your browser and come back later. If the search is not finished, VOSA will give you some estimation of the status of the operation and the remaining time.

And, when everything is ready, you will see the values found in the VO catalogs for the distance to these objects.

Values with large relative errors are shown in red so that you are easily aware of large uncertainties.

At this point you can still choose to edit User values one by one and save them with the "Save Obj. Distances" button (as explained above). Or you can also decide to mark individually which value you prefer for each object among those available and click the "Save Obj. Distances" button to save those values as the final ones.

But you also can see a new form that offers you some options to choose the final values for all the objects in the file with just one click.

The form has two main parts:

When the "Make all changes" button is pressed, VOSA makes the selection adequate for your criteria and the corresponding values are saved as final.

For instance, if you mark the first option on the left, for those objects where there is a user value for the distance, it will be the selected one; for the other objects, the van Leeuwen values are selected because they have smaller relative errors than Kharchenko's ones.

Then, we select the third option on the left. And we set our preferences as: (1) user, (2) Kharchenko, (3) van Leeuwen. When we press the "Make all changes" button, Kharchenko's values for the distance are selected for HD002796 and HD003567, because there is no user value for those objects.

Then, we change the order of preference to: (1) user, (2) van Leeuwen, (3) Kharchenko. And we also set a limit ΔDis/Dis < 0.2 to make changes. In this case, for HD003567 there is no user value, so the van Leeuwen one is considered and ΔDis/Dis = 0.116, so it is selected and saved. But for HD002796 ΔDis/Dis = 0.92 in van Leeuwen and ΔDis/Dis = 10.4 in Kharchenko. So none of the values is selected and no change is made: the final value is kept as it previously was.

 

Extinction

The value of the interstellar extinction is necessary to deredden the observed photometry before analyzing it. If the extinction is not negligible, the shape of the observed SED can be very different from the real one and any physical property estimated using the SED, if not properly dereddened, can be erroneous.

For instance, see the difference between the observed SED (gray line) and the dereddened one (red points) for an object with Av=3.

You can provide a value of the visual extinction Av for each object in your input file. But, if you don't have those values, VOSA also offers the possibility of searching VO catalogs for extinction properties.

And, finally, you can also give a range of values for Av so that the model fits (chi-square and Bayesian) fit the model physical parameters and the value of Av together.

The extinction law.

For dereddening the SEDs we make use of the extinction law by Fitzpatrick (1999) improved by Indebetouw et al. (2005) in the infrared. Take a look at the corresponding Credits Page for more information.

(You can download the tabulated data for the extinction law).

The extinction at each wavelength is calculated as: Aλ = AV * kλ/kV, where kλ is the opacity for a given λ and kV=211.4
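For illustration, once Aλ has been obtained this way, the observed flux can be corrected using the standard relation Fdereddened = Fobserved · 10^(0.4 Aλ). The short Python sketch below assumes that kλ comes from the tabulated extinction law linked above; the numbers in the example call are only placeholders:

# Minimal sketch of the dereddening step, assuming kλ is read from the
# tabulated Fitzpatrick (1999) / Indebetouw et al. (2005) extinction law.
K_V = 211.4                    # opacity at V, as quoted above

def deredden(flux_obs, k_lambda, a_v):
    """Correct an observed flux for extinction: F = Fobs * 10**(0.4 * Aλ)."""
    a_lambda = a_v * k_lambda / K_V      # Aλ = AV * kλ / kV
    return flux_obs * 10**(0.4 * a_lambda)

# Placeholder values: a flux in erg/cm2/s/A, a made-up kλ, and AV = 3.
print(deredden(2.5e-15, 150.0, 3.0))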

VO extinction properties.

The tool offers the possibility of finding extinction properties of the objects in the user file.

In order to do this, the object coordinates are used to query some VO services to find AV or RV and E(B-V) for each object.

Then you can choose whether to incorporate the found values (if any) into the final data or not. In fact, if different catalogues give different information about the relevant quantities, you can choose which data to use to build the final AV value.

Remember that, if you decide to save new values for AV, the original data will have to be dereddened again using the new values. This will change the final SED and, thus, if any other analysis has been done for the corresponding SED (for instance, a model fit) this analysis will have to be done again.

The first time that you enter this section for a given input file, the tool shows the AV values given in the input file (if any) and a button to search into VO services. When a search has been done, the tool will show the user values together with the found values for each relevant quantity so that you can choose which ones should be used (checking the corresponding box).

In fact, this form has several options that can be combined. Take into account that

Take into account that the tool queries VO services using the object coordinates and returns, for each catalogue, the object closest to those coordinates within a given search radius. It could happen that the obtained information corresponds to a different object if the desired one is not in the catalogue. In that case, the obtained data could be erroneous, as it corresponds to a different object. So, please, check the coordinates given by the catalogue for each object to see if they seem to be the appropriate ones (within the catalogue precision) before using the obtained values.

Take a look at the corresponding Credits Page for more information about the VO catalogues used by VOSA.

An example

We have uploaded a file with some objects and their coordinates, but we don't have information about the extinction for each object.

Thus, when we enter the "Objects:Extinction" tab in VOSA we see the list of objects and no extinction properties. We also see some forms:

We will see all these options with some detail below.

But, given that we don't have any information, our first step is searching for these objects in VO catalogs. And, thus, we click the "Search for extinction properties" button.

We get a list of all the catalogs that VOSA can use to search for extinction properties. You can leave it as it is and just click the "Search" button. But you could also unmark some of them if you know, for some reason, that they are not going to be useful. You can also change the default Search Radius for some catalogs if you know that a different radius is more adequate for your case.

We just click "Search". When we click this button, VOSA starts the operation of querying VO catalogs.

This search is performed asynchronously so that you don't need to stay in front of the computer waiting for the search results. You can close your browser and come back later. If the search is not finished, VOSA will give you some estimation of the status of the operation and the remaining time.

When the search is finished, VOSA shows you, on the right, all the information that has been found for each object. In some cases, we don't get any information at all (for instance, for objects 'test01' and 'test02'). In other cases we only get information from one catalog. But in some cases (for instance, objects 'test03' and 'test04') we get heterogeneous information from more than one catalog.

It happens very often that catalogs give values for E(B-V) but not for Av (like the Savage one in this example) and we need a value of RV to calculate AV using the expression AV=RV * E(B-V).

Thus, our first action is going to be adding 'Default user values' for some quantities. We write a value RV=3.1 in the "Default User Values" form and also a default fit range of (0-1) for Av. Then we click the "Add user values" button (we could write the RV in the "User" column, object by object, but it's easier to do it this way).

Now we have values for RV so that VOSA can use them if they are necessary to build an AV value for some object.

Next, we use the form on the right to let VOSA try to build values of Av for all the objects. We mark the tick corresponding to "Select any combination of values that permits that a value for Av can be built" and click the "Save values" button.

As you can see:

But we decide that we prefer Av=1.8 (from Morales) for the object 'test03' instead of the 1.891 value calculated before. And we want to make that particular change only.

Thus, we go to the list and:

and the 1.8 value is set as the final one for 'test03'.

But then we notice that, given that for objects 'test03' and 'test04' we have Av values 1.8 and 2.139 it does not make sense that, later, when performing model fits, we try an Av range between 0 and 1. We set that default range before, when we didn't have any information, but now we should change that range, at least, for these two objects.

Thus, we go to the list and make these changes one by one.

And the Av fit ranges are changed only for these two objects.

 

Build SEDs

VOSA helps you to build and/or improve the observed Spectral Energy Distribution (SED) for the objects in your file in different ways.

First, you can upload your own photometry into VOSA for each object including it in your input file.

If you include your data as magnitudes or Jy, VOSA will transform them into erg/cm2/s/A using the information for each filter provided by the SVO Filter Profile Service.

You can search in VO catalogs to find more photometry for your objects and those new points (if any) will be included in your objects SED. Again, if the catalogs provide data as magnitudes or Jy, VOSA will transform them into erg/cm2/s/A using the information for each filter provided by the SVO Filter Profile Service.

In the case that, for an object, there are several photometry values corresponding to the same filter but coming from different sources (user and VO, different VO catalogs, same source at different epochs...) VOSA will average them and include the average value in the final SED.

Every observed SED will be dereddened using the value for Av provided by you in your input file or in the "Objects:Extinction" tab (with the option of searching VO catalogs for extinction properties).

For each object, VOSA will try to detect the presence of infrared excess using an automatic algorithm.

Then you have the option to inspect (and optionally edit) the final SED object by object.

 

VO photometry

Search for photometry in VO catalogues.

The tool offers the possibility of searching in the VO for catalog photometry for the objects in the user file.

In order to do that, the object coordinates must be known as precisely as possible. Either the user can provide these coordinates in the input file or they can be obtained also from the VO.

VOSA offers access to several catalogs with observed photometry from the infrared to the ultraviolet.

You can choose which catalogs to use and the search radius within each one.

For each catalog, you have the option to establish magnitude limits, so that only photometry values in that range will be retrieved.

For each object in the user file, each catalog will be queried specifying the given radius, and the best result (the one closest to the object coordinates) will be retrieved. For some catalogs there are special restrictions. For instance, for the UKIDSS surveys, the search is restricted to class -1 (star) or -2 (probable star) objects. These special restrictions, when applied, are explicitly commented in the brief catalog description in the VOSA form.

When you click the "Search" button, VOSA starts the operation of querying VO catalogs.

This search is performed asynchronously so that you don't need to stay in front of the computer waiting for the search results. You can close your browser and come back later. If the search is not finished, VOSA will give you some estimation of the status of the operation and the remaining time.

When the search process is finished you will see the photometric values obtained for each object (if any).

If the catalog provides magnitude values, these are automatically converted to fluxes.

Take a look at the Credits section for information about the available VO catalogs.

Detection of outliers in VO data

When new data are found in VO catalogues and before incorporating them to the object SED, VOSA tries to identify the presence of outliers, that is, photometric points that, for one or another reason, seem not to be part of the real SED.

In particular, VOSA looks for V patterns and inverted V patterns, that is:

V pattern

VOSA looks for points that seem to be clearly below the main SED, that is, points such that both the previous and next points have much higher fluxes. To be more precise, if all these criteria are met:

the point (λn,Fn) is considered suspicious and thus is marked as 'bad'. A 'lowflux' flag will also be included in the vosa and SED files if they are downloaded later.

Take into account that to make these calculations only the points (both from VO catalogues or User data) that are not flagged as 'bad' or 'upper limit' will be considered.

A simple example can be seen in this image:

We can see a first suspicious point for CTIO/DECam.g:

But VOSA will not flag it as bad because it does not meet the criteria

But the point for CTIO/DECam.Y will be marked as bad:

because all the criteria are met.

 

Inverse V pattern

VOSA looks for points that seem to be clearly above the main SED, that is, points such that both the previous and next points have much lower fluxes. To be more precise, if all these criteria are met:

the point (λn,Fn) is considered suspicious and thus is marked as 'bad'. A 'highflux' flag will also be included in the vosa and SED files if they are downloaded later.

Take into account that to make these calculations only the points (both from VO catalogues or User data) that are not flagged as 'bad' or 'upper limit' will be considered.

A simple example can be seen in this image:

The point for will be marked as bad:

 

Object SED

VOSA helps to build a Spectral Energy Distribution (SED) for each object in the file combining user input data with data obtained from VO catalogues, taking into account extinction properties for dereddening the observed fluxes and marking photometric points where IR or UV excess is detected.

In the SED section of VOSA you can visualize how the final SED has been built, what points have been considered, where the photometric points come from (VO catalogue, user input, etc), some properties of the data when coming from VO catalogues (including data quality when available) and, finally, where an IR excess has been detected by VOSA.

You can also edit the final SED and make decisions about which points are considered and how they enter the final SED. This is especially tricky when there are different photometric values for the same filter (coming from the user input file and/or VO catalogues).

Point options and actions

There are some options that allow you to decide how the final SED is built:

Several values for the same filter

In some cases it happens that there are several observed photometric values for the same filter. For instance, if you have given a value for one filter in your input file and another value is found, for the same filter, in a VO catalogue.

When this happens, VOSA will calculate an average of the different values and this average is the value that goes to the final SED.

The average is calculated as: $$ \overline{F}=\frac {\sum ( {\rm F}_{\rm i}/\Delta{\rm F}_{\rm i} )}{\sum ( {1}/\Delta{\rm F}_{\rm i} )}$$ $$\Delta\overline{F} = \sqrt{\sum \Delta{\rm F}_{\rm i}^2}$$ if the observed error for any of the involved fluxes is zero, the value of the error that will be used in this calculation will be $$\Delta{\rm F}_{\rm i} = 1.1 \ {\rm F}_{\rm i} \ {\rm Max}(\Delta{\rm F}/{\rm F})$$ (so that it is the biggest relative error, that is, the smallest weight).

If it happens that all errors are zero, the average will be done without using weights.
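As a rough illustration of this averaging rule (including the substitution used when some errors are zero), here is a minimal Python sketch; the fluxes in the example call are placeholders:

import math

def average_same_filter(fluxes, errors):
    """Average several fluxes for one filter following the rule above.

    Weighted mean with weights 1/ΔFi; zero errors are replaced by
    1.1 * Fi * max(ΔF/F) so that they get the smallest weight; if every
    error is zero, a plain unweighted mean is used (error left at zero
    in this sketch).
    """
    if all(e == 0 for e in errors):
        return sum(fluxes) / len(fluxes), 0.0
    max_rel = max(e / f for f, e in zip(fluxes, errors) if e > 0)
    errs = [e if e > 0 else 1.1 * f * max_rel for f, e in zip(fluxes, errors)]
    mean = sum(f / e for f, e in zip(fluxes, errs)) / sum(1 / e for e in errs)
    return mean, math.sqrt(sum(e**2 for e in errs))

# Example: a user value and a VO catalogue value for the same filter,
# the second one without an observational error.
print(average_same_filter([2.48e-17, 2.51e-17], [2.3e-19, 0.0]))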

Take into account that:

VO photometry information

When available, you will see, for each point coming from a VO catalogue, some information that we have extracted from the catalogue to help you decide whether you want to incorporate it into the final SED or not.

An example

For instance, in this case (click in the image to enlarge):

SED download

When you download the final results you will get a file (XML and/or ASCII) with the final SED for each object. Most of the information is the same shown in the SED section of VOSA, but with some peculiarities.

When a data point has been calculated as an average of the photometry coming from different services (or user input file) some of the columns in the SED final file are built in terms of the original values for each catalogue. In particular:

 

Excess

Most of the models used by VOSA for the analysis of the observed SEDs include only a photospheric contribution.

But the observed SED for some objects can include the contribution not only from the stellar photosphere but also from other components as disks or dust shells.

In these cases, some excess will appear and using the full SED for the analysis can be misleading.

Thus, VOSA offers the option to mark some part of the SED as "UV/Blue excess" or "Infrared excess" so that the corresponding points are not considered when the SED is analyzed using photospheric stellar models.

Infrared excess

VOSA tries to automatically detect possible infrared excesses.

Since most theoretical spectra used by VOSA correspond to stellar atmospheres only, for the calculation of the $\chi_r^2$ in the 'model fit' the tool only considers those data points of the SED corresponding to bluer wavelengths than the one where the excess has been flagged.

(Some models, such as the GRAMS ones, include other components such as dust shells around the star. For those cases the points marked as 'infrared excess' will also be considered in the model fit.)

The last wavelength considered in the fitting process and the ratio between the total number of points belonging to the SED and those really used are displayed in the results tables.

The point where infrared excess starts is calculated, for each object, when you upload an input file, but it is also recalculated whenever the observed SED changes, that is:

The excesses are detected by an algorithm based on calculating iteratively in the mid-infrared (adding a new data point from the SED at a time) the α parameter from Lada et al. (2006) (which becomes larger than -2.56 when the source presents an infrared excess). The actual algorithm used by VOSA is somewhat more sophisticated. A more detailed explanation is given below.

Apart from the automatic estimation made by VOSA, you can override this value specifying manually the point where infrared excess starts (so that more or less points are taken into account in the model fit) using the SED tab. Take into account that if you change the SED later (adding VO photometry or deleting a photometric point) this value will be recalculated again by VOSA.

It is also possible to specify the point where infrared excess starts, for each object, as an 'object option' (10th column) in your input file. If you want to do this you have to include 'excfil:FilterName' (for instance: excfil:Spitzer/IRAC.I1) in the 10th column of the file. If you do that, VOSA will not calculate the infrared excess for this object on upload and will accept the value given in the input file. But take into account that, if you change the SED later (adding VO photometry or deleting a photometric point) VOSA will recalculate the value even in this case.

Finally, you also have the possibility of changing the point where infrared excess starts for all objects at the same time. In order to do that, go to the SED tab and look for the "excess" link in the left menu. Once there, you have a form where this can be done.

IR excess automatic detection algorithm

The algorithm used by VOSA to estimate the presence of infrared excess is an extension of the idea presented in Lada et al. (2006).

The main idea is calculating, point by point in the infrared, the slope of the regression of the log-log curve showing $\nu F_{\nu}$ vs. $\nu$. As a first approximation, when this slope becomes smaller than 2.56, infrared excess starts.

In what follows, when we talk about regressions, we mean the regression of $y=log(\nu F_{\nu})$ as a function of $x=log(\nu)$, and taking into account observational errors as a weight for the regression. From error propagation, the "y" errors can be calculated as $\sigma(y) = \sigma(F_{\lambda})/(\ln10 F_{\lambda})$.
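As an illustration of this step, the following minimal Python sketch computes the weighted regression slope b and its uncertainty σ(b) for a set of SED points (wavelengths in Å, fluxes in erg/cm2/s/Å). The input values are placeholders and the actual VOSA implementation may differ in the details:

import numpy as np

def slope_nu_fnu(wavelength_A, flux_lambda, flux_err):
    """Weighted regression of y = log10(ν Fν) versus x = log10(ν).

    ν Fν equals λ Fλ, so y is computed directly from the SED fluxes.
    The y errors follow σ(y) = σ(Fλ) / (ln(10) Fλ), as stated above.
    Returns the slope b and its uncertainty σ(b).
    """
    wl = np.asarray(wavelength_A, float)
    f = np.asarray(flux_lambda, float)
    sig = np.asarray(flux_err, float)
    x = np.log10(2.99792458e18 / wl)          # log10(ν), c in Angstrom/s
    y = np.log10(wl * f)                      # log10(ν Fν) = log10(λ Fλ)
    w = 1.0 / (sig / (np.log(10.0) * f))**2   # weights = 1 / σ(y)^2
    # Standard weighted least-squares formulas for y = a + b x.
    S, Sx, Sy = w.sum(), (w * x).sum(), (w * y).sum()
    Sxx, Sxy = (w * x * x).sum(), (w * x * y).sum()
    delta = S * Sxx - Sx**2
    return (S * Sxy - Sx * Sy) / delta, np.sqrt(S / delta)

# Placeholder SED points; excess is compatible when b - σ(b) < 2.56.
b, sb = slope_nu_fnu([12350.0, 16620.0, 21590.0],
                     [2.1e-15, 1.4e-15, 8.0e-16],
                     [4.0e-17, 3.0e-17, 2.0e-17])
print(b, sb)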

In order to avoid false detections due to "bad" photometric points, we refine the procedure as follows:

Apart from this, one more final criterion is applied. The slope (calculated as explained above) for at least one of the last two points in the SED must be sigma-compatible with being smaller than 2.56: $$b-\sigma(b) < 2.56$$

If this does not happen for either of the last two points, then there is no excess in the SED. The idea is that, if the infrared excess starts at some point, it must continue at longer wavelengths. If that does not happen, any previous apparent detection of excess will probably be due to some "evil" combination of misleading points. In summary:

In the "Save Results", the user will be able to download files with a summary of the excess determination and with the details of each linear regression. These summary and details can also be visualized in the "SED" tab.

You can see some detailed examples of these calculations.

Fit refinement of the IR excess

When a model fit is completed, VOSA compares the observed SED with the best fit model synthetic photometry and tries to redefine the start of the infrared excess as the point where the observed photometry starts being clearly above the model.

The procedure is as follows:

Let's see some examples.

In the next case, when comparing the observed photometry with the model, VOSA suggests that the real infrared excess starts later than the automatic algorithm had detected:

In this image, looking at the fit, there is no apparent infrared excess (although the automatic algorithm had detected it):

In the following case, according to the "fit excess" criteria there is no infrared excess. This is due to the large observational errors. By contrast, the automatic algorithm had detected it:

On the other hand, there are cases where the automatic detection algorithm had not detected infrared excess but, according to the fit, we see some excess:

And, obviously, in many cases both algorithms give the same result:

If for some objects the IR excess starting point calculated in this way is different from the one previously calculated by the automatic algorithm, VOSA offers you the option to "Refine excess". If you click the corresponding button you will see the list of objects where this happens, the filters where excess starts according to both algorithms for each case, and the possibility of marking the start of infrared excess at the point flagged by the fit refinement instead of the one previously calculated by VOSA. If you choose to do this, and given that this would change the number of points actually used in the fit for those objects, the fit results are deleted and you have to restart the fit process. But, from then on, the IR starting point will be the one suggested by the previous fit.

UV/blue excess

In some cases, there is also some excess in the bluer (UV) part of the SED.

VOSA does not detect this automatically, but you can specify it so that the application does not consider these points in the fits either.

The UV/blue excess can be set in two different ways:

Finally, you also can specify the same UV/blue excess range for all objects at the same time. In order to do that, go to the SED tab and look for the "excess" link in the left menu. Once there, you have a form where this can be done.

This blue excess, as happens with the infrared one, will not be taken into account for models that include non-photospheric components (such as the GRAMS ones).

An example

We have an object where VOSA detects infrared excess starting at the Paranal/VISTA.J filter.

We are going to consider three different examples.

(1) Infrared excess only

First, we leave the excess as detected by VOSA, starting at VISTA.J.

Those points are plotted in black in the SED.

If we make a model fit for this object, the last two points in the SED won't be used. We see, in the results table, that only 8 of the 10 points have been used, and the wavelength of the last point fitted in the SED is the one for VISTA.J

And these two points are shown in black also in the fit plot.

(2) Both UV/blue and infrared excess

Now we decide to go back to the SED tab and we make a change:

This changes the SED plot accordingly.

And when we repeat the model fit, the points that are fitted are only those that don't have excess now.

Actually, the best fit model is now a different one.

And the points in black in the fit plot are the ones corresponding to the excess that we specified manually (the GALEX.NUV point is not taken into account for the fit).

(3) No excess

As a last example, we go back to the SED tab and set that there is no infrared or UV/blue excess.

This changes the SED plot accordingly.

And when we repeat the model fit, all the points are considered for the fit now.

And all the points are shown in red (fitted) in the plot.

 

Analysis

VOSA offers several options to analyze the observed Spectral Energy Distributions and estimate physical properties for the studied objects.

First, observed photometry is compared to synthetic photometry for different collections of theoretical models or observational templates in two different ways:

The Chi-square fit provides the best fit model and thus an estimation of the stellar parameters (temperature, gravity, metallicity, ...). It also estimates a bolometric luminosity using the distance to the object, the best fit model total flux and the observed photometry.

On the other hand, the Bayesian analysis provides the projected probability distribution functions (PDFs) for each parameter of the grid of synthetic spectra.

When these analysis tools are applied to observational templates (chi-square and bayes), we obtain an estimation of the Spectral Type too.

Once the best fit values for temperature and luminosity have been obtained, it is possible to build an HR diagram using isochrones and evolutionary tracks from VO services and making interpolations to estimate values of the age and mass for each object.

 

Model Fit

One of the main analysis options of this application is the Model fit.

Here the observed SED for each object is compared to the synthetic photometry for several theoretical models using a chi-square test. This gives an estimate of the physical properties of the given object.

If you provide a range for the visual extinction (AV), this fitting will also consider it as a fit parameter, as explained below.

Fit

When a fitting process is started you can choose among a list of theoretical spectra models available in the VO. Only those that are checked will be used for the fit.

In the next step the application uses the TSAP protocol (SSAP for theoretical spectra) for asking the model servers which parameters are available to perform a search. According to that, a form is built for each model so that you can choose the ranges of parameters that you want to use for the fit. Take into account that:

Once the fit has been finished, you can see a list with the best fit for each object and, optionally, a plot of these fits.

Besides that, for each particular object, you can also see a list with the best 5 fits for each model sorted by χ2. For each result you can see the corresponding SED and plot (with the "See" button) or use the "Best" button to mark a different result as the preferred best one. If you do that, this fit will be highlighted and it will be the one that will be shown in the "Best fit" table later.

Best Fit

Once a fit has been done, you can see the Best Fit table with the best fit properties for each object.

A number of results are shown for each object:

When the fit has been made with the option of calculating parameter uncertainties using a Monte Carlo method, a statistical distribution is obtained for these parameters and some other values are shown in this table:

Extinction fit

If a range for the visual extinction (AV) is given, it will also be considered a fit parameter.

You can provide this range for each object in two different ways:

If you don't provide a range for AV, the default value provided by you (also in the input file or the Extinction tab) will be used.

If you provide a range, like for instance AV:0.5/5.5, the fit service will compare each individual model spectrum with the observed SED dereddened using 20 different values of AV in that range. Then the best fit models will be returned by the service together with the best corresponding value of AV.

Reduced chi-square

The fit process minimizes the value of $\chi_r^2$ defined as:

$$\chi_r^2=\frac{1}{N-n_p}\sum_{i=1}^N\left\{\frac{(Y_{i,o}-M_d Y_{i,m})^2}{\sigma_{i,o}^2}\right\}$$

Where:

N: number of photometric points.
np: number of fitted parameters for the model (N-np are the degrees of freedom associated to the chi-square test).
Yo: observed flux.
σo: observational error in the flux.
Ym: theoretical flux predicted by the model.
Md: multiplicative dilution factor, defined as $M_d=(R/D)^2$, R being the object radius and D the distance between the object and the observer. It is calculated as a result of the fit too.
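As a rough illustration, the reduced chi-square for a single model can be evaluated as in the sketch below. Note that the closed-form expression used here for Md (the weighted least-squares scaling between the model and the observed fluxes) is an assumption made for the example; the text above only states that Md is obtained as part of the fit:

import numpy as np

def reduced_chi2(f_obs, sigma_obs, f_model, n_params):
    """Reduced chi-square for one model, with Md fixed by least squares."""
    yo = np.asarray(f_obs, float)
    ym = np.asarray(f_model, float)
    s2 = np.asarray(sigma_obs, float) ** 2
    # Assumed closed form: the Md that minimizes sum((Yo - Md*Ym)^2 / sigma^2).
    md = np.sum(yo * ym / s2) / np.sum(ym * ym / s2)   # dilution factor (R/D)^2
    chi2 = np.sum((yo - md * ym) ** 2 / s2)
    return chi2 / (len(yo) - n_params), md

# Placeholder example: three photometric points and a model with 2 parameters.
print(reduced_chi2([2.1e-15, 1.4e-15, 8.0e-16],
                   [4.0e-17, 3.0e-17, 2.0e-17],
                   [1.9e-4, 1.3e-4, 7.6e-5], n_params=2))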

Visual goodness of fit

Two extra parameters, Vgf and Vgfb are also calculated as estimates of what we call the visual goodness of fit.

The underlying idea is that, sometimes, the fit looks good to the human eye but has a large value of chi2. One reason why this could happen is that there are some points with very small observational flux errors. Thus, even if the model reproduces the observations apparently well, the deviation can be much larger than the reported observational error (increasing the value of chi2).
Given that some observational errors could be underestimated, we have defined vgf and vgfb as two ways to estimate the goodness of fit avoiding these "too small" uncertainties.

The precise definition of these two quantities is as follows:

These two parameters can help to estimate if the fit "looks good" (in the sense that the model is close to the observations). But, in any case, the best fit selected by VOSA will be the one with the smallest value of $\chi^2$.

Observational errors

The values of the observational errors are important because they are used to weight the importance of each photometric point when calculating the final $\chi_r^2$ value for each model.

When σ=0 (that is, when there is no value for the observational error) VOSA assumes that, in fact, the error for this point is large, not zero.

In practice, VOSA does as follows:

Excess

Since the theoretical spectra correspond to stellar atmospheres, for the calculation of the $\chi_r^2$ the tool only considers those data points of the SED corresponding to wavelengths bluer than the one where the excess has been flagged.

The excesses are detected by an algorithm based on calculating iteratively in the mid-infrared (adding a new data point from the SED at a time) the α parameter from Lada et al. (2006) (which becomes larger than -2.56 when the source presents an infrared excess). See the Excess help for details about the algorithm.

The last wavelength considered in the fitting process and the ratio between the total number of points belonging to the SED and those really used are displayed in the results tables.

Excess fit refinement

When the fit has been done, VOSA compares the observed SED with the best fit model synthetic photometry and tries to redefine the start of the infrared excess as the point where the observed photometry starts being clearly above the model. See the Excess help for more details.

If for some objects the IR excess starting point calculated in this way is different from the one previously calculated by the automatic algorithm, VOSA offers you the option to "Refine excess". If you click the corresponding button you will see the list of objects where this happens, the filters where excess starts according to both algorithms for each case, and the possibility of marking the start of infrared excess at the point flagged by the fit refinement instead of the one previously calculated by VOSA. If you choose to do this, and given that this would change the number of points actually used in the fit for those objects, the fit results are deleted for these objects and the fit process is restarted for them (the results for other objects will remain unchanged). But, from then on, the IR starting point will be the one suggested by the previous fit.

Synthetic photometry

Each theoretical spectrum is a function Fi(λ) with units erg/cm2/s/Å.

Each filter is represented by a dimensionless response curve Gf(λ).

The synthetic photometry corresponding to the Fi spectra when it is observed through the filter Gf can be expressed as an integral: $$F_{i,f}=\int_{\lambda}F_i(\lambda) \ N_f(\lambda) \ d\lambda$$ where Nf(λ) is the normalized filter response function defined as: $$N_f(\lambda) = \frac{G_f(\lambda)}{\int G_f(x) \ dx}$$
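A minimal numerical sketch of this integral (trapezoidal integration of the model interpolated onto the filter wavelength grid; the exact numerical scheme used by VOSA is not specified here) could look like this:

import numpy as np

def _trapz(y, x):
    """Plain trapezoidal integration, kept explicit for portability."""
    y, x = np.asarray(y, float), np.asarray(x, float)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def synthetic_photometry(wl_model, flux_model, wl_filter, response):
    """F_{i,f} = ∫ F_i(λ) N_f(λ) dλ with N_f(λ) = G_f(λ) / ∫ G_f(x) dx.

    Wavelengths in Å, fluxes in erg/cm2/s/Å, response dimensionless.
    The model spectrum is interpolated onto the filter wavelength grid.
    """
    g = np.asarray(response, float)
    norm = _trapz(g, wl_filter)                       # ∫ G_f(λ) dλ
    f_on_filter = np.interp(wl_filter, wl_model, flux_model)
    return _trapz(f_on_filter * g / norm, wl_filter)  # ∫ F_i(λ) N_f(λ) dλ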

Total flux and Bolometric luminosity

The best fitting model is used to infer the total observed flux for each source of the sample. We note that if the model reproduces the data correctly, this correction is much more accurate than the one obtained using a bolometric correction derived only from a color.

Total observed flux

The total theoretical flux for the object would be calculated as the integral of the whole model (multiplied by the corresponding Md factor): $$F_M = \int {\rm Md \cdot F_M}(\lambda) \ d\lambda$$

In order to estimate the total observed flux for the object, we want to substitute the fluxes corresponding to the observing filters by the observed ones, so that as much of the flux as possible comes from the observations. $${\rm Ftot} = \int{\rm Md \cdot F_M(\lambda) \ d\lambda} \ + {\rm Fobs} - {\rm Fmod} $$

The theoretical flux density corresponding to the observed one $\rm F_{o,f}$ can be calculated using the normalized filter transmission $N_f$: $$F_{M,f} = \int {\rm Md \cdot F_M}(\lambda) \cdot N_f(\lambda) \ d\lambda$$

In order to calculate the total observed flux, we have to estimate the amount of overlapping among different observations. In order to do that we first approximate the coverage of each filter using its effective width, then we identify spectral regions where there is continuous filter coverage and, for each of those regions, we define an "overlapping factor" as: $$ {\rm over}_r = \frac{\sum {\rm W}_i}{\rm (\lambda_{max,r} - \lambda_{min,r})}$$

Using these overlapping factors we can estimate the degree of oversampling in each region due to the fact that several observations sample the same range of the spectrum. We can then approximate the total observed flux as: $$ {\rm Fobs} = \sum_f\frac{ {\rm F}_{o,f} \cdot {\rm W}_{eff,f}}{ {\rm Over_f}} $$

And the same for the corresponding contributions from the model: $$ {\rm Fmod} = \sum_f\frac{ {\rm F}_{M,f} \cdot {\rm W}_{eff,f}}{ {\rm Over_f}} $$

Thus, the total flux is given by: $${\rm F}_{\rm tot} = F_M + \sum_f\frac{ [ {\rm F}_{o,f} - {\rm F}_{M,f}] \cdot {\rm W}_{eff,f}}{ {\rm Over_f}} $$

where $F_{M,f}$ and $F_{o,f}$ are the model and observed flux densities corresponding to the filter $f$.

The corresponding error in the total flux is calculated as: $$ \Delta {\rm Fobs} = \sqrt{ \sum_f \left(\frac{ \Delta{\rm F}_{o,f} \cdot {\rm W}_{eff,f}}{ {\rm Over_f}}\right)^2 } $$
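Assuming the per-filter effective widths W_eff,f and overlapping factors Over_f have already been computed as described above, the final combination reduces to a simple sum over filters. A minimal sketch:

import math

def total_flux(F_M, f_obs, f_mod, f_err, w_eff, over):
    """Ftot and ΔFobs from the formulas above.

    F_M is the integrated (diluted) model flux; the remaining arguments are
    per-filter lists of F_{o,f}, F_{M,f}, ΔF_{o,f}, W_{eff,f} and Over_f
    (assumed to be precomputed).
    """
    ftot = F_M + sum((fo - fm) * w / ov
                     for fo, fm, w, ov in zip(f_obs, f_mod, w_eff, over))
    err = math.sqrt(sum((dfo * w / ov) ** 2
                        for dfo, w, ov in zip(f_err, w_eff, over)))
    return ftot, err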

You can see a detailed example of these calculations.

Bolometric luminosity

The tool scales the total observed flux to the distance provided by the user and therefore estimates the bolometric luminosities of the sources in the sample (in those cases where the user has not provided a realistic value of the distance, a generic value of 10 parsecs is assumed): $$L(L_{\odot}) = 4\pi D^2 F_{obs}$$ $$\left(\frac{\Delta L}{L}\right)^2 = \left(\frac{\Delta F_{obs}}{F_{obs}}\right)^2 + 4 \left(\frac{\Delta D}{D}\right)^2 $$
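As a worked illustration of these two formulas, the short sketch below converts a total observed flux (erg/cm2/s) and a distance (pc) into a luminosity in solar units; all input values are placeholders:

import math

PC_CM = 3.0857e18          # parsec in cm
L_SUN = 3.828e33           # solar luminosity in erg/s

def bolometric_luminosity(f_obs, df_obs, d_pc=10.0, dd_pc=0.0):
    """L = 4 π D^2 Fobs in solar units, with the error propagation above.

    f_obs in erg/cm2/s, distance in pc (10 pc by default, as assumed by
    VOSA when no realistic distance is available).
    """
    d_cm = d_pc * PC_CM
    lum = 4.0 * math.pi * d_cm**2 * f_obs / L_SUN
    rel_err = math.sqrt((df_obs / f_obs)**2 + 4.0 * (dd_pc / d_pc)**2)
    return lum, lum * rel_err

# Placeholder values: Fobs = (2.0 ± 0.1)e-9 erg/cm2/s at D = 100 ± 20 pc.
print(bolometric_luminosity(2.0e-9, 1.0e-10, 100.0, 20.0))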

Estimate of parameter uncertainties

VOSA uses a grid of models to compare the observed photometry with the theoretical one. That means that only those values of the parameters (Teff, logg, metallicity...) that are already computed in the grid can be the result of the fit. For instance, if the grid is calculated for Teff=1000,2000,3000 K, the best fit temperature can be 2000K, but never 2250K (because there is no 2250K model in the grid to be compared with the observations). But this only means that the model with 2000K reproduces the observed SED better than the other models in the grid. And it could happen that, if it were in the grid, the model with 2250K were a better fit.

Thus, by default, VOSA estimates the error in the parameters as half the grid step, around the best fit value, for each parameter. For instance, if we obtain a best fit temperature Teff=3750K for the Kurucz model, and given that the Kurucz grid is calculated at 3500,3750,4000...K, the grid step around 3750 is 250K and the estimated error in Teff will be 125K.

Statistical approach

In order to obtain parameter uncertainties with a more statistical meaning, VOSA offers the option to "Estimate fit parameter uncertainties using a statistical approach". If you mark this option the fit process will be different.

Taking the observed SED as the starting point, VOSA generates 100 virtual SEDs, introducing Gaussian random noise for each point (proportional to the observational error). In the case that a point is marked as "upper limit", a random flux will be generated between 0 and ${\rm F}_{uplim}$ following a uniform random distribution.

VOSA obtains the best fit for the 100 virtual SEDs with noise and computes the statistics of the distribution of values obtained for each parameter. The standard deviation of this distribution will be reported as the uncertainty for the parameter if its value is larger than half the grid step for this parameter. Otherwise, half the grid step will be reported as the uncertainty.

Although this means making 101 fit calculations for each object (instead of only one), the processing time is not multiplied by 101. It only takes a little longer (around twice as long).
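
The sketch below illustrates, with hypothetical data and a hypothetical fitting routine, how such virtual SEDs can be generated and how the resulting spread would be compared with half the grid step:

    import numpy as np

    rng = np.random.default_rng(42)

    def virtual_seds(flux, err, is_uplim, n=100):
        # Gaussian noise (proportional to the observational error) for normal points,
        # uniform flux in [0, F_uplim] for points flagged as upper limits.
        flux, err = np.asarray(flux), np.asarray(err)
        gauss = rng.normal(flux, err, size=(n, flux.size))
        uniform = rng.uniform(0.0, flux, size=(n, flux.size))
        return np.where(is_uplim, uniform, gauss)

    # Hypothetical SED: three normal points and one upper limit
    flux = [1.2e-13, 1.5e-13, 1.3e-13, 8.0e-14]
    err = [1.0e-14, 1.2e-14, 1.1e-14, 0.0]
    virt = virtual_seds(flux, err, is_uplim=np.array([False, False, False, True]))

    # best_fit_teff would be the usual chi-square fit applied to each virtual SED
    # (hypothetical routine); the reported uncertainty is the larger of the standard
    # deviation and half the grid step:
    # teffs = np.array([best_fit_teff(sed) for sed in virt])
    # sigma_teff = max(teffs.std(), 0.5 * grid_step)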

Estimate of stellar radius and mass

We can use the value of Md and the distance $D$ to estimate the stellar radius: $$M_d = \left(\frac{R}{D}\right)^2 $$ $$R_1 \equiv \sqrt{D^2 M_d} $$ $$\Delta R_1 = R_1 \frac{\Delta D}{D} $$

But we can estimate the radius also using $T_{eff}$ and $L_{bol}$. $$L_{bol} = 4\pi\sigma_{SB} R^2 T_{eff}^4$$ $$R_2 = \sqrt{L_{bol}/(4\pi\sigma_{SB} T_{eff}^4)}$$ $$\Delta R_2 = R_2 \sqrt{\frac{1}{4} \left(\frac{\Delta L_{bol}}{L_{bol}}\right)^2 + 4 \left(\frac{\Delta T_{eff}}{T_{eff}}\right)^2}$$

We can also estimate the mass using $logg$ and $R$: $$ g = \frac{G_{Nw}M}{R^2} $$ $$ M = 10^{logg} R^2 / G_{Nw} $$

In this formula we can use either $R_1$ or $R_2$ to obtain two different estimates of the mass: $$ M_1 = 10^{logg} R_1^2 / G_{Nw} $$ $$\Delta M_1 = M_1 \sqrt{\ln(10)^2 (\Delta logg)^2 + 4 \left(\frac{\Delta R_1}{R_1}\right)^2} $$ $$ M_2 = 10^{logg} R_2^2 / G_{Nw} $$ $$\Delta M_2 = M_2 \sqrt{\ln(10)^2 (\Delta logg)^2 + 4 \left(\frac{\Delta R_2}{R_2}\right)^2} $$
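
The sketch below implements these expressions in cgs units with purely hypothetical input values (Md, distance, Lbol, Teff and logg with their errors):

    import numpy as np

    PC_CM = 3.0857e18     # parsec in cm
    R_SUN = 6.957e10      # solar radius in cm
    M_SUN = 1.989e33      # solar mass in g
    L_SUN = 3.828e33      # solar luminosity in erg/s
    SIGMA_SB = 5.6704e-5  # Stefan-Boltzmann constant (cgs)
    G_NW = 6.674e-8       # Newtonian gravitational constant (cgs)

    def radius_mass(md, d_pc, dd_pc, lbol_sun, dlbol_sun, teff, dteff, logg, dlogg):
        d = d_pc * PC_CM
        # R1 from the dilution factor Md = (R/D)^2
        r1 = d * np.sqrt(md)
        dr1 = r1 * dd_pc / d_pc
        # R2 from Lbol = 4 pi sigma_SB R^2 Teff^4
        lbol = lbol_sun * L_SUN
        r2 = np.sqrt(lbol / (4.0 * np.pi * SIGMA_SB * teff ** 4))
        dr2 = r2 * np.sqrt(0.25 * (dlbol_sun / lbol_sun) ** 2 + 4.0 * (dteff / teff) ** 2)
        # Masses from g = G M / R^2, using either radius estimate
        m1 = 10.0 ** logg * r1 ** 2 / G_NW
        dm1 = m1 * np.sqrt(np.log(10.0) ** 2 * dlogg ** 2 + 4.0 * (dr1 / r1) ** 2)
        m2 = 10.0 ** logg * r2 ** 2 / G_NW
        dm2 = m2 * np.sqrt(np.log(10.0) ** 2 * dlogg ** 2 + 4.0 * (dr2 / r2) ** 2)
        return (r1 / R_SUN, dr1 / R_SUN), (r2 / R_SUN, dr2 / R_SUN), \
               (m1 / M_SUN, dm1 / M_SUN), (m2 / M_SUN, dm2 / M_SUN)

    # Hypothetical values: Md = 2e-21, D = 120 +/- 10 pc, Lbol = 0.5 +/- 0.05 Lsun,
    # Teff = 4000 +/- 125 K, logg = 4.5 +/- 0.5
    print(radius_mass(2.0e-21, 120.0, 10.0, 0.5, 0.05, 4000.0, 125.0, 4.5, 0.5))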

WARNINGS.

Take into account that the values obtained, both for the mass and the radius, will only make sense if the value of the distance is realistic. What's more, these values will be more reliable when Fobs/Ftot is closer to 1. Otherwise, the obtained values may not be realistic.

On the other hand, given that the uncertainty of $logg$ given by the models is typically large, and that the SED analysis is not very sensitive to the value of logg, take into account that the value of the mass obtained using logg could be far from the real one.

Parameter polynomial fit

When you go to see all the fits for a particular object you will also see a section named "Parameter polynomial fit".

For each fit parameter, VOSA will take into account all the values obtained in the best fits and try to fit a second-degree polynomial to the (param, chi2) points.

If this polynomial has a minimum and this minimum lies in the range between the minimum and maximum values obtained for this parameter, VOSA will offer this value as a possible "best fit value" for this parameter, trying to go beyond the constraints imposed by the discrete nature of the model grid.

In some cases a minimum is found but it is outside the range given by the parameter values obtained in the fit. In this case VOSA does not recommend the use of this value.

It can also happen that the parabola does not have a minimum but a maximum. Of course, the value of the parameter at the maximum does not provide better information.
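
This parabola check can be summarised with a short sketch (the (Teff, chi2) values below are hypothetical):

    import numpy as np

    def polynomial_suggestion(param_values, chi2_values):
        # Fit chi2 = a*p^2 + b*p + c to the (param, chi2) points of the best fits
        a, b, c = np.polyfit(param_values, chi2_values, 2)
        if a <= 0:
            return None                 # the parabola has a maximum: not useful
        vertex = -b / (2.0 * a)
        if min(param_values) <= vertex <= max(param_values):
            return vertex               # suggested "best fit value"
        return None                     # minimum outside the explored range

    teff = [3500, 3750, 4000, 4250, 4500]
    chi2 = [12.0, 4.5, 3.2, 5.1, 11.0]
    print(polynomial_suggestion(teff, chi2))   # a value close to 4000 K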


Partial refit

After you have finished the fit process, it is sometimes useful to make small changes in the SED for some objects and repeat the fit. But, when your file contains many objects, it is tedious and slow to repeat the fit process for all of them when only a few SEDs have changed.

VOSA keeps track of which SEDs have changed in a significant way after the fit, so that the current fit results may no longer be valid for those objects (for instance, if you edit the SED, add/remove some points, search for VO photometry, add VO photometry, change where the excess starts, change the value of the extinction, etc.).

When you go back to the chi2 fit tab, VOSA will show you a message saying that the SED for some objects has been changed after the fit was finished and offers you the option of repeating the fit only for those objects. If you click on the "Repeat the fit process" button, the fit will be done again with the same options as before (model choice, parameter range choices, etc.) but only for the objects that have changed. The fit results for the other objects will remain the same.

A particular case is when you choose to refine the excess, setting the start of the IR excess at the point suggested by the model fit. When you do this, the fit is repeated only for the objects where the excess has changed (the results for the other objects will remain unchanged).

Example

When we access the Chi-2: Model Fit tab we see a form with the available theoretical models, so that we can choose which ones we want to use in the fit. In this case we decide to try the Kurucz and BT-Settl-CIFIST models. Thus, we mark them and click on the "Next: Select model params" button.

For each of the models, we see a form with the parameters for each model and the available range of values for each of them. We choose the ranges that best fit our case and then click the "Next: Make the fit" button.

The fit process is performed asynchronously so that you don't need to stay in front of the computer waiting for the results. You can close your browser and come back later. If the fit is not finished, VOSA will give you some estimate of the status of the operation and the remaining time.

When the process finishes VOSA shows a list with the best fit model (that is, the one with the smallest value of the reduced chi-2) for each object. Optionally you can also see the best fit plots, with the observed SED and the corresponding synthetic photometry for the best fit model.

If we click on the LOri002 object name in the table we can see the 5 best fits for each collection of models. And clicking on the "See" link on the right of each fit, we can see its details.

Sometimes the fit with the best Χ2 is not the one that the user considers the best, maybe for physical reasons, taking into account the obtained values of the parameters, or maybe because one prefers a model that fits some of the points better even with a larger Χ2... Whatever the reason, we have the option of marking the model that we prefer as Best. In order to do that we just click on the Best link at the right of the preferred fit. In this case, just as an example, we choose the second BT-Settl fit for LOri002.

And, when we go back to the best fit list, we see that the one for LOri002 has changed.

For some objects, for instance LOri10, we see a vertical dashed line in the plot at the point where the observed fluxes start being clearly above the model ones. VOSA marks it this way so that you are aware that infrared excess could start here.

If we click in the "Refine excess" button, we can see the list of objects where VOSA detects a possible infrared excess starting at a point different from the one previously detected.

If we click the "Yes, set new IR excesses and delete fit results" button, the start of infrared excess will be flagged at the point coming from the fit comparison and these fit results will be deleted. Then we could restart the fit taking into account the new infrared excesses.

We also have the option of deleting these fit results so that we can restart the process with different options. And we do so by clicking on the "Delete" button.

VOSA asks us for confirmation, we confirm the decision, and we see the initial form again.

We select the same models again but we also mark the two extra options at the bottom.

When the fit process ends, we see two main differences in the results:

 

Bayes analysis

While the chi-square fit gives the best fit model for each object, the Bayesian analysis provides the projected probability distribution functions (PDFs) for each parameter of the grid of synthetic spectra.

The procedure followed by VOSA to perform a Bayesian analysis of the model fit is as follows:

In the case that you have decided to consider Av as a fit parameter (giving a range of Av values to try), the probability distribution for Av is calculated too.
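
As an illustration only, the sketch below shows one common way of turning the chi-square values of all the grid models into per-parameter probability distributions; weighting each model by exp(-chi2/2) is an assumption made here for the example and is not necessarily the exact expression used by VOSA.

    import numpy as np
    from collections import defaultdict

    def parameter_pdfs(fits):
        # fits: list of (params_dict, chi2) pairs, one per model in the grid.
        # Assumed weighting: exp(-chi2/2), normalized over the whole grid; the weight
        # of each model is then accumulated on each of its parameter values.
        weights = np.exp(-0.5 * np.array([chi2 for _, chi2 in fits]))
        weights /= weights.sum()
        pdfs = defaultdict(lambda: defaultdict(float))
        for (params, _), w in zip(fits, weights):
            for name, value in params.items():
                pdfs[name][value] += w
        return {name: dict(values) for name, values in pdfs.items()}

    # Hypothetical grid of three models
    fits = [({"teff": 3500, "logg": 4.5}, 25.0),
            ({"teff": 3750, "logg": 4.5}, 4.0),
            ({"teff": 4000, "logg": 5.0}, 6.0)]
    print(parameter_pdfs(fits))   # e.g. P(teff=3750) comes out as the largest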

Example

We enter the "Model Bayes Analysis" tab and we see a form with the available theoretical models, so that we can choose what ones we want to use in the fit. In this case we decide to try Kurucz and BT-Settl-CIFIST models. Thus, we mark them and click in the "Next: Select model params" button.

For each of the models, we see a form with the parameters for each model and the available range of values for each of them. In this case we are going to try the full range of parameters, so we leave the form as it is and then click the "Next: Make the fit" button.

In this case, VOSA will have to calculate the chi-square fits and then use them to perform the analysis. The fit and analysis process is performed asynchronously so that you don't need to stay in front of the computer waiting for the search results. You can close your browser and come back later. If the process is not finished, VOSA will give you some estimation of the status of the operation and the remaining time.

When the process finishes VOSA shows us a list with, for each object and each model collection, the most probable value for each parameter and its probability.

And if we click in one of the object names, we can see all the details of the analysis for this object.

We see first the probability of each value of each model parameter (only those values with a non-negligible probability are shown).

And then some simple plots of these probability distributions.

 

Template Fit

On some occasions, the limited understanding of the physical processes and/or of the nature of some astronomical objects makes the theoretical models fail to reproduce the real observations with good accuracy. In these cases, the comparison with benchmark objects, whose properties can be accurately determined without the use of models, is largely preferred.

VOSA offers the possibility of performing both the Χ2 fitting and the Bayes analysis with standard objects. Four template collections covering the M, L and T spectral types are now available: Chiu et al. (2006); Golimowski et al. (2004); Knapp et al. (2004); Kirkpatrick et al. (1991, 1999), McLean & Kirkpatrick; and the SpeX Library. Take a look at the corresponding Credits Page for more information about these collections.

Take into account that these templates are usually observed spectra of well known objects, which means that the wavelength coverage of these spectra is not as wide as it is for most theoretical models. This implies that it is not possible to calculate the synthetic photometry for all the filters, but only for the ones that are fully covered by the observed spectrum. In practice this means that only a few of the points in the observed SED will be used when comparing with templates. Thus, in some cases you will receive a "Not enough points to make a fit" message (even if the SED has quite a few points). In any case, the number of points used for the fit will be shown in the results table and you can see which points have actually been fitted in the plots.

This is the main reason why, for template fitting, the AV extinction parameter is NOT considered a fit parameter. Having extra parameters would imply that fewer objects could be fitted. The value of AV given in the input file (or specified in the objects:extinction tab) will be used.

An example

We enter the Chi-2 Fit tab and then select the 'Template Fit' option. In this case we select all template collections and mark the 'include spectrum in plots' option to get nicer plots (the template spectra are not as big as theoretical spectra usually are, so using this option doesn't make the fit process much slower).

The fit process is performed asynchronously so that you don't need to stay in front of the computer waiting for the results. You can close your browser and come back later. If the fit is not finished, VOSA will give you some estimation of the status of the operation and the remaining time.

When the process finishes you can see a best fit results table with the spectral type that best fits the observed SED and, optionally, the corresponding plots.

In the plots you can see that only a few points in the SED are used for the fit (only 3 points for the Chiu et al. collection).

If you click on one of the object names you can see the 5 best fits for each collection. If you click on the "See" link you can see the corresponding plot. As you can see, for the Spex Prism collection, we are able to fit 4 points (instead of the 3 that are fitted with the Chiu et al. collection).

You have the option of choosing one of these fits as the best one if you wish, just by clicking on the "Best" link on its right.

We see that only a few of the points in the SED are used for the fit. And in some cases there are not enough points.

Bayes analysis

We can also make the Bayes analysis using templates to get an estimation of the probability of each spectral type. Note that the probability for the AV value will always be 100% (because it is not actually fitted).

 

Template Bayesian analysis

We can also make the Bayes analysis using templates to get an estimation of the probability of each spectral type.

Take into account that, as it happened in the Template fit, the AV extinction parameter is NOT considered a fit parameter.

For more details about the Bayesian approach, please read the section about Bayes analysis.

Example

We enter the "Template Bayes Analysis" tab and we see a form with the available template collections, so that we can choose what ones we want to use for the analysis. In this case we decide to try all of them and click in the "Make the fit" button.

The fit and analysis process is performed asynchronously so that you don't need to stay in front of the computer waiting for the search results. You can close your browser and come back later. If the process is not finished, VOSA will give you some estimation of the status of the operation and the remaining time.

When the process finishes we can see the list of objects and the spectral type with the highest probability for those collections where there were enough points to make the analysis.

Remember that Av is not considered a fit parameter for the analysis, so its value is fixed and its probability is always 100%.

If we click in one object name, for instance, LOri001, we can see the probability of all the spectral types for each collection.

 

Binary Fit

The idea of "binary fit" appears when we face the case where an observed SED cannot be fitted well with a single theoretical model (that is, the flux comming from a single object) but it seems that it could be fitted well adding a second model (the flux comming from a second object).

A typical case could be a SED with a clear infrared excess where we could have two clear contributions: the flux from a somewhat hotter object for the main part of the SED (in orange in the plot), and the flux coming from a colder object (cold star, dust...) for the infrared excess (in blue in the plot).

Binary Fit procedure

In other words, we want to represent the observed flux as a linear combination of two different models (the fluxes coming from two different objects), that is: $$ {\rm F}_{\rm obs}(x) \sim A \ {\rm F}_{\rm a}(x) + B \ {\rm F}_{\rm b}(x) $$

We know the observed fluxes ${\rm F}_{\rm obs}(x)$, and we know which theoretical grids to use for objects a and b (these are inputs from the user). We need to find the best parameters for each theoretical model and both dilution factors $A$ and $B$.

The method to do this, estimating the model parameters as well as A and B, is to try to minimize $\chi^2$, defined as: $$\chi^2 = \sum_x \left(\frac{A \ {\rm F}_{\rm a}(x) + B \ {\rm F}_{\rm b}(x) - {\rm F}_{\rm obs}(x)}{\Delta{\rm F}_{\rm obs}(x)}\right)^2 $$

Most of the explanations given in the chi-square model fit section are also valid for the binary fit. But there are very important differences.

We will focus here mostly on those aspects that are specific to the binary fit.

In the typical single-model chi-square fit, VOSA compares the observed SED with the synthetic photometry of all the models in the grid, calculates the best $M_d$ for each case and chooses the model for which chi-square is minimal. And this process, which explores the whole model parameter space, is quite deterministic, since $M_d$ is not estimated or fitted but calculated for each case as one of the fit results.

But this is impossible for the binary fit. Here we can calculate one of the two dilution factors ($A$ or $B$) but we need to estimate the other one in a different way. And there is no deterministic way in which we can calculate all the parameters. For instance, if we rewrite our equation above as: $$ {\rm F}_{\rm obs}(x) \sim A \ \left( \ {\rm F}_{\rm a}(x) + R_{\rm f} \ {\rm F}_{\rm b}(x) \ \right) $$

we can make a loop through the whole (a and b) model parameter space, try/estimate a value of $R_{\rm f}$, and then calculate the corresponding value of $A$. And we will have the best fit for that "estimation",

but we will never be sure that we have chosen the best possible value of $R_{\rm f}$.

Thus, it is very important to remark from the beginning that we will not be able to be sure that we have found a global minimum of $\chi^2$ (as we could in the single-model fit). We can try to follow paths of decreasing $\chi^2$ and, maybe, reach local minima. Thus, we can get good fits, but without being sure that they correspond to the absolute minimum.

In most cases, the success of the binary fit process relies on a good estimation of $R_{\rm f}$ (then, a loop of values around that estimation can help to refine the results).

We are going to explain briefly the algorithm used by VOSA to estimate the best binary fit parameters.

VOSA Binary Fit algorithm

There are several possible ways to implement this process. After many tests we have chosen the one explained here as a compromise between the time and resources used by the process and the accuracy of the results. You can find an analysis of the quality of the results obtained in the Binary Fit Quality section. And, please, remember not to use the binary fit as a black box that you can trust with your eyes closed.

Remember that the main equation can be written as: $$ {\rm F}_{\rm obs}(x) \sim A \ \left( \ {\rm F}_{\rm a}(x) + R_{\rm f} \ {\rm F}_{\rm b}(x) \ \right) $$

1.- Loop over the model parameter space.

We try all the possible parameter values for both models (a and b). For instance, just to simplify the notation, we can imagine that the only parameter of these grids is the temperature and, thus, we try all the possible pairs (Teff$_{\rm a}$, Teff$_{\rm b}$).

For each of these pairs we need to estimate a value of $R_{\rm f}$ (and $A$) that we think will make sense.

To do this first estimation we need two equations to obtain values for A and B (or, actually, A and $R_{\rm f}$).

To get these two equations we take two different sets of points in the observed SED. For instance, one of them starting at the shortest wavelengths and the other, on the contrary, starting at the longest wavelengths.

There are different approaches that we could use here, depending on the size of the sets and other conditions. But we have chosen one of the simplest ones:

We can thus apply the corresponding equations and, for each point in the model parameter space, get a first estimation of ($A$, $R_{\rm f}$) as the values that fit sets 1 and 2 well.

(teff$_{\rm a}$, teff$_{\rm b}$...)$_i$ $\Rightarrow$ (A,$R_{\rm f,estim}$)$_i$
(teff$_{\rm a}$, teff$_{\rm b}$...)$_j$ $\Rightarrow$ (A,$R_{\rm f,estim}$)$_j$
...

2.- Loop around the first estimation.

After the first estimation, for each point in the model parameter space (teff$_{\rm a}$, teff$_{\rm b}$...)$_i$, we define an interval of possible values of $R_{\rm f}$ ($R_{\rm f,min}$...$R_{\rm f,estim}$...$R_{\rm f,max}$) around this first estimate and we check whether the global value of $\chi^2$ (using the full combination of models over the complete observed SED) improves while looping over that interval. In this way, for each pair of models we obtain a series of:

(teff$_{\rm a}$, teff$_{\rm b}$...)$_i$ $\Rightarrow$ (A,$R_{\rm f,best}$)$_i$ $\Rightarrow$ $\chi^2_i$
(teff$_{\rm a}$, teff$_{\rm b}$...)$_j$ $\Rightarrow$ (A,$R_{\rm f,best}$)$_j$ $\Rightarrow$ $\chi^2_j$
...

And we finally select the one that gives the smallest value of $\chi^2$, together with the corresponding values of the model parameters, $A$ and $R_{\rm f}$, that lead to this best fit.

(teff$_{\rm a}$, teff$_{\rm b}$...)$_{Best}$ $\Rightarrow$ (A,$R_{\rm f,Best}$)$_{Best}$ $\Rightarrow$ $\chi^2_{Best}$

These are the values that will be returned by the binary fit process.
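
The sketch below summarises this two-step search in Python. It is only an outline: the layout of the model grids is hypothetical (a dictionary mapping a single parameter, e.g. Teff, to synthetic photometry on the same filters as the observed SED), and the equations used for the first estimation of $A$ and $R_{\rm f}$ are an assumption (a simple 2x2 system built from the two sets of points), not necessarily the exact ones used by VOSA.

    import numpy as np

    def binary_fit(f_obs, df_obs, grid_a, grid_b, n_refine=11):
        set1, set2 = slice(0, 3), slice(-3, None)   # short- and long-wavelength points
        best_chi2, best = np.inf, None
        for pa, fa in grid_a.items():
            for pb, fb in grid_b.items():
                # 1) First estimation: summing F_obs ~ A*Fa + (A*Rf)*Fb over each set
                #    gives a 2x2 linear system in (A, A*Rf).
                m = np.array([[fa[set1].sum(), fb[set1].sum()],
                              [fa[set2].sum(), fb[set2].sum()]])
                y = np.array([f_obs[set1].sum(), f_obs[set2].sum()])
                try:
                    a0, arf0 = np.linalg.solve(m, y)
                except np.linalg.LinAlgError:
                    continue
                if a0 <= 0 or arf0 <= 0:
                    continue                         # unphysical first estimation
                rf0 = arf0 / a0
                # 2) Loop around the first estimation of Rf; for each Rf the best A is
                #    calculated analytically and the global chi2 is evaluated.
                for rf in np.geomspace(0.5 * rf0, 2.0 * rf0, n_refine):
                    comb = fa + rf * fb
                    a = np.sum(comb * f_obs / df_obs ** 2) / np.sum(comb ** 2 / df_obs ** 2)
                    chi2 = np.sum(((a * comb - f_obs) / df_obs) ** 2)
                    if chi2 < best_chi2:
                        best_chi2, best = chi2, {"pa": pa, "pb": pb, "A": a, "Rf": rf}
        return best_chi2, best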

 

HR diagram

VOSA offers the option of estimating values for the age and the mass of the objects. In order to do that, the (Teff, log(L)) values obtained from the chi-square fit are used as starting points for interpolating in collections of theoretical isochrones and evolutionary tracks obtained from the VO. Then, an HR diagram is displayed showing the data points, isochrones and evolutionary tracks.

For each object, only the theoretical isochrones and evolutionary tracks most appropriate for the model that best fits the observed photometry are used in the process. For instance, in the case where this model is "Kurucz", the Siess isochrones are used.

In the case that several collections are used (because we use one for some objects and another one for others), an HR plot will be generated for each collection, showing the isochrones, the tracks and the points corresponding to the objects analysed using that collection.

You can play with the plots, decide to plot more or less information, locate the objects in it, etc.

Error estimation

In order to make an error estimation, the errors coming from the chi-square fit for Teff and LogL are used to generate a small grid with 9 points.

For each of these 9 points we make the interpolation as explained below.

The final values for (Age,Mass) will be the ones obtained for the point (Teff,LogL). But in some cases, the interpolated value of Age or Mass is different for some of the other 8 points. Thus, in the results table we show the minimum and maximum value obtained for each parameter when using any of the 9 points in this small grid.
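
A short sketch of this error grid follows. The 3x3 layout (combining Teff and Teff±ΔTeff with LogL and LogL±ΔLogL) is an assumption about how the 9 points are built, and `interpolate` stands for a hypothetical routine implementing the interpolation described in the next subsection.

    def age_mass_range(teff, dteff, logl, dlogl, interpolate):
        # interpolate(teff, logl) -> (age, mass), or (None, None) if not possible
        grid = [(t, l)
                for t in (teff - dteff, teff, teff + dteff)
                for l in (logl - dlogl, logl, logl + dlogl)]
        results = [interpolate(t, l) for t, l in grid]
        ages = [a for a, _ in results if a is not None]
        masses = [m for _, m in results if m is not None]
        age0, mass0 = interpolate(teff, logl)    # central values are the reported ones
        age_range = (age0, min(ages), max(ages)) if ages else None
        mass_range = (mass0, min(masses), max(masses)) if masses else None
        return age_range, mass_range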

Interpolation

(Below, all the explanations are given for the case of obtaining an estimation of the object age interpolating on isochrones. Everything is valid also for the case of obtaining an estimation of the mass interpolating on evolutionary tracks.)

The interpolation between isochrones involves finding the two isochrones closest to the (Teff,log(L)) point (one on each side of the point), calculating the distance from the point to each of the curves and then evaluating a weighted average of the values of t for each isochrone.

(See Fig. 1 and Fig. 2.)   $t=\frac{t_2 D_1+t_1 D_2}{D_1+D_2}$

In order to do this it is necessary to design an algorithm able to estimate the distance from a point to a curve defined by discrete points (note that we do not have an analytical curve but just a series of points that are assumed to define a curve).

Distance from a point to a curve

1.- The main method that we use to estimate the distance from the point to an isochrone is as follows:

(Fig. 2)

2.- In some cases, it is not possible to use the above method because none of the projections lie inside the interval between the two points that define the segment.

(Fig. 2)

When that is the case, we can estimate the distance to the curve as the distance D1 from P to the closest point P1 in the curve.

Note that we consider this a worse approximation in general. In fact, it is very likely to be a bad one when P1 is the first or last point in the curve.

(Fig. 2)

That is why this method will only be used if the first one fails and if the closest point P1 is not the first or last point in the curve.

Interpolated value for the age

If we have been able to find a curve on each side of the point P and the distance from that point to each curve, we can use the inverse of the distance as weights: $$t=\frac{\frac{1}{D_1}t_1+\frac{1}{D_2}t_2}{\frac{1}{D_1}+\frac{1}{D_2}}=\frac{t_2 D_1+t_1 D_2}{D_1+D_2}$$

In some cases, we are able to determine only the distance to one curve, but we know that there exists an isochrone on each side of the point. If that happens we just show a range of values for the age, using the ones corresponding to each isochrone as lower and upper limits.

Finally, if the point lies outside the area covered by the isochrones, we do not even try to estimate a value for the age or the mass of the object.
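
The geometric part of this procedure can be sketched as follows (hypothetical curves given as arrays of (Teff, logL) points; the fallback behaviour follows the rules described above):

    import numpy as np

    def distance_to_curve(p, curve):
        # curve: (N, 2) array of points assumed to define the isochrone.
        # Main method: project p on every segment and keep the smallest distance among
        # the projections that fall inside their segment. If none does, fall back to the
        # distance to the closest curve point, unless it is the first or last one.
        p, curve = np.asarray(p, float), np.asarray(curve, float)
        best = np.inf
        for q1, q2 in zip(curve[:-1], curve[1:]):
            seg = q2 - q1
            t = np.dot(p - q1, seg) / np.dot(seg, seg)
            if 0.0 <= t <= 1.0:
                best = min(best, np.linalg.norm(p - (q1 + t * seg)))
        if np.isfinite(best):
            return best
        i = int(np.argmin(np.linalg.norm(curve - p, axis=1)))
        return None if i in (0, len(curve) - 1) else float(np.linalg.norm(curve[i] - p))

    def interpolated_age(p, iso1, t1, iso2, t2):
        # Weighted average of the two isochrone ages, using 1/distance as weights
        d1, d2 = distance_to_curve(p, iso1), distance_to_curve(p, iso2)
        if d1 is None or d2 is None:
            return None            # only a range [min(t1,t2), max(t1,t2)] can be given
        return (t2 * d1 + t1 * d2) / (d1 + d2)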


Flags for interpolated values

Whenever we are not able to find a value for the age or the mass of an object, or it has been determined using a worse approximation than the one that we consider the best (see above), a flag is shown to the right of the value.

These are the possible flags and their meanings:

[1] The distance to one of the two closest curves has been estimated as the distance to the closest point in that curve
[2] The distance to both of the closest curves has been estimated as the distance to the closest point in each curve
[3] Only a range of values can be estimated
[4] The point lies outside the area covered by the isochrones
[5] No estimation has been possible

Example

We have made a chi-square model fit for a set of objects. The best fit model for all the objects was BT-Settl-CIFIST. Thus, when we enter the "HR diagram" tab we see the collection of isochrones and tracks that is going to be used as default for all the objects: BHAC15.

But we can click on the "click to add more options" link to change the default behaviour.

When we click the link, a new form opens that allows us to choose different isochrone/track collections depending on the Teff and Lbol values of each object. For instance, in this case we configure:

Take into account that if some object meets several conditions (for instance, Teff<=3800K and Lbol >= 0.75), priority will be assigned from bottom to top, the default being the last choice (that is, in this case, Parsec 1.2 will be used).

When we click the "Continue" button, we will see the available ranges of values (age and mass) available for each of the choosen collections. We could play with the ranges of parameters, restricting the values of the age and mass to be considered in the analysis. But we prefer to keep the full range and click the "Make the HR Diagram" button.

The interpolation process, to obtain the best values (and ranges) for the age and mass of each object, is performed asynchronously so that you don't need to stay in front of the computer waiting for the results. You can close your browser and come back later. If the process is not finished, VOSA will give you some estimation of the status of the operation and the remaining time.

When the process is finished, you can see the list of objects with the interpolation results, and three HR plots, one for each collection of isochrones and tracks.

If you click on any graph, VOSA will locate the object closest to the click point and will show you its properties.

If, instead, you click on one object name in the list, VOSA will locate that object in the corresponding graph.

You can also play with the plots. There are options to zoom to the range of the objects or to the range of the models. Other options allow you to define the exact range of each coordinate. And you can also decide which isochrones or tracks you want to display.

 

Upper Limits

In some cases, there are points in the SED marked as "upper limit" (because the VO catalogues label them as such, or because the user has marked the corresponding option in the 'edit SED' tab).

These points are displayed in the SED plots with a triangle instead of a dot.

Photometric points marked as "upper limit" are taken into account for the chi2 and bayes analysis but in a different way than the other points.

To perform the corresponding fit an upper limit with flux ${\rm F}_{uplim}$ is included in the SED to fit as: $${\rm Flx} = 0 $$ $$\Delta{\rm Flx} = {\rm F}_{uplim}$$
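
A minimal sketch of how an upper limit enters the chi-square sum under this convention (hypothetical values; the quoted flux of the flagged point is taken here as ${\rm F}_{uplim}$):

    import numpy as np

    def chi2_with_upper_limits(f_obs, df_obs, is_uplim, f_model):
        # Points flagged as upper limits enter with flux 0 and error F_uplim
        flux = np.where(is_uplim, 0.0, f_obs)
        err = np.where(is_uplim, f_obs, df_obs)
        return np.sum(((flux - f_model) / err) ** 2)

    # Hypothetical SED: the last point is an upper limit
    f_obs = np.array([1.2e-13, 1.5e-13, 8.0e-14])
    df_obs = np.array([1.0e-14, 1.2e-14, 0.0])
    is_up = np.array([False, False, True])
    f_model = np.array([1.1e-13, 1.4e-13, 3.0e-14])
    print(chi2_with_upper_limits(f_obs, df_obs, is_up, f_model))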

When the chi2 model fit is performed with the option of estimating parameter uncertainties using a statistical approach, a 100-iteration Monte Carlo simulation is done. In this case, 100 different virtual SEDs are generated, introducing Gaussian random noise for each photometric point (proportional to the observational error). But for the upper limits, in the virtual SEDs a random flux will be generated between 0 and ${\rm F}_{uplim}$ following a uniform random distribution.

In the case that the user does not want to treat upper limits in this way, there is the option of performing the chi2 fit ignoring upper limits. In that case, these points will not be taken into account at all during the process.

When you visualize the individual fit results, you will see which points are upper limits and whether they have been used for the fit or not.

 

Statistics

Definitions

We have obtained a set of N different values for the quantity X: $\{X_i\}$.

Definitions for a grouped distribution

The values can be grouped in different bins, so that we have a set of ordered pairs {value,frequency}. $$ \{X_i,Freq(X_i)\}$$ $${\rm with } \ X_i > X_{i-1}$$