### Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

### Messages - Constanze Kalcher

Pages: 1 ... 3 4 [5] 6 7 ... 9
121
##### Support Forum / Re: Calculating RDF for molecules, not atoms
« on: January 11, 2019, 09:14:26 AM »
Dear Sahar,

your molecule IDs don't need to be consecutive, but they have to be unambiguous for the above script to work. Reassigning molecule IDs should be easy, though.
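Just as an illustration (not using the OVITO API), non-consecutive molecule IDs could be relabeled to consecutive ones with NumPy like this; the ID values below are made up:

```python
import numpy as np

# Hypothetical molecule IDs: non-consecutive, but unambiguous.
mol_ids = np.array([7, 7, 7, 42, 42, 100, 100, 100])

# Relabel to consecutive IDs 1, 2, 3, ... while keeping the grouping intact.
_, new_ids = np.unique(mol_ids, return_inverse=True)
new_ids += 1  # start counting at 1, as LAMMPS does

print(new_ids)  # [1 1 1 2 2 3 3 3]
```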

-Constanze

122
##### Support Forum / Re: remove atom from spherical system
« on: January 10, 2019, 09:58:54 AM »
Hi,

this will select the atoms inside a sphere of radius 72 centered at [74.516, 74.516, 74.516]:

```
(Position.X - 74.516)^2 + (Position.Y - 74.516)^2 + (Position.Z - 74.516)^2 <= 72^2
```
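For reference, the same selection mask could be built in plain NumPy; the two test positions here are made up:

```python
import numpy as np

center = np.array([74.516, 74.516, 74.516])
radius = 72.0

# Synthetic positions: one atom inside the sphere, one outside.
pos = np.array([[74.516, 74.516, 74.516],
                [74.516, 74.516, 150.0]])

# True for every atom whose squared distance to the center is <= radius^2.
mask = np.sum((pos - center)**2, axis=1) <= radius**2
print(mask)  # [ True False]
```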
-Constanze

123
##### Support Forum / Re: Calculating RDF for molecules, not atoms
« on: January 09, 2019, 04:02:57 PM »
Dear Sahar,

here's an idea for how you could perform the calculation. Each atom has a particle property "Molecule Identifier", so you know which molecule it belongs to. In order to compute RDFs for your molecules, you first need to calculate the center of mass of each molecule. This can be done using a Python script modifier in the graphical user interface:

```python
from ovito.data import *
import numpy as np

def modify(frame, data):
    num_particles = data.particles.count
    pos = data.particles_["Position"]

    # Part 1
    # Create a new particle property that contains the atomic masses.
    # Edit the following list:
    my_masses = [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]

    mass_property = data.particles_.create_property('Mass')
    p_types = data.particles["Particle Type"]
    for atom_index in range(data.particles.count):
        with mass_property:
            mass_property[atom_index] = my_masses[p_types[atom_index] - 1]

    # Part 2
    # Calculate the center of mass of every molecule and save it as a new particle property.
    mol_IDs = data.particles_['Molecule Identifier']
    mol_com_property = data.particles_.create_property('Molecule COM', dtype=float, components=3)

    # Loop over all Molecule Identifiers.
    for current_molecule in set(mol_IDs):
        # Find all the atoms that belong to that molecule and average their positions.
        with mol_com_property:
            mol_com_property[(mol_IDs == current_molecule)] = np.average(
                pos[(mol_IDs == current_molecule)],
                weights=mass_property[(mol_IDs == current_molecule)],
                axis=0)
```
This custom modifier has several parts. In part 1 you create a particle property "Mass" and fill in the corresponding atomic masses, since they are (currently) not automatically read in when importing a dump file. In this example I used a look-up list called "my_masses" that contains the masses of all 11 particle types (they're all 1 in this test case), which you should edit.
This information is necessary to compute the center of mass vector for each molecule, which is calculated and stored in a new particle property 'Molecule COM' in Part 2 (see result in the attached screenshot).

From there you can go on and compute RDFs, either by writing your own script or, perhaps simpler, by exporting the molecule centers of mass together with the molecule types (e.g. to the LAMMPS dump or XYZ format) as if they were atoms. That way you can re-import the file and simply apply the Coordination analysis modifier in the graphical user interface.

Please note, however, that the above script does not take care of periodic boundaries yet, so you'll need to add that in the step where you average the positions, in case you applied periodic boundary conditions in your simulation. Also, can you explain how you defined the molecule identifiers? I noticed that in some cases the atoms that share the same molecule identifier, e.g. molecule 1, are very far apart. See screenshot 2 for an example. Was that done deliberately?
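As a rough sketch of the missing periodic-boundary handling: one common approach is to unwrap each molecule relative to its first atom via the minimum-image convention before averaging. This assumes an orthogonal cell; the cell lengths, positions, and masses below are made up:

```python
import numpy as np

box = np.array([10.0, 10.0, 10.0])  # hypothetical orthogonal cell lengths

def molecule_com(pos, masses, box):
    """Center of mass of one molecule under periodic boundaries."""
    # Unwrap all atoms relative to the first one (minimum-image convention).
    delta = pos - pos[0]
    delta -= box * np.round(delta / box)
    unwrapped = pos[0] + delta
    com = np.average(unwrapped, weights=masses, axis=0)
    return com % box  # wrap the result back into the cell

# A diatomic "molecule" straddling the periodic boundary in x:
pos = np.array([[9.0, 5.0, 5.0],
                [2.0, 5.0, 5.0]])
print(molecule_com(pos, [1.0, 1.0], box))  # [0.5 5.  5. ]
```

A naive average of the two x-coordinates would give 5.5, i.e. a point far away from both atoms; the unwrapped average correctly lands on the boundary side.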

Hope that helps,

-Constanze

124
##### Support Forum / Re: Calculating RDF for molecules, not atoms
« on: January 09, 2019, 12:42:45 PM »
Dear Sahar,

what you're asking for should be achievable with a Python script modifier; however, that will require some Python coding on your part. Can you give me an example of what your input data format looks like? Then I can help with the script.

-Constanze

125
##### Support Forum / Re: WS Analysis for Occupancy Plot
« on: January 07, 2019, 01:38:50 PM »
Dear Vivek,

what do you mean by
Quote
...however, such movement occurs only at some particular time-steps, which made me think if there is any issue arising out of the WS analysis used...?
Just take your original data and compare the coordinates of the atoms in the different snapshots; the observed displacement has nothing to do with any of the analyses you applied.

In the previous topic you mentioned, and also in a current thread here:
http://forum.ovito.org/index.php?topic=447.0
it is discussed how to deal with the reference structure shifting as a whole. In that case you could calculate the drift vector and map the damaged structure back onto the reference structure, so to speak.
However, I'm afraid this won't help you since these artifacts only occur in specific regions.

Constanze

126
##### Support Forum / Re: Unphysical generation of point defects through W-S defect analysis
« on: January 07, 2019, 12:48:58 PM »
Dear wufc,

as you already noticed there have been some changes in the python API. This is documented here:
http://ovito.org/manual_testing/python/introduction/version_changes.html

In older versions, you cannot use the "with" statement to get write permission to particle properties, but the following should work:
```python
new_pos = output.copy_if_needed(output.particle_properties["Position"])
new_pos.marray[...] -= offset
```
-Constanze

127
##### Support Forum / Re: WS Analysis for Occupancy Plot
« on: January 07, 2019, 12:15:59 PM »
Dear Vivek,

this seems to be an issue with your simulation setup. Have you noticed that in some of your snapshots the atoms in the corners clearly drift in one direction?
In the attached snapshot you can see that they have non-negligible displacements with respect to the lattice constant. Since the WS analysis simply counts the number of atoms currently present in the WS cells defined by each atom of the reference structure, it makes sense that it detects a lot of defects in this region.

https://ovito.org/manual/particles.modifiers.wigner_seitz_analysis.html

-Constanze

128
##### Support Forum / Re: Lammps data file bond property coloring
« on: January 07, 2019, 10:50:22 AM »
Dear Botond,

as I said, it's the latest developer version available here:

Note that there have been some changes in the python API, which are documented here:
http://www.ovito.org/manual_testing/python/introduction/version_changes.html

In your case it's important to understand that OVITO 3.x no longer works with a half-bond representation, which will make it easier for you to handle the information of your custom bond property. More specifically, in OVITO 2.9 syntax you would have had to create the new bond property like this:
```python
output.create_user_bond_property('coeff', data_type="float", data=bond_coeffs)
```
where bond_coeffs, however, would need 22 entries instead of 11, since there are 22 half-bonds, e.g. [0 1], [1 0], [0 3], [3 0], etc. So if you insist on using OVITO 2.9, you'll need to take that into account when creating the additional files containing the bond property.
Another issue is that in OVITO 2.9 the file source is not an attribute of the DataCollection interface but is stored in FileSource.loaded_file, which is not accessible from within the Python script modifier. So you will need a different way to load the correct bond_prop_*.dat file. If the numbering of your files follows the timestep, you could, for example, calculate the current timestep from the current frame number input.attributes['SourceFrame'].
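Assuming the half-bonds are stored pairwise in that order ([0 1] directly followed by [1 0], and so on), the 11 full-bond coefficients from your example could be expanded to 22 half-bond values simply by repeating each entry:

```python
import numpy as np

# The 11 full-bond coefficients from the example file:
bond_coeffs = np.array([0.12, 0.23, 0.34, 0.45, 0.56, 0.17,
                        0.18, 0.19, 0.110, 0.111, 0.1])

# Each bond [a b] appears twice as half-bonds [a b] and [b a],
# so duplicate every coefficient in place:
half_bond_coeffs = np.repeat(bond_coeffs, 2)
print(len(half_bond_coeffs))  # 22
```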

-Constanze

129
##### Support Forum / Re: WS Analysis for Occupancy Plot
« on: January 07, 2019, 10:15:45 AM »
Dear Vivek,

if you upload the simulation data here I could investigate this further.

-Constanze

130
##### Support Forum / Re: Lammps data file bond property coloring
« on: January 04, 2019, 02:19:25 PM »
Hi Botond,
here is how you would do this with a Python script modifier in the GUI of the latest developer version of OVITO. For each frame, this custom modifier looks for a corresponding text file called "bond_prop_<TIMESTEP>.dat" that contains the additional bond property information. So for the file "lammps_495000.dat" you uploaded, it would have to be called "bond_prop_495000.dat" and, according to your example case, contain the following information, right?

 1 0.12 0.23 0.34 0.45 0.56 0.17 0.18 0.19 0.110 0.111 0.1

Here, the first column is the bond index and the second column the bond coefficient you specified.
After importing the additional information, you then only have to create a custom bond property which is then accessible to all modifiers that follow.

```python
from ovito.data import *
import numpy as np

def modify(frame, data):
    # Import the bond_coeff file corresponding to the current frame.
    myfile = "bond_prop_{}".format(data.attributes["SourceFile"].split("_")[1])
    bond_index, bond_coeffs = np.loadtxt(myfile, unpack=True)

    # Create a new bond property called 'coeff' and assign the imported values.
    data.bonds_.create_property('coeff', data=bond_coeffs)
```
Hope that helps,

Constanze

131
##### Support Forum / Re: Averaging over the frames
« on: January 04, 2019, 02:04:08 PM »
Since you also asked about scatter plots and histograms, maybe there is an easier solution for you. If you're using the latest developer version of OVITO, you could export the time series of scatter plots or histograms you computed as text files (as shown in the attached screenshot).
In a second step, write a simple Python script (you won't need ovitos for this) that reads in all these text files and calculates the average. Let's say you have files named according to the following pattern: histo_1.txt, histo_2.txt, etc.

Then the following python script should do the job:
```python
import numpy as np
import glob

Y = []
for file in glob.glob('histo_*.txt'):
    x, y = np.loadtxt(file, unpack=True)
    Y.append(y)
np.savetxt("average_histo.txt", np.column_stack((x, np.mean(Y, axis=0))))
```
-Constanze

132
##### Support Forum / Re: Averaging over the frames
« on: January 04, 2019, 12:07:21 PM »
Dear Sahar,

you're right, the way to achieve this is to write a short batch script and execute it with ovitos, the Python script interpreter that comes with OVITO. Please see the documentation about running scripts first:
http://ovito.org/manual_testing/python/introduction/running.html

I can also recommend this introduction to the topic, which explains the data pipeline concept and how to import data and apply modifiers (analogous to the way you would do it in the graphical user interface):
http://ovito.org/manual_testing/python/introduction/overview.html

As an example, say you have a particle property "My property" and you would like to calculate per-atom values averaged over all frames. In principle, you just need to loop over all frames (or only specific frames) and then sum up your desired property e.g. like this:

```python
from ovito.io import import_file
from ovito.modifiers import *
import numpy as np

# Import a sequence of files.
pipeline = import_file('simulation*.dump')

# Add modifiers here:
#pipeline.modifiers.append(Name of modifier)

my_prop = np.zeros(pipeline.source.data.particles['My Property'].shape)

# Loop over all frames:
for frame_index in range(pipeline.source.num_frames):
    # The computation results for each frame can be requested using compute().
    data = pipeline.compute(frame_index)
    my_prop += np.asarray(data.particles['My Property'])

average = my_prop / pipeline.source.num_frames
np.savetxt("average.txt", average)
```
Hope this will serve you as a good starting point. Let me know if you have further questions.

-Constanze

133
##### Support Forum / Re: Varying Number of Particles
« on: January 03, 2019, 12:16:12 PM »
Hello Egor,

could you please upload an example file for me, I would like to look into this.

-Constanze

134
##### Support Forum / Re: Unphysical generation of point defects through W-S defect analysis
« on: January 03, 2019, 12:14:09 PM »
Dear wufc,

no, you explained the simulation procedure very clearly from the beginning, which I really appreciate, but there is no way for me to diagnose why your shifting process is not fully reversible from seeing only the two configurations. My suggestion would be to estimate the translation vector between your reference and your damaged structure by averaging over all displacements below a certain threshold value (excluding the cascade damage), so that you can still use the WS defect analysis. The first attached screenshot shows the displacement vectors you uploaded and a histogram of the displacement magnitudes; a cutoff of 2 Å seems reasonable.
Just in case you were not familiar with this, screenshot 2 shows how to import the displacement vectors calculated in your simulation using the Compute property modifier.
Furthermore, this is how you can approximate the offset between the two configurations using a Python script modifier:
```python
from ovito.data import *
import numpy as np

def modify(frame, data):
    pos = data.particles_["Position"]
    displ = data.particles_["Displacement"]
    c_dis_4 = data.particles_["c_dis_4"]
    offset = np.mean(displ[(c_dis_4 < 2)], axis=0)
    print(offset)
```
The result was [ 0.69055972 -1.08762384  0.06726288] in this case. My idea would be that you use this information to shift the damaged structure back, e.g. by adding the following to the above code snippet:
```python
    new_pos = data.particles_.create_property('Position')
    with new_pos:
        new_pos[...] = pos[...] - offset
```
and then try to apply the WS defect analysis again and let me know what the outcome is.

Quote
Besides, your expression to select non-defective atoms is not right in my opinion. All the remains defects are Occupancy.1==1&& Occupancy.2==1, which makes me confusing.
I was just playing around with your structure and deleted all sites that are occupied either by one atom of type 1 or by one atom of type 2. But you're right, in doing so I might also have accidentally deleted some sites with a total occupancy of 2, where both Occupancy.1 == 1 and Occupancy.2 == 1. To actually delete all non-defective sites, the selection expression that the delete atoms modifier is based on should be (Occupancy.1 + Occupancy.2) == 1, right?

Hope that helps,

Constanze

135
##### Support Forum / Re: Unphysical generation of point defects through W-S defect analysis
« on: January 02, 2019, 02:57:46 PM »
Dear wufc,

Quote
It should be mentioned in the damaged configuration file, the computed displacement magnitude from LAMMPS is included, which is different from that using OVITO.
Yes, I saw that. OVITO calculates displacement vectors between particles with the same particle identifier. So I assume that somehow during the shifting or the file export the particle identifiers are reassigned, which then gives you the artefacts you observe in OVITO, since you're actually not comparing positions of the same particle.
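If you want to check this by hand, displacements can also be computed outside OVITO by sorting both snapshots by particle identifier first, so that row i always refers to the same particle; the data below is synthetic:

```python
import numpy as np

# Synthetic reference and displaced snapshots, with IDs stored out of order:
ids_ref = np.array([3, 1, 2])
pos_ref = np.array([[3.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])

ids_cur = np.array([1, 2, 3])
pos_cur = np.array([[1.1, 0.0, 0.0], [2.1, 0.0, 0.0], [3.1, 0.0, 0.0]])

# Sort both frames by identifier before subtracting positions:
displ = pos_cur[np.argsort(ids_cur)] - pos_ref[np.argsort(ids_ref)]
print(displ[:, 0])  # [0.1 0.1 0.1]
```

Subtracting the raw arrays without sorting would produce large spurious "displacements", which is exactly the kind of artefact reassigned identifiers cause.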

Quote
As stated in my question, in each loop, we firstly shift the system with certain distance. After the cascade, the system is shifted back in the inverse distance.
As for the Wigner-Seitz defect analysis: if you compare the final and the reference structure, you can see that the final configuration is slightly translated with respect to the reference frame. As a test, I tried to map it back (by hand) onto the reference lattice using the Affine transformation modifier. As shown in the attached screenshot, you then get a more reasonable result.
If you have stored the displacement vectors calculated with LAMMPS, you of course have more elaborate means to compensate for the offset of the damaged structure.

-Constanze

136
##### Support Forum / Re: Unphysical generation of point defects through W-S defect analysis
« on: January 02, 2019, 12:33:36 PM »
Dear wufc,

I had a look at the displacement vectors computed between your structure and the reference configuration, and you're right, they look rather unphysical. Are you absolutely sure that the shifting of the cell doesn't mess with the particle identifiers? Which code produced the cfg snapshots?
Another thing I noticed is that there is a small offset between the lattices of the final structure and the reference configuration, which is probably what causes your problems with the WS analysis.

-Constanze

137
##### Support Forum / Re: Create custom bins for each frame
« on: December 19, 2018, 11:40:48 AM »

In case you didn't follow the other topic: Alexander has fixed the issue. Please download OVITO 3.0.0-dev322 and let me know if this solves the problem of the changing particle types.

-Constanze

138
##### Support Forum / Re: Create custom bins for each frame
« on: December 17, 2018, 11:21:39 AM »
What's the file format you're importing? Could you please upload an example? Maybe it's related to the bug report here:
http://forum.ovito.org/index.php?topic=439.0

-Constanze

139
##### Support Forum / Re: Create custom bins for each frame
« on: December 14, 2018, 04:23:52 PM »
So just to clarify, I used an example structure of a binary alloy to show you what the region you select in your script would look like.
When you select atoms based on their y-position you create a thin slab that will look similar to the region marked in red in the screenshot. It's not a cube. Is that what you wanted to achieve?

-Constanze

140
##### Support Forum / Re: Create custom bins for each frame
« on: December 14, 2018, 04:09:27 PM »

It's not really clear to me what you're trying to do. In your code snippet, you're calculating the average y-position of all particles of type 1. Then you're counting how many particles of type 2 are located in a thin slab (d_y = 4) whose center coincides with the average position of all particles of type 1.

What do you mean by this?
Quote
... it showed the results for selection 1 particle types instead of selection 2 particle types.

-Constanze

141
##### Support Forum / Re: Regarding Wigner-Seitz defect analysis
« on: December 05, 2018, 12:07:14 PM »
Dear Bahman,

at the moment, neither version (OVITO 2.9 nor the current developer version) supports 2D configurations in the Wigner-Seitz defect analysis. But it might well be added in a future release. If you like, you can upload an example of your data here so we can look into the problem.

-Constanze

142
##### Support Forum / Re: Position Y histogram of of each frame
« on: December 04, 2018, 10:42:10 AM »
I seem to be missing what the OVITO-specific question is here.
As stated in the numpy documentation, using "density=True" causes the histogram to be normalized such that the integral over the range is 1. And from the link you posted, it seems that you already found the corresponding page in the scipy documentation for plotting the cumulative sum of your histogram?
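For completeness, here is a minimal NumPy example (on synthetic data) of a density-normalized histogram and its cumulative sum, which reaches 1 at the last bin:

```python
import numpy as np

samples = np.random.default_rng(0).normal(size=1000)
hist, edges = np.histogram(samples, bins=20, density=True)

# With density=True the integral over the range is 1, so the cumulative
# distribution is the running sum of (bin value * bin width):
cdf = np.cumsum(hist * np.diff(edges))
print(round(cdf[-1], 6))  # 1.0
```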
-Constanze

143
##### Support Forum / Re: Position Y histogram of of each frame
« on: November 29, 2018, 12:16:08 PM »

Global attributes are actually not meant to store whole arrays. In your case it would be easier to just loop over all frames and use numpy.savetxt() to save the current histogram to a text file, for example:

```python
from ovito.io import import_file
import numpy as np

node = import_file(...)
for frame_index in range(node.source.num_frames):
    data = node.compute(frame_index)
    hist, bin_edges = np.histogram(data.particles["Position"][:, 1])
    np.savetxt("histogram%d.txt" % frame_index, np.column_stack((bin_edges[:-1], hist)))
```
-Constanze

144
##### Support Forum / Re: Compute & Visualization of temperature field
« on: November 23, 2018, 03:58:26 PM »
Hello,

note that what I explained above is not a Python script modifier; it's how you would implement the Compute property modifier in a stand-alone Python script that you execute from the terminal.
Please explain again what you're trying to calculate. If you want to average the per-atom values you calculate, you don't need a Python script modifier. The Compute property modifier already does that when you activate "Include neighbor terms" (and divide by NumNeighbors + 1, of course).

http://www.ovito.org/manual/particles.modifiers.compute_property.html
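The averaging that the modifier performs can be sketched in plain NumPy for a toy one-dimensional system (values, positions, and cutoff are made up):

```python
import numpy as np

# Synthetic per-atom values and 1D positions:
pos = np.array([0.0, 1.0, 2.0, 10.0])
values = np.array([4.0, 8.0, 12.0, 100.0])
cutoff = 1.5

# For each atom, average its own value and those of all neighbors
# within the cutoff -- hence the division by (NumNeighbors + 1):
dist = np.abs(pos[:, None] - pos[None, :])
within = dist <= cutoff                  # includes the atom itself (distance 0)
averaged = (within.astype(float) @ values) / within.sum(axis=1)
print(averaged)  # [  6.   8.  10. 100.]
```

The isolated fourth atom has no neighbors within the cutoff, so its "average" is just its own value.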

~Constanze

145
##### Support Forum / Re: What kind of method that I can find defect about amorphous SiO2 material ?
« on: November 23, 2018, 02:30:59 PM »
Hello,

I don't think the Wigner-Seitz defect analysis is the right tool for you, since it requires comparison to a reference configuration, which in your case would be amorphous.

~Constanze

146
##### Support Forum / Re: Compute & Visualization of temperature field
« on: November 23, 2018, 01:33:30 PM »
Hi,

if I understand your question correctly, you're asking how to use the Compute property modifier in a Python script?
If you have imported the per-atom kinetic energy with your data, it will appear as a particle property with the name you gave it in LAMMPS. So you should have an extra column named e.g. "c_ke" in your dump file, right?

You can access that data in the ComputePropertyModifier.

```python
pipeline.modifiers.append(ComputePropertyModifier(
    output_property = 'temp',
    expressions = [ '2.0 * c_ke/(...)' ],
    neighbor_mode = True,
    neighbor_expressions = [ '2.0 * c_ke/(...)' ],
    cutoff_radius = ...,
))
pipeline.compute()
```
where you would use your formula from above in the expression field and the neighbor_expression field to calculate and average the property over all particles within the given cutoff. Note that you will have to divide by (NumNeighbors + 1), though, since the central particle also counts.

http://ovito.org/manual_testing/python/modules/ovito_modifiers.html#ovito.modifiers.ComputePropertyModifier

~Constanze

147
##### Support Forum / Re: Partial radial distribution function (RDF)
« on: November 19, 2018, 04:00:18 PM »
Hi,
an example of how to calculate averages of only specific particles is also shown in the FAQ: http://forum.ovito.org/index.php?topic=376.0.

For each particle you calculated a property called "Coordination". If you now want to calculate the mean value of this property for only particles of type 2 e.g., you need to select these particles based on their type:

```python
from ovito.data import *
import numpy as np

def modify(frame, input, output):
    coord = input.particles["Coordination"]
    p_types = input.particles["Particle Type"]
    output.attributes["Average coord ptype2"] = np.mean(coord[(p_types == 2)])
```
You will now see the global attribute "Average coord ptype2" appear in the Data panel when you click on "Attributes".

-Constanze

148
##### Support Forum / Re: Partial radial distribution function (RDF)
« on: November 15, 2018, 01:44:12 PM »
Yes, you could do that. I just wanted to clarify that the difference between the two solutions is that the python script counts all neighbors up to a certain cutoff radius, whereas in the Compute property modifier you can specify both a lower and upper cutoff value.
So in the end it depends on what you want to evaluate.

-Constanze

149
##### Support Forum / Re: Partial radial distribution function (RDF)
« on: November 15, 2018, 01:17:41 PM »
Hi,
just to confirm, so when you talk about the number of atoms in the second shell e.g., do you mean "only" the second, or does that include the first and the second? In the first case, the solution from the other forum topic would require an extra step where you subtract the results from the inner shells from the outer ones.
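So if the stored counts are cumulative (everything up to a given cutoff), the per-shell counts follow by differencing; the numbers here are made up:

```python
import numpy as np

# Hypothetical cumulative neighbor counts for one atom, taken at the
# cutoff radii of the 1st, 2nd, and 3rd coordination shells:
cumulative = np.array([12, 18, 42])

# Counts in "only" each shell: subtract the inner shells from the outer ones.
per_shell = np.diff(cumulative, prepend=0)
print(per_shell)  # [12  6 24]
```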

-Constanze

150
##### Support Forum / Re: Partial radial distribution function (RDF)
« on: November 15, 2018, 11:04:24 AM »
Hello,
I would suggest you use the latest developer version, where you can inspect the particle properties and data plots you calculate directly in the Data inspector below the viewports.
So here is one way to calculate what you're asking for:
First, I used the Coordination analysis modifier to find out the distance of the different coordination shells (see screenshot RDF.png).
Then you can insert a Compute property modifier, which allows you to calculate and save new particle properties; in this case let's call it "Type 2 coord. 1st shell". So to count how many atoms of type 2 are in the first coordination shell of each atom, you can use the following expressions:
Expression: `0`
Neighbor Expression: `(Distance <= 2.6) && (ParticleType == 2)`

The Neighbor Expression is evaluated for each neighbor atom in the given cutoff and is either 1 if both conditions are true or 0 otherwise. "&&" means "and" in this context. In the data-panel you will see your new particle property appear: The particle with Id 1 e.g. is of particle Type 2 and has 2 neighbors of type 2 in the first coordination shell (see last column in 2nd screenshot).

Now it's up to you to repeat that for the different particle types and distances. So simply insert more Compute property modifiers into your pipeline. As an example, "Type 1 coord. 2nd shell" could then be:
Expression: `0`
Neighbor Expression: `(Distance >= 2.6) && (Distance <= 3.0) && (ParticleType == 1)`
etc.
Don't forget to increase the cutoff radius as well.
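The logic of such a neighbor expression can be mimicked in plain NumPy for a toy configuration (positions and types below are made up):

```python
import numpy as np

# Toy configuration: positions and particle types.
pos = np.array([[0.0, 0.0, 0.0],
                [2.0, 0.0, 0.0],
                [0.0, 2.5, 0.0],
                [3.5, 0.0, 0.0]])
types = np.array([1, 2, 2, 2])

cutoff = 2.6
# All pairwise distances, then count for each atom its
# type-2 neighbors within the cutoff (excluding the atom itself):
dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
is_neighbor = (dist > 0) & (dist <= cutoff)
counts = (is_neighbor & (types == 2)[None, :]).sum(axis=1)
print(counts)  # [2 1 0 1]
```

This is the same sum the modifier computes when the neighbor expression evaluates to 1 for matching neighbors and 0 otherwise.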

From there, you could go on and use the Histogram modifier in conjunction with the Select particle type modifier or a Python script modifier to calculate averages of that information for only type-2 center particles, for example. But first, check whether the above works for you.

-Constanze
