Problem Statement: Would there be any problems running on VM servers instead of physical servers for Aspen Supply Chain Analytics? | Solution: As of our V7.1 release, we now test and officially support the following virtualization environments for our product:
- MS Virtual PC
- Virtual Server
- Hyper-V
- Terminal Services
Keywords: None
References: None |
Problem Statement: Built-in limits to consider when using Aspen Petroleum Scheduler and Aspen Refinery Multi-Blend Optimizer. | Solution: Below is a list of built-in limits:
- Blend Specifications:
When using SBO (Single Blend Optimization), the maximum number of specs for a blend is 50. Minimum specs of 0.0 and maximum specs of EMPTY are ignored and thus not counted toward this 50.
- Crude Units:
The number of crude units is limited to 99.
- Feeds and products:
Since V7.1, there is a limit of 65 feeds and products that can be connected to each unit. For 2006.5, this limit is 50. Exceeding this value may cause Petroleum Scheduler to crash during simulation, or errors may appear when using ABML.
- Trends:
The number of displayed plots is limited to 50. If more than 50 plots are designated for display, only the first 50 will be visible.
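As a rough illustration of the blend-spec limit above, the counting rule can be sketched like this (an illustrative sketch only, not AspenTech code; the spec representation is a hypothetical assumption):

```python
# Sketch of the SBO spec-counting rule described above.
# Assumption: each spec is a ('min'|'max', value) pair, with None standing
# in for EMPTY. This is illustrative only, not Aspen code.
SBO_MAX_SPECS = 50  # per-blend limit when using Single Blend Optimization

def counted_specs(specs):
    """Count specs toward the 50-spec limit, ignoring min specs of 0.0
    and max specs of EMPTY (None)."""
    n = 0
    for kind, value in specs:
        if kind == 'min' and value == 0.0:
            continue  # minimum specs of 0.0 are ignored
        if kind == 'max' and value is None:
            continue  # maximum specs of EMPTY are ignored
        n += 1
    return n

specs = [('min', 0.0), ('max', None), ('min', 5.0), ('max', 95.0)]
print(counted_specs(specs))  # only 2 specs count toward the limit
```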
Keywords: - Built-in limits
- Specifications
References: None |
Problem Statement: When the routine gets to udt.MixStrms ResultStrm, an error message is emitted:
Microsoft Visual Basic:
Run-time error -214717851(80010105):
Method MixStrms of object IOrionAutoUdt failed. | Solution: This error occurs when the automation routine exceeds the maximum number of feeds and products that can be connected to each unit: 50 in v2006.5, 65 in V7.1.
In V7.2, this limit information has been added to the Built-In Limits topic of the Help file.
Feed and products:
=================
There is a limit of 65 feed and products that can be connected to each unit. Exceeding this limit may cause Petroleum Scheduler to crash during simulation.
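A simple pre-check of the connection count before running an automation routine can avoid the crash. This is an illustrative sketch only; the helper below is hypothetical and not part of the automation API:

```python
# Hypothetical pre-check: the limits come from the text above
# (50 feeds/products per unit in v2006.5, 65 since V7.1).
FEED_PRODUCT_LIMITS = {'2006.5': 50, 'V7.1': 65}

def check_unit_connections(version, n_connections):
    """Raise if a unit's feed/product count exceeds the built-in limit."""
    limit = FEED_PRODUCT_LIMITS[version]
    if n_connections > limit:
        raise ValueError(
            f"{n_connections} feeds/products exceeds the limit of "
            f"{limit} for version {version}")
    return True
```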
Keywords: - Built-in Limits
- Automation
- Built-In
References: None |
Problem Statement: We have a Gantt chart control variable with a lot of activity, and therefore a number of different events shown on the same horizontal line of the Gantt chart. This gets very cluttered and almost impossible to read. Is there a feature that allows the events to be shown in a wider horizontal area on the Gantt chart to eliminate this clutter? | Solution: In the new version V7.1, a Control Variables option has been added that allows you to set the font size for the displayed control variables. It is now easier to see labels above event bars:
- This is how the Gantt chart looks in v2006.5:
- Since V7.1, the Control Variable area can be expanded using a new option called "Fit All". After selecting this option and clicking the "+" sign located to the left of the Control Variable, the corresponding horizontal line is expanded, showing all labels and event information more clearly in a wider horizontal area.
Keywords: -Gantt Chart
-Control Variable
References: None |
Problem Statement: When developing XML reports, or writing programs that submit XML to the Aspen Production Record Manager Server, it can be helpful to test your XML by submitting it to the server and seeing the XML that is returned. Also, if XML is submitted that does not give an expected response, having a way to view the returned XML is helpful for troubleshooting.
This solution explains how to use the Aspen Production Record Manager Web Service Test Application to submit XML directly to the Server and see the response.
NOTE: All XML in this solution is shown via Internet Explorer for readability. In the test application it will appear as run-on text. You may want to use a third-party XML editor, or Internet Explorer to view and edit your XML. | Solution: There are several ways to automatically generate XML correctly formatted for the Production Record Manager Server. One way is to create a query in the Production Record Manager Query Tool then save the file, choosing XML as the file format. With some slight edits the resulting file can be used directly with the Web Service Test application. Solution 121675 points out that the Production Record Manager web site also saves its reports to XML format. Those XML source files can also be used with the test utility. Below are examples using both.
First, submitting XML using output from the Query Tool:
1. Create a query in the Query Tool that returns the batches/data you want. Make sure it works correctly, then save it, making sure to specify XML as the file type. For this example, a simple query was made asking for the two most recent batches.
2. Open the Web Service Test Application. It is named Batch21WebServiceTestApplication.exe, and is found in this directory on the Production Record Manager Server:
C:\Inetpub\wwwroot\AspenTech\Batch.21\bin
3. The application has three tabs. First update the credentials on the Security Context tab. If running it on the Production Record Manager Server, localhost is fine for Host. The other fields should define an account that has rights to see Production Record Manager areas and data.
4. Next click the input xml tab. The default basic XML query example returns all Production Record Manager areas and configuration information. Replace the default string batch-600-w2k with your own Production Record Manager Server nodename. If you don't have many areas with a lot of configuration information, you can go ahead and hit Submit to bring back all your area information (note, this will NOT include all your production data!) This quick test verifies whether the account you are using and specified nodename are correct. Click the output xml tab to see the results.
5. Now modify the generic query using the saved XML from the Query Tool. Here are the editing steps:
A. Original query in the Web Service Test Application (with nodename already updated, in this case to EDXP20065):
B. Query as saved into an XML file by the Query Tool to return the two most recent batches:
C. Replace the AreaQuery tag shown in "A" above with all the XML contained between (and including) the Area tags:
Notice that unnecessary parameters have been removed from the BatchQuery tag, since that information is now specified in the Datasource tag.
D. Click Submit, and all data for the two most recent batches is returned to the output xml tab.
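As a rough, hypothetical illustration of step C, the edited query takes a shape like the following. The tag names AreaQuery, Area, BatchQuery, and Datasource come from the text above, but the attributes and nesting shown here are assumptions, since the actual XML comes from your own Query Tool output:

```xml
<!-- Hypothetical sketch only: attributes and nesting are illustrative.
     Replace the AreaQuery tag with the Area block saved by the Query Tool. -->
<Area name="batchdemo">
  <Datasource name="EDXP20065"/>
  <BatchQuery>
    <!-- parameters saved by the Query Tool (e.g. the two-most-recent-batches
         criteria) go here; parameters now covered by Datasource are dropped -->
  </BatchQuery>
</Area>
```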
6. Here is a second example, using output saved from the Production Record Manager Reporting website. In this case a report was configured that brings back the COST characteristic for the most recent batch:
A. When the report is saved as BatchCostOnly from the website, the XML file is saved to:
C:\Program Files\AspenTech\Batch.21\Data\Reports\Queries\Public\edxp20065.batchdemo\BatchCostOnly.xml
B. Here is content of that saved XML file:
C. Here is the same XML with slight edits to work via the Test utility (drop the ReportQuery tag, add the Area tag, and drop empty parameters from BatchQuery.)
D. For the XML shown directly above, here is the XML response coming back from the server that, when returned to the Reporting Service and formatted by the XSL file, results in an HTML-based report:
Keywords: None
References: None |
Problem Statement: Aspen Batch.21 documentation states that it is possible to configure publish items and shows how it's done. However, it doesn't explain the mechanism of publishing batch items. This Knowledge Base article attempts to answer that question. | Solution: The Publish feature in Batch.21 is used to publish characteristic data associated with a batch or subbatch.
A subject name is given to Batch.21 messages published on the Infobus. Other applications which listen for the Batch.21 data will need to be configured to subscribe to the subject configured in Batch.21. If you already have the Infobus configured on your system there's an easy way to subscribe to the Batch.21 messages through a DOS prompt.
Type the following into a DOS prompt:
ecrvlisten [SubjectName]
If you're using the Batch.21 demo, it would look like this:
ecrvlisten AEP.Prod.Batch.ProductionPerformance.Publish.BU1.Plant1
(This assumes that you've configured a publish item in Batch.21 with the subject name used above.)
When publishing a characteristic, a message is published whenever the selected characteristic is updated. The XML message generated in this case consists of the characteristic name, characteristic value, and timestamp.
Note that it may take several (5-10) minutes for the first message to be published. However, once the Infobus is running, subsequent messages are published very quickly.
The schema file used by the XML message that is published by Aspen Batch.21 is BPDProductionPerformance.xsd. This file is located in this folder:
...\Program Files\AspenTech\Batch.21\Data\BusinessProcessDocument\
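For illustration only, a published characteristic message has this general shape. The element names below are hypothetical and are not taken from BPDProductionPerformance.xsd; consult that schema for the real structure:

```xml
<!-- Hypothetical sketch: real messages conform to BPDProductionPerformance.xsd -->
<CharacteristicUpdate>
  <Name>COST</Name>
  <Value>1234.5</Value>
  <Timestamp>2014-04-10T12:00:00Z</Timestamp>
</CharacteristicUpdate>
```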
When publishing a subbatch, the subbatch information is sent to the Infobus whenever a subbatch characteristic is modified. Attached is a sample XML output file that was generated when a subbatch was published.
Keywords: None
References: None |
Problem Statement: Logging Apex IVR Question | Solution: http://support.apexvoice.com
Keywords: Apex
IVR
References: None |
Problem Statement: After I load Aspen Web Fulfillment Management, I am unable to bring up a customer's name | Solution: Aspen Web Fulfillment Management uses the ini Configuration files in locations different from the ones used by your Aspen Retail Application.
To make sure Aspen Web Fulfillment Management is looking at the correct ini files, check the following locations and make sure the ini files are the same:
C:\Documents and Settings\Username\Aspentech\Retail\Settings
C:\Program Files\Common Files\AspenTech Shared\Settings
In the first location listed, the Username should be whatever user is used to log onto the server.
In both these folders, the Customize.ini should contain the same entries in the following sections:
[RetailTPS]
[RetailTCIF]
These sections are where the login information is stored.
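For illustration, the matching sections in both copies of Customize.ini might look like the sketch below. Only the section names [RetailTPS] and [RetailTCIF] come from the text above; every key and value shown is a hypothetical placeholder:

```ini
; Hypothetical sketch: key names and values are placeholders only
[RetailTPS]
Server=MYSERVER
User=retail_login

[RetailTCIF]
Server=MYSERVER
User=retail_login
```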
Keywords: WFM, Login
References: None |
Problem Statement: IVR Field Test Patch for Suncor | Solution: CQ00541726 Release Notes
April 10, 2014
The enclosed file, RETIVR.EXE, is a replacement for the DNIS.EXE program which is installed by the original release of Aspen Fleet Optimizer V7.3 Order Manager - Voice. This file supports the analog telephony interface provided by Apex Communications OmniVox 5.2 software.
******************************************
******************************************
NOTE: THIS SOFTWARE HAS NOT BEEN TESTED BY ASPEN TECHNOLOGY AND IS PROVIDED ON AN "AS-IS" BASIS.
******************************************
******************************************
INSTALLATION INSTRUCTIONS:
The enclosed file, RETIVR.EXE, should be copied to the same location as the existing DNIS.EXE/RETIVR.EXE file, which is usually C:\USR\APEX\appbin.
INSTALLATION CONFIRMATION:
The installation can be confirmed by inspecting the file version. Navigate to the folder in Windows Explorer, switch to Details view, right-click the column headings to display the list of optional columns, and select "More...". Scroll down the list of columns until you find "File Version", select its checkbox, and then click OK to close the dialog and show the added File Version column.
The file version for RETIVR.EXE should be 2009.2.2.422
Keywords: None
References: None |
Problem Statement: When starting Aspen PIMS V7.x, you encountered the following error message:
When you click on the <Details> button, it shows no further information such as: | Solution: The problem may be caused by missing Aspen Base Load Services at the SLM server. This is true when:
- Your license file is a Network license.
- Your license file contains a license feature named 'SLM_PIMS_'. For example:
- The SLM server does not have 'Aspen Base Load Service' installed or running.
To resolve this, ensure that Aspen Base Load Service is installed and running as a service on the SLM server. This service installer is listed together with the 'SLM Server' install package distributed on the V7.x DVDs:
Keywords:
References: None |
Problem Statement: My model is a P-PIMS model. The model is running with no problem even though there is no ESTxxx row defined in Table CRDDISTL. However if I convert into standard model, PIMS says I must have ESTxxx row in CRDDISTL.
Why doesn't P-PIMS require the ESTxxx rows? | Solution: A PPIMS model can have tables CRDTANKS and CRDALLOC; they are only available for PPIMS models. When these tables are present, the ESTxxx rows for Table CRDDISTL are automatically generated internally by PIMS. So if all crude oils go to CRDTANKS, ESTxxx rows are not necessary in Table CRDDISTL for a PPIMS model. However, you must still define the ESTxxx row if the crude goes directly to a crude unit.
Keywords: CRDDISTL
CRDTANKS
CRDALLOC
References: None |
Problem Statement: When starting Aspen PIMS V7.1, I get this error message and Aspen PIMS does not start. | Solution: This message can happen when there are multiple license files in your license source directory. Make sure your new V7.1 license file is the only license file in the license source directory. The default location for your licenses is C:\Program Files\Common Files\Hyprotech\Shared.
Keywords: PIMS-SA
license
References: None |
Problem Statement: Using Engineering DVD1 of V7.1, the installer is not able to locate Mpich2 during XML file
creation for silent install of PIMS. | Solution: There are two different ways to solve this problem:
Use Manufacturing DVD 2 for installation or creation of the XML file
Or
Make an image of the Engineering DVD1 on your C: or any other drive. Then,
Copy the folder x:\aspenONE_V7.1\Aspen Manufacturing and Supply Chain V7.1\aspenONEV7.1DVD2\psb\aspenonev7.1dvd2\core\Mpich2 to
C:\aspenONEV7.1dvd1\AES\aspenonev7.1dvd1\core
Start the installation or XML file creation.
Keywords: Mpich2
PIMS
References: None |
Problem Statement: After installing Language Pack V7.1 Russian the following message is received: | Solution: The Language Pack for V7.1 must be applied to PIMS version 18.1.15. The installation package for 18.1.15 is attached. Download the file and then run it from your machine to install 18.1.15. Then proceed with the Language Pack.
Keywords:
References: None |
Problem Statement: In PIMS 17.1.6 and later (up to 17.x.x), I used the Licensing tab under Program Options to control which features were checked out when Aspen PIMS starts up. This allowed me to minimize my token license usage. I no longer see this in Aspen PIMS V7.1 (18.x). How do I now control my token usage?
For additional information about the Licensing feature available in Aspen PIMS 17.1.6 through 17.x.x, please refer to solution 120535. | Solution: When Aspen PIMS is started, the basic Aspen PIMS license and associated tokens will be checked out.
In V7.1, Aspen PIMS now dynamically checks out additional tokens based on the type of model that is open. If a multi-plant model is opened, Aspen PIMS will check out the applicable tokens for the MPIMS feature in addition to the basic PIMS license and tokens. If the model type is then changed back to "standard", Aspen PIMS will return the MPIMS-related tokens. The same happens for models that use the Aspen PIMS Advanced Optimization feature.
Note that, if available, the Submodel Calculator and CPLEX optimizer licenses are always checked out when PIMS starts up.
Keywords: license
token
References: None |
Problem Statement: The model tree is different in Aspen PIMS V7.1. Where do I find the items that used to be listed under "Program Options" on the old model tree? | Solution: In versions prior to V7.1, the Aspen PIMS model tree included a section at the top called Program Options as shown below:
The new model tree in Aspen PIMS V7.1 no longer has this section at the top. All the program options have been moved and can be accessed by the Aspen PIMS menu buttons as described in the chart below.
Keywords: model tree
program options
References: None |
Problem Statement: When I run Aspen PIMS, I see an incomplete Windows error message as shown below. What causes this and how can I fix it? | Solution: This is related to parallel processing. An Aspen PIMS Advanced Optimization license is required, and the model must be using both XNLP and the XLP matrix generator to activate parallel processing. This message comes from the Windows Firewall and can be avoided by adding an exception to your Windows Firewall settings. The following steps will resolve this.
1. Go to START | Control Panel | Windows Firewall
2. Click on the EXCEPTIONS tab
3. Check this list for CaseParallel.exe. If it is not on the list, then you will need to add it. Select 'Add Program' and then Browse to the file CaseParallel.exe in the PIMS installation folder. The default location for this is C:\Program Files\Aspentech\Aspen PIMS. Select the CaseParallel.exe file and click OPEN. Back on the 'Add a Program' dialog, click on OK.
You should now be able to run Parallel Processing without receiving the security message.
Keywords: Windows Firewall
parallel processing
References: None |
Problem Statement: In V7.1, the model tree is different and the model name is no longer displayed on the model tree. So how do I change the model type, for example, from Standard to Periodic? | Solution: In versions prior to V7.1 (PIMS 18), the model name was displayed on the model tree, and the user would right-click on the model name to change the model type. In V7.1 the model tree has been modified and no longer displays the model name. To change the model type, right-click on MODEL SETTINGS on the model tree, then select "Model Type" and you will see a list of the available model types. Select the desired model type.
Aspen PIMS has also been enhanced in V7.1 so that if you have an XPIMS (multi-plant, multi-period) model open and you change the model type to MPIMS (just multi-plant), then Aspen PIMS will automatically update the model types for each of the local models from Periodic to Standard. Aspen PIMS will also update the local models from Standard to Periodic if an MPIMS model is switched to XPIMS.
Keywords: periodic
global
standard
model type
References: None |
Problem Statement: As all the data in an Aspen PIMS model is segregated in multiple spreadsheets, it is not straightforward to do a global search in all input files for a specific row, material or property tag. One way would be to do it from within MS Excel, but you probably still have to open multiple files to perform the search.
Is there a way in Aspen PIMS to do such a global search? | Solution: You can perform a global search on the Model Documentor report. To generate this go to Run | Data Validation and select the Model Documentor report. Select the "Open Report" option at the bottom so that it opens after it is generated.
The report will include all tables in the model, and it will include commented out rows (*) and columns (!).
Keywords: Model Documentor
Global search
Validation Report
References: None |
Problem Statement: Where can I find additional information on how to set up the ABML table for the sample model "Amended CARB 3"? | Solution: The attached document contains clarifications on the data and a Question & Answer section on the model setup.
The sample model is also attached. To run it, please remember to copy the PUBML.dll file into the Aspen PIMS installation folder.
Keywords: ABML
CARB 3
References: None |
Problem Statement: There is a new swing cut gradient feature in PIMS V7.1 for users with a PIMS Advanced Optimization license. | Solution: In traditional PIMS structure, PIMS will optimize the distribution of swing cuts in the crude distillation tower. However as it does this, it assumes that the quality of the swing cut is constant. For example, the top 10% of the cut has the same qualities as the bottom 10%. If the swing cut is wide and the properties have significant variance, then this can lead to a less optimized solution.
PIMS now offers swing cut gradients. The requirements to use this feature are:
- PIMS Advanced Optimization license
- Model using XNLP and the new V7.1 Matrix Generator
- Qualities for the swing cut and the two adjacent wide cuts
- The final vapor temperature (FVT) for the swing cut, the cut below the swing cut, and the two cuts above the swing cut.
To activate this option, go to MODEL SETTINGS | Non-Linear Model (XNLP) | Advanced tab. Check the box for Swing Cut Gradients as shown below.
When PIMS optimizes, it will now calculate quality gradients for each property of the swing cut. This allows more accurate representation of the quality differences between the 'top' and the 'bottom' of the swing cut. These calculations are non-linear formulations based on an approximation of the quality variations across the swing cut.
Gradient calculations for a swing cut are only included for those qualities for which data is also provided in the adjacent wide cuts. If an adjacent cut quality is missing, then the traditional square cut calculation method is applied.
Users may not want to use this feature when:
- the swing cuts are narrow and do not show significant quality variation across the cut
- the additional model size or complexity is believed to be unnecessary
- consistency with legacy model results is desired.
Note that the calculations are internal to the crude unit structure. Therefore the reports will be the same as when this feature is not used. The estimated qualities of each portion of the swing cut are not reported anywhere.
Keywords: Swing cut
References: None |
Problem Statement: How do I change the value of Yield Factor in MAKF? Changing the YF (Yield Factor) value in MAKF does not work; the value reverts to the previous value after simulation. | Solution: Check the value of CMAN(TDATTIM). If CMAN(TDATTIM) is ST (Scheduled Start Time), you will be able to make changes to YF in MAKF. If CMAN(TDATTIM) is AS (Actual Start Time) or PS (Process Start Time), then YF is recalculated when the simulator runs.
For more information, please see the SCM help file under [Aspen SCM Scheduling] -> [CMAN Table - Scheduling Controls] -> [Start Time Option for TDAT Lookups (TDATTIM) Parameters].
Keywords: None
References: None |
Problem Statement: What is the purpose of ias_license.xml file when you install ASCCV7.1 and Informatica8.x? | Solution: The license key ias_license.xml is used for Data Analyzer and Metadata Manager installation. It is not a mandatory key, ASCC will not use Data Analyzer and Metadata Manager. AspenTech recommends not to install Data Analyzer and Metadata Manager.
Keywords: ias_license.xml
Data Analyzer
Metadata Manager
References: None |
Problem Statement: How do I change the Rate when the Lot Size is changed in the Activity Editor? | Solution: 1. In TDAT(EDITLS), set the value to CHGRATE.
2. In CMAN(TDATTIM), set the value to ST.
- If CMAN(TDATTIM) is set to ST or not set (the default is ST), the simulator will not recalculate PT, LT, and YF in MAKF, as those data should be up-to-date when algorithms change their schedule start time. For CMAN(TDATTIM) = ST, those data are input to M SIM.
- If CMAN(TDATTIM) is set to AS or PS, the simulator will recalculate all possible time-dependent columns such as BOM, PT, LT, and YF in TDAT as needed. The updated BILIND will then be used for the PROSLI calculation.
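The two cases above can be reduced to a simple rule; the sketch below is a simplified illustration of that behavior, not actual Aspen SCM code:

```python
# Simplified illustration of the CMAN(TDATTIM) behavior described above.
def recalculates_tdat_columns(tdattim):
    """Return True if the simulator recalculates time-dependent columns
    (BOM, PT, LT, YF). ST is the default when the parameter is unset."""
    if tdattim in (None, 'ST'):
        return False  # ST: PT, LT, YF in MAKF are taken as input
    if tdattim in ('AS', 'PS'):
        return True   # AS/PS: time-dependent columns are recalculated
    raise ValueError(f"Unknown TDATTIM option: {tdattim!r}")

print(recalculates_tdat_columns('ST'), recalculates_tdat_columns('AS'))
```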
For more information, please see the Aspen SCM Help files under topics:
- About Production Operations;
- Start Time Option for TDAT Lookups (TDATTIM) Parameter; and
- Specifying Edit Change Options.
Keywords: None
References: None |
Problem Statement: Without a valid log-on, changing the Assigned or Unassigned Tree Items in 'Modify Role' will not work. This solution uses Plant Scheduler as an example to describe the resolution steps. | Solution: In order to set up role-based access, you must first add a user to the CAS file. In every new CAS file, there is only one user defined: '.DEFAULT user'.
1. Create your Windows login ID in the CAS file.
(a) Go to 'Role Based Trees \ User Management' and add your Windows login ID.
(b) In the following diagram, 'samch' is the Windows login ID added from this interface. It is given 'Developer' role as PRIMARY.
2. Go to 'Role Based Trees \ Role Management' - [Modify Role] tab.
(a) Select [Role] = 'Developer' and you can make adjustment to the Tree Items accordingly.
(b) Please note that at least 1 role (excluding DEFAULT role) needs to have 'Role Based Trees', 'User Management' and 'Role Management' CHECKED or ENABLED.
(c) Click on the <Apply> button when done.
3. Change default user's role to 'User' instead of 'DEFAULT'.
(a) 'DEFAULT' role is created with all Tree Items enabled. It cannot be changed or modified.
(b) As a good practice, 'DEFAULT - Default user' should be assigned to 'User' role. This will ensure that new Windows users, without a corresponding user record in the CAS file, will not have full access to the case.
(c) To change, go to 'Role Based Trees \ User Management' - [Modify User Role] tab.
(d) Select [User] = 'DEFAULT - Default user' and set 'User' role as PRIMARY. See example:
4. Save the CAS file (use SAVE AS) and re-open it. You will see that the role-based navigation is applied.
Keywords: Setting up Role Based Tree Navigation
References: None |
Problem Statement: What is the license requirement to run Aspen Hydraulics in steady state and in dynamics? | Solution: 1. An Aspen HYSYS_Upstream license is needed to run in steady state.
2. An Aspen_Hydraulics license is needed to run in dynamics. HYSYS_Upstream is needed to solve in steady state before you are able to run in dynamics; the Aspen_Hydraulics license is needed only to run in dynamics, not in steady state.
Keywords: Aspen Hydraulics license, Hydraulics, Aspen Hydraulics license requirement
References: None |
Problem Statement: SP CAP 7.1 & 7.2: the rule procedure >RPURMIN doesn't convert properly. There are two entries for the same data keys but with different effective dates, meaning that the minimum purchasing constraint varies over time, which is accepted.
Note: the bug seems to occur only if the CNVCTL(TEMAP column) is set to DAVGSUM | Solution: After testing, we are unable to reproduce your results using the standard CAP for multiple releases (V7.1, V7.2). We found that the computed results are the same in these standard case files.
Using the steps we received, we added two entries as below in 'Data management / purchase minimum'.
And CNVCTL(TEMAP column) was set to DAVGSUM.
Then generated the plan and checked the result in table PURMIN as below.
According to the document 'AspenSPV7_2-Impl.pdf', the method DAVGSUM is the 'Sum of daily averages in period', so this is working as expected.
So when the period is from 2010-2-1 to 2011-1-1 and the minimum value is 60, the result for Feb 2010 will be 60 (minimum value) * 28 (days) = 1680. The other values are correct as well.
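The arithmetic above can be checked directly; this is a minimal sketch of the DAVGSUM ('sum of daily averages in period') calculation, assuming a constant daily minimum within the month:

```python
import calendar

def davgsum(daily_value, year, month):
    """Sum of daily averages over a calendar month: with a constant daily
    minimum, this is just daily_value * days_in_month."""
    days = calendar.monthrange(year, month)[1]  # number of days in the month
    return daily_value * days

print(davgsum(60, 2010, 2))  # Feb 2010: 60 * 28 = 1680
```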
We have tested again in V7.1. The results and steps are as below.
1. Opened the V7.1 PS file named PS_V7-1-1, then added two items in Purchase Minimums.
2. Checked table IPURMIN.
3. Opened another V7.1 PS file named PS_V7-1-2, then added two items in Purchase Minimums.
4. Checked table IPURMIN. We did not use the sorting button; we sorted these two records manually.
5. Ran the pre-gen procedures in these two PS case files, then checked table PURMIN.
Based on these results, again we can't reproduce this in our standard case file.
Keywords: None
References: None |
Problem Statement: How do I increase the number of CPU cores that Aspen Supply Chain Planner can use? | Solution: The SCM software itself uses only one CPU core to run the application.
Note: The one exception is parallel XPRESS: if you use parallel XPRESS in SCM, XPRESS itself will manage the use of multiple CPU cores.
Keywords: CPU
Increase CPU's
References: None |
Problem Statement: In Orion 10 (2006.5) there were two Zoom options in the View menu, and corresponding buttons on the tool bar. In the V7.1 release I can't find them. | Solution: Use these procedures to enlarge or zoom the Gantt chart display:
To zoom horizontally:
1. Place your cursor above the Gantt chart area.
2. Click and drag the mouse to the right to zoom.
To zoom vertically:
1. Place your cursor to the left of the Gantt chart area.
2. Click and drag the mouse down to zoom.
To restore the Gantt view to the original size:
Click .
To restore the Gantt view to a size prior to the most current zoom:
Click .
Keywords: Zoom
Gantt Chart
References: None |
Problem Statement: Sometimes when a new stream is created in Aspen Petroleum Scheduler (ORION), an error message appears saying "PXXX is not a stream ID". This happens because the name of the stream already exists as a pump. | Solution: For a process tank, if you choose not to suppress the automatic pump, a pump unit will automatically be added for you with a name of P followed by the last 3 characters of the tank name. To prevent possible errors, avoid using the same pump unit name as a tag for other units and/or streams.
For the next Aspen Petroleum Scheduler release the following has been added to the Unit Definition dialog box description:
Note: For a process tank, if you do not select Do Not Add Pump when creating a new tank, a pump unit will be added automatically with a name starting with P followed by the last 3 characters of the tank name. To prevent possible errors, avoid using the same pump unit name as a tag for other units and/or streams.
Also, the following has been added to the Tank Definition dialog box description:
Note: Do Not Add Pump
Select this option to omit automatically adding a pump unit. If you clear this option, a pump will automatically be added with a name of P followed by the last 3 characters of the tank name.
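The naming rule in the note can be sketched as follows (plain string handling only; the example names and the tag set are hypothetical):

```python
# Sketch of the automatic pump-naming rule: 'P' + last 3 characters of the
# tank name. Example names and the tag set below are hypothetical.
def auto_pump_name(tank_name):
    return 'P' + tank_name[-3:]

existing_tags = {'P123'}  # hypothetical tags already used by units/streams

name = auto_pump_name('TK123')
print(name, name in existing_tags)  # P123 collides with an existing tag
```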
Keywords: - Tank Definition dialog box
- Unit definition dialog box
- Pump Unit
- Stream name
References: None |
Problem Statement: Following a change to the crude assays made after hours, when no users but ADMIN are logged into the system, does each user need to reload crude data the next time they log in? | Solution: When the ADMIN user changes the Crude Assay tables (CRUDES, CRDCUTX or CRDPROP) and the local user applies "Reload Crude Data", Orion clears the existing data in the .crd files located in the local working directory that the ADMIN user defined in SETTINGS (User Settings tab) and reloads the assay data.
Local Working Directory Folder on this example using demo model is:
C:\Documents and Settings\All Users\Documents\AspenTech\Aspen ORION\Demo\Access\
Each user can select his/her own "Local working directory"; remember that each user may customize the User Settings as desired to meet their individual needs and preferences. Therefore the need to "Reload Crude Data" depends upon their specified "Local working directory". If users point to the same local working directory as ADMIN (using a server location, for instance), then they do not need to "Reload Crude Data". If they have specified a different "Local working directory", then they must apply "Reload Crude Data" in order to refresh assay data.
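The rule above reduces to a path comparison; a minimal sketch (the directory values shown are hypothetical):

```python
from pathlib import Path

def reload_needed(user_working_dir, admin_working_dir):
    """Users who share ADMIN's local working directory do not need to
    apply 'Reload Crude Data'; all others do."""
    return Path(user_working_dir) != Path(admin_working_dir)

print(reload_needed(r'\\server\orion\work', r'\\server\orion\work'))  # False
```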
Keywords: Reload Crude Data
Settings
Local Working Directory
References: None |
Problem Statement: This knowledge base article describes the Archive Model Utility. In V7.1, the Archive Model Utility is a wizard that helps you easily archive database models and related files. | Solution: This utility is developed using the .NET Framework. Based on the database OrionDBGen.mdb, it generates an Access model from an Oracle, SQL Server, or MS Access database, then zips this Access file together with other model files such as Units.xls. The utility can be run standalone, launched from Aspen Petroleum Scheduler, Aspen Refinery Multi-Blend Optimizer, or Aspen Olefins Scheduler, or launched from the command line.
- To Run Standalone
1. Run the ArchiveModel.exe file (from the Orion/MBO/Olefins Scheduler folder).
2. From the "Welcome to the Archive Model Wizard" page, specify the model file and the other information, then click "Next". The wizard will help you to archive the specified model.
Archiving is performed by the Archive Model Wizard. This wizard consists of three basic steps:
- Specifying model and file information
- Providing model details
- Indicating output details
? To Launch in Aspen Petroleum Scheduler(Orion) or Refinery Multi-Blend Optimizer (MBO)
1. Start Orion or MBO, open a model.
2. Click menu File | Archive Model. Then there will have a wizard help you to archive the current model.
3. Click menu File | Unarchive Model. Then there will have a dialog help you to unzip an archived file.
? To Launch from Command Line
This utility would be launched from Command Line. The external application would launch it by executing command line.
For mere details, review the Aspen Petroleum Scheduler Help file documentation under topic called "Archive a Model".
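Since the utility can also be started by an external application, the launch can be sketched as below. This is an illustrative Python sketch, not anything shipped with the product; the install folder shown is a hypothetical path, and no command-line arguments are passed because none are documented in this article.

```python
import os
import subprocess

# Hypothetical install folder - adjust to your own product installation.
APS_FOLDER = r"C:\Program Files\AspenTech\Aspen Petroleum Scheduler"

def launch_archive_wizard(folder=APS_FOLDER):
    """Build the command that starts the Archive Model wizard, and run it
    only if the executable actually exists on this machine."""
    exe = os.path.join(folder, "ArchiveModel.exe")
    cmd = [exe]  # no arguments: the wizard itself prompts for the model
    if os.path.isfile(exe):
        subprocess.Popen(cmd)  # non-blocking: the wizard runs interactively
    return cmd

cmd = launch_archive_wizard()
```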
Keywords: None
References: None |
Problem Statement: I have upgraded MS Office 2003 to MS Office 2007, and now I cannot open an Orion XT 2006 model.
How do I open an Orion XT 2006 model with MS Office 2007? | Solution: The security settings for Excel 2007 must be changed and a data source must be created to open a model with MS Office 2007.
The following steps show how to open the Orion XT 2006 sample model with MS Office 2007.
How to change Excel 2007 Macro Security setting
You need to change the security setting for macros that used by Unit.xls.
Launch Excel 2007
Click Office Button, Excel Options
Click Trust Center / Trust Center Settings...
Click Macro Settings, select
Enable all macros
Trust access to the VBA project object model
Click OK
How to open Orion XT 2006 MS Access model file with MS Office 2007
From Orion XT 2006, File / Model Open
You need to create a DSN file to access the existing MS Access model file first.
You can do the same thing from Control Panel / Administrative Tools / Data Sources.
Click New...
Select "Driver do Microsoft Access (*.mdb)"
Click Next >
Type any name
Click Next >
Click Finish
Click Select...
Select the Model file, then click OK
Click OK
Now the file data source has been created
Click OK
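The manual steps above produce a file DSN, which is a small INI-style text file. As a hedged illustration, the sketch below generates such a file programmatically; the driver string follows the dialog shown above and must match an ODBC driver actually installed on the machine, and the file paths are hypothetical.

```python
import os
import tempfile

def write_file_dsn(dsn_path, mdb_path,
                   driver="Driver do Microsoft Access (*.mdb)"):
    """Write an INI-style file DSN pointing at an Access .mdb model file.
    The driver string must match a driver installed on the machine."""
    lines = [
        "[ODBC]",
        "DRIVER=" + driver,
        "DBQ=" + mdb_path,
    ]
    with open(dsn_path, "w") as f:
        f.write("\n".join(lines) + "\n")
    return lines

# Hypothetical model path; the DSN is written to the temp folder for the demo.
dsn = os.path.join(tempfile.gettempdir(), "OrionModel.dsn")
lines = write_file_dsn(dsn, r"C:\Models\Demo.mdb")
```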
Keywords: MS Office 2007
Excel 2007
References: None |
Problem Statement: In the #INPUT section of the Units worksheet, the #tttt key word for all records does not work (Column H:vol/wt in and Column I:vol/wt out). This knowledge base article describes what can cause this. | Solution: Below the #INPUT section of the Units worksheet, the #tttt key word can be used to retrieve tank information from the Aspen Petroleum Scheduler model as shown in the table below:
To be able to see the values of the tank input and output, the WRITETANKS record must be added in the CONFIG table of the database first. An example is shown below.
Make sure WRITETANKS has a value of "Y" in the CONFIG table in the database. By default, this is "N".
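The database change above can be sketched in SQL. The example uses Python's sqlite3 as a stand-in for the Access, Oracle or SQL Server model database; the CONFIG column names KEYWORD_ and VALUE_ are assumptions that should be checked against your actual model schema.

```python
import sqlite3

# sqlite3 stands in for the model database; KEYWORD_/VALUE_ are assumed names.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE CONFIG (KEYWORD_ TEXT PRIMARY KEY, VALUE_ TEXT)")
con.execute("INSERT INTO CONFIG VALUES ('WRITETANKS', 'N')")  # default is N

def set_config(con, keyword, value):
    """Update a CONFIG keyword, inserting the row if it is missing."""
    cur = con.execute("UPDATE CONFIG SET VALUE_=? WHERE KEYWORD_=?",
                      (value, keyword))
    if cur.rowcount == 0:
        con.execute("INSERT INTO CONFIG VALUES (?, ?)", (keyword, value))
    con.commit()

set_config(con, "WRITETANKS", "Y")
value = con.execute(
    "SELECT VALUE_ FROM CONFIG WHERE KEYWORD_='WRITETANKS'").fetchone()[0]
```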
The WRITETANKS keyword and description will be added to the Setting Keywords topic associated with the CONFIG table in the next Aspen Petroleum Scheduler Help file.
Keywords:
- CONFIG table
- #tttt keyword
- #INPUT section
- Setting Keywords
References: None |
Problem Statement: Can you please describe the relationships between the old Event Table and the Normalized Event Tables? | Solution: Relationships
The main (parent) normalized Event Table is ATORIONEvents. In principle, this table has direct relations to all children tables. The children tables are:
? ATORIONEventComments
? ATORIONEventExternalSystemIds
? ATORIONEventParams
? ATORIONEventResources
? ATORIONEventPipelines
? ATORIONEventTanks
? ATORIONEventProps
? ATORIONEventResourceDetails
There are no child-to-child relationships. Some parent-to-child relationships may not be active. For example, table ATORIONEventPipelines is not relevant to non-pipeline events, such as Blend or Crude Run. The relations are established by event sequence number match. Each child table has column EVENT_XSEQ. The integer values in this column must match the X_SEQ values from the parent table ATORIONEvents. If for a specific event there are no matches, the relation for this event is also inactive.
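The sequence-number relation can be illustrated with a small stand-in database. This Python sqlite3 sketch uses only the table and column names given in this article (X_SEQ, EVENT_XSEQ, TANK_ID, TYPE, FRAC); the EVENT_TYPE column and the sample data are illustrative assumptions.

```python
import sqlite3

con = sqlite3.connect(":memory:")  # stand-in for the scheduler database
con.execute("CREATE TABLE ATORIONEvents (X_SEQ INTEGER, EVENT_TYPE TEXT)")
con.execute("CREATE TABLE ATORIONEventTanks "
            "(EVENT_XSEQ INTEGER, TANK_ID TEXT, TYPE TEXT, FRAC REAL)")
con.execute("INSERT INTO ATORIONEvents VALUES (1, 'Blend')")
con.execute("INSERT INTO ATORIONEventTanks VALUES (1, 'TK101', 'SOURCE', 0.4)")
con.execute("INSERT INTO ATORIONEventTanks VALUES (1, 'TK102', 'SOURCE', 0.6)")

# Parent-to-child relation: X_SEQ in ATORIONEvents matches EVENT_XSEQ in the child.
rows = con.execute(
    "SELECT e.EVENT_TYPE, t.TANK_ID, t.FRAC "
    "FROM ATORIONEvents e JOIN ATORIONEventTanks t ON t.EVENT_XSEQ = e.X_SEQ "
    "WHERE e.X_SEQ = 1").fetchall()
```

If there are no EVENT_XSEQ matches for a given event, the join simply returns no child rows, which is what the article means by the relation being inactive for that event.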
Mapping ID, ID1, ID2, ID3 in Normalized Event Tables
In the old event table EVENTS, columns ID, ID1, ID2, and ID3 had different meanings dependent on the event type. For some event types, a cell in such a column could contain a single identifier. For some event types some ID-columns contained comma-separated lists. Therefore, for these event types there will be multiple rows in the normalized table with the same EVENT_XSEQ. The meaning of various ID-columns for different event type as well as the mapping of them to the normalized tables are presented below.
Mapping ID
Event Type | Meaning | Normalized Table | Column in Normalized Table
Pump | Parameter Number | ATORIONEventParams | PARAM_ID
Unit Parameter, Unit Mode | ID is a comma-separated list of parameter numbers and parameter values | ATORIONEventParams | PARAM_ID - parameter No; PARAM_VALUE - parameter value
Blend | ID is a comma-separated list of sources composition | ATORIONEventTanks | TANK_ID - tank id; TYPE - SOURCE or DESTINATION; FRAC - fraction
Crude Receipt | ID is a comma-separated crude composition list | ATORIONEventProps | OBJ_TYPE - must be COMPOSITION; OBJ_ID - crude ID; OBJ_VALUE - crude composition
Pipeline Crude Receipt | ID is a comma-separated crude composition list | ATORIONEventProps | OBJ_TYPE - must be COMPOSITION; OBJ_ID - crude ID; OBJ_VALUE - crude composition
Crude Run | ID is a comma-separated list of parameter numbers and parameter values | ATORIONEventParams | PARAM_ID - parameter No; PARAM_VALUE - parameter value
Furnace Bank Run | ID is a comma-separated list of parameter numbers and parameter values | ATORIONEventParams | PARAM_ID - parameter No; PARAM_VALUE - parameter value
Product Shipment | ID is a comma-separated destination list | ATORIONEventTanks | TANK_ID - tank id; TYPE - must be DESTINATION
Pipeline Shipment | ID is a comma-separated destination list | ATORIONEventTanks | TANK_ID - tank id; TYPE - must be DESTINATION
Product Receipt | ID is a comma-separated property list | ATORIONEventProps | OBJ_TYPE - must be PROPERTY; OBJ_ID - Property ID; OBJ_VALUE - Property Value
Mapping ID1
Event Type | Meaning | Normalized Table | Column in Normalized Table
Pipeline Shipment | Pipeline | ATORIONEventPipelines | PIPE_ID
Unit Parameter | Unit | ATORIONEventResources | RESOURCE_ID
Unit Mode | Unit | ATORIONEventResources | RESOURCE_ID
Pump | Unit | ATORIONEventResources | RESOURCE_ID
Furnace Bank Run | Unit | ATORIONEventResources | RESOURCE_ID
Blend | Destination Tank | ATORIONEventTanks | TANK_ID - tank id; TYPE - must be DESTINATION
Material Service | Material Service Tank | ATORIONEventTanks | TANK_ID - tank id; TYPE - must be DESTINATION
Crude Receipt | ID1 is a comma-separated destination list | ATORIONEventTanks | TANK_ID - tank id; TYPE - must be DESTINATION
Pipeline Crude Receipt | ID1 is a comma-separated destination list | ATORIONEventTanks | TANK_ID - tank id; TYPE - must be DESTINATION
Crude Transfer | ID1 is a comma-separated destination list | ATORIONEventTanks | TANK_ID - tank id; TYPE - must be DESTINATION
Product Transfer | ID1 is a comma-separated destination list | ATORIONEventTanks | TANK_ID - tank id; TYPE - must be DESTINATION
Crude Run | ID1 is a comma-separated source list | ATORIONEventTanks | TANK_ID - tank id; TYPE - must be SOURCE; FRAC - value
Furnace Bank Run | ID1 is a comma-separated source list | ATORIONEventTanks | TANK_ID - tank id; TYPE - must be SOURCE; FRAC - value
Product Shipment | ID1 is a comma-separated source list | ATORIONEventTanks | TANK_ID - tank id; TYPE - must be SOURCE; FRAC - value
Product Receipt | ID1 is a comma-separated destination list | ATORIONEventTanks | TANK_ID - tank id; TYPE - must be DESTINATION
Tank Property Change | Destination | ATORIONEventTanks | TANK_ID - tank id; TYPE - must be DESTINATION
Mapping ID2
Event Type | Meaning | Normalized Table | Column in Normalized Table
Crude Receipt | Pipeline | ATORIONEventPipelines | PIPE_ID
Pipeline Crude Receipt | Pipeline | ATORIONEventPipelines | PIPE_ID
Product Shipment | Pipeline | ATORIONEventPipelines | PIPE_ID
Crude Run | Unit | ATORIONEventResources | RESOURCE_ID
Blend | Blender | ATORIONEventResources | RESOURCE_ID
Crude Transfer | ID2 is a comma-separated source list | ATORIONEventTanks | TANK_ID - tank id; TYPE - must be SOURCE; FRAC - value
Product Transfer | ID2 is a comma-separated source list | ATORIONEventTanks | TANK_ID - tank id; TYPE - must be SOURCE; FRAC - value
Pipeline Shipment | ID2 is a comma-separated source list | ATORIONEventTanks | TANK_ID - tank id; TYPE - must be SOURCE; FRAC - value
Mapping ID3
Event Type | Meaning | Normalized Table | Column in Normalized Table
Blend | Product | ATORIONEvents | PRODCODE
Material Service | Material | ATORIONEventResources | RESOURCE_ID
Pipeline Shipment | Secondary Pipeline | ATORIONEventPipelines | PIPE_ID
Product Receipt | Transportation Mode | ATORIONEventResources | RESOURCE_ID
Product Transfer | Transportation Mode | ATORIONEventResources | RESOURCE_ID
Product Shipment | Transportation Mode | ATORIONEventResources | RESOURCE_ID
Crude Receipt | Transportation Mode | ATORIONEventResources | RESOURCE_ID
Pipeline Crude Receipt | Transportation Mode | ATORIONEventResources | RESOURCE_ID
Crude Transfer | Transportation Mode | ATORIONEventResources | RESOURCE_ID
Pump | Transportation Mode | ATORIONEventResources | RESOURCE_ID
Keywords: data tables
Event
References: None |
Problem Statement: The font size for the Event Screen cannot be changed from the User Settings tab. How do I set the font size for the Event Screen? | Solution: Since version 7.1, the Event Interface has been improved in Aspen Petroleum Scheduler. These options are now set up as follows:
Keywords: None
References: None |
Problem Statement: Users may see the warning message "ADAPTER FAILED TO INITIALIZE" when trying to import Events into Orion (Integration| Import Events) even if they are NOT using the adapters.
From Menu bar select INTEGRATION/IMPORT/EVENTS
After a while, a message "Adapter failed to initialize" appears:
If you click OK, everything proceeds without problems, as follows. | Solution: Since V7.1 a new keyword has been added to use adapters. To avoid this warning message, verify/add the record called "USEADAPTER" in the CONFIG table, with Value_ = 'N'.
Value_ options:
1. N - The Integration dialog will launch without initializing the adapter; this allows users to use their own methods to write data to staging tables and to import data directly.
2. Y - (Default) The adapter will initialize when integration dialog boxes are launched.
The USEADAPTER CONFIG keyword must be set to "Y" in order for adapters and this field to be enabled
Keywords: -Orion Adapter
-Integration
-import Events
References: None |
Problem Statement: While Launching the Aspen Petroleum Scheduler in V7.1 or above, the message "Cannot check database version" or "Please update the database for Material Pools" appears. How do I resolve this? | Solution: Please note that changes and enhancements are made to Aspen Petroleum Scheduler (formerly called as Aspen Orion-XT) in each major version. Some of the new features require new table structure or some additional fields in the existing Tables.
So, it is absolutely necessary to migrate your model while moving to a newer version, using DBUpdate.exe.
The following error messages can appear when your migration from V2006.5 to V7.1 was not done properly.
Error no:1
Error no:2
The above warning message indicates that some of the new tables introduced in V7.1 have not been created properly.
Please use DBUPDATE.exe and perform the model migration; the error message will then disappear.
Refer to Solution 119374 for more details on the migration procedure.
Keywords: error
Material Pools
Warning
migration
upgrade
DBUPDATE
References: None |
Problem Statement: Gantt and Trend Event screens can be accessed directly from the new model tool bar in V7.1 | Solution: Gantt and Trend options now at the Menu Bar:
Selecting the "GANTT" option:
Selecting the "TREND" option:
The GANTT menu contains the following options:
- Reset Margins | Left | Right | Both
If you have used the sliders above the Gantt chart to change the left or right margin, selecting this option resets the left, right or both margins to their original positions.
- Horizontal Gridlines
Displays horizontal gridlines on the Gantt chart.
- Show Period Lines
Displays vertical lines that align to each period.
- Show Modified Events Only
Shows only events that have changed since the last save.
- Preferences
Displays the Gantt dialog box where you can set Gantt chart display options.
- Control Variable List
Displays the Control Variable List dialog box used to identify which control variables can be used to display events on an event screen.
- Show Date-Time in Days
Displays events in elapsed days versus calendar days.
The TREND menu contains the following options:
- Trend List
Displays the Trend dialog box, which defines which Trend charts are displayed on the current event screen.
- Trend Limits
Displays the Trend dialog box, which defines limits imposed on the Trend charts.
- Legend
Displays options that indicate where the legend in the main Trend chart will display:
Bottom, Top, Left, Right and Inside.
Keywords: - Gantt Menu
- Trend Menu
- Gantt Chart
- Trend Chart
References: None |
Problem Statement: New keywords have been added to the CONFIG table in Aspen Petroleum Scheduler since V7.1 | Solution: This is a new feature in Aspen Petroleum Scheduler called "In Memory Reporting" that allows the users to save data during the simulation to facilitate reporting after the simulation.
This option can be turned "ON" and "OFF" from the CONFIG table.
The following keywords have been added to the CONFIG table in version V7.1:
USE_SIMPLE_CACHE:
Set to "Y" to enable data caching for Unit, Stream, Product Tank, Crude Tank, Crude Pipeline, Product Pipeline, Daily Crude Runs, Furnace Bank, Feedstock or Blend Exclude Heel objects
USE_SIMPLE_MOVEMENT_CACHE:
Set to "Y" to enable movement data caching
Keywords: CONFIG table
Memory Cache
In Memory Reporting
References: None |
Problem Statement: In the Roll Forward dialog, on clicking the Load Inventory button, we can run a custom executable if set up as command "LoadInventory", and this now sees the date chosen for the roll forward as a parameter. It's shown as a string in US format (mm/dd/yy) even if the Orion model date format is European and the Windows date format is German | Solution: This is intended. Orion uses the System Locale.
What is the System Locale?
The system locale (sometimes referred to as the system default locale), determines which ANSI, OEM and MAC codepages and associated bitmap font files are used as defaults for the system. These codepages and fonts enable non-Unicode applications to run as they would on a system localized to the language of the system locale.
The system locale is implemented in Windows 95/98, Windows NT 4.x, Windows 2000, and Windows XP. (Under Windows 95/98, the system locale is fixed based on the language version and cannot be changed. Under Windows NT 4.x, the system locale is pre-selected by the language version, but can later be modified in the Regional Settings Control Panel.)
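A custom LoadInventory executable should therefore parse the parameter with a fixed US format string rather than any locale-dependent routine. A minimal Python sketch, assuming the parameter arrives exactly as mm/dd/yy:

```python
from datetime import datetime

def parse_rollforward_date(arg):
    """The roll-forward date parameter is passed in US format (mm/dd/yy)
    regardless of the Windows regional settings, so parse it with a fixed
    format string instead of the locale default."""
    return datetime.strptime(arg, "%m/%d/%y")

d = parse_rollforward_date("03/05/09")  # 5 March 2009, not 3 May
```

With a German locale, a naive locale-aware parse would read "03/05/09" as 3 May; the fixed format avoids that ambiguity.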
Keywords: - Roll Forward process
- System Locale
References: None |
Problem Statement: Several new automation objects have been added for Aspen Petroleum Scheduler | Solution: The following Automation Methods have been included in V7.1:
Review the Orion Help file documentation for further information about each one.
Keywords: -Automation Methods
References: None |
Problem Statement: Whenever a Basic Phase is interrupted, by clicking the red X icon:
a User Confirmation dialog is displayed, requiring password confirmation and a reason (this information is stored to the audit trail): | Solution: CQ00339367, first released in Cumulative Patch 2006.5.4, introduces a flag to make the Comment field optional. Here is an example of enabling the flag, using the flags.m2r_cfg file:
Once the flag has been added on the server, run codify_all.cmd and then as workstations restart they will inherit this setting in their environment.
IMPORTANT NOTE: When a Basic Phase is interrupted, the Username and Password are still required, but the Comment field itself can be left blank:
Without setting this flag, trying to click OK with only the user and password results in a warning beep and the dialog stays active.
Keywords: INTERRUPT_EMPTY_COMMENT_ALLOWED
interrupt empty comment allowed
References: None |
Problem Statement: The tag counter fields IO_#_BAD_TAGS, IO_#_GOOD_TAGS, IO_#_SCAN_OFF_TAGS and IO_#_SUSPECT_TAGS were added to all Cim-IO Transfer records. This article describes how these counters interact with the IO_#TAGS counter. There might be the misconception that the sum total of the new counters is equal to the IO_#TAGS value. | Solution: The counter fields IO_#_BAD_TAGS, IO_#_GOOD_TAGS and IO_#_SUSPECT_TAGS relate to the quality levels that the value in the destination tags can have (respectively the levels Bad, Good and Suspect).
When, for a tag in the transfer list, IO_DATA_PROCESSING is set to OFF, it will be counted in IO_#_SCAN_OFF_TAGS. But as the corresponding destination tag will have a quality level of Bad, the occurrence is also counted in IO_#_BAD_TAGS.
The sum of the IO_#_BAD_TAGS, IO_#_GOOD_TAGS and IO_#_SUSPECT_TAGS values is equal to the IO_#TAGS value.
If you want to make a distinction between switched-off tags and tags that receive a Bad status from the device, you can compare the IO_#_BAD_TAGS and IO_#_SCAN_OFF_TAGS counters. If their values are equal, then there are only switched-off tags and no bad values were read from the device for the transfer record.
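The counter arithmetic described above can be expressed directly: GOOD + SUSPECT + BAD must equal #TAGS, and since scan-off tags are counted inside the bad total, the number of values the device itself reported as bad is BAD minus SCAN_OFF. A small illustrative sketch:

```python
def device_bad_tags(n_bad, n_scan_off):
    """Scan-off tags are counted inside IO_#_BAD_TAGS, so the number of
    tags the device itself reported as bad is the difference."""
    return n_bad - n_scan_off

def counters_consistent(n_tags, n_good, n_suspect, n_bad):
    """IO_#_GOOD_TAGS + IO_#_SUSPECT_TAGS + IO_#_BAD_TAGS equals IO_#TAGS."""
    return n_good + n_suspect + n_bad == n_tags

# Example: 10 tags, 7 good, 1 suspect, 2 bad, of which 2 are switched off,
# so no values were actually read as bad from the device.
ok = counters_consistent(10, 7, 1, 2)   # True
from_device = device_bad_tags(2, 2)     # 0
```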
Keywords: Getrecord
Unsolicited
References: None |
Problem Statement: How do I add more significant figures in Physical Properties and Results? | Solution: Go to File / Options / Unit of Measures / Variable Formats. Here you can change (add or subtract) the number of significant figures.
Keywords: significant figures, Preferences, Unit of Measures
References: None |
Problem Statement: Is it possible to use the simulation workbook with Aspen Plus in equation oriented run mode? | Solution: It is possible, with some limitations. Note you cannot access the EO variables directly; you have to access the results using the SM variables. For spec groups and any constant variable you may want to modify, you need to specify the specification value on the EO Input sheet, and you can access the value of the specification (it is called IVVALUE; you can find the EO Input in the variable browser in the simulation workbook organizer). If you were accessing the SM input value directly, then Aspen Plus would do an SM run before the EO run. Note also that you must set the run mode to Equation Oriented in the Aspen Plus simulation (e.g. before saving your file and re-opening it through the workbook), as the run mode cannot be modified from the simulation workbook.
Keywords:
References: None |
Problem Statement: Installing a new Intel FORTRAN version compiler and trying to recompile a subroutine that was created in a previous version compiler, such as Compaq Visual Fortran. When one tries to compile this subroutine using ASPCOMP, an error message related to the dms_plex.cmn file appears. | Solution: Some of the statements that were allowed in the Compaq compiler are not permitted using the Intel compiler.
The code that generates this message is:
include 'dms_plex.cmn'
REAL*8 B(1)
EQUIVALENCE (B(1), IB(1))
The following code statement is not as recommended and is no longer supported:
include 'dms_plex.cmn'
It has to be changed as follows:
#include 'dms_plex.cmn'
With this change the entire code can be recompiled as in previous versions.
See the Aspen Plus User Models Manual for further references. Also, for more information about the FORTRAN compiler configurations supported by Aspen Plus, see Solution 131011.
Keywords: FORTRAN, Intel compiler, ASPCOMP, Aspen PLEX.
References: None |
Problem Statement: Is it possible to simulate an ARBOR furnace with Aspen Fired Heater? | Solution: This radiant-tube coil configuration is not available in the "tube row layouts" field.
This is an API radiant-tube coil configuration and for more information, please see page 19 of API 560 4th edition.
However, it is possible to do an approximation of that geometry in Aspen Fired Heater. Please see the attached file. The following image shows the geometric inputs needed
The next image shows the results in the firebox.
Keywords: ARBOR furnace. Fired Heater
References: None |
Problem Statement: Maximum Tube per Row increased from 99 to 200 in Aspen Air Cooled Exchanger V7.0 | Solution: The previous program versions limited the number of tubes per row to 99. We have expanded this limit to now allow up to 200 tubes per row. We have also incorporated some additional error checking into the program to catch geometry combinations which the program may not be able to accommodate.
Keywords: Maximum Tube per Row, limit
References: None |
Problem Statement: The Aspen Upload Configuration Tool allows transferring SLM license logs automatically to Aspentech. It has been reported that the server name sometimes does not appear under the "Schedule" tab: | Solution: The Upload Tools collects the "Server Name" information from the "active" license file on the SLM license server.
Depending on the version of your SLM server application, that is the file:
<Install Drive>\Program Files\Common Files\SafeNet Sentinel\Sentinel RMS License Manager\WinNT\LSERVRC on SLM license servers v.8.X
or
<Install Drive>\Program Files\Rainbow Technologies\SentinelLM 7.X.X.X Server\English\LSERVRC on SLM license servers v.7.X
The license files are ASCII files - they can be opened with any ASCII text editor (e.g. notepad.exe.) They can sometimes be encrypted but the first line should always be "human-readable" and look something like :
#Date Generated:3/30/2009 6:00:23 AM,Filename:lservrc_010_2a6e9.SLF,Version:v5.3.5,Encrypted:Yes,Key Serial Number:216E9,Lock Mask:0x0,Transaction Number:8ea8c457-5d7c-41b8-a338-cdd2b498cc04,System Name:LSYS14411.5
The last value on the line is the System Name. This is what the Upload Tools will extract and report under "Server Name".
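The extraction can be sketched as a small parser. This Python sketch assumes only the comma-separated "Key:Value" layout of the human-readable first line shown above:

```python
def system_name_from_lservrc(first_line):
    """Extract the System Name from the human-readable first line of an
    LSERVRC license file; fields are comma-separated 'Key:Value' pairs."""
    for field in first_line.strip().lstrip("#").split(","):
        key, _, value = field.partition(":")
        if key.strip() == "System Name":
            return value.strip()
    return None  # no System Name: the Upload Tool cannot report a server name

line = ("#Date Generated:3/30/2009 6:00:23 AM,Filename:lservrc_010_2a6e9.SLF,"
        "Version:v5.3.5,Encrypted:Yes,Key Serial Number:216E9,Lock Mask:0x0,"
        "Transaction Number:8ea8c457-5d7c-41b8-a338-cdd2b498cc04,"
        "System Name:LSYS14411.5")
name = system_name_from_lservrc(line)  # "LSYS14411.5"
```

When the first line was rewritten by WLMAdmin (the failure case described below), the parser returns None, which matches the empty "Server Name" symptom.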
If you used the monitoring utility "WLMAdmin" to install the licenses on the SLM server then the above mentioned line will no longer exist in the LSERVRC license file on the SLM server.
To remedy it, you should re-install the original license file you had received from Aspentech on your SLM server.
Short procedure
1. On your SLM Server, open the folder containing the "active" license file (depending on the version)
<Install Drive>\Program Files\Common Files\SafeNet Sentinel\Sentinel RMS License Manager\WinNT\lservrc on SLM license servers v.8.X.\
or
<Install Drive>\Program Files\Rainbow Technologies\SentinelLM 7.X.X.X Server\English\lservrc on SLM license servers v.7.X\
2. Stop the SLM server by launching "loadls.exe" and pressing the "Remove" button.
3. Rename the existing file "LSERVRC" as "LSERVRC.OLD".
4. Copy the original license file LSERVRC_XXX_YYYYY.SLF in that folder and rename it as "LSERVRC" (No extension!!)
5. Restart the SLM server by launching "loadls.exe" and pressing the "Add" button.
6. Launch the "WLMADMIN" tool to confirm that your license server appears in the left frame and licenses are available.
Of course, make sure not to use WLMADMIN's "Add Feature" function anymore!
Keywords: System Name, Server Name, WlmAdmin, Upload Tools
References: None |
Problem Statement: I can't view my logs from the Aspen Licensing Center. I'm getting an error message that says "License Server status data not available." | Solution: If you are getting this message it is due to the fact that you don't have Peak Demand Tokens. For customers with non-Peak Demand Tokens the Aspen SLM Reporter should be used. This tool can be found under the Upgrades and Licenses tab on the Aspen Support Website. Customers can launch the Aspen SLM Reporter to view their logs.
Keywords: ALC, Aspen Licensing Center, peak demand tokens, SLM Reporter, logs
References: None |
Problem Statement: When attempting to start InfoPlus.21, the start-up stalls on the task loaddb, with no messages written to the out or err log files. If a DOS window is opened and the task is attempted manually, the following error message will appear:
Tsk_db_clock failed: ip.21 core quota=0
Error -2100 opening connection to History.
If InfoPlus.21 is already running, and a shutdown is attempted, then these errors may occur:
error 1909 reference account currently logged out and may not be logged on to. error 2186 could not stop the IP21 service for service is not responding to control function.
(These particular messages may also occur if the NT password of the account under which the shutdown is attempted has expired). | Solution: Follow these steps to regain control of your InfoPlus.21 Manager:
1. Go to Start | Settings | Control Panel | Administrative Tools | Services.
2. Scroll down to AspenTech InfoPlus.21 Task Service and double-click on the task.
3. Select the Log On button and then view the Log on as entry in the dialog box.
4. Change the account information for the InfoPlus.21 Task Service to an account that is a member of both the Administrator and Group 200 groups.
5. Verify that the first six Tasks from the InfoPlus.21 Manager are in this order and not marked as Skip during startup:
TSK_C21_WIN_INIT
TSK_H21_INIT
TSK_H21_ARCCK
TSK_H21_MNTTAB
TSK_H21_PRIME
TSK_DBCLOCK
6. From InfoPlus.21 Manager, go to Actions: Modify Task Service: Insert a password, confirm the password, and press OK.
7. Stop InfoPlus.21.
8. Reboot the PC.
9. Start InfoPlus.21.
ADDITIONAL NOTE: These error messages can also occur when using Windows Terminal Server with Aspentech software. One message in particular is caused directly by using Windows Terminal Server when attempting to shut down InfoPlus.21: "TSK_SAVE failed to exit in time".
This error message can also occur upon starting up InfoPlus.21 after an upgrade.
Windows Terminal Server is not a supported product for remotely administering InfoPlus.21 servers. See Solution 108725 for additional information.
Solutions 104427 and 105479 offer more related information.
Keywords: -2100
1909
2186
InfoPlus.21
loaddb
References: None |
Problem Statement: How do I filter "Existing Events" in the Event Import dialog box? | Solution: To filter "Existing Events", there are some options available under the "Unmatched Events" tab of the Event Import dialog box, as shown:
Filter options are described below. Application IDs refer to IDs that identify the application from which event data is being imported. This value is found in the ATORIONEvents table.
· Current App ID: Select this option to display events that have an APS application ID assigned but are not currently matched to an external system event.
· Missing App ID: Select this option to display events that do not have application IDs assigned.
· Invalid App ID: Select this option to display events that have invalid application IDs, meaning there are no corresponding events. This may occur if an application was removed.
· Other Valid App ID: Select this option to display events that have application IDs other than Orion that have not yet been matched to APS events.
Keywords: -Event Import
References: None |
Problem Statement: Aspen InfoPlus.21 now supports the generation and storage of best fit data as part of Aspen InfoPlus.21 input processing.
The automatic generation and storage of best fit data is enabled for an Aspen InfoPlus.21 tag record when archiving is configured for the corresponding best fit history repeat area.
A backfill utility can be used to generate stored best fit data for an earlier time span; that is, for the time span before the automatic generation of best fit data was enabled. If you intend to generate best fit data for a time before this was activated, you will need to modify the "oldest allowed time" for the tag record. | Solution: A backfill utility can be used to generate stored best fit data for an earlier time span; that is, for the time span before the automatic generation of best fit data was enabled (for further information, see KB 134148). If you intend to generate best fit data for a time before this was activated, you will need to modify the "oldest allowed time" for the tag record.
Aspen InfoPlus.21 has an "oldest allowed time" for each history repeat area. For each history repeat area, any existing history older than the "oldest allowed time" is ignored, and attempts to write history under this condition fail. The "oldest allowed time" is initially set to the time at which the number of occurrences in memory is changed to a non-0 value, in this case when you activate best fit storage for a tag, this becomes the "oldest allowed time" for inserting values.
A utility program called "xoldestok.exe" allows you to change this "oldest allowed time" to move this time to a time of your choosing. There is also an API routine, HISOLDESTOK, that can be used in a C program to accomplish this change.
To set the start date for collecting aggregate data for the tag using the XOLDESTOK program see the instructions included in KB 103040.
The Back Fill Utility can be invoked using Windows Explorer. The Back Fill utility,
BackFillAggr.exe, can be found in the C:\Program Files\AspenTech\InfoPlus21\db21\code or C:\Program Files(x86)\AspenTech\InfoPlus21\db21\code folder.
The backfill utility lets you specify:
A start time for which stored best fit data should be generated.
-and-
A list of Aspen InfoPlus.21 record names for which you want to back-populate stored best fit data.
Keywords: IP.21, back fill, xoldestok
References: None |
Problem Statement: The user receives the following message while executing the OPT command in an LP Model: "Could not open CPLEX environment (ERR msg: CPLEX Error 32201: ilm: CPLEX license)." | Solution: There are 3 different types of CPLEX codes (OPT, MIP and BARR) for which a MIMI system can be licensed. In UNIX, the CPLEX license file is located in the "/MIMI/bin" directory and labeled as:
Install.cplex --> Contains OPT
Install.cplex.mip --> Contains OPT and MIP
Install.cplex.mip.barr --> Contains OPT, MIP and BARR
The user will need to edit "install.cplex", "install.cplex.mip", or "install.cplex.mip.barr" and replace the existing code with the following one, which is valid through January 1st, 2010:
6C-F9-1E-C5-04-13-10-23-3B-A2-87-57-7D-DC
Keywords: CPLEX
CPLEX Error 32201: ilm: CPLEX license
Install.cplex
Could not open CPLEX environment
Install.cplex.mip
Install.cplex.mip.barr
References: None |
Problem Statement: On March 3rd, 2007, the CPLEX license codes embedded within MIMI expired. What does this mean to our customers?
If you are currently running CPLEX, the license is perpetual and will continue to run. However, if you attempt to install CPLEX after this date you will experience a licensing error like the one below.
Error 32104: Licensing problem: License code has expired. | Solution: Under NT, the following are the instructions on how to re-license CPLEX.
1. Open a DOS prompt and go mimi\bin sub-directory.
2. c:\mimi\bin> instlcpx 6C-F9-1E-C5-04-13-10-23-3B-A2-87-57-7D-DC c:\mimi
(Note: of course the path c:\mimi\ in this command indicates the directory where you installed MIMI, so replace this string c:\mimi\ with the actual directory where you installed MIMI if different)
3. Run MIMI again and test the CPLEX optimizer to see if it works now
Notes:
- This CPLEX licensing code can be used to re-license CPLEX before 1/1/2010.
- If the instlcpx command wraps the text when you copy it to your DOS prompt, it will not work. In that case you will need to type the statement in by hand in the DOS window.
- Also, in case of problems, type the command above without the ending backslash \
If you are running UNIX, you can have a look at solution 126300 - "CPLEX license code embedded within MIMI is expired in Unix"
Keywords: CPLEX
Error 32104: Licensing problem: License code has expired.
INSTLCPX
References: None |
Problem Statement: New SequeLink CONNECT statement and data source name configuration | Solution: New Connect Statement
CONNECT data_source_name BY id/pwd;
or
CONNECT data_source_name BY id/encrypt(id_code);
Where:
id = Network login user ID
pwd = Network login user password
Data Source Name Configuration:
a.) Configure the CAT utility. This will test the connection between database and SequeLink.
b.) Go to the Control Panel on your PC and choose ODBC Data Sources.
c.) Click on the User DSN tab and choose Add.
d.) From the list of items, choose "INTERSOLV 3.10 32-bit SequeLink 3.10.03.00". (Note, this will only appear after the SequeLink Client has been installed on the machine.)
e.) Click Finish.
f.) A new dialog will appear with some choices:
Data Source Name: Fill this in with anything appropriate. (THIS WILL BE USED LATER ON IN THE CONNECT STATEMENT)
Description: Again, fill this in with anything appropriate.
SequeLink Data Source: A pull-down menu. There should be only one choice here, based on information you included when configuring the CAT utility.
g.) At this point, correct your CONNECT statement with the data source name from above.
h.) Test the CONNECT between MIMI and the database.
Keywords: None
References: None |
Problem Statement: After installing Aspen SCM, it won't open. The error message reads: "The libodbcinst.a file is missing." | Solution: This error indicates that a portion of the SequeLink Client installation is missing. The new version of SequeLink (4.5) requires that BOTH the client and server be installed. The SequeLink Client should be installed from your SequeLink CD on to the same machine as MIMI.
Keywords: None
References: None |
Problem Statement: The first post-installation step is to create the Event Management database by clicking the Check and Add Missing button on the Check/Update Database Wizard form under Database Management. If the database connection has not been configured correctly, or the database does not exist, the following exception type will be reported:
HTTP Status 500 - Internal Server Error
Exception: org.apache.jasper.JasperException
root cause: java.lang.NullPointerException | Solution: Event Management uses the following connection strings for Oracle and SQL Server, respectively:
jdbc:oracle:thin:@<host>:<port>:<SID> (Oracle)
jdbc:odbc:Driver={SQL Server};server=<server>;database=<database>;port=<port> (SQL Server)
where server and host = name of the server hosting the database, SID = Oracle instance ID, database = name of the SQL Server database, and port = TCP/IP communication port, normally 1521 for Oracle and 1433 for SQL Server.
The connection string parameters are specified at install time on the Specify Database Information dialog that is part of the Event Management Configuration Details. This data will be used by the software after the database has been created manually.
For Oracle, the host is specified as the Database Server Name and the SID is specified as the Database Name. The Database Username must be created manually as part of the Oracle security specifications. This user should be granted the Connect and Resource roles, and the Create Table, Create Trigger, Create Index, and Execute Any Procedure system privileges. This user also needs to be granted Execute privilege on the DBMS_PIPE package that is part of the SYS Schema. The Check and Add Missing button will create the Event Management tables in a schema having the same name as this user.
For SQL Server, both the Database Username and Database must be created manually. The Database Username should be made the Owner of the database with Datareader and Datawriter privilege. The Check and Add Missing button will create the Event Management tables within the specified database.
The connection data can be changed if a mistake was made on the Specify Database Information installation procedure dialog by changing the following parameters in servers.properties located in the Event Management\Config folder:
common.jdbcURL= <connection string>
common.jdbcUserid = <database username>
common.jdbcPassword = <database username password>
common.jdbcClass = oracle.jdbc.driver.OracleDriver (for Oracle)
common.jdbcClass = sun.jdbc.odbc.JdbcOdbcDriver (for SQL Server)
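As an illustration, a hypothetical Oracle configuration (server dbhost, SID ORCL, user eventmgr: all placeholder values, not defaults) would give a servers.properties fragment like:

```
common.jdbcURL = jdbc:oracle:thin:@dbhost:1521:ORCL
common.jdbcUserid = eventmgr
common.jdbcPassword = secret
common.jdbcClass = oracle.jdbc.driver.OracleDriver
```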
The "Aspen Event Management Server" NT Service must be restarted to activate changes in servers.properties.
Keywords: post installation failure, Event Management, database connection strings
References: None |
Problem Statement: Can a domain user be designated as the Role-Based Visualization (RBV) Administrator (portal_admin login) instead of the default ccadmin account? | Solution: Yes. IIS performs user authentication for Role-Based Visualization. Any domain account trusted by the IIS server connected to the portal can be used. Once IIS authenticates the user, it will pass the name of the user to RBV with the request to access the portal_admin application. That user will be granted access if it has been designated as the Administrator of the Authenticated Users Role, which is the parent of all roles. A user can be designated as an Administrator by following these steps:
1. After logging into the portal_admin application via http://<server>/portal_admin, go to the Roles tab and hit the up arrow until only the Authenticated Users role is shown. Authenticated Users is the parent of all roles.
2. Press the Edit button and click the Administrators link.
3. You can either press "Edit List" under Groups to add a group of Administrators (NT Group added by Synch Tool) or "Edit List" under Users to add a specific user.
4. Press the Find button to populate the available user list.
5. Use the arrows between the Available and Selected columns to select users who will be Administrators.
Keywords: ccadmin, portal_admin, login, RBV Administrator, Operations Manager Portal, portal admin
References: None |
Problem Statement: Pressure Safety Valve Sizing in Aspen HYSYS | Solution: Automate relief valve sizing and documentation with Aspen HYSYS V8.3 and V8.4. Calculate orifice sizes and associated line sizing for single and mixed phase systems using accepted industry standards such as American Petroleum Institute (API) 520 and 521. Automate relief load calculations to increase the rigor of PSV analysis, featuring fire analysis per API RP 521, all PRD sizing methods in API RP 520 8th edition (including the Direct Integration HEM method), and much more, all within Aspen HYSYS.
This is a sample flowsheet to demonstrate the Relief valve sizing and documentation functionality added in HYSYS V8.3 and V8.4. This package will help you get a quick start and preview of Relief Valve Sizing in HYSYS, creating detailed datasheet and design documentation for relief studies.
The example shows automated capabilities in HYSYS V8.3 and HYSYS V8.4 including: orifice area calculation and selection for single and mixed phase systems; exchanger tube failure, fire, control valve failure, thermal expansion relief load calculation; line/pipe sizing calculation for single and mixed phases; multiple valve analysis; noise and reaction force analysis; automated PSV design study reports showing all the overpressure scenarios analyzed, mechanical datasheets, relief load reports, revision control, process datasheets and other well-arranged automated reports needed by regulatory agencies and internally for proper relief valve analysis and maintenance.
Keywords: PSV, Sizing, Flowsheet
References: None |
Problem Statement: HYSYS Stream Reporter (HSR) Version 1.6 | Solution: HYSYS Stream Reporter (HSR) is an Excel spreadsheet utility that allows material stream conditions, properties and compositions to be easily reported onto a spreadsheet; it also enables streams in different cases to be conveniently compared.
HSR can report properties from the following phases: Overall, Vapour, Light and Heavy (Aqueous) Liquid, Combined Liquid and Solid. It also allows stream user variables and property correlations to be reported. It is also possible to create formulae in the output table. The user can save sets of properties or use one of the pre-built property sets. Streams from different HYSYS cases can be reported in the same stream table. Once a stream table has been generated it can be updated by pressing a single button. Stream tables can be moved to another Excel workbook whilst maintaining the ability to be updated.
HSR takes the form of an Excel spreadsheet file with embedded Visual Basic for Applications (VBA) code that demonstrates how HYSYS can be accessed programmatically. The VBA source code is freely accessible and users are encouraged to learn from it and adapt it to their own needs.
For full details please see the 'HSR User Guide.doc' document in the attached zip file.
For HSR version 1.6, ten versions of the HSR spreadsheets are provided:
To use HSR with HYSYS V7.3 use "HSR 1.6 (for HYSYS V7.3).xls"
To use HSR with HYSYS V7.2 use "HSR 1.6 (for HYSYS V7.2).xls"
To use HSR with HYSYS V7.1 use "HSR 1.6 (for HYSYS V7.1).xls"
To use HSR with HYSYS V7.0 use "HSR 1.6 (for HYSYS V7.0).xls"
To use HSR with HYSYS 2006.5 use "HSR 1.6 (for HYSYS 2006.5).xls"
To use HSR with HYSYS 2006 use "HSR 1.6 (for HYSYS 2006).xls"
To use HSR with HYSYS 2004.2 use "HSR 1.6 (for HYSYS 2004.2).xls"
To use HSR with HYSYS 2004 use "HSR 1.6 (for HYSYS 2004).xls"
To use HSR with HYSYS 3.4 use "HSR 1.6 (for HYSYS 3.4).xls"
To use HSR with HYSYS 3.2 use "HSR 1.6 (for HYSYS 3.2).xls"
To use with other HYSYS versions please read section 8 of the User Guide document.
Note
This Automation application has been created by AspenTech as an example of what can be achieved through the object architecture of HYSYS. This application is provided for academic purposes only and as such is not subject to the quality and support procedures of officially released AspenTech products. Users are strongly encouraged to check performance and results carefully and, by downloading and using, agree to assume all risk related to the use of this example. We invite any feedback through the normal support channel at [email protected].
With Excel 2003 and higher, HSR 1.6 can handle a maximum of 6 output worksheets.
Keywords: HYSYS Stream Reporter, HSR, HYSIM Stream Summary
References: None |
Problem Statement: Aspen Process Explorer(PE) and aspenONE Process Explorer(A1PE) provide a series of control charts or analytical plots to view different statistical values for tags. These Statistical Process Control(SPC) Charts/Analytical Plots can be populated with data from two types of Aspen InfoPlus.21 records:
An 'Adhoc' record which is nothing more than a standard Analog record that is recording and storing historical values. With this kind of record, PE will perform 'on-the-fly' SPC calculations for displaying on the charts.
A 'Q' record which is built specifically for use with the Aspen Process Explorer SPC Charts and aspenONE Process Explorer analytical plots. With this kind of record, most of the SPC calculations are performed by an external task called TSK_CIMQ. The results are stored inside the 'Q' record, and PE or A1PE will mostly just plot data directly from these records.
This article discusses the methods available to a user to build a 'Q' record.
Please note that AspenTech offers a 2-day Training Class covering all the SPC Charts, how to build Q records, etc. | Solution: There are 3 methods of building a 'Q' record:
1. While viewing an 'Adhoc' record on an SPC Chart, a user can right-click in the plot area and select 'Convert Adhoc to Q'. This will bring up a small GUI with a few boxes for user input. This is a limited option method. When used correctly it will build working 'Q' records, but sometimes a user may still need to use the InfoPlus.21 Administrator to modify some of the fields that are hard-coded with this method. This method is only available with Aspen Process Explorer.
2. A much better wizard is available that provides more flexibility on just about all of the fields in a 'Q' record. This is the currently recommended way to build a 'Q' record. It has built-in error checking which option 1 does not, and a good working 'Q' record is guaranteed virtually every time - assuming, of course, that the user has a good understanding of the parameters they are setting. This is available via: Start | Programs | AspenTech | Aspen Manufacturing Suite | InfoPlus.21 | Q Client Config Tool.
3. The one way to guarantee a 'Q' record is built exactly the way a user wants it is to use the Aspen InfoPlus.21 Administrator. It is strongly recommended that the user has access to the Aspen Q User's Guide when building records this way, because a thorough understanding of the Q record field names is needed. There are also certain rules that need to be followed, such as the order of editing the fields.
The following section discusses some of these rules:
Incoming data to a 'Q' record can come directly from Cim-IO or SQL into the 'Value' field, but most people will point to the history Repeat Area of an Analog tag via the 'Trend_Value_Field' and 'Trend_Time_Field', e.g. Trend_Value_Field = ATCAI 1 IP_TREND_VALUE (note the 1 for Occurrence number 1), Trend_Time_Field = ATCAI 1 IP_TREND_TIME
Subgroup Size is defined via 'Q_Std_Subgroup_Size'. However, if data is coming directly into the 'Value' field, then an entry into 'Q_Subgroup_Interval' is also needed.
Subgroups will be built on a Size basis or a Time basis (either, but NOT both). The 'Q_Trigger_Field' is configured if the size basis is used. Every time the trigger is fired, the program will check whether Q_Std_Subgroup_Size new historical values have been received to be able to build a new subgroup. Alternatively, the 'Schedule_Time' and 'Reschedule_Interval' can be used to define when subgroups 'may' be built. Note that with this method, 'Q_Allow_Partials' is used to define whether a subgroup is built if fewer than the subgroup-size values have been received within the previous Reschedule_Interval.
Control Limit calculations can be performed in 3 different ways:
A user may specify them.
The software may recalculate them periodically.
The software will only calculate them once.
Two important things need to be stressed:
If the user is specifying them manually, they must edit the specific 'limit' fields and then type YES into Q_Move_Limits.
If the user wants the software to calculate once or recalculate periodically, they must edit the specific 'calculation' fields and then toggle 'Q_Limit_Upd_Trigger' to the choice they want. In other words, any changes will not take place until that field is set (or reset).
Just as with any record that stores history, the fields that relate to Memory Storage need to be edited before adding Repository information. Therefore 'Number of Trend Vals', 'Q_Sbgrps_In_Memory' and 'Q_Limits_In_Memory' must be edited before defining the Repository name and turning the Repository "on".
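The size-basis and time-basis subgroup behaviour described in the rules above can be sketched generically. This is an illustration of the grouping logic only, not the actual TSK_CIMQ implementation, and the function names are invented for this sketch:

```python
# Illustration of the subgroup-building rules described above. This mimics
# the documented behaviour generically; it is NOT the actual TSK_CIMQ code.

def subgroups_by_size(values, size):
    """Size basis: emit only complete subgroups of `size` consecutive values;
    leftover values wait until enough new data arrives."""
    return [values[i:i + size] for i in range(0, len(values) - size + 1, size)]

def subgroups_by_time(samples, interval, size, allow_partials):
    """Time basis: bucket (time, value) samples into `interval`-wide windows.
    A partial window (fewer than `size` values) is kept only if allow_partials,
    mirroring the role of the Q_Allow_Partials field."""
    buckets = {}
    for t, v in samples:
        buckets.setdefault(int(t // interval), []).append(v)
    windows = [vals for _, vals in sorted(buckets.items())]
    return [w for w in windows if len(w) >= size or allow_partials]
```

With allow_partials disabled, a window holding fewer values than the subgroup size is simply discarded, which is the behaviour the rules above describe for the scheduled method.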
For more details, see the documentation or contact your local support group or register for training.
Keywords: None
References: None |
Problem Statement: I am using the "Import Crude Assays" function in APS (Integration | PIMS to APS | Import Crude Assays) to import assays from PIMS to APS.
However, I have made some mistakes and I am not able to import assays anymore as I encounter the message "An item with the same key has already been added"
I want to clear all the assays imported and re-import the assays. How can this be done?
| Solution: 1. Open the Microsoft Access database of the model and check all assay tables imported from PIMS, i.e. PIMS_ASSAY_XREFE, PIMS_CRUDE_XREF, PIMS_ORION_KPI_MAP, PIMS_PROP_XREF, PIMS_SM_XREF, PIMS_STRM_XREF. Make sure there are no duplicate rows in these tables.
If there is any duplicate row in a table, right-click on the row and select "Delete Record" to remove it.
2. When importing assays from PIMS to APS, data for the ORION_MGR_ASSAY_IMP*** tables are imported all at once, i.e. ORION_MGR_ASSAY_IMP_MASTER, ORION_MGR_ASSAY_IMP_UNITMODE, ORION_MGR_ASSAY_IMPORT_CRUDE, ORION_MGR_ASSAY_IMPORT_CUT, ORION_MGR_ASSAY_IMPORT_PROP. To clear imported assays, delete all records in these tables, then save the MS Access database file.
3. Now, you can go back to APS and import assays from Integration | Pims to APS | Import Crude Assays. You will not see the error message again.
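The deletion in step 2 can also be done with queries rather than removing rows by hand. A sketch using the table names listed in step 2 (standard SQL, which Access accepts; close APS and back up the database file before running):

```sql
-- Clears previously imported assay data so a fresh import can run.
DELETE FROM ORION_MGR_ASSAY_IMP_MASTER;
DELETE FROM ORION_MGR_ASSAY_IMP_UNITMODE;
DELETE FROM ORION_MGR_ASSAY_IMPORT_CRUDE;
DELETE FROM ORION_MGR_ASSAY_IMPORT_CUT;
DELETE FROM ORION_MGR_ASSAY_IMPORT_PROP;
```

Note that the Access query designer runs one statement per query, so these DELETEs need to be executed one at a time.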
Keywords: Integration
Import Crude Assays
Pims to APS
ORION_MGR_ASSAY_IMP table
An item with the same key has already been added
References: None |
Problem Statement: If the License Server is a Virtual Machine (VM) and I change the IP Address or Host Name, will I need a new license? | Solution: A license created for a Virtual Machine (VM) is normally locked to the VM's IP Address and Host Name. If either the IP Address or Host Name changes, the license will no longer function and you will need to order a new license. Best practice is not to make either change. In the event you are forced to, you should expect some down time. To minimize the downtime, we recommend you contact AspenTech Support to assist you with the process outlined below:
1. Make the change to the IP Address or Host Name
2. Obtain the new locking information for the virtual machine. This can be done by downloading the SLM Lock Info Tool. After you download the tool, run it on the Virtual Machine where the license will be installed and click Copy to clipboard. Save the information in a text file.
3. Submit a license key by following the steps below:
· Visit our Support Site
· Click Upgrades, Media & Licenses, located on the left hand side of the navigation pane
· Click Place a License Key Request and fill out the form.
Keywords: IP address, Host Name, License server, Virtual Machine
References: None |
Problem Statement: Can a quality gradient be specified across a swing cut? | Solution: When swing cut structure is specified in PIMS recursion structure, the assumption is that the swing cut moves up or down at the quality of the entire cut. There is no provision for specifying a quality gradient across the swing cut. This structure demonstrates that such a gradient can be specified if known, and the swing cut will swing with this quality gradient either up or down.
Keywords: swing, cut, cuts, crude, still, tower, distillation, temperature, overlap, quality, atmospheric, vacuum, qualities
References: None |
Problem Statement: It is common that a PIMS model has more than one crude distillation unit, and the user would like to have the swing cuts behave in the same way in all units, i.e. if a swing cut swings up in one crude unit, the corresponding swing cut in another unit should also swing up. The method by which this synchronisation of swing cuts can be achieved is detailed here. | Solution: In order for the synchronisation structure to work, the swing cut structure must be built manually outside of the automated crude unit architecture. The structure is shown in table SSCS in the attached model. In this example, we are considering the swing cut between Medium Naphtha and Kerosene.
The first set of rows takes care of the material balance. The structure allows the swing cut to either combine with the Naphtha (swing up) or with the Kerosene (swing down) or both. The streams from the two crude units are kept segregated in this example.
The second set of rows takes care of the swing cut synchronisation. The first step is to create a recursed property which represents the total fraction of swing up based upon the two sets of streams. The property is SWU, and the dummy stream is SWG. The next step is to use this property to drive the fraction of swing up in each set of swing cuts (crude unit specific swing up and swing down vectors) by the use of equality row equations driving the amount of swing up to be the calculated fraction of the total swing quantity. Manual penalty structure is also included so as not to over constrain the solution.
Finally, the end points are reported to ensure that the synchronisation has worked.
For information, this structure was built by amending one of the PIMS sample models. The sample model was first changed by segregating the medium naphtha and kerosene streams from the two crude units, and then the synchronisation structure was built. The full list of tables modified is: SUBMODS, CRDCUTS, PGUESS, ASSAYS, ASSAY2, SSCS, PCALC New, SCALE, UTILBUY, SKHT, SNHT and BLNMIX.
Keywords: Distillation
References: None |
Problem Statement: If Aspen Plus were somehow to exit ungracefully, the license is not checked in at the server for some time. How do I force the license server to reclaim the license right away? | Solution: On the license server, start the License Manager Utility tool, found under:
Start | Programs | AspenTech | License Manager 3.0.
Find out from the license manager server what is checked out by using the lmstat command. In the License Manager Utility tool window, type:
lmstat -c @servername -A
"servername" is the name of the license server and must contain the leading "@" symbol. "-A" display all active licenses. The message returned will look like this:
Flexible License Manager status on Thu 2/27/2003 13:48
[Detecting lmgrd processes...]
License server status: 27000@servername
License file(s) on jgrovesnt: F:\Program Files\AspenTech\License Manager 3.0\aes11networkserver.lic:
servername: license server UP (MASTER) v7.2
Vendor daemon status (on servername):
aspen: UP v7.2
Feature usage info:
Users of AspenPlus: (Total of 9000 licenses available)
"AspenPlus" v99.9, vendor: aspen
floating license
user userscomputer userscomputer (v1.0) (servername/27000 102), start Thu 2/27 10:48
The syntax for the lmremove command is:
lmremove [-c licfile] feature user host display
To reclaim the license from the sample output provided above, type:
lmremove -c @servername AspenPlus user userscomputer userscomputer
Since FlexLM supports UNIX applications as well as Windows, it is possible that "host" and "display" would be different computers. Mostly they will be the same, hence the need to type the user's computer name twice.
Keywords: licenseserver, licence server, licence manager
References: None |
Problem Statement: If you have duplicate predicates and they do not show up twice in $PRED, how do you find out if you have multiple processes? | Solution: Open up $TPRED.
Look for predicate entries in which values exist in both column 0 and column 1.
Go to the Data Search utility and search on that predicate.
This will bring up the list of values for that predicate. A quick preview of them will identify where the duplicates exist.
Keywords: $PRED
References: None |
Problem Statement: How do I design a thermosiphon reboiler using Aspen Shell & Tube Exchanger? | Solution: Starting with Aspen Shell & Tube Exchanger V7.0, new Design/Rating modes for thermosiphon reboilers are added.
All modes operate with a fixed thermosiphon flowrate. However in Simulation mode, you can choose between a Fixed Flow or a Find Flow option, to determine the flow giving a pressure balance around the thermosiphon circuit.
In a Thermosiphon Design calculation, the flow and driving head are fixed, and losses in the inlet and outlet pipe are pre-calculated (using Percent of liquid head or From pipework, whichever option is specified) to determine the exchanger inlet and outlet conditions, and the maximum permitted pressure loss, before the Design calculation begins.
In this solution, the design of a thermosiphon reboiler by specifying the losses in the pipework as a fraction of the driving head is demonstrated. This simplifies use of the Design option at a stage where piping information is not available.
Below are the steps involved in designing a thermosiphon reboiler.
1. Download and open the attached Thermosiphon reboiler starter.EDR file. In this file we have provided the process and the property data of the reboiler, which is imported from a HYSYS column bottoms reboiler.
2. Under Application Options, ensure the case is set for Design and Hot Side is selected as Shell side, select the Vaporizer type as Thermosiphon.
3. In the Process Data input, set the pressure at the liquid level in the column to 20.1 bar.
4. We will design a BEL type Vertical thermosiphon with a cone front cover type. Enter the geometry data for the heat exchanger as below.
5. To design thermosiphon reboiler we have to enter pipework loss calculation details in terms of Percent of liquid head or From pipework. In this case we will specify the Percent of liquid head as we do not know the pipework details. Set the Percent head loss in inlet pipe to be 30 and Percent head loss in outlet pipe to be 15.
Note: The above values are used for a thermosiphon reboiler design in the simple percentage form, rather than specifying the geometric detail of the pipework. Low values will generally give lower thermosiphon exit qualities. High values will generally reduce the thermosiphon flow and increase the exit quality. Low values may increase the risk of flow instabilities. In a fixed flow thermosiphon calculation, there will in general be a pressure imbalance around the thermosiphon flow circuit. This appears in the results as an unaccounted pressure change in the inlet and/or outlet circuits.
6. Finally, if you know the required nozzle sizes you can specify the details of the shell side and tube side nozzle diameters by selecting YES for "Use the specified nozzle dimensions in Design mode" on the Nozzles page. This option is useful when you know the pipework information of the thermosiphon. For this design we will not set any nozzle dimensions and will let the program calculate the nozzle diameters.
7. Save the case and run Aspen Shell & Tube Exchanger. The program will find the optimized thermosiphon design.
8. Check the Error/Warning log to see if there are any Operation/Result warnings.
9. View the setting plan for the exchanger and additional information for thermosiphons can be found from the following forms;
- Results | Thermal / Hydraulic Summary | Pressure Drop | Thermosiphon Piping: pressure drops in the piping circuits and the heat exchanger.
- Results | Thermal / Hydraulic Summary | Flow Analysis | Thermosiphon and Kettles: stability assessment and flow reversal.
- Results | Calculation Details | Analysis along Tubes | Interval Analysis: flow patterns.
10. To use the obtained design in HYSYS/Aspen Plus, convert the Design case to a Rating case by clicking Run | Update file with Geometry - Shell&Tube.
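To get a feel for how the percent head-loss values in step 5 constrain the design, here is a simplified back-of-the-envelope sketch. All numbers are hypothetical (not from the example case), and the static-head arithmetic is a deliberate simplification of what the program actually calculates:

```python
# Hypothetical illustration of the percent-of-liquid-head inputs in step 5.

RHO_LIQ = 520.0   # kg/m3, saturated liquid density (assumed)
G = 9.81          # m/s2
HEAD = 3.0        # m, liquid driving head above the reboiler inlet (assumed)

driving_pressure = RHO_LIQ * G * HEAD       # Pa available to drive circulation

inlet_loss = 0.30 * driving_pressure        # 30% head loss in the inlet pipe
outlet_loss = 0.15 * driving_pressure       # 15% head loss in the outlet pipe

# What remains is the maximum permitted pressure loss in the exchanger itself:
max_exchanger_dp = driving_pressure - inlet_loss - outlet_loss
```

This shows why high percentage values squeeze the pressure drop available to the exchanger, which in turn reduces the thermosiphon flow, as the note in step 5 describes.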
Keywords: design, thermosiphon, percent head loss, shell & tube
References: None |
Problem Statement: What is the thermosyphon reboiler in RadFrac, and how do I use it? | Solution: The RadFrac block has two options for modeling reboilers, kettle and thermosyphon. They roughly correspond to the physical kettle and thermosyphon heat exchangers, but there are some differences to be aware of.
Real Reboilers
In a physical kettle reboiler, liquid is taken from the bottom of the distillation column and fed to the heat exchanger. The vapor product of the heat exchanger is returned to the column to supply the heat needed to perform the separation. The liquid product of the kettle is the bottoms product. Because the bottoms product is essentially in equilibrium with the vapor going to the bottom tray, this kind of reboiler acts as a full theoretical stage for separation purposes.
In a physical thermosyphon reboiler, liquid is taken from the bottom of the column and fed to the reboiler where part of it is vaporized. Both vapor and liquid from the heat exchanger are returned to the column. Because the density of the partially-vaporized mixture is much lower than the density of the liquid feed to the exchanger there is a natural circulation (a "thermosyphon") through the reboiler.
Because both vapor and liquid are returned to the column, there has to be a separate bottoms product stream from the column. This may be taken off the feed line to the reboiler or it may be taken in a separate draw from the column. In general, the liquid bottoms product from such a system is not in equilibrium with the vapor going to the bottom tray. A thermosyphon does not act as a full additional stage for separation.
Sometimes a thermosyphon reboiler is attached to a column where the sump has baffles to ensure that only liquid that has been through the reboiler is taken as product. In this case, the bottoms product is essentially in equilibrium with the vapor to the bottom tray.
Reboiler Models in RadFrac
The RadFrac kettle reboiler is a simple model. It acts as a single theoretical stage with a heat duty. RadFrac takes the feeds to the reboiler, adds the reboiler duty, and flashes the mixture to the specified enthalpy. The liquid bottoms product is in equilibrium with the vapor from the reboiler going back to the column.
The thermosyphon reboiler models both flow and duty. If the user specifies a thermodynamic condition for the outlet stream (Temperature, Temperature Change, or Vapor Fraction), RadFrac will calculate what flow is needed to give the appropriate duty. If the user specifies flow, RadFrac will calculate the outlet conditions to match the needed duty. If the user specifies both flow and an outlet condition, RadFrac will calculate duty from that. In that case, the user must enter Reboiler duty as one of the two operating specifications on the RadFrac Setup/Configuration form; this entered duty is treated as an estimate and the real duty is calculated from the flow and outlet conditions of the reboiler. The liquid bottoms product is the same as the INLET liquid to the thermosyphon, not the liquid returning to the column after partial vaporization.
The thermosyphon reboiler model in RadFrac does not model hydraulics, only phase equilibrium and duty.
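The specification options above boil down to a simple energy balance, Q = m * (h_out - h_in): fixing any two of duty, circulation flow, and outlet enthalpy (via the outlet condition) determines the third. A minimal sketch with hypothetical enthalpy values, not an excerpt of the RadFrac algorithm:

```python
# Energy-balance sketch of the thermosyphon duty/flow/outlet-condition
# coupling described above. Values are hypothetical.

def flow_from_duty(duty_w, h_in_j_per_kg, h_out_j_per_kg):
    """Circulation rate (kg/s) needed to absorb a given duty (W)."""
    return duty_w / (h_out_j_per_kg - h_in_j_per_kg)

def duty_from_flow(flow_kg_s, h_in_j_per_kg, h_out_j_per_kg):
    """Duty (W) implied by a specified flow and outlet enthalpy."""
    return flow_kg_s * (h_out_j_per_kg - h_in_j_per_kg)
```

This is why specifying both flow and an outlet condition forces RadFrac to treat the entered reboiler duty as only an estimate: the duty is then fully determined by the other two quantities.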
Which model do I use?
The most important criterion for choosing between kettle and thermosyphon models is whether the bottoms product is in equilibrium with the vapor going back into the column.
If the bottoms product is the liquid from the outlet of the heat exchanger, model it as a kettle reboiler. (You can do this even if flow through the reboiler is caused by a thermosyphon.)
If the bottoms product is at the same conditions as the inlet to the reboiler, model it as a thermosyphon. (This is appropriate even if the flow through the reboiler is not driven by a thermosyphon but by some other means such as a pump.)
Note that the main effect of this choice is the prediction of the overall separation in the column. For columns with large numbers of theoretical stages the choice of model makes little difference.
For detailed design of either kind of exchanger, including hydraulics as well as phase equilibrium, the exchangers should be modeled with Aspen B-JAC.
Example File
Attached to this document is an Aspen Plus 10.1 .bkp file with three columns that differ only in their reboiler models.
Keywords: radfrac, thermosyphon, thermosiphon
References: None |
Problem Statement: How do I configure a Plant Break Down Structure? | Solution: Attached is a document that explains the step-by-step procedure to create a Plant Break Down Structure. The 'Area/unit' field of data sheets can be filled in if a Plant Break Down Structure exists.
Keywords: Example, Plant, break, down, structure
References: None |
Problem Statement: If you do not specify the Unit Procedure name in the Project Wizard, you get a Text Recipe window with a blank line for the Unit Procedure name. | Solution: Double-click on the blank line (or hit the Enter key on the Text Recipe window) to open the Unit Procedure dialog, and specify the name.
Keywords: Project Wizard, Text Recipe window, Unit Procedure
References: None |
Problem Statement: When trying to generate the Excel report the user gets the following message:
"The macro 'Driver' cannot be found."
This patch is part of the Batch Plus 2.2 build that will be shipped soon. It addresses the problem by explicitly referencing DAO 3.6 rather than allowing Excel to select its own version.
Note that this patch will not work on Excel 95.
KeyWords:
Keywords: None
References: None |
Problem Statement: MS Word Comments report is blank even though comments exist, when a report is generated for a Step with comments after a Step without comments. | Solution: Close the project and reopen it. Then generate Comments for the Step.
KeyWords:
Results, MS Word Comments
Keywords: None
References: None |
Problem Statement: Batch Plus does not run, and reports an error that either License Manager is not running or there is no key available. Note: Other Aspen Engineering Suite products do not have this problem.
This problem occurs under the following conditions:
When the customer has multiple subnets on their network, and
The License Manager environment parameter ASP_LMhost is set, and
The client machines reside on one subnet and the License Manager server resides on another.
Batch Plus can work across multiple subnets. However, the License Manager Status utility requires an environment variable to be set in order to function properly. Setting this environment variable causes Batch Plus to stop working. | Solution: The problem can be resolved by removing the environment variable ASP_LMhost from the client system. With this solution Batch Plus will run properly. However, the customer will not be able to use the asplmadm command to check License Manager status.
How to remove ASP_LMhost environment Parameter:
Under NT:
Right click on My Computer Icon and select Properties
Click on the Environment button
Select ASP_LMhost under System and/or User Variable list
Select Delete Button
Under Windows 98 or 95:
Edit Autoexec.bat
Remove the line: set ASP_LMhost=<LMhost name>
Save the file and exit
Reboot your system
KeyWords:
Batch Plus License Manager
Keywords: None
References: None |
Problem Statement: The Air Emission Summary report gives negative times for streams from inventory locations. | Solution: Batch Plus incorrectly calculates vapor emissions from inventory locations when default vapor emission models are used. The user can safely ignore these streams. Alternatively, specify the vapor emission models for each vessel in each Operation.
KeyWords:
Vapor emission model, Air emission streams, negative times
Keywords: None
References: None |
Problem Statement: When Cake Composition is specified, Batch Plus uses default percent moisture in cake rather than user-specified percentages of components retained in the cake as moisture. | Solution: Use Cake/Liquor Amount when specifying amounts of individual components in the liquid phase.
KeyWords:
Filter, Cake Composition
Keywords: None
References: None |
Problem Statement: Will Aspen License Manager work over a WAN? | Solution: Aspen License Manager will work over a WAN, but the following must be kept in mind:
If Aspen License Manager server is on an NT platform, then the License Manager and licensed products can be on different NT domains but they should be on the same side of the firewall. In a network environment they have to be on trusted domains and they do have to be members of a domain.
We have a restriction on client and server residing on different networks. A client cannot use broadcast to find another server across networks. This is because we do a generalized broadcast using INADDR_BROADCAST. We want to limit this broadcast to within a network.
When the AspenTech License Manager runs in a Wide Area Network (WAN) configuration, problems can occur due to poor network performance. The problem is generally reported as an inability to reliably connect to a remote License Manager Server. The default communications settings for the License Manager server and client computers may not be adequate to allow proper license checkout. Normally, these settings allow hosts on a Local Area Network (LAN) to access the License Manager Server.
You can change two environment variables to fine-tune the performance of the License Manager client. The variables below allow you to adjust the amount of time the client waits for a reply to a License Manager request and the number of times to retry that request:
ASP_LMTIMEOUT - Determines how long to wait during the request for a license. Default value: 15 seconds.
ASP_RETRIES - Determines how many times to retry a license request. Default value: 2.
By default, the client makes up to three attempts to start a connection (original try plus two retries). Each try lasts 15 seconds. You can set these variables in either the Windows 95 or 98 AUTOEXEC.BAT file or the Windows NT 4.0 System Control Panel Environment Tab. You can also increase the License Manager Zombie Timeout to 360 seconds, from the default of 180.
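The timing arithmetic above can be sketched in a few lines of Python. This is only an illustration of the "original try plus retries" behavior described here; the helper name `worst_case_wait` is hypothetical, not part of any AspenTech tool:

```python
# Estimate the worst-case time a client spends trying to reach an
# unresponsive License Manager server, given the ASP_LMTIMEOUT and
# ASP_RETRIES settings described above. Defaults match the table.
def worst_case_wait(timeout_seconds=15, retries=2):
    # Original attempt plus the configured number of retries,
    # each lasting up to timeout_seconds.
    attempts = 1 + retries
    return attempts * timeout_seconds

print(worst_case_wait())       # 45 (3 attempts x 15 seconds)
print(worst_case_wait(30, 4))  # 150 (a tuned WAN configuration)
```

Increasing either variable lengthens the total time a slow WAN link is given before the connection attempt is abandoned.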
The variables are used only when performing synchronous operations, such as connecting to a License Manager Server, or checking out a license. They do not affect the License Manager polling of active clients.
KeyWords:
wide area network, local area network, LAN
Keywords: None
References: None |
Problem Statement: The AFW Server intermittently crashes, hangs, or returns errors during different operations when used with an ORACLE database. In the Microsoft Internet Information Server (IIS) logs, the problem will be reported as:
"80004005 Unspecified_error__"
- or -
"ASP_0115 Unexpected_error",
and sometimes followed by one of the following errors:
"800a01bd Object_doesn't_support_this_action:_'oAfwDb.OpenAdoConnection'", "800a01ca Variable_uses_an_Automation_type_not_supported_in_VBScript:_'Response.Expires'" | Solution: This is a confirmed problem with Oracle's OLE DB Provider for Oracle, versions 8.1.6.0 and 8.1.6.1. During database operations, connections to the database are pooled for reuse. A connection will be released if there are no requested database operations for approximately 75 seconds. Oracle does not properly handle this connection release, so the next database action will fail. Either an immediate "Access Violation" exception will be reported, or a memory overrun will occur, leading to subsequent unpredictable behavior by the application.
This problem has been the subject of a discussion in an Oracle forum, <a target="new" href="http://technet.oracle.com:89/ubb/Forum14/HTML/001151.html">"Topic: ASP 0115 (Unhandled Exception) error"</a>. In this discussion, the Oracle technical support engineer explains:
"There is no official detailed information available about ASP-0115 issues but these issues have been fixed in 8.1.6.2.0 and 8.1.7 releases."
Version 8.1.6.2 of Oracle's OLE DB provider for Oracle should be used with Aspen Framework Server. The attached "ORACLE OLEDB.ZIP" document explains how to reinstall the Oracle OLE DB provider for Oracle.
KeyWords:
Security Server
Oracle 8i
IIS
ASP 0115 Error
intermittent
Keywords: None
References: None |
Problem Statement: Does Local Security or Aspen Framework have any NT security domain restrictions? | Solution: No. Neither Local Security nor Aspen Framework requires a specific NT domain membership. The following sequence of events occurs for security applications:
A client makes an anonymous http request to create or update a local memory security cache.
The client uses this local cache to determine if a user has access to perform a secured function.
The cache is permanently saved on the client as encrypted xml files and will be used the next time the memory cache is initialized if contact cannot be made with the Security Server. The location of the security cache can be specified with the BPE AFWTools utility. The local security cache adds robustness to the distributed role-based system.
The Security Administrator does have the ability to specify the domain each user must use in order to gain role membership privileges. Roles can be based on more than one domain, and the server can be located in any domain that can be accessed by the client via anonymous http. A client will usually have sufficient access if it can ping the server's IP address.
KeyWords:
Local Security
Aspen Security Server
Aspen Framework
Keywords: None
References: None |
Problem Statement: What is the AuthWrapperSvc NT Service and How Can It be Used? | Solution: AuthWrapperSvc is an out-of-process Aspen Framework (AFW) authorization component used in role-based security applications. AFW and Plantelligence applications use this client component to determine if a user has privilege to perform protected software functions. For example, Business Process Explorer invokes the CheckAccessEx authorization method to determine if a user can modify a profile before making the profile edit function available.
AFW applications, utilizing role-based security, use the authorization component by
Creating an Authorization object based on this component,
Invoking the InitCache method on the Authorization object to create a memory-based cache, and
Performing security checks by invoking Authorization object methods.
Note: InitCache must be called at least once to initialize the memory cache. RefreshCache can be used later to update the cache.
By default, objects are created within the calling application's process space and are referred to as in-process components. Since the InitCache step can be time consuming, the out-of-process approach will improve performance if multiple client applications using security are invoked. Applications using this approach can eliminate the expensive InitCache step by connecting to a common preinitialized authorization object. The out-of-process approach is an excellent way to use role-based security within a Microsoft Internet Information Server (IIS) Web application. IIS in this case becomes a client to AuthWrapperSvc on the Framework server. The memory cache for AuthWrapperSvc will only be initialized once when the web application is first started. A reference to this object can be shared among all users and web pages belonging to the application using the built-in IIS Application object. The memory cache can still be periodically refreshed during the application's lifetime with RefreshCache.
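The call sequence above (InitCache once, then access checks, with optional RefreshCache) can be sketched as follows. Only the method names come from this article; the stub class and its access rule are purely hypothetical stand-ins for the real COM Authorization component:

```python
# Hypothetical stand-in for the AFW Authorization object, used only
# to illustrate the required call order: InitCache before any checks.
class StubAuthorization:
    def __init__(self):
        self.cache_ready = False

    def InitCache(self):
        # Expensive step: builds the in-memory security cache.
        self.cache_ready = True

    def RefreshCache(self):
        # Cheaper periodic update of an already-initialized cache.
        if not self.cache_ready:
            raise RuntimeError("InitCache must be called first")

    def CheckAccessEx(self, user, function):
        # A real implementation consults the role-based cache here;
        # this rule is made up for the sketch.
        if not self.cache_ready:
            raise RuntimeError("InitCache must be called first")
        return user == "alice" and function == "ModifyProfile"

auth = StubAuthorization()
auth.InitCache()                                   # must run at least once
allowed = auth.CheckAccessEx("alice", "ModifyProfile")
auth.RefreshCache()                                # optional periodic update
print(allowed)  # True
```

In the out-of-process pattern, the equivalent of `auth` would be created once, stored in the IIS Application object, and shared by all pages.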
The AuthWrapperSvc NT Service is automatically installed as a core component. The account name used for starting this service is the only data that can be configured. The specified startup account for AuthWrapperSvc must have NTFS
Read access to the AspenTech\BPE folder, and
Change access to the persistent security cache folder which by default is AspenTech\Working Folders\AFW.
KeyWords:
AuthWrapperSvc
IPFWAuthorization
Local Security
Keywords: None
References: None |
Problem Statement: The Aspen Framework (AFW) File Repository, currently based on Microsoft's Visual Source Safe (VSS), can be used to share file-based data among AFW users. Aspen Framework provides an interface to its repository using Microsoft's Internet Information Server (IIS). The Aspen Framework presentation component, Business Process Explorer (BPE), uses the IIS interface to offer common repository commands such as Add Files, Download, and Create Project.
This document explains additional IIS and NTFS security specifications that may be necessary to support repository applications. | Solution: User accounts must be created for all File Repository users using the Visual Source Safe Administrator. The VSS Administrator is used to declare users, and assign user access rights and passwords. If an Aspen Framework user attempts to access the File Repository without a VSS account, a BPE error message may be issued having the form: "User '__' not found". Optionally, an Aspen Framework role can be added as a VSS user to control access based on the BPE role. For example, a general Aspen Framework user role could be created called afw to enable multiple users to use the VSS afw user account for general read type access.
For File Repository applications, access is usually set to either Read or Read-Write. The VSS password is optional. If the password is not left blank, the domain password that will be used with BPE must be specified.
When a user account is created by the VSS Administrator, a folder having the user's name will automatically be created in the VSS\users folder directory. If VSS is installed on an NTFS volume, new VSS users should be granted CHANGE security access to their folder. If a user does not have the appropriate NTFS access to this folder, a BPE error message may be issued having the form: "Access to file '...\VSS\...' denied". The user should also have CHANGE privilege on the VSS\data folder if that user will be adding or deleting repository files. The Everyone NT group could be granted read access to this folder.
The IIS interface is implemented using Active Server Page (ASP) web files. These files by default are stored in the AspenTech\AFW\Repository folder under the default web site. An IIS virtual application, named Repository, is automatically created at installation time to point to these files. The IIS directory security for the Repository should be set to Windows NT Challenge/Response only. This will cause the server to prompt BPE Repository users to log in to a trusted domain before connecting to VSS. Any domain that is trusted by the Aspen Framework Server may be specified. The specified domain will be used to authenticate the login, and the user name and password information will be used for logging into VSS. The Repository IIS application is configured by a post-installation script (AspenTech Internet Server Configuration Tool) to run in a process that is isolated from the rest of the Web site.
If the IIS virtual directory points to an NTFS volume, users should be given READ access to the physical folder containing the ASP pages. If a user does not have READ access, a File Repository message may be issued having the following form: "Error: Access Denied." Individuals who will be downloading repository files must also have CHANGE privilege on the Inetpub\ftproot\AspenTech\Repository\VSSFileTransfer folder. The BPE VSS server first extracts a file from Visual Source Safe, places this file within an ftp VSSFileTransfer subfolder then transfers the file to the client using ftp. Users must also have READ access to the default ftp site folder, and READ access to the AspenTech\Afw\bin web folder. The NT Everyone local group could be used to simplify the NTFS specifications.
KeyWords:
Repository
Aspen Framework
AFW
IIS
security
NTFS
Keywords: None
References: None |
Problem Statement: Why is Base Load Token Usage Report showing "Error 1000"? The "Tokens Consumed" column shows 0 (zeros). | Solution: Note: this solution only applies if you are using one of the aspenONE Manufacturing applications that require Base Load Tokens enabled. For a list of products that require Base Load Tokens, see KB 128288.
This Knowledge Base article describes how to ensure that the Base Load Usage Report is working, and how to fix error 1000 and zero token consumption. This behavior normally occurs when the Base Load Token service is not "pointing" to the License Server.
To resolve, please follow these steps:
1. Open the SLM Configuration Wizard and configure it to point to the license server: Start | All Programs | AspenTech | Common Utilities | SLM Configuration Wizard
- On the first screen, select "Yes" for "Will you be connecting to an SLM Server over the network"
- Follow the steps from KB 131783 on the second screen to configure the server and buckets (if needed)
2. Restart the Base Load Token Service from the Services console: Start | Services.msc; the Services dialog box will open.
3. Run the Base Load Token Usage Report to confirm that error 1000 no longer appears: Start | All Programs | AspenTech | Common Utilities | Base Load Token Usage Report
Keywords: Base Load Token Usage Report, SLM Configuration Wizard, SLM_PIMS_, SLM_DPO_, license error 88 and license error 1000, KB 128288 (how to install Base Load)
References: None |
Problem Statement: This is a quick installation manual for SLM Server V8.0. | Solution: Please download and use the attached PDF.
Keywords: JP-
References: None |
Problem Statement: This is a quick installation manual for Aspen PIMS V8.0 and related products. | Solution: Please download and use the attached PDF.
Keywords: JP-
References: None |
Problem Statement: You are attempting to install ASCC in a Japanese environment utilizing SJIS (the Japanese character set). Aspen Supply Chain Connect, as a default, is set up for "MS Windows Latin 1 (ANSI), superset of Latin1". You require your environment to be configured in the Japanese set. How is this accomplished? | Solution: Our PowerCenter Data Repository is shipped using code page 'Latin 1', which is not compatible with 'SJIS'. The only option that exists is to convert the PowerCenter Data Repository to 'UTF-8'.
Briefly, the Informatica code page rules are as follows:
1. Code page of the Integration Service must be: (a) a subset of the repository code page, (b) must be compatible with the machine hosting the integration service (pmcmd)
2. Code page of the Repository must be: (a) a superset of PowerCenter Client code page, (b) a superset of the Integration Service process (c) must be compatible with the machine hosting the repository service (pmrep)
3. Each source code page must be a subset of the target code page
4. See ?Relational databases? below for details on Workflow connection code page requirements to database code page requirements
Because 'SJIS' is a subset of 'UTF-8' (equivalently, 'UTF-8' is a superset of 'SJIS'), everything described in the setup looks correct based on what Informatica requires.
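The subset relationship can be demonstrated directly: any text that is representable in Shift-JIS can be re-encoded as UTF-8 without loss (the reverse does not hold in general). A minimal Python check, with an arbitrary Japanese sample string:

```python
# Show that SJIS-representable text survives a round trip through
# UTF-8, i.e. UTF-8 is a superset of SJIS for character coverage.
sjis_bytes = "日本語のテスト".encode("shift_jis")

# Decode from SJIS, then re-encode and decode through UTF-8.
text = sjis_bytes.decode("shift_jis")
utf8_round_trip = text.encode("utf-8").decode("utf-8")

print(utf8_round_trip == text)  # True
```

This is why converting the repository to UTF-8 is safe for existing SJIS data: nothing representable in SJIS falls outside UTF-8.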
If you are still seeing garbled characters you should check:
1. Is the data movement mode for the Integration Service set to 'Unicode'? Switching from ASCII to Unicode will enable code page validation between sources and targets, so it will identify any setup problems.
2. Also the logging will contain garbage characters unless you turn on UTF-8 logging.
3. Is the code page on the machine hosting Informatica compatible with the above settings?
4. Is the code page of the database client properly matching up with the Workflow Manager Connections selected?
5. Check if there is no data truncation occurring as data is being transferred from source to target.
Relational databases. The code page of the database client. When you configure the relational connection in the Workflow Manager, choose a code page that is compatible with the code page of the database client. If you set a database environment variable to specify the language for the database, ensure the code page for the connection is compatible with the language set for the variable. For example, if you set the NLS_LANG environment variable for an Oracle database, ensure that the code page of the Oracle connection is compatible with the value set in the NLS_LANG variable. If you do not use compatible code pages, sessions may hang or you might receive a database error, such as:
ORA-00911: Invalid character specified.
For more information about configuring environment variables for database clients, see "Before You Install" in the Installation and Configuration Guide.
Keywords:
References: None |
Problem Statement: When you import numeric format codes from Excel files (e.g. material codes 443234, 454323, ...), the data is imported as numbers (with decimal values), even if the original Excel column is formatted as text. | Solution: Formatting your columns (a) as text and (b) with the tic mark resolves the problem.
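A minimal Python illustration of why numeric handling corrupts such codes (the material code value here is made up):

```python
# Treating a code as a number discards leading zeros and introduces
# a decimal representation; keeping it as text preserves it exactly.
code_as_text = "00443234"

code_as_number = float(code_as_text)
print(code_as_number)             # 443234.0  (decimal value appears)
print(str(int(code_as_number)))   # "443234"  (leading zeros lost)

# The tic-mark / text-format approach in Excel is the equivalent of
# keeping the value as a string throughout.
print(code_as_text)               # "00443234"
```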
Keywords:
References: None |
Problem Statement: How do you deploy Aspen Engineering Suite with Microsoft SoftGrid 4.2? | Solution: This document outlines the best practices for deploying Aspen Engineering Suite 2006.5 with Microsoft SoftGrid Application Virtualization Platform, Version 4.2.
SoftGrid is a virtualization technology where the virtualization happens at the application level. This enables the encapsulated Microsoft SoftGrid-enabled application(s) to run within an isolated environment, called SoftGrid SystemGuard, on the Microsoft SoftGrid client.
1.1 The SoftGrid Advantages
Microsoft SoftGrid Virtualization platform allows enterprises to centralize management of applications based on corporate policy. The SoftGrid-enabled application is never locally installed on the end-user's computer, avoiding conflicts between different applications, between applications and operating systems, and between different versions of the same application. The dynamic delivery nature of SoftGrid allows the application to be delivered to the end user on demand. The active application upgrade provides an efficient and controlled environment for users to stay with the latest patches/updates.
1.1.1 Application running within its own protected SystemGuard
One of the most significant benefits of Microsoft SoftGrid is that application will be running in its own protected SystemGuard environment. This prevents application files from being removed or updated intentionally or accidentally (by user or other application installation) as there will be no file(s) installed locally. This also reduces the chance of application conflict due to shared components on the user machine and ultimately reduces the help desk calls in the corporation.
In addition, this feature not only reduces application conflicts, it also allows multiple versions of the same product to run side by side on the same machine. For example, you can have both Microsoft Excel 97 and Excel 2003 running concurrently.
1.1.2 Applications are never installed locally
All needed elements (files, registry, etc.) for the application are contained inside its own protected SystemGuard environment. Virtualized applications are streamed to the user on-demand as packages to be executed on the local processor. No installation footprint is created on the user machine.
1.1.3 Centrally Manage Applications
Microsoft SoftGrid platform enables organizations to control the number of users who can gain access to Microsoft SoftGrid-enabled applications based on the user policy management with SMS integration. This feature greatly reduces the application management costs in the organization.
1.1.4 Accelerate application deployment, Reduce help desk cost
Because applications are installed only once, rather than once per client machine, applications can be deployed to groups of users more quickly. And since there is only one installed copy of the software, issues with getting the software installed correctly are minimized.
2 Microsoft SoftGrid 4.2 Overview
2.1 Microsoft SoftGrid components
There are three major components in Microsoft SoftGrid Application Virtualization Platform:
I. The Microsoft SoftGrid Application Virtualization Sequencer - The sequencer monitors and analyzes the application installation process and creates the SoftGrid Virtual Environment (SystemGuard) package to be deployed to the SoftGrid server.
II. The Microsoft System Center Virtual Application Server - Once the Microsoft SoftGrid-enabled application is ready, it will be deployed to the Microsoft System Center Virtual Application Server. The application server will then fulfill requests from Microsoft SoftGrid Application Virtualization Clients for the virtualized applications.
III. The Microsoft SoftGrid Application Virtualization Client - The client launches the Microsoft SoftGrid-enabled application in a protected environment (SystemGuard) without having any installation footprint on the Microsoft SoftGrid client host machine.
2.2 Installing applications under the SoftGrid Sequencer
With SoftGrid, the application's installer is run only under the watch of the sequencer. During the installation process, the Sequencer provides wizards to collect information from the Sequence Engineer about the application. The Sequence Engineer must know not only how the application will behave during the installation but also which other applications need to exist and which network drives need to have been configured. Additionally, the Sequence Engineer must test and configure the application, which requires even more extensive knowledge of the application.
See Also
Microsoft KB: 932137 - Best practices to use for sequencing in Microsoft SoftGrid
2.3 Limitations of Microsoft SoftGrid
There are some limitations as to what can and cannot be virtualized with SoftGrid. For example, boot-time applications cannot be sequenced because they would be expected to run before the Microsoft SoftGrid Application Virtualization Client executables have been loaded on the client machine. Background services that run in the background for an entire machine and not just for one application cannot be virtualized. Services that use system resources not virtualized by Microsoft SoftGrid Application Virtualization, such as RPC or device drivers, will likely conflict with other running instances. For clients running non-Windows OSes or versions older than Windows 2000, such as Windows 9X, NT, Linux, Macintosh, or Windows ME, a solution such as Terminal Server or Citrix MetaFrame must be used as the Microsoft SoftGrid Application Virtualization client.
3 Sequencing Aspen Engineering Suite 2006.5 with Microsoft SoftGrid Sequencer
Before sequencing the Aspen Engineering Suite (AES) 2006.5 applications with Microsoft SoftGrid Sequencer, the user should have a good understanding about the AES 2006.5 installation and about SoftGrid itself. The Aspen Engineering Suite 2006.5 Installation Guide is a good source to gain knowledge about the AES installation. The AES 2006.5 installation guide provides insights such as which applications require post-installation configuration to behave properly.
To learn more about the SoftGrid technology, you can visit the SoftGrid Team blog, http://blogs.technet.com/softgrid/ or the Microsoft SoftGrid site, where numerous materials can help you master SoftGrid.
The following sections provide a recommended procedure when working with Microsoft SoftGrid Sequencer to sequence the Aspen Engineering Suite 2006.5 application(s). The Sequencing AES application(s) with SoftGrid section provides information relevant to specific AES applications during the sequencing process.
3.1 Setting up the Microsoft SoftGrid Sequencer host platform
3.1.1 Setting up the Microsoft SoftGrid Sequencer on Virtual Platform
In order to sequence applications on a clean environment, it is recommended the Microsoft SoftGrid Sequencer be installed on a virtual platform (VMWare or Microsoft Virtual PC). The virtual platform allows effortless snapshot rewinding (undo) which allows the platform to start anew when ready to sequence the next application.
3.1.2 Setting up Microsoft .NET Framework
If the AES application installation attempts to install the .NET Framework 1.1 or 2.0 during the sequencing, the sequencer may throw the error SystemGuard download failed (error code 53256). This occurs because the sequencer tries to copy a locked file to the virtual file system. To avoid this issue, simply install the Microsoft .NET framework 1.1/2.0 on the sequencer machine in advance to avoid the Microsoft .NET Framework installation during AES installation, or follow the instructions in Microsoft KB article 931592: http://support.microsoft.com/kb/931592
3.1.3 Setting up Microsoft SQL Express 2005 (Optional)
If the Microsoft SQL Server 2005 installer runs during sequencing, it may result in the following error:
If you intend to sequence the Aspen Properties Enterprise Database (APED) System with the dependent AES 2006.5 applications, you will need to install Microsoft SQL Express 2005 on the sequencer machine. In Microsoft SoftGrid version 4.2, drivers and system level services cannot be virtualized. Microsoft SQL Express 2005 installation installs certain system level drivers and/or services, and as a result cannot be virtualized. It is recommended that you install any SQL engine on the sequencer machine in advance to prevent the AES 2006.5 installer from attempting to install Microsoft SQL Express 2005 during the sequencing phase.
You can still use the APED feature with the Microsoft SoftGrid-enabled application by setting up the database locally on the Microsoft SoftGrid client machine. Please refer to the section Aspen Plus Family of Products for more information.
3.2 General recommendations for Sequencing AES 2006.5 applications
3.2.1 Mount point installation (MNT)
MNT installation is the preferred method when installing the Aspen Engineering Suite 2006.5 applications on the SoftGrid Sequencer. The MNT is an installation where, during the sequencing steps, a destination folder is created on the Q:\ drive (SoftGrid Default Mount Point) and the application is installed in this folder. For example, it is recommended that Q:\APlus21 be used as the root destination folder for sequencing Aspen Plus 2006.5. On the client, a virtual Q:\ drive is created within the SystemGuard environment which is not accessible outside SystemGuard.
With the MNT installation, efficiency is improved when compared to a Virtual File System (VFS) installation. The MNT installation method is also recommended by the Microsoft SoftGrid team in Microsoft KB: 932137 - Best practices to use for sequencing in Microsoft SoftGrid
3.2.2 Sequence a group of Aspen Engineering Suite 2006.5 applications
The current version of Microsoft SoftGrid doesn't support communication between two separate Microsoft SoftGrid-enabled application packages. As a result, applications need to be sequenced together as a package for them to communicate and function properly. For example, Aspen HTFS+ should be packaged together with Aspen Properties in order to launch and use the properties system provided by Aspen Properties.
3.2.3 Microsoft Excel-Dependent Aspen Engineering Suite 2006.5 applications
As with groups of AES 2006.5 applications, in order for the inter-product functionality of third-party-dependent applications to perform flawlessly, it's recommended that you also sequence any third-party applications together with the dependent AES 2006.5 applications. For example, Aspen FCC should be sequenced together with Microsoft Excel. Aspen FCC may have problems using the locally-installed copy of Microsoft Office.
3.3 Working with the Sequencer - Package Configuration Wizard
The Package Configuration Wizard is the first set of screens that appear when you start the sequencer. It collects information necessary to monitor the product installation(s).
3.3.1 Specify the backend Microsoft System Center Virtual Application Server name
Please specify the Virtual Application Server (VAS) name instead of the variable %SFT_SOFTGRIDSERVER% for the Hostname if there is only one VAS. If you use the variable, you will need to modify the *.osd files manually to specify the VAS name, or set the environment variable SFT_SOFTGRIDSERVER on each client so that it points to the VAS. However, if there is more than one VAS, it is more convenient to use the variable %SFT_SOFTGRIDSERVER% and then make it point to the appropriate VAS via group policy.
3.3.2 Specify Path Variable
A Path value should be specified in the Package Configuration Wizard, for example ACM20065. A folder with this name should also be created on the VAS. Upload all the files created during the sequencing phase into this content folder. Using a specified path makes the packages more manageable and presentable on the server.
3.4 Working with the Sequencer - Installation Wizard
When you finish with the Package Configuration Wizard, the Installation Wizard appears. When you reach the Monitor installation screen of this wizard click Begin Monitoring and then install the application(s) that will be part of the package you create, including any third-party applications.
3.4.1 Using MNT install and specify the root destination folder
During the installation of the AES application, the root folder, ASPENROOT, should be specified to be a sub-folder under Q:\ (The default SoftGrid mount point drive). For example, you may specify Q:\ACM210 as the destination folder for ASPENROOT during the installation of ACM product(s) on the sequencer. However, the working folder for each application should be left with the default path (normally a subfolder within this folder). All applications sequenced into the same package should use the same folder, but for each separate package you create, you should choose a different folder.
The ASPENROOT folder is specified on the following screen:
3.4.2 Choosing the correct destination folder
After you click the Stop Monitoring button, please be sure to choose the correct folder your application installed to.
3.5 Working with the Sequencer - Application Configuration Wizard
3.5.1 Removing unwanted shortcuts
During the sequencing phase, Microsoft SoftGrid sequencer may create extra shortcuts due to the installation behavior. You can remove the unwanted shortcuts from the Configure Applications screen.
3.5.2 Renaming common shortcuts to avoid conflict
It's recommended that you provide a name that is unique among all your SoftGrid applications for each shortcut pointing to a common executable (for example, Notepad or the SLM Configuration Wizard). This avoids conflicts and confusion during deployment caused by shortcuts that have the same name but belong to different Microsoft SoftGrid-enabled applications.
3.6 Working with the Sequencer - Other Issues
3.6.1 Windows Installer Resiliency kicks in during sequencing
During the sequencing phase, if Windows Installer resiliency kicks in, please allow it to complete the process. However, this may cause the Launch Progress bar to hang on the desktop even after you close the application, as it waits for a child process (application) to exit. In this case, the Windows Installer process (msiexec.exe) will need to be terminated manually via the Task Manager.
3.6.2 Modify the .osd file to include the AspenTech Shared folder
In order to make all the AspenTech files installed to the %COMMONFILES%\AspenTech Shared folder accessible to virtualized applications, it's recommended that you include the AspenTech Shared folder in the .osd path setting. For example, you may find the following in the .osd file referring to the AspenTech Shared folder:
<ENVIRONMENT VARIABLE="PATH">%CSIDL_PROGRAM_FILES%\Softricity\SoftGrid Sequencer\;%PATH%;%CSIDL_PROGRAM_FILES_COMMON%\AspenTech Shared\;</ENVIRONMENT>
Please replace %CSIDL_PROGRAM_FILES_COMMON%\AspenTech Shared\ with %SFT_MNT%\MyAppV10\VFS\CSIDL_PROGRAM_FILES_COMMON\AspenTech Shared\, where MyAppV10 is the folder under the default mount drive in the SoftGrid system where you install the application.
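The edit above is a plain string substitution, so it can also be scripted when you have many .osd files to patch. The sketch below is an illustrative helper (not an AspenTech or Microsoft utility); the package folder name MyAppV10 is the hypothetical example used above.

```python
# Illustrative sketch: rewrite the PATH <ENVIRONMENT> entry of an .osd file so
# the "AspenTech Shared" folder resolves inside the package's virtual mount
# point, as described above. "MyAppV10" is a hypothetical package folder name.

LOCAL_SHARED = "%CSIDL_PROGRAM_FILES_COMMON%\\AspenTech Shared\\"

def patch_osd_path(osd_text: str, package_folder: str) -> str:
    """Point the PATH entry at the virtualized AspenTech Shared folder."""
    virtual_shared = ("%SFT_MNT%\\" + package_folder +
                      "\\VFS\\CSIDL_PROGRAM_FILES_COMMON\\AspenTech Shared\\")
    return osd_text.replace(LOCAL_SHARED, virtual_shared)
```

Apply it to the full text of each .osd file in the package, then review the resulting `<ENVIRONMENT VARIABLE="PATH">` element before uploading the files to the server.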
4 Sequencing AES application(s) with SoftGrid
4.1 Aspen Modeler Family of Products
The following best practices can be applied to Aspen Custom Modeler, Aspen Dynamics, Aspen BatchSep, Aspen Chromatography, Aspen ADSIM, Aspen Utilities Planner, and Aspen Model Runner.
4.1.1 Pre-sequencing requirements and application dependencies
- Install Microsoft SQL engine on the sequencer workstation in advance if you intend to sequence Aspen BatchSep 2006.5 with APED
- Aspen Plus 2006.5 should be sequenced together with the Aspen Dynamics package for the inter-product functionality to perform properly
4.1.2 Installation Wizard - N/A
4.1.3 Execution Wizard - N/A
4.1.4 Application Configuration Wizard
If you get the working folder invalid error while launching Aspen Custom Modeler products on the sequencer, please create a folder on your local drive and then specify it as the working folder after you click OK on the error window.
4.1.5 Other issues - N/A
4.2 Aspen Plus Family of Products
4.2.1 Pre-sequencing requirements and application dependencies
Install Microsoft SQL engine on the sequencer workstation before sequencing the Aspen Plus Family of products.
4.2.2 Installation Wizard - N/A
4.2.3 Execution Wizard - N/A
4.2.4 Application Configuration Wizard - N/A
4.2.5 Other issues
- You need to manually edit the file association of the *.bkp file. When you import the *.osd of the Aspen Plus User Interface, you will find two items for *.bkp in the File Association window. Please remove the second one (for extension [none]) and edit the remaining .bkp file association (modify the file type description and select the correct icon file, Aspen Plus Backup File.ico).
The following two pictures illustrate the steps for removing and editing the file association for .bkp.
- If you intend to use the Aspen Properties Enterprise Database with Aspen Plus or Aspen Properties, please be sure to select the sub-feature Aspen Properties Enterprise Database when you sequence the product, in order to properly generate the file config.aem. However, the database files will not be included in the SFT package in the current version of Microsoft SoftGrid. In order to use APED on the client machine, APED (the Microsoft SQL engine and the APED database files) needs to be created locally on the client machine. Alternatively, you can connect to a remote APED server to access the properties data for the simulation run.
4.3 Aspen HYSYS Family
4.3.1 Pre-sequencing requirements and application dependencies
No additional product dependencies other than those specified in the AES 2006.5 installation guide.
4.3.2 Installation Wizard - N/A
4.3.3 Execution Wizard - N/A
4.3.4 Application Configuration Wizard - N/A
4.3.5 Sequence Editor - N/A
4.3.6 Other issues
- In order to avoid error 1722 during sequencing of the Aspen HYSYS Upstream Option 2006.5, Aspen HYSYS Upstream Option 2006.5.msi needs to be modified to delay the two custom actions (SetWorkingFolderPermission and FixWorkingFolderPath) until after InstallFinalize.
4.4 Aspen HTFS+/HTFS
4.4.1 Pre-sequencing requirements and application dependencies - N/A
4.4.2 Installation Wizard - N/A
4.4.3 Execution Wizard - N/A
4.4.4 Application Configuration Wizard - N/A
4.4.5 Sequence Editor - N/A
4.4.6 Other issues
- Under some circumstances, Aspen Teams 2006.5 simulations may take longer to complete under the Microsoft SoftGrid virtualization platform.
4.5 Aspen Batch Plus
Following the general setup guidelines should enable the creation of the Microsoft SoftGrid-enabled Aspen Batch Plus 2006.5 without any issue.
4.5.1 Pre-sequencing requirements and application dependencies
4.5.2 Installation Wizard - N/A
4.5.3 Execution Wizard - N/A
4.5.4 Application Configuration Wizard - N/A
4.5.5 Sequence Editor - N/A
4.5.6 Other issues - N/A
4.6 Aspen HX-Net/COMThermo Workbench
4.6.1 Pre-sequencing requirements and application dependencies
4.6.2 Installation Wizard
4.6.3 Execution Wizard
4.6.4 Application Configuration Wizard
4.6.5 Sequence Editor
4.6.6 Other issues
There is a concepts.UFO error while launching HX-Net/COMThermo Workbench. In order to bypass this error, please remove the concepts.UFO file from the Aspen HX-Net 2006.5/Aspen COMThermo Workbench 2006.5 folder, or add the -nosplash parameter to the shortcut. This will result in the AspenTech splash screen not showing during startup of the application. We will continue to investigate this issue.
4.7 Aspen Flarenet
Following the general setup guidelines should enable the creation of the Microsoft SoftGrid-enabled Aspen Flarenet 2006.5 without any issue.
4.7.1 Pre-sequencing requirements and application dependencies
4.7.2 Installation Wizard - N/A
4.7.3 Execution Wizard - N/A
4.7.4 Application Configuration Wizard - N/A
4.7.5 Sequence Editor - N/A
4.7.6 Other issues - N/A
4.8 Aspen Icarus
4.8.1 Pre-sequencing requirements and application dependencies - N/A
4.8.2 Installation Wizard - N/A
4.8.3 Execution Wizard
DO NOT launch any Icarus application during sequencing; otherwise, Windows Installer resiliency will kick in and cause the sequencer to freeze. Launch Notepad instead when SoftGrid will not allow you to proceed to the next step until one of the shortcuts has been launched.
4.8.4 Application Configuration Wizard - N/A
4.8.5 Sequence Editor - N/A
4.8.6 Other issues - N/A
4.9 Aspen OnLine
Due to the requirement of additional port settings on the client machine, Aspen OnLine is not suitable for Microsoft SoftGrid Virtualization.
4.10 Aspen Online Deployment
Following the general setup guidelines should enable the creation of the Microsoft SoftGrid-enabled Aspen Online Deployment 2006.5 without any issue.
4.10.1 Pre-sequencing requirements and application dependencies - N/A
4.10.2 Installation Wizard - N/A
4.10.3 Execution Wizard - N/A
4.10.4 Application Configuration Wizard - N/A
4.10.5 Sequence Editor - N/A
4.10.6 Other issues - N/A
4.11 Aspen Process Manual
Aspen Process Manual is not recommended as a virtualized application as it is intended to be a server application. Platform virtualization (VMWare/Virtual PC (Server)) is the preferred virtualization platform.
4.12 Aspen Process Tools
4.12.1 Pre-sequencing requirements and application dependencies - N/A
4.12.2 Installation Wizard - N/A
4.12.3 Execution Wizard - N/A
4.12.4 Application Configuration Wizard - N/A
4.12.5 Sequence Editor - N/A
4.12.6 Other issues
- Due to the number of shortcuts created by this application, the shortcuts should always be deployed to a program group on the Start Menu on the client machine.
4.13 Aspen RxFinery Family
4.13.1 Pre-sequencing requirements and application dependencies
- Microsoft Excel should be sequenced along with the Aspen RxFinery products. Microsoft SoftGrid-enabled Aspen RxFinery applications cannot detect the VBA Editor if the user intends to use the Microsoft Excel installed on the client host machine.
- Aspen Plus needs to be sequenced along with the Aspen RxFinery family of products.
4.13.2 Installation Wizard - N/A
4.13.3 Execution Wizard - N/A
4.13.4 Application Configuration Wizard - N/A
4.13.5 Sequence Editor - N/A
4.13.6 Other issues - N/A
4.14 Aspen Simulation Workbook Family
4.14.1 Pre-sequencing requirements and application dependencies
- Aspen Remote Simulation Service is not recommended for deployment with SoftGrid, as it is a Windows Service that should be set to start when Windows starts.
- Once the Aspen Simulation Workbook installation completes, the file AspenTech.AspenCxS.dll needs to be copied from %COMMONFILES%\AspenTech Shared\Aspen CXS 2006.5 to the Aspen Simulation Workbook 2006.5 folder.
- Microsoft Excel should be sequenced along with Aspen Simulation Workbook. However, if you prefer to use the locally installed Excel on the client, you will need to add the following script into the .osd file for the Aspen Simulation Workbook Add-In Manager package.
<DEPENDENCY>
<CLIENTVERSION VERSION="3.1.2.2"/>
<SCRIPT TIMING="POST" EVENT="LAUNCH" WAIT="FALSE" PROTECT="TRUE">
<HREF>C:\Program Files\Microsoft Office\OFFICE11\EXCEL.EXE</HREF>
</SCRIPT>
</DEPENDENCY>
This setting allows the locally installed Microsoft Excel to have access to the Microsoft SoftGrid-enabled Aspen Simulation Workbook.
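The fragment above varies only in the client version and the Excel path, so it can be generated programmatically before pasting it into the .osd file. The helper below is a hypothetical convenience sketch; the default version and path are the examples from the listing above, not values read from your system, so adjust them to match your SoftGrid client and Office installation.

```python
# Hypothetical helper: build the <DEPENDENCY> fragment shown above for a given
# locally installed Excel path. The default client version ("3.1.2.2") is the
# example value from the listing above, not a detected value.

def dependency_fragment(excel_path: str, client_version: str = "3.1.2.2") -> str:
    """Return the OSD <DEPENDENCY> block pointing at a local Excel install."""
    return (
        "<DEPENDENCY>\n"
        f'  <CLIENTVERSION VERSION="{client_version}"/>\n'
        '  <SCRIPT TIMING="POST" EVENT="LAUNCH" WAIT="FALSE" PROTECT="TRUE">\n'
        f"    <HREF>{excel_path}</HREF>\n"
        "  </SCRIPT>\n"
        "</DEPENDENCY>"
    )
```

Paste the returned fragment into the Add-In Manager .osd file at the location described above.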
4.14.2 Installation Wizard - N/A
4.14.3 Execution Wizard - N/A
4.14.4 Application Configuration Wizard - N/A
4.14.5 Sequence Editor - N/A
4.14.6 Other issues - N/A
4.15 Aspen Utilities Planner
4.15.1 Pre-sequencing requirements and application dependencies
Aspen Plus should be sequenced with Aspen Utilities Planner.
4.15.2 Installation Wizard - N/A
4.15.3 Execution Wizard - N/A
4.15.4 Application Configuration Wizard - N/A
4.15.5 Sequence Editor - N/A
4.15.6 Other issues
- If you receive the invalid working folder error, creating the following folder on the client host machine should resolve the issue: C:\Documents and Settings\All Users\Application Data\AspenTech\Aspen Utilities Planner 2006.5
4.16 Aspen Zyqad
- We are working with the Aspen Zyqad development team to finalize the procedure needed to create the Microsoft SoftGrid-enabled Aspen Zyqad.
5 Deploying Microsoft SoftGrid-enabled applications to Microsoft System Center Virtual Application Server
- On the Virtual Application Server, the folder C:\Program Files\Softricity\SoftGrid Server\content needs to be shared (for example, as the share \\softgrid-server\content).
- The aforementioned shared folder should be specified as the Default Content Path, as illustrated in the picture below.
- You can import one Microsoft SoftGrid-enabled application at a time by importing the *.osd file directly (all the *.osd files will be imported when you choose the *.sprj file).
- A program group on the Start Menu should be specified for the Microsoft SoftGrid-enabled application shortcuts for better manageability on the user's machine.
- Additional actions can be performed when launching a Microsoft SoftGrid-enabled application by adding scripts to the .osd file. Please refer to the following Microsoft SoftGrid team blog article for more information: Scripting within an OSD file - http://blogs.technet.com/softgrid/archive/2007/10/11/scripting-within-an-osd-file.aspx
6 Launching Microsoft SoftGrid-enabled AES 2006.5 applications with Microsoft SoftGrid Client
- Please be sure to install the Microsoft Visual C++ 2005 SP1 Redistributable (available at http://www.microsoft.com/downloads/details.aspx?familyid=200B2FD9-AE1A-4A14-984D-389C36F85647&displaylang=en) locally on the client. It is critical for most of the AES 2006.5 applications to function properly.
- Refreshing the server on the Microsoft SoftGrid client will make the latest Microsoft SoftGrid-enabled applications available on the client's host machine.
- Clean up the old application settings and cache after you re-deploy an application; otherwise you might have problems using the newly deployed application.
- Due to the large installation footprint of the AES applications, it is recommended that the default cache size be enlarged to allow sufficient space on the client to store/stream the Microsoft SoftGrid-enabled applications. A size over 4 GB is recommended. If you installed the client with default settings, you can enlarge the cache size afterwards in the SoftGrid Client Management console.
- If the Microsoft SoftGrid client is in a WAN/slow network environment, you may need to pre-load the Microsoft SoftGrid-enabled application (stream the virtualized package to the local cache) so that all the necessary feature blocks are ready for execution.
7 Useful Resources
The following web sites have more information regarding Microsoft SoftGrid:
- Microsoft SoftGrid Website
o http://www.microsoft.com/systemcenter/softgrid/default.mspx
- SoftGrid TechCenter
o http://technet.microsoft.com/en-us/softgrid/default.aspx
- Microsoft Virtualization
o http://www.microsoft.com/virtualization/default.mspx
- The SoftGrid Team Blog
o http://blogs.technet.com/softgrid/default.aspx
- http://www.SoftGridblog.com
o http://www.softgridblog.com/
- http://www.SoftGridguru.com
o http://www.softgridguru.com/
Keywords: None
References: None |
Problem Statement: Zipped Log files not created in specified folder after installation of ALC Auto Upload Toolkit.
In ALC, log files are generated within a zipped file, e.g. as shown at right, which includes the usage log file, the license file, and some XML files.
After installation of the ALC Auto Upload toolkit, the folders where the log files will be stored need to be configured, e.g. go to Windows Start -> Programs -> AspenTech -> ALC -> Configuration Tool.
In this example, the log files will be uploaded under folder C:\SLM and archived in C:\Archive.
However, if the environment variable "LSERVOPTS" is not specified in the Windows system, the ALC upload tool will not be able to generate the log files even after configuration is completed. Solution: Ensure that the LSERVOPTS environment variable is set up on the license server if this has not been done yet.
Go to Windows -> Control Panel -> System -> Advanced -> click the "Environment Variables" button.
Select "New" under system variables and create the variable. The example variable value is -l C:\SLM\lserv.log -z 2m -lfe 2
Note: even if LSERVOPTS is already set to -l C:\SLM\lserv.log -z 2m, it is essential for ALC to function properly that -lfe 2 be part of the defined value.
Also remember that if there is a space in front of the first dash or at the end (after the 2), it will not work.
This will create a log file "lserv.log" in the folder C:\SLM with a 2 MB file size capacity.
After that, you will need to restart the license server in order for this to take effect.
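Before restarting the license server, the rules above (no leading or trailing spaces, and -lfe 2 present alongside -l) can be sanity-checked with a small script. This is an illustrative sketch, not an AspenTech utility.

```python
# Illustrative sketch: check an LSERVOPTS value against the rules described
# above -- no leading/trailing whitespace, an "-l <logfile>" switch, and the
# "-lfe 2" switch required by ALC.

def check_lservopts(value: str) -> list:
    """Return a list of problems found; an empty list means the value looks OK."""
    problems = []
    if value != value.strip():
        problems.append("leading or trailing whitespace")
    tokens = value.split()
    if "-l" not in tokens:
        problems.append("missing -l <logfile> switch")
    if "-lfe" not in tokens or tokens[tokens.index("-lfe") + 1:tokens.index("-lfe") + 2] != ["2"]:
        problems.append("missing -lfe 2 switch")
    return problems
```

For the example value from this article, -l C:\SLM\lserv.log -z 2m -lfe 2, the check returns no problems; adding a leading space or dropping -lfe 2 is reported.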
Keywords: ALC, Aspen Licensing Center, Log files, LSERVOPTS, Lserv.log, Environment Variable, SLM.
References: None |
Problem Statement: What is the best way to determine the differences between two license files? | Solution: To check for differences between two license files do the following:
Run SLMLicenseProfiler utility under: C:\Program Files\Common Files\Hyprotech\Shared
When the SLM License Profiler window appears, under Actions tab, select "Diff License Files..."
The following window will appear:
Browse to the 1st license file location by clicking on the 1st box with three dots (...).
Browse to the 2nd license file location by clicking on the 2nd box with three dots (...).
Click on Diff License Files
After both license files are loaded, a "Loading license files completed" box appears.
Click OK to proceed.
Every entry within the two license files will be compared and listed in the same row. If two entries are different, three red exclamation marks (!!!) will appear in front of the corresponding row. In the above snapshot, every entry within the two license files seems to be different.
Keywords: Profiler
SLM
LicenseProfiler
References: None |
Problem Statement: How can I obtain ion concentrations when modeling electrolytes? | Solution: HYSYS has difficulties modeling true species. It is necessary to use an EO Subflowsheet in order to use Aspen Plus unit operations. The components should be added using Aspen Properties with the Electrolyte NRTL property method.
Attached is an example
Keywords: Aspen Properties, Electrolyte, EO Subflowsheet, Electrolyte NRTL
References: None |
Problem Statement: What is new in Aspen Plus V7.3 - Binary mixture & phase equilibrium data available through NIST ThermoData Engine | Solution: The NIST TDE feature in Aspen Plus V7.3 is significantly enhanced through the addition of over three million points of binary mixture and phase equilibrium data, including VLE data for over 30,000 unique pairs of components.
With V7.3, you can search the database and extract phase equilibrium, infinite dilution activity coefficient, and heat-of-mixing data for thousands of component pairs, saving additional weeks or months of effort. The database includes data sets for vapor-liquid equilibrium, liquid-liquid equilibrium, and solid-liquid equilibrium (solubility). These data are invaluable for validating or fitting binary coefficients for equations of state and/or activity coefficient models.
The database also includes binary mixture data for thermophysical and transport properties. This can be especially valuable in fitting binary parameters to improve the fidelity of transport property models, resulting in more accurate equipment sizing and rating calculations.
The on-demand access to binary data in Aspen Plus V7.3 saves weeks of effort of data collection and validation, helping you improve the accuracy of your models.
See this animated tutorial to see this feature in action:
Play viewlet now!
To get started, click Tools, ThermoData Engine... and then select the binary property option. Select two components and then click the Retrieve data button.
After a few seconds, the system will bring up a list of all the available data sets for the selected system. The data sets are sorted by type, and the form shows the number of data points, publication data, and temperature and pressure range for each set of data.
Use the tree view on the left side of the TDE Result form to select a particular data set. Now you can view the data points in detail. The citation is fully documented for each data set. You can optionally view the uncertainty of the data using the uncertainty check box at the bottom of the form.
The data summary view makes it easy to see all the available data sets and select the sets in the temperature and pressure range of interest.
You can view the data points and the citation from this view.
You can also view the data in a plot.
Click the Plot button at the bottom of the TDE Result form to preview the data in a plot.
Vapor-Liquid equilibrium data can be checked for thermodynamic consistency by clicking the Consistency button. This button changes to a 'Results' button when the calculations are complete, and the summary view is updated to show the quality of all the data sets.
Click the Results button to open a form summarizing the consistency results.
Use the radio buttons at the top of the form to view the various test results. You can also view the test results graphically by clicking the Plot button on this form. These test results provide valuable insights into the quality and reliability of the data. With the Herrington test, for example, the areas marked 'A' and 'B' in the plot above should be equal. These data fail the test, in part because they are biased towards low MMA mole fractions. This implies that binary parameters fit against these data might be unreliable at low water concentrations in MMA.
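For intuition, a simplified version of the area-type consistency check described above can be sketched as follows. This is an illustrative stand-in for the rigorous tests TDE performs: a plain trapezoidal integration of ln(gamma1/gamma2) over x1, comparing the area above the axis with the area below it. The data in the test are synthetic, not from any TDE data set.

```python
# Illustrative sketch (not the TDE implementation) of an area-type
# thermodynamic consistency check: integrate ln(gamma1/gamma2) over x1 and
# compare the positive area A with the negative area B. For consistent
# isothermal binary VLE data the two areas should be nearly equal.

def area_test(x1, ln_gamma_ratio):
    """Return (A, B): areas above and below the x-axis (trapezoidal rule)."""
    area_pos = area_neg = 0.0
    for i in range(len(x1) - 1):
        seg = 0.5 * (ln_gamma_ratio[i] + ln_gamma_ratio[i + 1]) * (x1[i + 1] - x1[i])
        if seg >= 0.0:
            area_pos += seg
        else:
            area_neg -= seg
    return area_pos, area_neg

def percent_deviation(a, b):
    """Herrington-style deviation: 100*|A-B|/(A+B); small values pass."""
    return 100.0 * abs(a - b) / (a + b)
```

A perfectly antisymmetric ln(gamma1/gamma2) curve gives zero deviation; real data sets rarely achieve this, which is one reason TDE also applies point-to-point tests such as Van Ness.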
The Van Ness test results can be used to check TPXY or isobaric VLE data against a best-fit NRTL model. A good match between the predicted and measured TXY data implies that these data are reliable, although they don't quite meet the very stringent pass/fail conditions of the Van Ness test.
Once you are confident with the quality of the data, you can send them to a data form and use the Property Data Regression feature in Aspen Plus to evaluate your property model against the data or regress binary parameters to fit the data.
First, click the Save button at the bottom of the TDE Result form to send the data to a property data form. A dialog box will open. You can use this dialog box to select which data sets to be included in the regression. Next, click the Evaluate/Regress button at the bottom of the TDE Result form. This will set the run mode to 'Data Regression' and create a data regression or evaluation case. From this point you can run the model and use the features of the data regression system to view the results in detail.
Use Evaluation to compare model predictions against the data...
...or use Regression to fine-tune the model to fit the data.
Keywords: None
References: None |
Problem Statement: What is New in Aspen Plus V7.3 - Improved Integrated Cost Evaluation | Solution: Integrated economic analysis makes it easy for process engineers to use AspenTech's rigorous and proven cost modeling technology from inside Aspen Plus. Use estimated capital and operating costs to make better engineering design decisions. Compare alternatives on a consistent basis using relative costing early in the conceptual design process, then send the preliminary cost files to your cost estimation department for detailed estimates using Aspen Capital Cost Estimator.
Click here to see a quick demonstration of this feature
Aspen Plus V7.3 includes a number of improvements that build on the integrated economic evaluation feature first delivered with Aspen Plus V7.1.
- Material costs for raw materials and products are now passed from Aspen Plus to Aspen Process Economic Analyzer. Their totals, along with annualized capital cost, total utilities cost, total product sales, desired rate of return, and payout period, are now reported on the Summary tab of the Equipment Summary grid.
- The integrated costing workflow is improved in V7.3. The Select Basis dialog has been removed. Instead, you select a costing template on the Setup | Costing Options | Costing Options sheet. A number of standard templates are delivered out of the box with Aspen Process Economic Analyzer (APEA) V7.3. Your cost estimation department can develop custom templates using APEA or Aspen Capital Cost Estimator (ACCE). You can use templates to specify design basis parameters, default equipment mapping rules, sizing rules, and evaluation rules. You can even customize the sizing and costing algorithms in APEA using Microsoft Excel spreadsheets.
- You can specify the operating life of the plant, the start of basic engineering, and the length of the plant startup on the Setup | Costing Options | Costing Options sheet. These parameters are used to evaluate the payout period.
- The Aspen Plus Costing toolbar now has buttons linked to the Costing Options and Stream Price forms. Stream prices are passed to Aspen Process Economic Analyzer to set the raw material and product costs.
- Equipment weight and total installed weight are now reported for each piece of equipment on the Equipment Summary grid.
Keywords: None
References: None |
Problem Statement: What is new in Aspen Plus V7.3 - Exporting heat exchanger data to Aspen Exchanger Design & Rating tools | Solution: Aspen Plus V7.3 enables a new workflow for sending heat exchanger data to AspenTech's Exchanger Design and Rating (EDR) tools. The new workflow makes it much easier to extract data from simulation models to enable better collaboration between engineers focused on conceptual design and equipment design. This new feature makes it easy to evaluate heat exchangers, including the condensers and reboilers associated with distillation columns.
Click the link below to view an animated tutorial of this feature:
View tutorial now!
Step-by-Step Example
The first step is to build and run your process model. After running the model, the new Analysis toolbar will become active, as shown in the figure below. Click the ?Heat Exchanger Design? button to start the process.
After 30-60 seconds, the EDR user interface will open up and present a list of the heat exchangers from the simulation model. The list includes all the HEATER and HEATX (two-stream heat exchanger) blocks, as well as the column condensers and reboilers.
Aspen Plus passes EDR the known exchanger operating conditions, including the duty, feed stream compositions, pressure drops, and physical properties of the process streams. If the Aspen Plus model references process utilities, then the properties of the utilities fluid will also be passed to the EDR model.
Aspen Exchanger Design and Rating displays a list of the heat exchangers from the Aspen Plus model. Select an exchanger in the upper window. You may optionally change the stream conditions in the middle section of the form. Click the 'Import' button to import data for the selected heat exchanger.
The second dialog box can be used to generate a heat exchanger PSF file. This is an industry-standard data exchange file that you can use to import data into other exchanger design tools.
At this point, you can save the EDR file to send it to an exchanger specialist, or carry on with the exchanger design within Aspen EDR.
When you use this new Export capability from Aspen Plus, the default exchanger type assumed by EDR will be Shell & Tube. You can use the Run | Transfer capability to get your data into an Air Cooled or Plate Exchanger case. For discussion purposes, this document follows the workflow for designing a shell and tube exchanger.
Aspen EDR uses the information from the Aspen Plus case to set many parameters in the EDR environment. Be sure to review the Problem Definition | Application Options form to verify or reset the hot fluid location (shell side or tube side), the application types, and if applicable the condenser and vaporizer types.
The data in the Application Options | Process Data are based on the operating conditions and specifications in the Aspen Plus model. Aspen Shell & Tube Exchanger uses rigorous hydraulics to calculate the pressure drop across the exchanger, so the outlet pressures and pressure drops are estimates. The allowable pressure drop parameters are important specifications for the design because these are considered constraints by the design optimization algorithm. Therefore, the specified maximum allowable pressure drop parameters can have a strong influence on the design and capital cost of the exchanger.
You may optionally enter fouling resistance terms or change the operating conditions carried over from the model - for example you may wish to scale up the flow rates to allow for over-design of the exchanger.
The 'Adjust if Over-Specified' parameter also influences the design calculations. You may wish to adjust this parameter. For example, it often makes sense to free the utility stream outlet temperature or utility fluid mass flow to match the process fluid heat load.
The Property Data forms are automatically populated with data from Aspen Plus, ensuring consistency between the process model and the exchanger design application.
Before running the automated design tool, you may wish to adjust or constrain the geometry or set the materials of construction. You can specify the exchanger type and orientation using the Exchanger Geometry forms, as shown below.
The materials of construction and design codes are set using the Construction Specifications form. In this example, the hot process fluid is highly reactive and Inconel is selected. The hot stream has previously been assigned to the tube side of the exchanger.
After completing the specifications you are ready to size the exchanger. Click the run button (>) to launch the design automation algorithm. The design optimization results are reported on the Result Summary | Optimization Path form. This form reports the optimal design and alternative designs developed by the algorithm. Some of the designs may be lower cost, but may be outside the design constraints. The design status of these cases is flagged as "Near" instead of "OK". Other cases may meet the pressure constraints, but may fail a design criterion such as the maximum unsupported tube length or nozzle velocity suggested by TEMA. These cases are flagged "(OK)" and recommendations are made in the Result Summary | Messages and Warnings form. These problems can often be resolved with simple design changes.
In this example, the condenser duty is very high, and three parallel shells are required to meet the duty specifications. For this reason, and due to the exotic material selection, the capital cost estimate is quite high, but the design optimization procedure has found the lowest cost feasible design to meet the constraints.
When the design is complete, the EDR file can be saved. You may reference this model in rigorous heat exchanger (HEATX) model within Aspen Plus. For rating purposes, be sure to change the calculation mode to 'Simulation'.
You can also evaluate alternative designs and compare the cost and design parameters on the Results Summary | Recap of Designs form. For example, in the case below the exchanger was re-optimized using Inconel tube cladding instead of solid Inconel tubes, dropping the capital cost by half.
Keywords: None
References: None |
Problem Statement: What is new in Aspen Plus V7.3 - Sending simulation data to Aspen Energy Analyzer | Solution: With Aspen Plus V7.3 you can check the energy efficiency of your process in minutes by sending the case file to Aspen Energy Analyzer. The workflow to send an Aspen Plus case to Aspen Energy Analyzer is now much easier - just build and run the Aspen Plus model and then click the new Send to Aspen Energy Analyzer button on the new Analysis toolbar.
See this five minute animated tutorial to see this feature in action:
For best results, we recommend using the process utility feature in Aspen Plus to assign heating and cooling utilities to all process heaters and coolers, including the condensers and reboilers associated with distillation columns. Aspen Energy Analyzer will use the utility data from the Aspen Plus case.
Aspen Energy Analyzer is a heat pinch tool which you can use to analyze the energy efficiency of your process and develop alternate schemes to improve energy efficiency. For example, you can evaluate different schemes for adding process-process exchangers to recover heat to save on heating and cooling utility costs.
Aspen Energy Analyzer reports the ratio of the total heating and cooling costs against the theoretical targets. In this case, the heating and cooling are nearly twice the achievable limit, implying there is a good opportunity for saving energy by introducing process/process exchangers. Aspen Energy Analyzer uses the utility costs from the Aspen Plus model, and a simple capital cost estimation procedure based on required exchanger area to estimate the capital and operating costs associated with the heat exchanger network.
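The reported ratio is simple arithmetic on utility duties or costs; the sketch below is a toy illustration only (the function name and numbers are assumptions, not Aspen Energy Analyzer internals):

```python
# Toy illustration of the energy-target ratio described above: actual
# utility loads divided by the theoretical pinch-analysis targets.
# All names and numbers here are illustrative assumptions.

def target_ratio(actual_heating, actual_cooling, target_heating, target_cooling):
    """Ratio of the actual total utility load to the theoretical target."""
    return (actual_heating + actual_cooling) / (target_heating + target_cooling)

# A process using twice the theoretically required utilities:
ratio = target_ratio(10.0, 6.0, 5.0, 3.0)  # ratio = 2.0
```

A ratio near 1 means the network is already close to the pinch targets; a ratio near 2, as in this case, suggests substantial scope for adding process/process heat recovery.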
Aspen Energy Analyzer also presents a "Heat Exchange Network" (HEN) diagram showing all the heat exchangers in the process. The cooling utilities, hot process streams (being cooled), cold process streams (being heated), and heating utilities all appear as streams. Exchangers are drawn in blue for cooling utility service, red for heating utility service, and grey for process/process exchangers.
The pinch points are shown as vertical dotted lines. Any exchangers crossing these dotted lines indicate wasted energy. The total amount of 'cross pinch' (wasted) energy is reported for each of the pinch points.
You can use the HEN diagram to evaluate different scenarios. For example, you can drop in additional process-to-process exchangers or change the utility services to avoid crossing the pinch points. In a matter of minutes, you can find alternative designs to save energy and reduce operating costs.
Keywords: None
References: None |
Problem Statement: Sometimes we would like to change the number of stages in a distillation column and have the Aspen HYSYS solver converge the flowsheet at a different number of stages controlled by an outside algorithm, i.e., an optimizer or ASW.
This need may arise when the number of stages is a model tuning parameter. | Solution: In Aspen HYSYS, the number of stages is fixed and cannot be varied as a parameter. Furthermore, the liquid flow cannot be zero. The idea here is to set up the column with the maximum number of stages needed, and then direct the vapor traffic through only the necessary number of stages (<= the maximum). In other words, if we need one stage fewer than the maximum, we simply direct the majority of the vapor to skip one stage; only a very small amount of vapor flows through the skipped stages.
In the attached example, this scheme for changing the actual number of stages is implemented in the spreadsheet. The stages are numbered top to bottom. The vapor is completely drawn from stage 8 and returned to a stage above according to the settings in Tee-100. These settings, i.e., the flow ratios in Tee-100, are calculated in the spreadsheet unit from the user-specified number of stages. The calculated ratios, stored in column C of the spreadsheet, are mapped to the flow ratios of Tee-100.
To use the model, a user only needs to set the number of stages in cell A9. This example is set up to change the actual number of stages between 6 and 10. The scheme can be adapted to a different range of stage counts.
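The spreadsheet logic can be sketched in code. The Python below is a hypothetical reconstruction: the stage numbering, the 6-10 range, and the epsilon value mirror the example file but are assumptions, not the actual spreadsheet formulas.

```python
# Hypothetical sketch of the spreadsheet logic: compute Tee-100 split ratios
# that send almost all of the vapor drawn from stage 8 back to the return
# stage that yields the requested effective stage count. A tiny fraction is
# kept in every other branch because HYSYS cannot tolerate zero flows.

EPS = 1e-4  # small flow kept in bypassed branches (assumed value)

def tee_ratios(n_stages, n_min=6, n_max=10):
    """Return the list of Tee-100 split ratios, one per possible return stage.

    Branch 0 returns the vapor highest in the column (all n_max stages
    active); branch n_max - n_min returns it lowest (only n_min active).
    """
    if not n_min <= n_stages <= n_max:
        raise ValueError("number of stages out of range")
    n_branches = n_max - n_min + 1
    active = n_max - n_stages          # index of the branch to favor
    ratios = [EPS] * n_branches
    ratios[active] = 1.0 - EPS * (n_branches - 1)  # ratios sum to 1
    return ratios
```

For example, `tee_ratios(8)` routes essentially all of the vapor through the third branch, so two of the ten stages carry only the epsilon flow and the column behaves as if it had eight stages.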
Keywords: number of stages, distillation, column.
References: None |
Problem Statement: How do you model de-propagation reactions in ionic polymerization? | Solution: The current version of the ionic polymerization model in Aspen Polymers Plus does not account for reversible propagation reactions out-of-the-box. You can, however, apply the segment-based power-law reaction model to account for the reverse reaction term. The "Reversible Propagation.bkp" sample file demonstrates how this is done. This file runs in 2006.5 and higher. To run the file, download the .zip file and unpack all of the files into one directory.
This example started from a simplified SBR polymerization model. For simplicity, we ignore the butadiene and model the polymerization scheme as a styrene homopolymer. The main reactions are defined through the ionic polymerization model.
The depropagation reaction is defined using a segment-based power-law reaction model as shown in the screens below. Note that the "Reacting site" is set to 1 on the segment-based Specifications | Specs sheet. This forces the model to return rates of change for site-based component attributes (SZMOM, SZFLOW, etc.) instead of returning rates for the composite component attributes (ZMOM, SFLOW, etc.). This is required in this case because the ionic polymerization model uses the site 1 component attributes to characterize the living polymer. The sample model uses the composite segment concentration basis to calculate the reaction rates; this model only uses one site so the composite and site 1 segment concentrations are equal anyway.
The reaction is defined as styrene segment → styrene monomer as shown below.
Note that the reaction has been defined as first order WRT styrene segments (STY-SEG). As a first approximation, this might be a reasonable assumption. However, it might be more reasonable to assume that the reaction should be proportional to the concentration of styrene end groups, or perhaps proportional to the concentration of styrene ends in the living polymer molecules. In this example, a user rate constant subroutine is applied to enable these types of assumptions.
The name of the user rate constant subroutine is entered as shown below. In this example two optional rate expressions are available (hence the number of rate constants, No Const:, is set to "2").
The user rate constants are identified on the Specifications | Rate Constants tabsheet as shown below. The User Flag field identifies which user rate constant is used to calculate the overall rate constant. For example, when the user flag is "2", the system will use:
Where "RCUSER(2)" is the second rate constant returned by the user rate constant subroutine.
The first option makes the reaction rate proportional to the overall zeroth moment of the polymer (attribute ZMOM); the second option makes it proportional to the flow rate of live styrene segments in the active polymer (LSEFLOW element #1). These moments are divided by the liquid volumetric flow rate to obtain polymer end-group concentrations in mol/L.
The second option could be extended to copolymers by setting further elements of RCUSER equal to the elements of LSEFLOW (element 1 is the first segment in the list of segments, element 2 is the second segment in the list of segments, etc).
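The two rate-constant options can be sketched as follows. This is illustrative Python, not the actual Fortran subroutine; the argument names and units basis are assumptions.

```python
# Hedged sketch of the two user rate-constant options described above.
# ZMOM, LSEFLOW and the liquid volumetric flow are assumed to be available
# here as plain numbers; in the real subroutine they come from the stream
# array and component-attribute storage.

def user_rate_constants(zmom, lseflow, liq_vol_flow):
    """Return RCUSER(1) and RCUSER(2) as end-group concentrations.

    zmom         -- zeroth moment of the polymer (molar flow basis)
    lseflow      -- live segment molar flows; element 0 = styrene
    liq_vol_flow -- liquid volumetric flow
    """
    rcuser1 = zmom / liq_vol_flow        # option 1: all polymer end groups
    rcuser2 = lseflow[0] / liq_vol_flow  # option 2: live styrene ends only
    return rcuser1, rcuser2
```

Extending option 2 to a copolymer would simply mean returning one such ratio per element of `lseflow`, mirroring the RCUSER/LSEFLOW mapping described above.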
The images below show the subroutine code with some notes explaining what each section of the subroutine does. Note that SOUT is the stream array, which contains component flows, the state conditions, and component attribute values.
By convention in Aspen Plus, position NCC+1 of the stream array contains the total molar flow and position NCC+6 contains the liquid mole fraction. These are used to calculate the liquid volume flow. Attributes are stored as molar flows; dividing an attribute molar flow by the volume flow gives a molar concentration.
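That indexing convention can be sketched as follows. This is illustrative Python mirroring the Fortran layout (Fortran's 1-based SOUT(NCC+1) becomes index NCC in a 0-based list); the molar-volume argument is a placeholder assumption.

```python
# Sketch of the stream-array bookkeeping described above: recover the
# liquid volumetric flow from SOUT, then convert an attribute molar flow
# into a molar concentration. All values here are illustrative.

def liquid_volume_flow(sout, ncc, liq_molar_volume):
    """Liquid volumetric flow from the stream array.

    sout[ncc]     -- Fortran SOUT(NCC+1): total molar flow
    sout[ncc + 5] -- Fortran SOUT(NCC+6): liquid mole fraction
    """
    total_mole_flow = sout[ncc]
    liq_mole_frac = sout[ncc + 5]
    return total_mole_flow * liq_mole_frac * liq_molar_volume

def attribute_concentration(attr_mole_flow, vol_flow):
    """Attribute molar flow divided by volume flow = molar concentration."""
    return attr_mole_flow / vol_flow
```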
The results below show how the depropagation rate influences the polymer generation rate and the number-average molecular weight of the product. As expected, higher depropagation rates lead to reduced molecular weight and lower yield.
Although Polymers Plus will calculate the weight-average molecular weight, this prediction may be suspect because the segment-based model does not account for the influence of this reaction on the second or third moments of the polymer.
Keywords: None
References: None |