Problem Statement: This knowledge base article contains a sample query that activates an Aspen Calc calculation. | Solution: This query activates the Aspen Calc calculation named TotalRawMaterials.
local CalcCmd;
local CalcObj;
CalcCmd=createobject('CalcScheduler.CalcCommands');
CalcObj=CalcCmd.GetCalculationObject('TotalRawMaterials');
CalcCmd.CalcExecute(CalcObj);
Keywords: query
sample
example
Aspen Calc
execute
calculation
References: None |
Problem Statement: How to run an Aspen Calc calculation from VB (Visual Basic)? | Solution: Aspen Calc has a COM object that the user interface uses to manage and execute calculations. Unfortunately, Aspen does not have user documentation for this COM object at this time.
The following code executes a calculation from Visual Basic.
Dim calccmd As Object
Dim calc As Object
Set calccmd = CreateObject("CalcScheduler.CalcCommands")
Set calc = calccmd.GetCalculationObject("abc") ' abc is the calculation name
calccmd.CalcExecute calc
Keywords: aspencalc
calcscheduler
calccommands
visual basic
107042-2
References: None |
Problem Statement: One of the integral features of the Aspen InfoPlus-X layered product is the ability to calculate averages and store them in history for each tag.
You can set it up with up to six different averaging periods, such as 12 minutes, 1 hour, 4 hours, 1 shift, 1 day, and 1 month.
Then, at regular intervals, an external task called Tsk_Avrg calculates the averages for all records defined by Ip_PvDef or Ip_DvDef, and the results are stored in history.
Sometimes, usually upon first configuring Aspen InfoPlus-X, or if the averaging has been inoperative for a long time, the averaging calculations may need to be jump-started. | Solution: The first thing to check is a database record called Ip_Act_Avrg, which has four fields (including the 'Name' field).
IP_EXTERNAL_TASK should be set to TSK_AVRG
IP_RESCHEDULE_INTVL should be set to a value equal to or lower than the minimum averaging period. Per the example above, that would be 12 minutes or a divisor of 12 (1, 2, 3, 4, or 6). This is the frequency at which Tsk_Avrg wakes up to see if it has any work to do.
IP_SCHEDULE_TIME should be set to a time in the very near future; it can also be entered as a time in the past, in which case it will be automatically recalculated.
If all of this is correct and averages are still not being calculated, then below is a simple SQLplus query that needs only slight modification to jump-start InfoPlus-X averaging.
It involves writing a value to 4 fields in 3 different records. Three of these fields contain the same Timestamp.
These Timestamp fields will fool Tsk_Avrg into thinking it has performed averaging at this time and to continue averaging calculations from that time.
The only change you should make to the query is the Timestamp itself. Set all 3 occurrences of this timestamp to the SAME value - of your choice.
--Set the last average time in the Ip_PvDef records for AVG1 area to 'end' of average period
UPDATE IP_PVDEF SET IP_Avrg1_In_Time='31-MAR-04 14:00:00.0';
--Set the confidence value to cause history generation
UPDATE IP_PVDEF SET IP_Avrg1_In_Confid = 0.0;
--Set the last average time in TSK_AVRG for AVG1 repeat area to 'end' of average period
UPDATE TSK_Avrg SET IP_Avrg_Started[1]='31-MAR-04 14:00:00.0';
--Set the last average started in IP_AVRG_INTVL_1 to 'end' of average period
UPDATE IP_AvrgRaListDef SET IP_AVRG_STARTED='31-MAR-04 14:00:00.0' WHERE NAME LIKE '%_1';
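To confirm the writes took effect, a quick check such as the following can be run afterwards (a sketch using the same records and fields as the updates above):
SELECT NAME, IP_Avrg1_In_Time, IP_Avrg1_In_Confid FROM IP_PVDEF;
SELECT IP_AVRG_STARTED FROM IP_AvrgRaListDef WHERE NAME LIKE '%_1';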
Keywords:
References: None |
Problem Statement: How to report reaction rate for the kinetic reactors RCSTR, RPLUG, and RBATCH? | Solution: Prior to V12, it was not possible to see the reaction rates in the user interface on any of the results forms.
However, if the Diagnostics for the Simulation of the block were turned up to 6 or higher on the Block Options of the Reactor, the rates would be reported in the History file in SI units.
Using a version of the file in Knowledge Document 65744 (https://esupport.aspentech.com/S_Article?id=000065744) that has been modified to have a diagnostic level of 6, the History (.his) file shows:
General Kinetic Reaction Rates
1 0.207365516409191
Reaction Phase Volumes
1 0.140000000000000
General Kinetic Reaction Rates
2 0.174051245557285
Reaction Phase Volumes
2 0.140000000000000
The Rates for reactions 1 and 2 are in kmol/s and the Phase Volumes are in cubic meters.
In V12 and higher, if the file is re-run, the results will be shown on the reactor Results form:
The Component Generation Rates and the Custom Reaction Variables are also reported.
Keywords: Reaction, rate,
References: None |
Problem Statement: How to calculate the checksum on the media that has been downloaded from the AspenTech web support site or the aspenONE Download Center. | Solution: In order to checksum one file, such as an ISO file, use the certutil command. Let's assume you've downloaded an ISO file into c:\temp; enter the following DOS command in that directory:
certutil -hashfile <iso file> MD5
For example: certutil -hashfile V12-ENG.iso MD5
MD5 hash of file V12-ENG.iso:
4bd32eaa3bf6cff37f61cf3d79a13bc6
certutil: -hashfile command completed successfully
The hash value on the line following the command is the checksum. Compare this checksum with the one in the checksum file.
In order to verify multiple files after extracting them from the web download zip file, you can download the MD5 checksum files to compare the integrity of the extracted files. The checksum file is generated with the MD5 algorithm, and you can create a similar checksum file using any checksum utility that supports the MD5 algorithm, such as fsum (http://www.slavasoft.com/fsum/) or Microsoft fciv (http://www.microsoft.com/en-us/download/details.aspx?id=11533).
1. Launch any checksum utility that supports the MD5 algorithm, and point it to the folder where the files were unzipped. Please be sure to supply the parameter to allow the checksum utility to generate a checksum on all extracted files.
E.g. Using fsum.exe as an example, enter the following DOS command, which creates a checksum file for all files under c:\temp\MyUnzipfolder, assuming the DVD content was extracted to c:\temp\MyUnzipfolder\aspenONEV[X.X][ENG/MSC]Suite.
fsum -jm -r c:\temp\MyUnzipfolder\aspenONEV[X.X][ENG/MSC]Suite *.* > c:\temp\aspenONEV[X.X][ENG/MSC]Suite_unzipped.checksum
2. Save the checksum output as aspenONEV[X.X][ENG/MSC]Suite_unzipped.checksum.
3. If you are using another checksum utility, please ensure that the format of the checksum file is [Checksum + path to the filename] in order to compare with the checksum files provided.
4. Use any file-comparing utility to compare aspenONEV[X.X][ENG/MSC]Suite_unzipped.checksum and aspenONEV[X.X][ENG/MSC]Suite.zip.checksum.
5. There should be no differences between the .checksum files other than the time that the .checksum file was created. (In some cases, you may need to sort the checksum file based on the checksum number, as the sequence of path/filename may not be the same.)
6. If there are any differences in any file checksum, try to unzip the content from the zip file again and monitor for any errors during extraction. Otherwise, please try downloading the aspenONEV[X.X][ENG/MSC]Suite.zip again before extracting the content.
· Incorrect: Fsum -jm -r c:\temp\MyUnzipfolder\aspenONEV[X.X][ENG/MSC]Suite\*.* >> c:\temp\aspenONEV[X.X][ENG/MSC]Suite_unzipped.checksum
· The -jm and -r switches are correct: they select MD5 format and recursive operation (search all subfolders).
· What is not correct is the lack of a space between the \ and the *.*
· According to the Fsum ReadMe.txt file (attached), here is the correct coding:
· Correct: Fsum -jm -r c:\temp\MyUnzipfolder\aspenONEV[X.X][ENG/MSC]Suite *.* > c:\temp\aspenONEV[X.X][ENG/MSC]Suite_unzipped.checksum
· The fix is that there needs to be a space between the end of the folder name and the *.*; there also needs to be a space between the *.* and the > symbol.
· Note: you only need a single > symbol.
· Note: the trailing backslash \ at the end of the aspenONEV[X.X][ENG/MSC]Suite folder name is unnecessary. fsum will still work if you include it, as long as you put a space between the \ and the *.*
Keywords: fsum, checksum calculation, download of software instructions
References: None |
Problem Statement: How do I solve issues where the RabbitMQ endpoints are not reachable, affecting Aspen Mtell Alert Manager (MAM) functionality?
It is often necessary to make sure the RabbitMQ service is working correctly when dealing with Aspen MAM; many different errors referencing Aspen MAM services will appear, letting the user know that endpoints are not reachable. | Solution: 1. Open the Windows Services Console and make sure the RabbitMQ service is running
2. Open CMD using Run as Administrator
3. Go to folder “C:\Program Files\RabbitMQ Server\rabbitmq_server-3.8.2\sbin”, depending on RabbitMQ version final part of path may change (e.g. 3.8.1 instead of 3.8.2)
4. Execute command “rabbitmqctl.bat list_permissions”
5. The account "admin" should be listed and have all permissions, with output resembling the listing below
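If the configuration is correct, the output should resemble the following (exact layout varies slightly by RabbitMQ version; the vhost shown is the default "/"):
Listing permissions for vhost "/" ...
user    configure    write    read
admin   .*           .*       .*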
How to fix:
A. If there is no admin in this list or if admin lacks permissions
a. Type the command below to check whether the admin user exists but lacks permissions: "rabbitmqctl.bat list_users"
b. If there is no admin user, type the command below to add the "admin" user
rabbitmqctl.bat add_user admin admin
c. If the user is added successfully, type the command below to grant the "admin" user permissions
rabbitmqctl.bat set_permissions admin .* .* .*
B. If at any time an error from RabbitMQ utility is reported and utility does not run successfully
a. The easiest way is to uninstall and then reinstall RabbitMQ, then use the command lines above to add the "admin" user and set its permissions. The RabbitMQ installer is part of the Aspen MAM installer ("\aspenONEMedia\aspenonesuite\core\RabbitMQ3.8.1")
After these steps have been executed, restart IIS and MAM will no longer display the initial error.
Keywords: Online continuous
Online batch
Can’t deploy
Can’t process
Can't access
References: None |
Problem Statement: Is it possible to specify the Cv value and determine the pressure drop across the control valve? | Solution: If you are able to obtain or estimate the Cv of the meter then you can specify this Cv value in a control valve in HYSYS from the Rating/Sizing tab.
In the valve Design/Parameters page there is a check box - Use sizing methods to calculate Delta P. The valve can calculate pressure drop with specified Cv even in the steady state simulation. This will be an approximation for the pressure drop calculations.
Keywords: Control Valve, Cv, Aspen HYSYS
References: None |
Problem Statement: How to get rid of the following message, "Utility Hydrate Formation-1 is not available with the property package being used", in Aspen HYSYS? | Solution: To prevent this message from appearing when selecting the Hydrate Utility:
You will need to select a property package from the HYSYS databank in order for the Hydrate Utility to work; the Aspen Properties databank is not linked to the utility.
Keywords: Hydrate Utility, Databank, Aspen HYSYS
References: None |
Problem Statement: From the Column Design ribbon, in the Column Analyzer group, the Export to Vendor option is greyed out, why? | Solution: The user can export results from the column analyzer to vendor packages to confirm and validate the results. However, this option is only available if column internals configurations are being analyzed within the column.
Once the column internals are added to the configuration, the Export to Vendor option will become available.
Keywords: Column internals, reports, exporting, vendors, KG Tower, SulCol, FRI-DRP
References: None |
Problem Statement: Auto Add button generates an error message in Aspen Calc when using MS EXCEL to create a calculation.
Consider the following scenario:
1. Open Aspen Calc
2. Right-click the server node and select New Calculation from the context menu
3. Input a calculation into the Wizard1 and click Next
4. Click 'New Formula' and select EXCEL
5. Click Create New button to create a new workbook.
6. Create a new Excel Formula that will calculate the average of 3 values.
Formula name: average
Cell names: A1 = level1, A2 = level2, A3 = level3, A4 = average
Formula cell: = (level1 + level2 + level3)/3. Then save the workbook.
7. Click Auto Add button:
Expected result: The parameter appears in the parameters tab.
Actual result: An error pops up: Error Auto Adding Excel Parameters.
This Knowledge Base article shows how to resolve this error message. | Solution: To resolve the error, you must change the identity of the Excel application to interactive user (see below). This setting is found in the DCOM Config. To get there proceed as follows:
Open Component Services | Computers | My Computer | DCOM Config
Expand DCOM Config and search for Microsoft Excel Application
Then right click Microsoft Excel Application and select Properties from the Context menu
Select ‘The interactive user’ radio button and save the setting
This stops Excel from generating the popup
Keywords:
References: None |
Problem Statement: Aspen Watch keeps the history of the DMC3 parameters in the Attribute field. When the user launches the history plot from the PCWS it selects a predefined list. For CVs it shows measurement (DEP), Steady State Target (SSDEP), Model Prediction (PRDMDLD), Operator High limit (UDEPTG) and Operator Low limit (LDEPTG). | Solution: You can add additional trends in the history view by searching the tag in the top left section. The tag format on Aspen Watch follows:
CXXY_VAR
XX: Controller ID in Aspen Watch
Y: (I) Independent or (D) Dependent
VAR: Variable name in DMC3 controller
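For instance, assuming a controller whose Aspen Watch ID is 01 (a hypothetical value), an independent variable named FEED_FLOW would be stored under the tag C01I_FEED_FLOW, and a dependent variable would use D in place of I.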
For example, to add the HD_FLOWSP independent measurement and steady-state target to the trend, type the variable name and click "Add".
This will default to the measurement in the trend:
Now, for the steady-state target, click on the Search icon to show the full list of available parameters.
Then in the search menu you can filter the search results by clicking on the APC Input filter. Then select to expand the “More” menu and “Results” type.
Click on “Select” and then “Return Selected”. This will add the trend to your current plot. Note that you can select multiple attributes (highlighted in yellow) before returning to the plot.
Keywords: Aspen Watch
PCWS
History Trends
References: None |
Problem Statement: A DMC3 RTE controller cannot write values to the DCS, but there are no write error messages on the PCWS or in CIMIO_MSG.log.
However, the Test API and PCWS can write values to the DCS.
What is the root cause of this issue? | Solution: If you use the “Boolean” IO Datatype for some DCS binding entries, you may see this problem.
This can cause an internal controller error, which in turn stops writes from the controller to the DCS without producing any write error message.
If you see this kind of issue, please use the "Integer" IO data type instead of "Boolean".
Keywords: DMC3
TEST API
Write failure
References: None |
Problem Statement: This knowledge base article frames some reasons why an MV or CV variable can be stuck at the Setpoint combined status, and some ways to solve the problem. | Solution:
The combined status Setpoint can occur for an MV or CV when both the Low Limit and the High Limit are pinched at the same value. In this case, the controller will calculate an SS Target that can maintain that condition, provided enough MVs are available and have a proper range to react.
However, there could be some scenarios where MVs or CVs can be stuck on Setpoint even if the limits are showing different values. These problems have been reported due to two causes:
1.- On V12, we have encountered that one MV or CV gets stuck on Setpoint, regardless of whether the controller is ON or OFF. This behavior may happen both in the deployed controller and in simulation in DMC3 Builder. The problem was addressed in Emergency Patch 3 for both the Online Server components and the DMC3 Builder components (VSTS 611026). For V12.1 the problem has been addressed in Emergency Patch 1 for the Online Server.
2.- MV/CV model curves. When an MV has a model with only one CV and that CV is turned off, the MV is left without any related CV in service. When this happens, the MV is pinched by the engine for safety reasons. This can be solved by either turning off the MV, since it has nothing to control, or turning on the CV if possible.
Keywords: DMC3, Setpoint, MV, CV
References: None |
Problem Statement: The MES Legacy Add-In can be used with the Tag Browser, or you can manually enter the name of a single tag in the Tags field of every dialog box. However, it is not possible to enter several tag names, or drag and drop them from the Tag Browser, into that field. | Solution: 1. Type the names of the tags to be entered in separate cells:
2. Click on the Process Data button and select the desired function (Current Values in this example):
3. Position the cursor in the data entry field or click on the _ icon:
4. Select the cells that contain the names of the tags:
5. Click OK
6. The values of the tags will be returned in the desired Output Location:
Keywords: Legacy Add-In
Multiple Entries
Excel Add-In
References: None |
Problem Statement: From V11, aspenONE Process Explorer was included as the primary tool to display HTP plots from Aspen Watch on the PCWS. However, some problems can occur, such as difficulties accessing the tool or the error "Unable to acquire SLM_RN_ALL_ASPENONEPCXP". PCWS offers different options for plotting the AW information. This KB explains the different types of plots that can be used and how to set them up. | Solution: The types of plots that PCWS can use are:
AspenONE Process Explorer – This type of plot uses the aspenONE Process Explorer technology, one of the main data exploration and trending tools for IP21. This tool requires the SLM_RN_ALL_ASPENONEPCXP license key to work; if the key is missing, these plots may fail to open or display the information. This type of plot was introduced in V11.
Web.21 HTP – These are the legacy plots used in PCWS. The display fundamentally provides the same basic functionality as aspenONE Process Explorer, allowing you to see the IP21 AW data. This type of plot does not require the SLM_RN_ALL_ASPENONEPCXP key.
The type of plot can be changed from PCWS by going to the Preferences tab and then selecting the plot option:
Keywords: PCWS, History Plots, License
References: None |
Problem Statement: SMBv1 (or SMB1) was the first version of the popular SMB/CIFS file-sharing network protocol. There have been a couple of different versions of SMB/CIFS over the years; most predominant nowadays is SMB2 (and in many organizations now SMB3). SMB1 was developed nearly 30 years ago.
SMBv1 has known vulnerability problems; WannaCry and Petya are prime examples of malware that took advantage of SMB1's weaknesses.
In this article, we will describe the relationship between APC software and SMBv1. | Solution:
APC software does not use SMBv1 for any of the servers (DMC, Web, or AspenWatch), and it can be disabled as recommended by Microsoft without affecting the APC or DMC applications.
As mentioned previously, SMBv1 is an old version of the Server Message Block protocol that Windows uses for file sharing on a local network. It has since been replaced by SMBv2 and SMBv3.
These versions are secure and can coexist with APC software without causing issues, so there is no need to disable them.
Keywords: APC, SMBv1, Microsoft
References: None |
Problem Statement: This knowledge base article frames a couple of points to check when there is a connection problem (or a suspected connection problem) between the Online Server and the AspenWatch Server. | Solution:
1.- The messages displayed on the AW Maker Run Status can give a few hints about an underlying problem. Some of them include:
Not Licensed: This problem can occur when AW is unable to reach the license server, or while it waits for license validation before the status changes to Success. The Not Licensed message can also indicate a problem with the Aspen APC Performance Monitor Data Service; in that case, verify that the service is running, verify that it runs under an administrator account, or restart the service to check whether this solves the problem. Further, the message can appear if one of the applications is not started, since the process controller process must be running for the status to change. Finally, if the problem exists for an ACO application, it is possible that some controller features, such as adaptive modeling, require a separate license key that may not be in your current license file. These features can normally be disabled from the CCF by going into DMCplus Build and checking the general section from the Options menu.
CIM-IO Connection Failed: This problem can be fixed by restarting the DMCplus context service on the Online Server.
2.- Verify that the connections are properly set between the AW server and the Online Server. This can easily be done by going to AW Maker -> Actions -> Online Host Connections (opening this could take a few minutes). Healthy communication should show all the RTE and DMCplus nodes in green. If one of the nodes is yellow, verify that the server name is correct, verify that no firewall is blocking ports and interrupting the communication, and check through a ping test that the communication between the servers is enabled on both sides (the ping test should be done on both the AW and Online servers, and should succeed in both directions). If the ping test fails in either direction, it may be necessary to check for further network problems.
3.- Verify that the controllers (RTE or ACO) are running without any further problems (license validation, ONREQ going to Success). Also check that the RTE service and the DMCplus context service are running fine on the Online Server. Otherwise, the AW Maker Run Status could hang on Initial or on any of the other messages described above.
4.- Verify that the IP21 Manager is running fine. This includes checking that all the tasks from the IP21 Manager have run fine since it started. (It is good practice to enable the Startup @ Boot option.) Also, make sure that the IP21 Administrator is stopped before rebooting the AW server; otherwise, this could lead to further problems with the database.
5.- In the case of the RTE controllers, we suggest stopping the collection before rebooting the server and restarting the collection afterward.
Keywords: AspenWatch Server, AspenWatch Maker, Online Server
References: None |
Problem Statement: How can I design a heat exchanger using Aspen HYSYS? | Solution: The Aspen HYSYS Heat Exchanger unit operation contains several different models to simulate a heat exchanger:
1. There are 2 simple 'Exchanger Design' models, 'End Point' and 'Weighted', in which UA and pressure drop can either be specified or calculated based on process conditions (see the Operations Guide manual and Solution 109410 for more details). Exchanger geometry is not considered in these models.
2. There are also rating models available, 'Steady State Rating' and 'Dynamic Rating'. These can be used if you want to check an existing design for feasibility for the process, or if you have already fixed the general geometry of the heat exchanger but want to investigate the effect of 1 or 2 parameters on the design (for example, shell diameter or number of shell / tube passes). This would be an iterative process. See the Operations Guide manual and Solution 109410 for more details.
3. The best option, however, for evaluating and choosing a detailed design would be to use one of the external heat exchanger software packages which is compatible with Aspen HYSYS. Aspen TASC and Aspen STX, for example, both interface directly with Aspen HYSYS and can be accessed directly from the heat exchanger unit operation in your flowsheet. Some of these packages will provide you with a range of possible designs and calculate which design is the most cost effective. Once designed, you can then use either Aspen TASC or Aspen STX as the calculation engine for the heat exchanger within Aspen HYSYS to do further rating studies.
Keywords: design, heat exchange
References: None |
Problem Statement: The collect documentation says that you can use -1 for description length and engineering units length.
This is not the case. It generates an invalid CIM-IO message. | Solution: Users must specify a positive number for the eng-units and desc result length in the collect input file.
Also see the Cim-IO to OPC User Guide that corresponds with the version of the Cim-IO for OPC software you are using. Since the Cim-IO for OPC interface does not support Smart Data Types, it is necessary to configure them. These are configured in one of two ways, depending on the version you are running: either with the OPCProperties tool, for newer versions of the interface, or by editing the extendedlist.txt file for older versions.
Keywords: dmcplus, cim-io, description, engineering units
References: None |
Problem Statement: Information for running DMCplus on the Honeywell AppNode | Solution: Tips and Tricks for the Honeywell App node
I. Honeywell Software
NOTE: Much of this information in this section is a compilation of our observations. It is really Honeywell's responsibility to set up and support this software covered in this section. Some of the following information may not be totally correct or it may be misleading, but it should give you a general idea of where to look for problems. Don't hesitate to call TAC about these issues.
A. Understanding the TPS network. The App node will be connected to a primary domain controller. The PDC is an NT box running NT Server. The PDC can be an App node, but it can't be a GUS station. The best solution is probably to have a small NT box acting as the PDC (primarily because IP.21 cannot run on a PDC). When you change the configuration of the TPS domain or the security levels, you can replicate (broadcast) the changes to all of the nodes on the domain.
B. TPN Server. The client must have the TPN server installed and running. This is the equivalent of CM50S or OpenDDA software and allows the App node to connect to the LCN. The Aspentech Cim-IO to OPC server will connect to the Honeywell TPN server (actually the Aspentech Cim-IO to OPC server is a client in this case!).
You can check on the status of the TPNserver by running START Menu\Programs\Honeywell TPS\TPS Status Display Click on the App node's machine icon in the left pane and make sure that there is a TPN server available. The name is arbitrary, but if you double click on it you will get a TPN Server Status Display in the right hand pane.
The Number of Active Clients entry is important. If you have the Cim-IO to OPC server running this number will increment. If you have a PI database running this number will increment. You should not stop the TPN server unless the number of active clients is zero. If the number of active clients is zero then you should expect the status of the TPN server to be IDLE. If there are clients connected you should expect the status to be RUNNING.
The Default Access Level must be CONTINUOUS CONTROL. If it is set to anything else you will only be able to READ and not WRITE. To change this value do the following:
1. From the TPS Status Display STOP the TPN Server. [If you can't stop the server then you need to skip to the next section on security]
2. START Menu\Programs\Honeywell TPS\Configuration Tool
3. Go to the menu option Configure\TPS Domain
4. Select the tab HCI Components
5. In the dialog box find your TPN server under the Component Name column and click on it.
6. Click on the button Enter/Edit Server Specific Configuration...
7. Click yes
8. Select the Default Access and Priority Levels tab
9. Set Default Access Level to CONTINUOUS CONTROL
10. Click OK to exit TPN Server dialog box
11. Click OK to exit Configure HCI Component dialog box
12. Go to the Replication tab and click on the Commit Configuration\Replicate button.
13. Restart the TPN server from the TPS Status Display
C. Security. Security is governed by the files in C:\HWIAC\Security. If you right-click on these files and choose Properties, you can choose Security and set the security levels. If you can't start or stop the TPN server, you need to do the following:
1. START Menu\Programs\Honeywell TPS\Configuration Tool
2. Go to the menu option Configure\TPS Domain
3. Select the tab HCI Components
4. In the dialog box find your TPN server under the Component Name column and click on it.
5. Note the Capability column in the Secured Methods box (should be OPCRead=Operator, OPCWrite=Operator, and Shutdown=Supervisor). [I have recently found that it may be easier to set both OPCRead and OPCWrite to blank. It is not obvious how to do this, but it can be done. Of course this short-circuits all security, but who else is going to accidentally try to write through the OPC server?]
6. In Windows Explorer right click on C:\HWIAC\Security\Operator and select properties.
7. Go to permissions and set the X permission for your user.
D. Checkpoint file (CACHE). In the TPS Status Display detail you will see a reference to the Checkpoint file. If you want to clear the cache, stop the TPN server and delete the cache file. When you restart the TPN server a new cache file will be created. There is no way to retranslate a single tag.
E. CL Server. If you are running the CL server, you will need to follow the directions in the DMCplus for TDC3000 manual regarding the $PRSTSxx.$XACCESS variable. It needs to be set to READWRIT on the LCN.
II. Aspentech software
A. Cim-IO to OPC Server. The OPC server installation is pretty smart. It can find many OPC servers on the App node (I've seen it find 13). We are only interested in one of them. The Cim-IO to OPC server installation will fill up your cimio_logical_devices.def file with every OPC server it can find. You can take them out and put them in the START MENU\Programs\Cim-IO\Cim-IO to OPC Excluded Services file. Precisely, take the DLGPSERVICENAME listed in the cimio_logical_devices file and put it in the excluded services file. The one that you will want to leave is CIOHCITPNSERVER. You should also copy the CIOHCITPNSERVER entry in your cimio_logical_devices file and rename the first column to IOHCI. This is for convenience (who wants to type CIOHCITPNSERVER all the time?).
B. Smart Data Types. Version 2.0 of the Cim-IO to OPC server supports some smart data types. Engineering units and descriptors will work if you edit the %CIMIOROOT%\io\opc\extensions.txt file. You need to add the following line:
Hci.TPNServer .PTDESC .EUDESC
Antiwindup and Loop Status smart data types do not work. We are presently calculating them in the mode switching CL code and storing them on the CDS point. You will need to change your ccf to read these values as DBVL (see the DMCplus Build App node template).
C. Starting and Stopping the Cim-IO to OPC Server. Go to START MENU\Settings\Control Panel and open the Services dialog box. The Cim-IO to OPC Interface Manager cannot be set to automatic startup. The Cim-IO Manager service can be set to automatic startup. The Cim-IO to OPC Interface Manager can be started by the system account (double-click on the service to set this). Right now I have not figured out how to set this to automatic startup and get it to work. I believe that the problem is that the TPN server is not fully up by the time the Cim-IO server wants to come up. Unfortunately, if you do set it to automatic, the service will appear to be running, but it will not work correctly. Suggestions welcomed.
The actual process that gets started when you start the Cim-IO to OPC server is called asyncdlgp.exe.
D. DMCplus Collect in the Background. This can be done. See TAN 102680, Running Collect on an NT machine, regarding this topic.
Keywords: None
References: None |
Problem Statement: What can cause the error "do retry" when using the Aspen InfoPlus.21 (IP.21) Administrator from a client computer? | Solution: Verify that a firewall does not block communications between the client computer and the Aspen InfoPlus.21 server, see How do I make the Aspen InfoPlus.21 Administrator connect through a firewall?
Verify that Data Execution Prevention is set to essential Windows programs and services only.
If using Symantec antivirus software verify that the startup type of the BHDrvx64 driver is set to Disabled, see Aspen InfoPlus.21 may not start if the server is using Symantec Endpoint Protection
Next, perform the following steps:
Shutdown IP.21 database
In Control Panel | Administrative Tools | Services, stop Aspen InfoPlus.21 Task Service
Stop NobleNet Portmapper for TCP service
Issue ipconfig /renew in command prompt
Add Microsoft Loopback adapter
Start NobleNet Portmapper for TCP service
Start Aspen InfoPlus.21 Task Service
Start IP.21
Keywords: IP.21 administrator
do retry
firewall
References: None |
Problem Statement: I am working with a text file that I need to add lines into and delete lines from. It is being used as an external table, one that an SQLplus script AND a user can update. In SQLplus, I can easily add a line at the bottom with SET APPEND. However, I need to delete the first line in the file when certain things happen. This file represents a queue of a process, first in -- first out. How do I delete that first line? | Solution: One way to delete a line from a file is to select from the file and exclude the line with a WHERE condition. You can direct the output to a file with SET OUTPUT. You need a SYSTEM command to replace the old file with the new one. E.g.:
set output 'c:\t1.txt'; -- Open temporary file
write line from 'c:\t.txt' where linenum > 1; -- Don't include line one
set output default; -- Close the temporary file before copying it.
system 'move c:\t1.txt c:\t.txt'; -- Replace the original with the temporary file.
Keywords: None
References: None |
Problem Statement: How to set up a dynamic simulation of a pressurized propane tank with heat loss due to the environment? | Solution: The following video shows, step by step, how to set up both the steady-state and dynamic simulations with the aforementioned characteristics.
In the attachment, you can find the starter files, an .hsc for V11 and upwards, and an .XML file for earlier versions.
Keywords: Propane, Dynamic, Pressurized, Tank, Heat, Loss, Ambient, Specs, Specifications, Example
References: None |
Problem Statement: How can I set up a deadband for my PID controller, so that the controller acts to maintain a range and not a single setpoint? | Solution: The steps are shown in the following video:
Keywords: Deadband, HYSYS, Dynamics, PID, Controller, Parameter, Scheduler
References: None |
Problem Statement: Aspen Plus crashes when I try to save a .bkp file from an .apwz file. | Solution: The following video shows the steps to extract a .bkp file from an .apwz file in V12.1
Keywords: bkp, apwz, save, crash, file, plus
References: None |
Problem Statement: Aspen Plus hangs when opening a file on "Loading Simulation Engine", or gives the error "Unable to load simulation Engine. Probable cause: insufficient disk space or memory." | Solution: This can occur when there are multiple Fortran run time libraries on the machine.
This can happen when other versions of the Fortran runtime libraries are installed from another application.
To check where the run time libraries are located:
Open a cmd window with admin rights.
Type where libifcoremd.dll to get the path for the Fortran run time libraries.
e.g.
C:\WINDOWS\system32>where libifcoremd.dll
C:\Windows\System32\libifcoremd.dll
C:\Program Files (x86)\Intel\Compiler\11.1\072\lib\intel64\libifcoremd.dll
C:\Program Files (x86)\Intel\Compiler\11.1\072\lib\ia32\libifcoremd.dll
C:\Program Files (x86)\Common Files\Intel\Shared Libraries\redist\intel64_win\compiler\libifcoremd.dll
C:\Program Files (x86)\Common Files\Intel\Shared Libraries\redist\ia32_win\compiler\libifcoremd.dll
The top reference listed needs to be the latest compatible version.
In earlier versions, Aspen Plus installed the run time libraries in C:\Windows\System32; however, other programs may also write to this directory. Starting in V12, the files are installed in C:\Program Files (x86)\Common Files\Intel\Shared Libraries\intel64 and the path to that directory is set.
Copying the latest Fortran run time from C:\Program Files (x86)\Common Files\Intel\Shared Libraries\intel64 to system32 is a workaround if there is a problem.
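As a sketch, the copy can be done from an elevated command prompt (verify the exact source folder on your machine first, since it may sit under a redist subfolder as in the listing above):
copy "C:\Program Files (x86)\Common Files\Intel\Shared Libraries\intel64\libifcoremd.dll" C:\Windows\System32\
The other Fortran run time DLLs in the same folder may need to be copied in the same way.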
If using roaming user profiles try changing the working directory. See Knowledge Document 101890 for details.
If neither solution works, please check the following list of items in Knowledge Document 101100 and report the results to support.
Keywords: None
References: VSTS 715002 |
Problem Statement: Is it possible to manually delete history values in the GDOThistory table of the GDOTOnlineHistory or GDOTSimulationHistory database by running a query in SQL Management Studio? | Solution: Here is a procedure to manually trim the database.
This query will get you the oldest timestamp:
SELECT TStamp FROM GDOThistory WHERE TStamp = (SELECT MIN(TStamp) FROM GDOThistory)
That is the oldest time in the data base - units are the number of milliseconds since 1970-01-01.
Divide by 1000 and you get the number of seconds since 1970-01-01. That is also known as the Unix epoch time; 1970-01-01 is known as the Unix epoch. This website will convert between Unix timestamps and human readable timestamps: https://www.epochconverter.com/
To trim the database, you could try removing 24 hrs from the history data. The number of seconds in a day is 86 400, and the number of milliseconds is 86 400 000. Add that number of milliseconds to the oldest timestamp (returned by that first query above) to get a new timestamp value. Then execute the following query to remove the oldest 24 hours of history:
DELETE FROM GDOThistory WHERE TStamp <= old-timestamp-value-plus-one-day
Where the old-timestamp-value-plus-one-day is the integer number of milliseconds you just calculated.
For example, if the timestamp query returned 1619348749000 then, go to https://www.epochconverter.com/ and find that 1619348749 seconds corresponds to, approximately, 25 April 2021.
So now with a calculator add 1619348749000 and 86400000 and get 1619435149000. Then execute this query to remove the oldest 24hrs of data:
DELETE FROM GDOThistory WHERE TStamp <= 1619435149000
You may need to go through that procedure multiple times, day by day, to get the database down to a reasonable size. You could shortcut that procedure by using https://www.epochconverter.com/ to determine a more recent date (e.g. one or two weeks ago) and just delete anything older than that - make sure you multiply seconds by 1000 to get milliseconds.
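For reference, the cutoff arithmetic can also be left to the database itself. The following single statement is equivalent to one pass of the manual procedure above (the same cautions apply):
DELETE FROM GDOThistory WHERE TStamp <= (SELECT MIN(TStamp) + 86400000 FROM GDOThistory)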
Be careful that you have the dates and numbers correct - deleted records cannot be recovered.
Note: If Delete permissions have been granted on the database, as per https://esupport.aspentech.com/S_Article?id=000098225, it should automatically prune the old records. The automatic pruning removes a day at a time, just as described above.
Keywords: GDOTOnlineHistory, SQL database, query
References: None |
Problem Statement: In aspenONE Process Explorer tag values may not appear on a plot and the current value might show a series of three asterisks ( *** ) instead of the value. What might cause this situation? | Solution: The quality status that accompanies the tag value and timestamp gets translated to an appropriate quality level via a special record in the Aspen InfoPlus.21 database called 'QUALITY-STATUSES'. If the quality status results in a quality level which is Bad, then Bad values will not be plotted and three asterisks (***) may appear for the current value. Values whose quality status resolves to Suspect or Good will be shown and plotted.
Keywords: None
References: None |
Problem Statement: How to define the default text properties for labels in ABE Graphics Definer? | Solution: Start the Graphics Definer from the Start menu | All Programs | Aspen Basic Engineering | Graphics Definer - ABE, and the window in the image below will appear.
You will need to connect to a workspace to be able to use all the tools. Connect to the workspace.
At this stage, it would be convenient to define the default text properties for labels. Click on the Default Text Properties icon on the Graphics Definer toolbar and define the default text properties. Note that separate default settings can be set for fields, unit fields and plain text.
Keywords: Text properties, Graphics Definer
References: None |
Problem Statement: How to add radio buttons option in a datasheet template in ABE datasheet definer? | Solution: Radio buttons in a datasheet allow users to select one among a set of options. All radio buttons in a group are linked to a single attribute in the class view that has a set of defined enumerations.
Select the cell in the datasheet where you want to add radio buttons, and click on the Radio Button option from the drop-down menu of the Add button. The Radio Button Properties dialog will be displayed.
Specify how you wish the radio button to look from the Shape style options. Click on the New group button to create a new group to associate the radio buttons with.
Type in ‘Radio buttons name’ for Group Name. Browse to the attribute in the class view. Click OK in both windows.
In the Radio Button Properties window, click on Group name box drop down menu, select the new group from the drop-down list. The Selected Value pull-down list should now display the enumeration choices for the attribute. Select Continuous. The dialog should look like the image below. Click on OK to add the radio button.
Keywords: Radio buttons, Datasheet Definer
References: None |
Problem Statement: Do we have any example of a boiler in Aspen Utilities Planner to find the efficiency of the boiler? | Solution: The attached Boiler_Losses.auf case is an example of modeling a boiler in AUP V12. In V12, significant enhancements were made by:
Incorporating water evaporation losses into boiler efficiency calculation
Calculating the air specific heat capacity based on the condition and composition, instead of constant value.
In the attached example, the user can specify the fuel composition in the Fuel block (by default, the fuel is natural gas), plus the stack temperature and stack O2% in the BH block (Boiler), to estimate boiler efficiency accurately.
The method is in compliance with the Losses method defined in ASME PTC 4.0.
Note that the Losses method in ASME PTC 4.0 uses fuel HHV to estimate boiler efficiency. In AUP V12, the user has the option to use HHV or LHV in the Fuel block.
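In outline (a simplified summary, not the full standard), the Losses method estimates efficiency as 100 minus the sum of the individual losses, each expressed as a percentage of the fuel heat input:
Efficiency (%) = 100 - (dry flue gas loss + moisture losses + other losses)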
Please refer to the online help and the tutorial included in the AUP V12 installation folder for what's new in AUP V12.
Keywords: Boiler, Efficiency of the Boiler
References: None |
Problem Statement: Aspen Shell & Tube Exchanger – unable to export TEMA Sheet to Excel | Solution: Sometimes a user may find that they are unable to export the TEMA sheet to an Excel template: when clicking on the Export button (applicable for both the "default template" and "specified template" options), the Exporting dialog below appears and then the EDR file suddenly closes.
Please make sure that the Excel templates are available in the locations below:
C:\Program Files (x86)\AspenTech\Aspen Exchanger Design and Rating Vxx.0\Excel Templates (for V10 & older version)
C:\Program Files\AspenTech\Aspen Exchanger Design and Rating Vxx.0\Excel Templates (for V11 & above)
If they are not available, then the installation may not have worked well and the user needs to check the installation.
If the templates are available and the user is still facing this issue, then it could be a problem with the registry.
Use Regedit to see which subkeys exist on your machine at "Computer\HKEY_CLASSES_ROOT\TypeLib\{00020813-0000-0000-C000-000000000046}"
There may be:
1.5 for Excel 2003.
1.6 for Excel 2007.
1.7 for Excel 2010.
1.8 for Excel 2013.
1.9 for Excel 2016.
If you have a subkey for a version of Excel that you no longer have on your machine, please delete that subkey (remember to delete only the subkeys for versions that no longer exist).
This should help resolve the issue.
Keywords: TEMA, Excel Template
References: None |
Problem Statement: How to add or modify floating roof tanks in Aspen Operations Reconciliation and Accounting. | Solution: Click Configure | Vessels | Tanks to open the Tank Listing box
or
Click on the icon in Add mode on the diagram to add a tank as below: click, and then double-click, the physical representation of the tank in the Diagram window.
The following window will open, containing three tabs: General, Details, and Advanced.
Fill in the tag information
The second tab, "Details", contains the following:
Attributes – This section allows you to define tank attributes.
Product – Defines the default product code for the material contained in the tank. This default is overridden by the product code associated with a tank inventory instrument reading.
Paint – An optional field used to input the color of the tank.
Transaction Type – Defines the type of transaction.
Tolerance – Defines the maximum imbalance there can be across the vessel without it registering as unbalanced and showing a gross error condition (Red) on the GUI. This allows you to specify an imbalance level, above which, the system should notify you by registering a gross error condition.
Height – An optional field used to input the height of the tank. This data should be in the base length unit of measure for the model (such as, feet). The height is required in some circumstances when an outage tank gauge measurement is used.
Diameter – An optional field used to input the diameter of the tank. This data should be in the base length unit of measure for the model (such as, feet).
Volume Ranges – This section contains volume range information.
Maximum – Contains the maximum liquid volume level permitted for the tank. This value must be less than or equal to the tank’s capacity. The data in this field is used to screen for error conditions as well as for reporting available tank capacity.
Minimum – Defines the minimum liquid volume level permitted for the tank. This value must be greater than zero. The data in this field is used to screen for error conditions as well as for reporting inaccessible inventory.
Capacity – Contains the total capacity of the tank. The data in this field is used to screen for error conditions.
Shell Correction – This section contains shell correction information.
Correct for Shell Expansion – The checkbox indicates whether or not Aspen Operations Accounting should correct for shell expansion.
Insulated – The checkbox indicates whether or not the tank is insulated.
Temperature – Contains the temperature that the capacity table was calculated for. This defaults to 60F.
Critical Zone – This section contains critical zone information.
Begin – This (required) field contains the location of the beginning of the tank's critical zone. The units of measure for this field are the same as the base units of measure for length (such as, feet).
End – This (required) field contains the location of the end of the tank's critical zone. The units of measure for this field are the same as the base units of measure for length (such as, feet).
Floating Roof – This section contains roof information.
Mass – This (required) field contains the weight of the floating roof.
Volume – This (required) field contains the volume displaced by the floating roof.
In brief, by following the above steps, you will be able to add new floating roof tanks in AORA.
Relevant KBs:
How does the 'Critical Area' section work with floating roof tanks in Aspen Advisor?
https://esupport.aspentech.com/S_Article?id=000064779
How do I model a floating roof storage tank with a secondary containment wall?
https://esupport.aspentech.com/S_Article?id=000076964
Keywords: AORA, Floating roof tank
References: None |
Problem Statement: How to resolve the below error while launching Aspen Unified PIMS (AUP)? | Solution:
Whenever the above error pops up after launching Aspen Unified PIMS, we need to look at the folder below:
1. Locate the "AspenUnified" folder at "C:\ProgramData\AspenTech\AspenUnified"
2. Delete that folder
3. Reinstall the Aspen Unified PIMS package
4. Launch Aspen Unified PIMS again
Deleting the above folder and reinstalling Aspen Unified PIMS should resolve this error, and the application should then launch without the error message.
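For reference, step 2 can also be performed from an elevated command prompt (the path is the one given above):
rmdir /s /q "C:\ProgramData\AspenTech\AspenUnified"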
Keywords: Aspen Unified
Aspen Unified Services
References: None |
Problem Statement: What do the numbers in the CRDDISTL table indicate under the Estimated (EST) Charge rows? | Solution:
The numbers indicated under the distillation units as estimated charge are the estimated charge for the crude (row) to the distillation model (column). The value has to be non-zero for the crude to be fed to the mode, and the value itself is used to calculate initial yield and property values for all the cuts produced by the mode (using the estimated charges of all the crudes fed to the mode, and then the yields and properties of the cuts for each crude, to calculate the overall cut yields and properties). The numbers are normalized to add up to 100 (%).
These values are not ratios, percentages, or any other sort of physical quantities.
These values are only used to get initial guesses for the yields and properties, so they affect the initialization of the problem but not the actual optimal solution, which will determine the actual charges for each crude.
PIMS uses these estimates to calculate the straight-run properties of pre-defined crude mixes; only those crudes with non-blank entries in the ESTxxx rows will be available as potential feeds to each logical crude unit.
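As a hypothetical illustration of the normalization: if two crudes feeding the same mode had EST entries of 30 and 90, PIMS would treat them as initial charge estimates of 25% and 75% (30/120 and 90/120), and would use those fractions only to build the initial cut yield and property estimates.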
Keywords: CRDDISTL
ESTxxx
Non-blank entry
References: None |
Problem Statement: The AI Training feature in Aspen Plus/HYSYS V12.1 is not able to start. An error such as "access socket forbidden" or "connection refused" appears. These errors may also appear when the user attempts to run a case file that has an AI Training block in the flowsheet. | Solution: AI Training is similar to the Plant Data feature, which also uses Aspen Online Service as the backbone for plant data and data validation features before using those data for AI training purposes.
The above errors are due to the Aspen Online Service V12.1.
One workaround to resolve these errors and continue using the AI Training feature in V12.1 is to change the Aspen Online Service V12.1 to run under a Local System account instead of a user or admin account.
Follow these procedures to change the account running Aspen Online Service V12.1 to Local System:
Install SQL Express 2014 SP2
The installer is available in AspenOne Media under the 3rd Party Distributable folder.
If you already have SQL Express on your machine, please make sure it is SQL Express 2014 SP2. Otherwise, install a new SQL Express 2014 SP2 instance using the installer stated above.
After the new SQL instance is installed, please restart the machine before proceeding to the next step.
Select the right SQL instance to run with Aspen Online Service V12.1
Go to System Drive | Program Files | AspenTech | Aspen Online V12.1 and look for SelectInstance.bat
Right-click and run SelectInstance.bat with administrator rights
Choose the latest installed SQL Express 2014 SP2 instance when the list is shown in the command prompt window. The window will close automatically after the instance is successfully selected.
Change the account to Local System account for Aspen Online Service V12.1
Go to the Services window and look for Aspen Online Service V12.1
Right-click and choose properties
Go to the 2nd tab named “Log on” and select the first option Log on as “Local System account”
Restart the Service if it is currently running when making changes.
Optional
If the AI Training feature is still not able to run even though Aspen Online Service V12.1 is running under the Local System account without any issue, then you have to run PsExec (attached in this article) by following the steps below:
Download the attached PSexec.zip.
Create a new folder in the system root directory with a simple name such as C:\PS. Extract the PSexec.exe into this folder.
Copy a simple Aspen Plus/Hysys case file into this folder. Prefer a simple file name for the ease of typing it into CMD later.
Open a CMD window as an administrator
CD to the folder where PsExec.exe resides, e.g. cd C:\PS (if you put it inside a folder named PS under the C drive).
In the command prompt, enter “PsExec.exe -i -S CMD.exe”. This will launch a new CMD window and this window runs as “Local System”
In the new CMD window, CD to the PS folder where you have copied the example case files. Then, enter the name of the Aspen Plus/HYSYS model with the extension and hit “Enter”. This will open the Aspen Plus/HYSYS model as “Local System”. After the model is fully opened, you can close the Aspen Plus/HYSYS GUI
Re-start AOL service
Note: This PsExec procedure only needs to be done once for each type of program. If you intend to run Aspen Online with another program such as HYSYS or EDR, you need to repeat the above procedure once with the respective case file.
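As an illustration, the full sequence might look like the following (C:\PS and MyModel.hsc are example names, not fixed requirements):
cd C:\PS
PsExec.exe -i -S CMD.exe
rem -- a new CMD window opens, running as Local System --
cd C:\PS
MyModel.hsc
rem -- wait for the model to open fully, close the GUI, then restart the AOL service --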
Keywords: None
References: None |
Problem Statement: When working with the product Aspen Production Record Manager Extractor one of the steps involved is to create the Aspen Extractor Configuration database. The question may arise - should a brand new database be created for this purpose or can the appropriate database objects be created in the existing Aspen Production Record Manager (APRM) database? | Solution: A new database should be used. The main reason is that the Extractor database is different from the APRM database. The tables in each are different and are used by their own applications and the information is not shared.
Keywords: None
References: None |
Problem Statement: Sometimes when troubleshooting an issue in ABE, it's necessary to report all messages in the Journal.log file located in the workspace folder. | Solution: 1. Find out which Library Set is being used in the current workspace by going to the workspace folder and checking in the workspace.cfg
2. Delete or change the name of the old Journal.log
3. Navigate to the WorkspaceLibraries folder, open LibrarySets.lst, and find the .cfg file associated with the Library Set in use
4. Open the corresponding .cfg file and delete the # before Journal::ReportMessages = all. Save the .cfg and close
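The change is simply removing the comment marker, i.e. changing:
# Journal::ReportMessages = all
to:
Journal::ReportMessages = all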
5. Navigate to the ABE Administration tool. Right click and reload the workspace
6. A new Journal.log file will be generated in the Workspace folder (step 2) when a user connects to the workspace
Keywords: Basic Engineering, Journal, Workspace Connection, Troubleshooting
References: None |
Problem Statement: The Cim-IO Test API utility presents a series of prompts that normally require the user to enter inputs manually. However, if one wants to run it through a batch file, the inputs must be read in by the Cim-IO Test API. | Solution: It is possible to pass the inputs to the Cim-IO Test API utility by placing them in a file. The utility itself does not have the functionality to read inputs from a file; however, this can be achieved by making use of redirection in the command prompt.
Following the knowledge base article titled Proper way of using the Test API utility to diagnose problems with CIMIO, the inputs to be entered into the Cim-IO Test API utility will look similar to the listing below. Do not include the text in brackets in the input file; it is shown below only as an explanation. The sample input file is also available for download in the file called CimIOTestInputsSingleTag.txt.
9 (Cim-IO Test Get)
CIOPROCESSXOPCS (logical device name)
1 (unit number)
1 (number of tags)
1 (priority)
10 (timeout value)
1 (access type)
100 (frequency)
-1 (list id)
1 (Tagname entry options. In this case, 1 for enter one tag at a time)
Random.Real4 (tag name)
1 (Data type)
(Device data type. Do not omit. This is for press RETURN for default)
x (Exit Cim-IO Test API)
In command prompt, after browsing to the folder in which the Cim-IO Test API utility is located, issue below command to execute Cim-IO Test API and have the input file pass in through redirection.
cimio_t_api.exe < C:\Temp\CimIOTestInputsSingleTag.txt
By saving the above as as a batch script with the full path to the Cim-IO Test utility, this will allows for a simple double-clicking of the batch script to get the result. It is then just a simple matter of changing the logical device name and tag name in the input file.
Batch script is available for download in the file called CimioTestBat.txt. The file will need to be renamed from a .txt file extension to a .bat file extension.
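As a further alternative to the batch file, the same redirection can be driven from Python. The sketch below is only an illustration; the install path and input file location are assumptions and should be replaced with your own:

import subprocess

CIMIO_T_API = r"C:\Program Files (x86)\AspenTech\CIM-IO\code\cimio_t_api.exe"  # assumed install path
INPUT_FILE = r"C:\Temp\CimIOTestInputsSingleTag.txt"

with open(INPUT_FILE, "r") as answers:
    # Equivalent to: cimio_t_api.exe < CimIOTestInputsSingleTag.txt
    result = subprocess.run([CIMIO_T_API], stdin=answers,
                            capture_output=True, text=True, timeout=120)
print(result.stdout)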
Keywords: cimio_t_api
References: None |
Problem Statement: How is Reid Vapor Pressure [RVP] calculated in Aspen HYSYS? |

Solution: Aspen HYSYS flashes the stream at 100 F and iteratively adjusts the pressure until the vapour-to-liquid ratio by volume is 4:1; the converged pressure is the reported RVP.
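Purely as an illustrative sketch of this iterative procedure (not HYSYS internals), the search can be written as a bisection on pressure; flash_at is a hypothetical placeholder for any flash routine that returns vapour and liquid volumes at the given conditions, and the pressure bounds are assumptions:

def reid_vapor_pressure(flash_at, p_lo=1.0, p_hi=500.0, tol=1e-4):
    # Return the pressure (kPa, assumed bounds) at 100 F where V/L volume ratio = 4
    p_mid = 0.5 * (p_lo + p_hi)
    for _ in range(100):
        p_mid = 0.5 * (p_lo + p_hi)
        v_vap, v_liq = flash_at(pressure=p_mid, temperature_f=100.0)
        ratio = v_vap / v_liq
        if abs(ratio - 4.0) < tol:
            break
        if ratio > 4.0:   # too much vapour: raise the pressure to condense some
            p_lo = p_mid
        else:             # too little vapour: lower the pressure
            p_hi = p_mid
    return p_mid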
It is possible to use several other RVP calculation methods, either by adding the additional property correlations to the stream (use the green cross button at the bottom of the Properties page, then add the correlations from the RVP section), or by using the RVP extension (Solution 110059). The new correlations or the extension calculate the RVP according to the following methods:
API 5B1.1 (Naphtha)
API 5B1.2 (Crude)
ASTM D323-82
ASTM D323-73/79
ASTM D4953-91
ASTM D5191-91
For the API methods it solves the numerical equations equivalent to the charts attached [API_Chart]. The other methods are calculated by flashing the stream to the 4:1 volume ratio (with various nuances depending on the method). The extension documentation describes each method in detail.
Keywords: RVP, Reid Vapor Pressure
References: None |
Problem Statement: How do I resolve Error saving configuration of HTTP Server: The system cannot find the path specified? | Solution: When saving the configuration on the Auto Upload Tool, an incorrect path in the FeatureXMLDoc.XML file will lead to this error message highlighted in yellow.
The error occurs because there is an incorrect path in the configuration XML file.
To resolve this, navigate to C:\Program Files (x86)\AspenTech\ALC\Xml\ and open the FeatureXMLDoc.XML
Check whether there is a path in the Feature Name; if so, delete it so that it looks like the screenshot below.
Save the XML file and try to save the configuration in the Auto Upload Tool. If it is still failing, please contact AspenTech support.
Keywords: Auto Upload Tool, Error Saving Configuration, AUT, XML, ALC
References: None |
Problem Statement: While in Calibrate mode for a DMCplus or DMC3 controller, there are messages in the PCWS indicating auto-paired moves. However, in the Controller Details page, the STAUTOPAIR (Enable Auto-Pairing) entry is set to NO (Off). Is this feature always active? | Solution: As of V10 CP2 and V11, the upgraded version of Calibrate is called Calibrate 2.0, which includes some more advanced features. One of these features is that Auto Pairing is now a default behavior so it is always active. Here is an excerpt from our APC250 training course:
If the controller is stuck in a corner...
The engine automatically looks for another MV to move together (2 MVs) in certain ratio to satisfy the constraints
Because of model uncertainties and dynamics, the paired step moves are implemented over the following multiple cycles. If significant overshoot is detected, the remaining unimplemented step moves are discarded
If the active constraints are two CVs, their corresponding RGA number is calculated and displayed along the paired step move message
The auto-pairing feature is always enabled starting from V10 CP2 and V11, so the engine will pair the MVs if it needs to as part of the solution. This cannot be disabled by the user turning off STAUTOPAIR (Enable Auto-Pairing) or changing the CALIBOPT (Calibrate Engine Option) to 0, i.e. the traditional approach before Calibrate 2.0.
Auto-pairing triggers when a single variable cannot make a big enough step move on its own. So if you are seeing the message that the MVs were paired, it means that the individual MV minimum step (STMVMINSTEP) was bigger than the engine found feasible, so it had to take 2 MVs and step them together. To prevent this from happening, you can try reducing STMVMINSTEP to a more feasible value, but make sure it is not too small, as you still want to get useful testing data.
Update
This behavior of the controller not respecting the user setting of turning off auto-pairing was fixed in the following patches: V11 CP1 EP7, V12.0 EP3, V12.1 EP1, as well as V14 release. So if you have applied the latest patches for these versions, you will need to set the entry for SS_AutoPairingSwitch to 1 (in the controller General Section) in order to turn auto-pairing ON and set it to 0 to turn it OFF.
Defect ID: 618648 - Calibrate Auto-Pairing is Always On, Ignoring User Settings
Keywords: Calibrate, multi-test, step, testing, test, mode, auto, pair, auto-pair, stautopair, 618648
References: None |
Problem Statement: When I was using a different computer my Projects and Scenarios displayed in the Palette View were sorted alphabetically. Now, my Scenarios are displayed in the order they were created. Can I sort my Projects and Scenarios alphabetically or by date? |

Solution: There is no way to sort Projects and Scenarios from the Aspen Capital Cost Estimator user interface, since this order depends on the file system of the drive where the files are stored. With the NTFS and CDFS file systems, the names are usually returned in alphabetical order. With FAT file systems, the names are usually returned in the order the files were written to the disk, which may or may not be alphabetical. However, these behaviors are not guaranteed (e.g. when working from a network drive).
To check which file type you are working with, go to File Explorer > This PC, right click the hard drive icon -usually Windows (C:)- and go to Properties.
Under the General tab, look for the File system type:
Ultimately, there is no ordering or sorting on our side, but if a specific sorting is necessary, you could ask your IT department to change the file system.
Keywords: Sorting, projects, scenarios, palette view, alphabetical, order, date, ACCE, APEA, In-Plant
References: None |
Problem Statement: How do I prevent repeat area fields of an Aspen Cim-IO transfer record from having a status of Io_Data_Status = Bad and Io_Data_Status_Desc = Configuration Error? |

Solution: KB Article 116004 (What is the maximum number of tags that can be added to a transfer record?) defines that the number-of-character limits for the io_tagname field are 39, 79, or 255 for Regular, Long, or LongLong transfer records respectively.
However, when you type into io_tagname you will find that you can actually type 40, 80, or 256 characters respectively. Unfortunately, if you do completely fill those fields to the end, the last character is dropped, meaning that only 39, 79, or 255 characters are read.
As a result the Status would be reported as Bad and the Status Description would be reported as Configuration Error.
This means that, as per Solution 116004, if your device address has 39 - 79 characters then you should use the Long transfer record, and if the address has 80 - 255 characters then you should use the LongLong transfer record.
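A small hedged helper illustrating these limits - pick the transfer-record flavor from the length of the device address:

def transfer_record_type(device_address: str) -> str:
    n = len(device_address)
    if n <= 39:
        return "Regular"
    if n <= 79:
        return "Long"
    if n <= 255:
        return "LongLong"
    return "Too long for any Cim-IO transfer record"

print(transfer_record_type("Random.Real4"))  # -> Regular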
Additional information: In at least one instance when a customer received this (or a similar) error theSolution was to stop and restart the Cim-IO transfer record.
Keywords: Get, Put, PutOnCOS, Unsolicited, repeat area, IO_#TAGS, occurrence
136205-2
References: None |
Problem Statement: Is it possible to model methanol synthesis in Aspen Plus? | Solution: Attached is an example of this process. This example will run in Aspen Plus V11 and higher.
These files were shipped with Aspen Plus V11 and higher and are located in the following directory:
C:\Program Files\AspenTech\Aspen Plus Vxx.x\GUI\Examples\Bulk Chemical\Methanol
The files for V11, V12, and V12.1 use slightly different parameters than documented in the .pdf. The corrected versions of the files are attached here. The updated files will be in the next release.
The attached PDF document summarizes AspenTech’s Methanol Synthesis model. The report describes the process, explains how the physical properties and reaction kinetics have been validated, and describes some of the challenges involved in simulating the process.
Properties
All the required pure component property parameters are drawn from the Aspen Plus PURE36 database. The SRK (Soave-Redlich-Kwong) equation of state is used throughout the model to calculate pure component and mixture properties as well as phase equilibrium. The SRK method is selected because it is appropriate for the high-pressure, high-temperature conditions in the reactor.
The standard SRK method is not usually applied for polar compounds such as methanol. A variation of this method, using the Mathias alpha function for polar compounds, is available in Aspen Plus. This method was not selected, however, because the default method for Aspen Plus includes a special alpha function for systems containing hydrogen. Given the importance of hydrogen in the methanol synthesis reactor, the default SRK method is the most appropriate choice for this model.
Azeotrope data and several sets of TPXY data for water/methanol, water/ethanol, and methanol/ethanol binary systems were extracted from the NIST source database included with Aspen Plus. The Aspen Plus data regression system was applied to regress the temperature-dependent SRK binary parameters (Kij and Lij) for these systems. The fitted parameters and regression results are summarized in the tables and plots below. The values in some of the files shipped with Aspen Plus V11, V12, and V12.1 are not always consistent. The following values should be used.
Regression Results: SRK Binary Parameters for Methanol/Water System
Parameter   Component i   Component j   Value (SI units)   Standard deviation
SRKKIJ/1 H2O MEOH -0.0505 0.0142
SRKKIJ/2 H2O MEOH -6.09E-05 3.78E-05
SRKLIJ/1 MEOH H2O -0.00503 0.05024
SRKLIJ/2 MEOH H2O 1.68E-04 1.32E-04
Regression Results: SRK Binary Parameters for Ethanol/Water System
Parameter   Component i   Component j   Value (SI units)   Standard deviation
SRKKIJ/1 H2O ETOH -0.109 0.0478
SRKKIJ/2 H2O ETOH 9.48E-05 1.42E-04
SRKLIJ/1 ETOH H2O 0.098 0.137
SRKLIJ/2 ETOH H2O 6.20E-06 4.04E-04
SRKLIJ/1 H2O ETOH 0.039067 0.221008
SRKLIJ/2 H2O ETOH -1.65E-04 6.66E-04
Regression Results: SRK Binary Parameters for Methanol/Ethanol System
Parameter   Component i   Component j   Value (SI units)   Standard deviation
SRKKIJ/1 MEOH ETOH 0.0645 0.0101
SRKKIJ/2 MEOH ETOH -1.90E-04 3.04E-05
SRKLIJ/1 ETOH MEOH 0.2443 0.0275
SRKLIJ/2 ETOH MEOH -6.69E-04 8.36E-05
Reactions
The primary steps responsible for methanol formation are the methanol synthesis reaction and the reverse water-gas shift reaction (see reactions 1 and 2, below). These highly reversible, gas-phase reactions are carried out on the surface of a copper/zinc oxide catalyst supported on alumina at high pressure (30-50 bar) and temperatures of 200-300 C. The model includes two additional side reactions to represent the formation of higher alcohols (lumped into ethanol) and dimethyl ether (DME). The side reactions are assumed to be irreversible.
# Label Stoichiometry Description
1 MEOH-SYN CO2 + 3H2 = CH3OH + H2O Methanol Synthesis
2 RWGS CO2 + H2 = CO + H2O Reverse Water-Gas Shift
3 F-ETOH 2CO + 4H2 -> Ethanol + H2O Ethanol formation
4 DME-FORM 2 CH3OH -> DME + H2O DME formation
ICI Synetix Methanol Process
The ICI Synetix low pressure methanol (LPM) process (currently licensed by Johnson Matthey) is the most common industrial methanol process worldwide, responsible for over 30 million metric tons of methanol per year. The Aspen Plus sample model covers the methanol synthesis and purification sections of the plant. The equipment sizes, operating conditions, and feed stream conditions used in the sample model are drawn from the SRI Process Economics Report 43D “Mega Methanol Plants” (Pavone, 2003) which claims to be a representative amalgam of multiple industrial plants. The unit operation tag numbers and stream numbers in the Aspen Plus model are consistent with those in the SRI report to enable easy comparison between the model predictions and the mass balance included in the SRI report. The process flow diagram is shown in the figure below.
The methanol synthesis process is fed with clean, sulfur-free syngas with an approximate molar composition of 68% hydrogen, 23% carbon monoxide, 7% carbon dioxide, and small amounts of methane, water, and inert gases at a pressure of 35.5 bar and a temperature of 38°C. The syngas is typically produced from a gas reformer with significant heat integration between the reformer and methanol synthesis sections of the plant. Hypothetically, any source of syngas could be used including that derived from coal, coke, or biomass gasification plants.
The syngas feed is pressurized to 80 bar in compressor K201. A condensate stream (S21) is taken off downstream of the compressor intercooler (K-201HX). This gas is mixed with cooler recycle syngas (S22) from K202, reaching a temperature of about 53°C (stream S24). Approximately one third of this stream is heated to 182°C in the interchange heat exchanger E202. This portion of the hot gas is fed to the top stage of the methanol converter. The remaining cooler portion of the syngas stream is split in equal proportions and fed to the next three stages of the converter to cool the intermediate streams between catalyst beds, helping move the reaction forward.
Lurgi Two-Stage Methanol Synthesis Model
Lurgi offers single-stage and two-stage methanol synthesis process configurations. The two-stage system is recommended for larger plants; however, it should be noted that the design basis (from the SRI Report, Pavone, 2003) may be beyond the typical scale of proven industrial processes. The Aspen Plus sample model of this process closely follows the SRI design basis, except as noted below. The process flow diagram is shown below.
Summary
These examples are intended to provide our customers with a strong starting point to develop appropriate models of common industrial methanol synthesis processes. The physical properties, thermodynamics, and reaction equilibrium are believed to capture the real process behavior very accurately. The relative reaction rates are reasonable, but the catalyst activity factors may need to be tuned to match the specific catalyst activity of any given plant. Each catalyst grade may have different initial activity as the level of ‘doping’ of active ingredients vary. Catalyst activity may also degrade over time due to the action of contaminants such as sulfur.
Keywords: None
References: None |
Problem Statement: How to edit custom definition record to avoid A1PE error “404- File or directory not found” shown for Tag details for tags defined by custom definition record?
In A1PE, for tags defined by a custom definition record, the error “404 - File or directory not found” may be encountered when trying to view Tag details. |

Solution: The DETAIL_DISPLAY_REC attribute within the IP_DiscreteDef definition record defines the Tag Details page; ipdscret is the default page name for IP_DiscreteDef. It is possible to plug in other ASP implementations for Tag Details based on the tag's definition record. If the value of DETAIL_DISPLAY_REC is different from the default, the page name specified in DETAIL_DISPLAY_REC will be displayed. Similarly, if a custom definition record is created with DETAIL_DISPLAY_REC specified, the page with this name will be called. If no such page exists, this causes the above-mentioned error message to appear.
In order to solve this, you will need to modify the custom definition record. Here are the steps to do the same:
1. In IP21 Admin, go to your custom definition, once expanded go to ‘Fields’
In this setup, the custom definition is “IP_AnalogTestP1” and the sample tag is “TestP12”
2. Double click on Fields, you will see on the right-pane the ‘Detail_display_rec’
3. Go to the ‘Detail_display_rec’ field and make sure that the DETAIL_DISPLAY_REC field is left EMPTY. To empty it, add a blank space and then hit Enter; otherwise it will not let you empty the field.
4. Once done, using the A1PE Process Admin page, re-scan the tags.
5. Once the scan is successfully completed, open A1PE and select the Tag and click on view tag details, you should now be able to see the Tag Details page at your end. This will show the default attributes of the tag.
Keywords: A1pe, error 404.0, not found, custom tag, tag details
References: None |
Problem Statement: –
What is the difference in operational functionality between the “*” (asterisk) and the “DISABLE” column in Aspen PIMS? | Solution: –
Purpose -
1. Using “*” (asterisk symbol) in a row of an Aspen PIMS data table will terminate that particular row so it is not considered during the model run.
2. Using a “DISABLE” column (a user-defined column) in a PIMS data table and entering “1” as the coefficient for a row (component) will entirely remove all the structure related to that component from the matrix.
Hence the main operational difference between “*” and the “DISABLE” column is that –
“*” terminates the row during the model run if the particular row is not required, but the structure remains in the model matrix, whereas using the “DISABLE” column with “1” as the coefficient for a particular row removes the entire structure for that component row from the matrix and model.
Example –
Below is an example where the LRG crude is terminated using “*” and the “DISABLE” column in table SELL. When “*” is used on the LRG row, the row is terminated after running the model but the structure still exists in the matrix; when “DISABLE” is used, the entire structure for LRG is removed from the model.
1. Use of *
2. Use of DISABLE
Keywords: Aspen PIMS, DISABLE, Matrix
References: None |
Problem Statement: -
Why does a PIMSWIN.exe application error occur during installation and how can we fix it? | Solution: -
This error may appear due to missing folders in the PATH. Please verify that the Intel Fortran and Aspen common files folders are part of the system PATH environment variable as shown in the highlights below:
The screenshot has two highlighted directory paths. One highlight is for the Aspen Common Files folder and the other is for the 32-bit Intel Fortran folder. Please verify that these 2 directory paths are part of the PATH environment variable.
OR
Alternatively, simply type path in CMD (Command Prompt) and hit Enter. This will show the PATH contents from a command prompt.
For example -
C:\Users\Username>path
PATH=C:\Program Files\Microsoft MPI\Bin\;C:\Program Files\Common Files\Oracle\Java\javapath;C:\Program Files (x86)\Common Files\Oracle\Java\javapath;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Windows\System32\OpenSSH\;C:\Program Files (x86)\Common Files\AspenTech Shared\;C:\Program Files\Common Files\AspenTech Shared;C:\Program Files (x86)\Common Files\Intel\Shared Libraries\redist\intel64_win\compiler\;C:\Program Files\dotnet\;C:\Program Files (x86)\dotnet\;C:\Program Files (x86)\Spiral Software\Spiral Assay\CrudeSuite DLL\;C:\Users\username\AppData\Local\Microsoft\WindowsApps;
As the above PATH indicates, it does not include the 32-bit Intel folder, only the 64-bit one. PIMS is a 32-bit application, and this is why the 32-bit version is needed. We need to add the 32-bit Intel folder, C:\Program Files (x86)\Common Files\Intel\Shared Libraries\redist\ia32_win\compiler, to the PATH environment variable.
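Before reinstalling, a quick hedged check script can confirm whether the required folders are on the PATH (folder names are the ones quoted above; adjust if your installation differs):

import os

required = [
    r"C:\Program Files (x86)\Common Files\Intel\Shared Libraries\redist\ia32_win\compiler",
    r"C:\Program Files (x86)\Common Files\AspenTech Shared",
]
entries = [p.rstrip("\\").lower() for p in os.environ.get("PATH", "").split(os.pathsep)]
for folder in required:
    status = "OK" if folder.rstrip("\\").lower() in entries else "MISSING"
    print(f"{status}: {folder}")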
Install Aspen PIMS again; it should now be installed correctly without any application errors.
Keywords: PIMSWIN.exe, Path
References: None |
Problem Statement: In Shell & Tube Heat Exchanger Mechanical Design, why is the rear head body flange thickness “Zero” in the fabrication drawings? |

Solution: In Shell & Tube Mechanical, the user may find that the rear head flange thickness is reported as “Zero” on the fabrication drawings.
The likely reason the flange thickness is not calculated is that the specified mechanical design conditions (e.g. the design pressure) are much higher than expected, so the thickness cannot be calculated within that range, as indicated in the warning messages below:
For example, the specified design pressure may be so high that the flange must withstand a very high pressure, and the gasket material chosen means that the required gasket width (and thickness) becomes very large.
The BPVC code was designed to work with exchangers whose design pressure does not exceed maximum conditions.
If such conditions exist, the user needs to define those geometry details manually to come up with a possible design.
Keywords: Warning 225; limit 936, zero flange thickness
References: None |
Problem Statement: How do I log processes using procmon to help diagnose the problem if Aspen Plus is crashing? | Solution: A log of the processes from procmon can sometimes help diagnose crashes that are not reproducible or where files cannot be sent to Aspen Technology.
The attached zip contains the procmon binaries.
Execute the following steps on the user’s machine:
Launch procmon.exe
Set up the filter as shown below:
Follow the customer's steps that lead to the crash
ProcMon will display on its window all the actions that meet the filter specifications
On ProcMon, go to the File Menu and select Save. In the dialog, specify the name of the log file that you want the ProcMon output to be saved to and hit OK.
A .pml file can then be loaded into ProcMon and analyzed appropriately.
Note
It is good to clear the display before starting the steps to reproduce the problem:
It is a good idea to select the All events radio button in the Save to File dialog. It is not the default.
Keywords: None
References: None |
Problem Statement: Datasheet Apps functionality in Aspen Plus |

Solution: Aspen Basic Engineering's integration with Aspen Plus lets you map simulation components, streams, and attributes directly to their counterparts in the extensive ABE data model library. Once mapped to the simulation, you can use the data models to create complete and detailed datasheets supporting Basic Engineering or Front-End Engineering Design (FEED) that update dynamically. This functionality is made available through the Datasheets ribbon tab.
This document will guide you through mapping an Aspen Plus simulation to the ABE datasheet explorer for viewing the datasheets.
Step 1: How to create ABE server for using Datasheet Apps functionality?
Step 2: How to map the unit operation block and its attributes to ABE server?
Step 3: How to view datasheets dynamically?
Step 1: How to create ABE server for using Datasheet Apps functionality?
Select Datasheet Apps icon in the home ribbon of your flowsheet
After selecting the Datasheet Apps icon, two windows viz., Explorer and Mapper opens, prompting to select the workspace
If ABE workspace server is used for the first time, then create a new workspace or connect to an existing workspace
After creating a workspace, please wait till the workspace is created in the ABE Engineering server
Step 2: How to map the unit operation block and its attributes to ABE server?
Once the workspace is loaded, you will see that there are no available equipment items in the workspace, as the flowsheet objects (equipment/streams) have not yet been mapped
Click on the Mapper window and you will see all the flowsheet objects that can be mapped.
Choose the desired mapping in the “Map As” dropdown toolbar
There are options to map or not to map the flowsheet objects in “Workspace Object” dropdown toolbar
Click on transfer to map the flowsheet objects with the simulation attributes
Click on each flowsheet objects to see the attributes and the mapping ports of each object
Step 3: How to view datasheets dynamically?
Once the flowsheet objects are mapped, select the explorer window and select the flowsheet object for which datasheet are to be viewed
Click on Add new datasheet button in the Workspace object window
Next window will allow you to select the template as desired
After selection, datasheet opens for the selected workspace object in another tab
Sample datasheet for a Shell and Tube Heat Exchanger
Keywords: Datasheet Apps, ABE, Workspace, Workspace objects, Flowsheet objects, Explorer, Mapper, FEED, Aspen Basic Engineering, Shell and Tube Heat Exchanger, Pipeline
References: None |
Problem Statement: With multiple pens in the tag legend of a plot, each representing a different tag getting updated at different frequencies, an attempt to view the historical data in the Data Table will likely result in many gaps. It is difficult to identify the value for all the tags for a given timestamp. How can a Data Table be produced that displays the value for each pen, at a set interval between each row? | Solution: You can use aggregate trend type to display data at a set frequency.
Tick all the check boxes on the left edge of the legend (you can easily do so by ticking the topmost check box in the legend column header). You then have access to the legend menu just above the item rows. Click the pencil icon to open the property dialog - you will be configuring ALL the pens at once.
Modify the configuration of the pens so that they show Aggregate data calculating “First” value each 1 Minute period (for example) – to do this, select Trend Type | Aggregate radio button and make the necessary changes. Click OK.
In this example, now when you view the Data Table you will see data calculated for each minute of the time span with no gaps.
Note, you can export a table of interpolated history data without empty cells using aspenONE Process Explorer V12 and later. Tick the check boxes on the left edge of the legend for the tags you want to export. Click above the legend header. Then click Export to CSV, and select the Conditioned Data radio button. You can then choose the sampling interval and export the calculated results to a CSV file.
Keywords: A1PE
Excel export
csv
group
seconds
minutes
hours
spaces
References: None |
Problem Statement: While viewing trends in Aspen Mtell Agent Builder the filter sampling options ALARM_SAMPLING and INTERVAL_SAMPLING are available, how are these options different? | Solution: If Sample once per day and ALARM_SAMPLING are selected, the trend will show one data point per day, but an emphasis will be placed on the data points with alerts.
For example, if we had an hourly TDS that started at midnight and we selected Sample once per day and ALARM_SAMPLING when viewing a trend, the trend would show the midnight timestamps of every day, but if a failure happened at 3 PM on one day, then the trend would pick out 3 PM, instead of midnight, to show for that day.
Similarly, Sample once per week and ALARM_SAMPLING will show one data point per week with an emphasis on data points with alerts.
On the other hand, if Sample once per day and INTERVAL_SAMPLING are selected, the trend will show one point per day, at the same time each day, regardless of whether any alerts happened in between intervals.
In the example above, if Sample once per day and INTERVAL_SAMPLING were selected, the trend would keep showing only midnight time stamps, and if the 3 PM failure/alert mode did not last till midnight then it would be missed.
Similarly, Sample once per week and INTERVAL_SAMPLING will show one data point per week with no regard for when alerts occurred.
ALARM_SAMPLING is the recommended option to use in most cases.
Keywords: Alarm sampling
Interval sampling
Alarm-sampling
Interval-sampling
Probability Trend
View Trend
View Trend Lines
References: None |
Problem Statement: How do you supply Power to a block such as a Pump or Compressor? |

Solution: Use a Work stream to supply a Power specification to a Pump or Compressor block. You can also enter speed for a Compressor block. Even though Power is the rate at which Work is done (Work/time), Power is reported in the Work stream and as the Total and Net Work result for blocks such as the Pump (PUMP), the Compressor (COMPR), and the Multistage Compressor (MCOMPR) in a simulation.
To supply power to a block use a negative value. To remove power use a positive value. It is also possible to use a Work stream to transfer Power such as from a turbine to a compressor.
You can use inlet Work streams in two ways:
To use the Work stream as a Power specification, specify the Power for the work stream on this form. Then, do not specify power and pressure for the Pump or Compressor block.
For overall energy balance, specify the Power or pressure for the Pump or Compressor block. If you specify pressure, the block calculates Power. The Power you specify on this form is not used as a power specification; it is used to maintain an energy balance:
Outlet Work Stream = Inlet Work Stream - Actual Power
In this case, you also should use an outlet work stream for the block.
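Illustrative arithmetic only (the kW magnitudes below are assumptions, and sign conventions follow your model setup):

inlet_work = 500.0     # power carried by the inlet work stream, kW
actual_power = 450.0   # power the block actually uses, kW
outlet_work = inlet_work - actual_power
print(outlet_work)     # 50.0 kW remaining in the outlet work stream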
Keywords: None
References: None |
Problem Statement: An end user would like to schedule a SQLplus script to be executed at regular intervals on his own machine, without having to save the query as a QueryDef or CompQueryDef record in the Aspen InfoPlus.21 database to be scheduled. |

Solution: The sqlplusx.exe command program can be used to execute a SQLplus command on a remote node. The SQL command can be passed as a parameter in double quotes or redirected from a file. For example,
SQLplus command passed as a parameter.
sqlplusx /h=hostname "select name from ATCAI"
SQLplus command redirected from a file.
sqlplusx < SQLQuery.sql
By using Windows Task Scheduler, it is possible to schedule execution of query on end user’s machine.
Preparation
Setting up the client machine for the SQL script to be scheduled.
Create a folder called ScheduledQuery in C: drive.
Copy sqlplusx.exe and libc21.dll from Aspen InfoPlus.21 database server into above mentioned folder.
For 32-bit,
sqlplusx.exe from C:\Program Files (x86)\AspenTech\InfoPlus.21\db21\code
libc21.dll from C:\Program Files (x86)\Common Files\AspenTech Shared
For 64-bit,
sqlplusx.exe from C:\Program Files\AspenTech\InfoPlus.21\db21\code
libc21.dll from C:\Program Files\Common Files\AspenTech Shared
Download both ExecuteSQL.txt and query.sql attached in this article and copy to ScheduledQuery folder.
Update ExecuteSQL.txt with the correct hostname after /h parameter. Currently, it is showing as MES.
Rename ExecuteSQL.txt as ExecuteSQL.bat.
Configuration in Windows Task Scheduler
Creating a task in Windows Task Scheduler to run a batch file every 1 minute.
Launch Task Scheduler from Control Panel.
Click on Create Basic Task... under Actions.
Enter a name for the task. For example, 1MinScheduledQuery.
Click on Next button.
Click on Next button leaving Daily as default for trigger.
Click on Next button leaving settings for Daily as default for the trigger.
Click on Next button leaving Start a program as the default action.
Click on Browse... button and browse to C:\ScheduledQuery folder.
Select ExecuteSQL.bat and click on Open button.
Click on Next button.
Tick checkbox beside Open the Properties dialog for this task when I clicked Finish.
Click on Finish button.
Select Triggers tab.
Click on Edit... button.
Tick checkbox beside Repeat task every:.
Select 5 minutes from the drop-down list beside Repeat task every:.
Update 5 to 1 so that it is showing as 1 minutes.
Click on OK button to close Edit Trigger dialog box.
Click on OK button to close Properties dialog box.
Refer to Setting up Task to schedule.docx for screenshots.
Note:
Execution of SQL script will still be performed on Aspen InfoPlus.21 server.
ASCII/text/CSV file on end user's machine cannot be read unless it is in a shared folder.
SET OUTPUT to a text file cannot be used to output result to end user's machine.
Use command line redirection as demonstrated in example ExecuteSQL.bat.
Example ExecuteSQL.bat will write output to text file called output.txt in C:\ScheduledQuery folder.
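For sites that prefer scheduling python.exe instead of a .bat file, a hedged Python equivalent of ExecuteSQL.bat is sketched below (hostname and paths are the example values used above - replace them with your own):

import subprocess

SQLPLUSX = r"C:\ScheduledQuery\sqlplusx.exe"
QUERY = r"C:\ScheduledQuery\query.sql"
OUTPUT = r"C:\ScheduledQuery\output.txt"

with open(QUERY, "r") as sql_in, open(OUTPUT, "w") as out:
    # Equivalent to: sqlplusx /h=MES < query.sql > output.txt
    subprocess.run([SQLPLUSX, "/h=MES"], stdin=sql_in, stdout=out, text=True)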
Keywords:
References: None |
Problem Statement: If you place the CCF and MDL/MDL3 files in a folder which contains a double byte character in the folder name, you will see the following issues.
In V11.0 and above, you will not be able to open the CCF file; the error messages "error setting path" and "controller load error" are shown in DMCplus Build and Simulate, respectively:
In V10.0 and previous versions, you will be able to open the CCF file in DMCplus Build but the model path name in Options dialog box is garbled because of the double byte character, as shown below:
If you try to run Simulate from DMCplus Build, DMCplus Simulate cannot work properly and you will see the error message "Intel(r) visual fortran run-time error": |

Solution: To work around this issue, please place the CCF and MDL/MDL3 files in a folder which does not use double-byte characters in the folder name. Double-byte characters are not supported by the APC ACO platform solution.
Keywords: DMCplus, Build, Simulate, error, Double byte
References: None |
Problem Statement: Aspen Fidelis Pipe Production Probability to exceed does not go up to 100%, as in the example below |

Solution: Probability to exceed will not reach 100% in this case because the last bin in the histogram (the left-most bar) is not zero. (The last bar is included in the calculation, contrary to what it may appear.)
Another way to understand this is that 5% of the time the value is 224000, therefore the Probability to exceed this value will be 95% at this bar.
The reason for this is that there is no theoretical best value at which Probability to exceed reaches 100%, since pipe values can be anything in the model. This behavior is by design and should be expected.
Keywords: Incomplete Probability
Incomplete graph
Dies out
Missing bin
References: None |
Problem Statement: How do I reduce the Excel file size after it increased for no apparent reason?
Description: An Excel file may increase its size by a lot if, by mistake, you move to the last Excel cell and enter something. Sometimes this can be caused by moving to that last row (by doing Ctrl-Down-Arrow too many times, for example) and entering something in a cell. Even if you delete what you entered, Excel remembers that you entered something there, so that last row becomes the last row of the used range of that worksheet. Depending on what cell or column formatting is set on the worksheet, doing that can increase the size of the worksheet by a few hundred KB or by 10 MB or more.
You can tell that the used range of the sheet is wrong by selecting cell A1 and then pressing Ctrl-End. The cursor will move to AX1048576. | Solution: Go to the worksheet and take these steps (do not deviate from these steps):
Select cell A1
Ctrl-Down Arrow to go to last row, then select the row after that, the first row with no data -- select the row by clicking on the row header.
Ctrl-Shift-End to select from that row to the end of the used range (in this case, that will be through the last row in the worksheet).
From the Home tab in the Ribbon, in the Edit section, select Clear and then Clear All. If notified that the operation may take a while, just click OK.
Next, right-click on the selected region and select Delete. When asked if you want to shift cells or delete rows, select Entire row. If notified that the operation may take a while, just click OK.
Select cell A1. IMPORTANT: do not do anything else before going to cell A1.
Save the workbook. IMPORTANT: do not do anything else after going to A1 and before saving the workbook.
Press Ctrl-Home and then Ctrl-End to check that used range was reset correctly. That should take you the correct last cell in the used range (above blank rows).
Save the workbook again.
Check the file size. It should have decreased significantly.
Do this procedure for each of the worksheets identified as having a used range that is too large.
Some of the Worksheets that may have this issue are:
MV Config DR
CV Config DR
Model Update
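If the manual steps must be repeated often, a hedged openpyxl sketch can trim the trailing styled-but-empty rows automatically (the workbook file names are hypothetical; always work on a copy):

from openpyxl import load_workbook

wb = load_workbook("gdot_config.xlsx")
for ws in wb.worksheets:
    # Find the last row that actually contains data
    last_data_row = max(
        (c.row for row in ws.iter_rows() for c in row if c.value is not None),
        default=0,
    )
    if 0 < last_data_row < ws.max_row:
        # Delete the empty rows that inflate the used range
        ws.delete_rows(last_data_row + 1, ws.max_row - last_data_row)
wb.save("gdot_config_trimmed.xlsx")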
Keywords: GDOT, Excel, Memory, File size
References: None |
Problem Statement: A special character on an alphabetic character, such as an accent ( ' ), i.e. á, in a variable description can lead to application problems after the controller is deployed.
This behavior can lead to issues such as the controller status remaining on start, no capture of any application mode change (Control, Calibrate), and failure to run a steady-state solution. This problem can also affect the simulation step.
Further problems have also been reported when the CCF export feature is used in DMC3 Builder. If special characters are found in the Calculation Comments or variable descriptions, the CCF file will fail to open and returns an error showing that some characters are not supported.
Solution
The way to prevent and fix this issue is simply to remove this kind of character from the variable descriptions.
The major problem with this issue is that the error is not captured by any pop-up window or log. However, it can be noticed from the simulate stage by observing no change in the state of the controller and no solution being performed.
In the case of exporting a CCF file with special characters, the file will fail to open, showing that these characters are not supported. The main reason for this is that DMCplus Build only supports ASCII characters by design. Unfortunately, the only way to work around this is to edit the comments directly in the CCF file, or to avoid/delete special characters from the DMC3 Builder project before exporting it.
If, after verifying that all descriptions are clean, the controller is still having problems, please contact support for further assistance.
Keywords: DMC3, PCWS, Simulate
References: None |
Problem Statement: What items are considered for the Cylinder-hot side and Cylinder-cold side in a Shell & Tube Heat Exchanger? | Solution: These specifications can be found on the EDR Navigator under Input| Construction Specifications| Materials of Construction| Vessel Materials.
It must be considered where (shell or tubes) the hot fluid and cold fluid are located in the heat exchanger to assign the corresponding material of construction to each section.
The following items are considered for the Shell side and Tube side when assigning the material to the Cylinder-hot side and Cylinder-cold side.
Shell side:
Shell cylinder
Shell cover cylinder
Shell cover
Shell side flanges
Shell side nozzles
Expansion joint
Kettle reducers
Kettle cylinder
Bolting
Couplings
Tube side:
Front head cylinder
Rear head cylinder
Front head cover
Rear head cover
Front head flange
Rear head flange
Front head nozzles
Rear head nozzles
.
Keywords: Material of construction, vessel material, different materials
References: None |
Problem Statement: What are the correct equations for the mechanical calculation of the stress in a Shell & Tube exchanger? |

Solution: In a simulation file of Shell and Tube Mechanical V11.0, under Results - Code Calculations - Tubesheets/Expansion Joints, in UHX-13.5.11 Step 11, the equations are shown incorrectly.
The correct equations are:
Ss is the allowable stress for the shell material at Ts, the shell design temperature.
Sc is the allowable stress for the channel material at Tc, the channel design temperature.
These equations will be corrected in future versions.
Keywords: Stress, equation, UHX, mechanical, sigmas, sigmac, step 11.
References: None |
Problem Statement: How can I insert Unicode characters such as Russian, Japanese, Chinese, Korean into an Aspen InfoPlus.21 (IP21) database that can then be displayed in aspenOne Process Explorer or Aspen Process Explorer? | Solution: Note: Please see article What is 'the Code Page' option in the Aspen DA for IP.21 service? before continuing. You will have to set the Code Page of the Aspen DA for IP.21 Service in ADSA in order to insert your Unicode characters and have them recorded correctly.
How to insert Unicode characters?
In this example, we will use Russian character set when updating the description of a tag. Before typing characters, it is worth pointing out that you will need to have selected an appropriate keyboard and Russian system locale would need to be selected on the IP21 server.
Assuming you already have a tag, you will be able to modify it with the Process Data REST service. To view examples of its available functions, see Does Aspen InfoPlus.21 provide any access via a REST web service?, which includes instructions on how to open the REST service's samples pages.
Using the WriteAttribute to write data to the tag
Open the WriteAttribute samples page: http://SERVERNAME/ProcessData/Samples/sample_write_query.aspx (replace SERVERNAME with name of web server).
Fill in the fields and click Issue Request button. A notification at the bottom of the page will let you know if the request was successful or not. The screenshot below shows we have updated an existing tag with a new description made up of Russian characters:
The tag can then be seen with its new description in an aspenONE Process Explorer tag list:
Keywords: extended character set
language
References: None |
Problem Statement: One can use Aspen Simulation Workbook to facilitate the management of data (e.g. to collect data from a historian in Excel, then push those data to the simulation). For equation-oriented run mode, one may also need to change the configuration of the simulation (enable or disable connections, spec groups, measurements, etc.). This is routinely done using scripts (OOMF scripting language, or EBS script files). Those can be invoked at the engine command prompt, but if one wishes to hide the graphical interface of HYSYS to prevent the final user from interacting with the simulation, how can we invoke those scripts? |

Solution: An example is attached, using the VBA code below. The trick is to use the so-called backdoor mechanism, which essentially types the Text string variable into the command prompt of the HYSYS EO simulation engine. The path may change in other versions, but it can be found by using the macro recorder in HYSYS. The logic of the code is:
- get HYSYS application object (GetObject call)
- get the pointer to the command prompt of control panel
- issue the command
- force ASW to update to get a consistent state of data in Excel and the simulation
Dim HYSYS As SimulationCase

Private Sub CommandButton1_Click()
    Dim bdCase As BackDoor
    Dim ActualScriptText As TextVariable
    Dim Text As String
    ' Attach to the running HYSYS case whose moniker is stored in cell B1
    Set HYSYS = GetObject(Sheet1.Cells(1, 2).Value)
    ' Get the backdoor pointer into the case
    Set bdCase = HYSYS.Application.BackDoor
    ' Pointer to the EO control panel command prompt (path recorded with the HYSYS macro recorder)
    Set ActualScriptText = bdCase.BackDoorTextVariable("/Document.0/FlowSht.1/UnitOpObject.400(FLOW-1)/FlowSht.600/EOSolverData.300:Index.301").Variable
    ' Type the command into the prompt (assignment goes through the TextVariable default property)
    Text = "invoke test.ebs"
    ActualScriptText = Text
    ' Force ASW to update so Excel and the simulation stay in a consistent state
    AspenSimulationWorkbookXLA.ASWUpdateExcelFromModel
    AspenSimulationWorkbookXLA.ASWRunActiveSimulation
End Sub
Note that in the VBA editor, under Tools | References, the type libraries HYSYS 11 Type Library and AspenSimulationWorkbookXLA have been selected.
To run the example:
- open test.xlsm in Excel
- activate Aspen Simulation Workbook and connect to the simulation test-eo.hsc
- click the command button CommandButton1, which will invoke the script test.ebs
You can open the script test.ebs in a text editor and comment in/out statements, save and invoke using the command button.
Keywords: None
References: None |
Problem Statement: How to enable greenhouse gas emissions calculations in Aspen HYSYS. |

Solution: To enable greenhouse gas emissions calculations in Aspen HYSYS, open the Process Utility Manager, and then click Greenhouse Gas Emissions Preferences.
In the greenhouse gas emissions view, check calculate CO2 emissions.
Enter specifications as shown below:
Option - Description
CO2 Emission Factor Data Source - One of:
  EU - 2007/589/EC (EC589 - European Commission Decision 2007/589/EC)
  US - EPA Rule E9-5711 (US5711 - United States Environmental Protection Agency Rule E9-5711)
  User specified - When selected you can enter any real value (negative or positive). This lets you take heat recovery into account (for example, a recovery steam generator may offset CO2 generated elsewhere in the process).
Ultimate Fuel Source - Specify one of the fuel sources listed in the EPA or EU guidelines.
CO2 Emission Factor - Applies to the selected fuel source.
CO2 Energy Source Emission Factor - Indicates the fuel combustion efficiency.
Carbon Fee Preference - Opens User Preferences to access the Carbon Fee setting.
The source selections and their associated CO2 Emission and Energy Source Efficiency Factors are filled in under their respective columns in the Utility Database. All process utilities in the manager list will be enabled for CO2 emissions calculation.
Keywords: Greenhouse Emissions Calculations.
References: None |
Problem Statement: Do you need an Aspen Polymers Plus license to use NRTL-SAC in Aspen Plus or Aspen Properties? |

Solution: Both NRTL-SAC and eNRTL-SAC use the polymer segment concept, and an Aspen Polymers Plus license was originally needed to use NRTL-SAC or eNRTL-SAC in Aspen Plus. Now, as long as polymers, oligomers, or segments are not in the simulation, an Aspen Polymers license is not needed.
Keywords: None
References: None |
Problem Statement: Why does the Administration tool fail to connect to the server, while other Aspen Basic Engineering applications connect successfully? |

Solution: In Aspen Basic Engineering (ABE) V11 and V12, the Administration tool uses the DCOM protocol for communicating with the server. The other ABE applications use the TCP/IP protocol. One exception: if users have installed ABE V11 Cumulative Patch 1 Emergency Patch 17, then the Drawing Editor can also be configured to use the DCOM protocol.
If only the administration tool cannot connect to the server, then the configuration of the DCOM protocol should be verified. Refer to KB article 000099630 which explains the configuration of DCOM protocol for ABE.
Keywords: Aspen Basic Engineering, Administration, DCOM, Server Configuration
References: None |
Problem Statement: This Knowledge Base article answers the following question:
Is it possible to create multiple profile records for one Aspen InfoPlus.21 tag? | Solution: There can be multiple profiles created against the same tag, they just need to have either different triggers, unit, or context.
The best approach would be to have different start/end time triggers based on when the user wanted to profile the different times of the process. They could also have different context information with the same set of triggers.
So, to summarize, the user can create multiple profiles that reference an IP.21 tag, and use the batch context to control which profile is enabled at run time.
Keywords: Golden Batch
References: None |
Problem Statement: How to import Aspen HYSYS Petroleum assay data into Aspen Plus? | Solution: In HYSYS, to export petroleum assays to input files:
1. Click the Petroleum Assays node to open the Petroleum Assays form.
2. On the Petroleum Assays form, select the desired assay(s) in the Assays Summary table.
3. At the bottom of the form, click Export | Export to Aspen Plus Inp File.
Note: If any of the selected assays have not been characterized yet, a warning message appears and the assay cannot be exported.
Each assay is exported individually as a .inp file.
In Aspen Plus, the user has to open the .inp file in a new simulation, since Import does not offer .inp as an option. Then, the user needs to save it as a .bkp file. The .bkp file can be imported into any simulation. If desired, the .bkp file with the assay can be saved into the assay library (C:\Program Files\AspenTech\Aspen Plus V12.1\GUI\Asy) so the user can get to it later using the Assay Data button on the Components | Specifications | Petroleum sheet.
Users can export characterized assays to input (.inp) files, which can be opened within Aspen Plus. The generated input files are compatible with Aspen Plus V10 and later versions.
When the user opens the exported .inp file in Aspen Plus, the following information is specified within the Assay/Blend object manager in the Properties environment based on user specifications in Aspen Assay Management:
Distillation yield curve data (on the Basic Data form | Dist Curve sheet)
Property curve data (on the Property Curves form)
Keywords: Petroleum assays, inp, import, etc.
References: None |
Problem Statement: DCOM (Distributed Component Object Model) is an addition to COM that facilitates the transparent distribution of objects over networks and over the Internet. It dynamically allocates one port per process.
When connecting applications across a firewall that use DCOM, there are Firewall and Registry Settings for DCOM that need to be taken into account. |

Solution: In order for DCOM to work effectively across a firewall, the System Administrator needs to decide how many ports should be allocated to DCOM processes, which is equivalent to the number of simultaneous DCOM processes through the firewall. The System Administrator must open all of the UDP and TCP ports corresponding to the allocated port numbers. He or she will also need to open TCP/UDP 135, which is used for RPC End Point Mapping, among other things. In addition, the System Administrator must edit the registry to tell DCOM which ports have been reserved. This is done with the HKEY_LOCAL_MACHINE\Software\Microsoft\Rpc\Internet registry key, which may have to be created using the Registry Editor.
The following example tells DCOM to restrict its port range to 10 ports:
Named Value: Ports
Type: REG_MULTI_SZ
Setting: Range of ports. Can be multiple lines, such as:
3001-3010
135
Named Value: PortsInternetAvailable
Type: REG_SZ
Setting: Y
Named Value: UseInternetPorts
Type: REG_SZ
Setting: Y
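A hedged sketch of the same registry edit using Python's winreg module (run from an elevated prompt; the port range matches the example above):

import winreg

with winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE,
                      r"Software\Microsoft\Rpc\Internet") as key:
    # REG_MULTI_SZ takes one string per line of the port list
    winreg.SetValueEx(key, "Ports", 0, winreg.REG_MULTI_SZ, ["3001-3010", "135"])
    winreg.SetValueEx(key, "PortsInternetAvailable", 0, winreg.REG_SZ, "Y")
    winreg.SetValueEx(key, "UseInternetPorts", 0, winreg.REG_SZ, "Y")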
For more information refer to the attached document.
Keywords: DCOM
Firewall
cim-io
opc
aprm
batch
calc
References: None |
Problem Statement: Search in aspenONE Process Explorer (A1PE) and also aspenONE Process Explorer Admin returns error code 500
Directly opening the Solr control panel (http://<Aspen Search Server>:8080/solr/) returns error HTTP Status 500 – Internal Server Error SolrCore 'collection1' is not available due to init failure? | Solution: UPDATE - Starting in aspenONE V12 there are now 2 independent services relating to Aspen Search. These are:
Apache Tomcat 9.0 Tomcat9
SolrWindowsService
In at least one instance this problem was corrected by simply starting the SolrWindowsService service (which, although set to start automatically, was not running). Starting this service solved the problem - modifying the two .XML files below was NOT required in that case.
If using a version older than V12, try the suggestions below.
To resolve this problem, make sure the \TomcatN.N.NN\conf\Catalina\localhost\ folder on the web server contains a correctly configured solr.xml file (and check AspenCoreSearch.xml while you are there)
IMPORTANT: Before making any changes you must stop the Apache Tomcat service in services.msc.
If the xml file(s) are missing, download the attached zip file and uncompress it into the \TomcatN.N.NN\conf\Catalina\localhost\ folder.
Open the xml files in turn using a text editor (Run as Administrator) and make sure the path to the solr folder is valid (e.g. the Tomcat folder name may be wrong)
solr.xml
<?xml version="1.0" encoding="utf-8"?>
<Context path="/solr" crossContext="true">
<WatchedResource>WEB-INF/web.xml</WatchedResource>
<WatchedResource>../../appdata/solr/collection1/conf/schema.xml</WatchedResource>
<WatchedResource>../../appdata/solr/collection1/conf/AspenSearchSolrSecurity.xml</WatchedResource>
<WatchedResource>../../appdata/solr/collection1/conf/solrconfig.xml</WatchedResource>
<Environment name="solr/home" type="java.lang.String" value="C:\Program Files\Common Files\AspenTech Shared\Tomcat8.5.23\appdata\solr" override="false"/>
</Context>
AspenCoreSearch.xml
<?xml version="1.0" encoding="utf-8"?>
<Context path="/AspenCoreSearch" crossContext="true">
<WatchedResource>../../appdata/scheduler/config/logging.xml</WatchedResource>
<WatchedResource>../../appdata/scheduler/config/ScheduleManagerConfig.xml</WatchedResource>
<Parameter name="aspen.home" value="C:\Program Files\Common Files\AspenTech Shared\Tomcat8.5.23\appdata\scheduler" override="false"/>
</Context>
After saving the files you can restart the Apache Tomcat service. Wait until you can locate the \TomcatN.N.NN\logs\AspenSchedulerStartUp.log file with a new entry confirming AspenSearchDeployer completed deployment operations. You can then log back into the Solr control panel to check all is well.
Keywords: Solr Server is not available
configuration
134541-2
References: None |
Problem Statement: The Shell & Tube Exchanger program in EDR is capable of designing and simulating shell & tube heat exchangers with multiple shells, either in series or in parallel. However, there can be different designs based on different shell arrangement setups. How are different arrangements defined in the Shell & Tube Exchanger program for multiple-shell setups? |

Solution: Multiple-shell arrangements are mainly divided into 2 types: shells in parallel or in series. Each type can also have different arrangements, such as co-current or counter-current flow between the shellside and tubeside flows.
Shell in Parallel:
For shell in parallel, the user can have 2 different arrangements if E-shell is used. The user can choose the “E-shell flow direction (Inlet Nozzle Location)” option to achieve co-current or counter-current flow with tube side flow (Single tube pass).
Shell in Series:
For shell in series, there will be more arrangement options. Users can choose from the “Overall flow for Multiple Shells” option as shown below:
Keywords: None
References: None |
Problem Statement: There are some conventional components with undefined RON and MON numbers in the databank, and users should specify those missing parameters, since Aspen Plus calculates the octane number for a blend considering only the contribution of the conventional components with a RON and MON number defined. |

Solution: This video covers how to accurately specify the RON and MON properties in the stream results, as well as the octane number blending calculation, with an example simulation.
Keywords: ROC-NO, MOC-NO, Property Curve, Assay/Blend
References: None |
Problem Statement: Users may notice that once they deploy an APC controller online, the gain for a ramp variable has a different value on the Desktop tool (DMCplus Model or DMC3 Builder) than on the PCWS web page Models view. What is causing this difference? | Solution: The gains are not actually different, it is only a difference in display between the web page of PCWS and the desktop tools like DMC3 Builder, which is intentional by design. DMC3 Builder is reporting the gain as per minute, which you can verify by looking at a plot of the step response model. In contrast, the web viewer is reporting the gain as per cycle. So when the control cycle is 1 minute, this difference would normally go unnoticed but for controllers with an interval that is not 1 minute, this difference will be visible.
For example, say that the total gain response is 0.5 over 120 minutes and the controller runs every 15 seconds (or 0.25 mins). This will be a slope of 0.5/120 minutes or displayed as 0.0042 (per minute) in DMC3 Builder. When looking at the web viewer, it will show a result of (0.0042/min)*(0.25min/cycle) or 0.00105 (per cycle).
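The conversion is simple arithmetic; a quick sketch using the numbers from the example above:

gain_per_minute = 0.5 / 120            # DMC3 Builder display: ~0.0042 per minute
control_interval_minutes = 0.25        # controller executes every 15 seconds
gain_per_cycle = gain_per_minute * control_interval_minutes
print(round(gain_per_cycle, 5))        # ~0.00104 per cycle (quoted above as 0.00105 from the rounded 0.0042)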
This is documented in the release notes for APC V12.1 CP1 for DMCplus Model, but is also applicable for DMC3 Builder:
Adaptive Modeling Model Viewer, DMCplus Model display ramp rate differently
In the History tab, Adaptive Modeling, Model Viewer page, the plotted model gain for ramp variables is expressed in (CV Units / MV Units) * ControlInterval. This is different from DMCplus Model, where it is expressed as (CV Units / MV Units) * Minutes. This may cause confusion when the control interval for the application does not equal 1 minute.
This plotting behavior, in the Model Viewer window, is by design. It follows the current convention used in most Aspen APC tools, which use the ControlInterval factor when plotting ramp variables. Only DMCplus Model uses the Minutes factor.
Keywords: ramp, gain, slope, different
References: None |
Problem Statement: The Move Suppression is a unitless number and can often feel arbitrary when tuning it. Therefore, it is valuable to understand the relationships between move suppression and scaling parameters of gain and operator range. So, what can you expect to see in Simulation when the gain and operator range are changed? Also, how does the entry Move Suppression Increase Factor work and how to tune it? | Solution: A simulation test was conducted for a 1 MV x 1 CV model with a first order transfer function, a gain of 1, time constant of 10 min, over a range of move suppressions, then plotted for just the first control move. The gain, operator range, and maximum move were varied. For each simulation, the MV started at zero and its target was the upper operator limit.
Results (see plot below for reference):
If the gain is doubled, the control moves are halved
Funct(G2,R100) = 1/2 * Funct(G1,R100)
The orange curve's first control moves are half the size of the blue control moves, except the first two blue moves, which are clamped by the maximum move limit.
If the range is doubled, the control moves are doubled
Funct(G1,R200) = 2*Funct(G1,R100)
The yellow curve's first control moves are twice the size of the blue control moves.
The yellow curve's first control moves are also twice the size of the green control moves, except the first two yellow moves, which are clamped by the maximum move limit.
If the gain is doubled and the range is doubled, the control moves are the same
Funct(G2,R200) = Funct(G1,R100)
The green curve starts out higher but then intersects the blue curve; the only difference is that the blue curve is clamped by its maximum move.
These scalings were exact, not approximate. If a variable is scaled, its gain and range will be scaled the same way and so the dynamics behave the same. You can think of it as trying to get the process to its limit in the same amount of time.
A summary table of the simulations and a plot are shown below.
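As a complement to the summary table, here is a minimal Python sketch of these exact proportionalities (a hypothetical helper; the base move value is made up):

# First control move scales with range/gain, subject to the max-move clamp.
def scaled_first_move(base_move, gain_ratio, range_ratio, max_move=float('inf')):
    return min(base_move * range_ratio / gain_ratio, max_move)

base = 10.0  # hypothetical first move at gain G1, range R100
print(scaled_first_move(base, gain_ratio=2, range_ratio=1))  # 5.0: gain doubled, move halved
print(scaled_first_move(base, gain_ratio=1, range_ratio=2))  # 20.0: range doubled, move doubled
print(scaled_first_move(base, gain_ratio=2, range_ratio=2))  # 10.0: both doubled, move unchanged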
The move suppression increase factor (SUPMLT) can be used to increase the speed of the control for your application and its main use case is to have more reasonable move suppressions within a similar range. It adjusts the MV move suppression across an MV's control horizon to either be more aggressive after the 5th move or less aggressive after the 5th move. By default the value is 2.0.
Increasing SUPMLT more than 2, causes the MV move plan from the 5th move to the end to be less aggressive, causing moves at the beginning of the move plan to be more aggressive.
Decreasing SUPMLT less than 2, causes the MV move plan from the 5th move to the end to be more aggressive, causing moves at the beginning of the move plan to be less aggressive.
Please note that tuning it is the opposite of the move suppression:
Increasing SUPMOV leads to less aggressive MVs and decreasing SUPMOV leads to more aggressive MVs
Increasing SUPMLT leads to more aggressive MVs and decreasing SUPMLT leads to less aggressive MVs
A rule of thumb when first starting to use SUPMLT is to solve for the following:
Initial SUPMLT Value = 1/(Current SUPMOV Value)
Then set SUPMOV to 1 and SUPMLT to the calculated value. You can then adjust SUPMOV to fine-tune the controller behavior.
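A minimal Python sketch of this rule of thumb (a hypothetical helper; the SUPMOV value is only an example):

# Rule of thumb: initial SUPMLT = 1 / current SUPMOV, then reset SUPMOV to 1.
def initial_supmlt(current_supmov):
    return 1.0, 1.0 / current_supmov

new_supmov, supmlt = initial_supmlt(0.5)  # e.g. current SUPMOV = 0.5
print(new_supmov, supmlt)                 # -> 1.0 2.0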
Keywords: move, suppression, gain, operator, range, scaling, supmov, supmlt
References: None |
Problem Statement: What would I need to perform Spyro simulations with the new version of Olefins Regression Calculator? | Solution: The Pyrotec SPYRO engine used by Aspen Olefins Regression Calculator (AORC) to perform Spyro simulations has been updated to SRTO version 7, which is also 64-bit. If you intend to perform Spyro simulations with the new version of AORC, you will need to obtain a new SRTO license file. For more information, please visit https://www.spyrosuite.com/. The SPYRO settings can be found on the Global Settings tab in AORC. For version 7, they should be configured as follows:
Keywords: None
References: None |
Problem Statement: How can I tell which version of DMCplus last modified a CCF? | Solution: The BLDVERS parameter can be used. The chart below lists the aspenONE release version, the DMCplus Build internal version, and the corresponding BLDVERS value.
aspenONE release   DMCplus Build version   BLDVERS
V8.0 release       13.0.0                  13
V8.0 CP3           13.0.3.1021             13
V8.4 release       14.0.1                  14
V8.4 CP2           14.0.2.1004             14
V8.5 release       14.1.1                  14.1
V8.5 CP5           14.1.5.1012             14.1
V8.7 release       15.0.0                  15
V8.7 CP2           15.0.2.1008             15
V8.8 release       16.0.0.1649             16
V8.8 CP1           16.0.1.1673             16
V9 release         17.0.0.2026             17
V9 CP1             17.0.1.2058             17
V10 release        18.0.0.1369             18
V10 CP1            18.0.1.1385             18
V10 CP2            18.0.2.1389             18
V11                19.0.0.1961             19
V12.0              20.0.0.2510             20
V12.1 CP1          20.1.0.2529             20.1
Keywords: BLDVERS, DMCplus Build
References: None |
Problem Statement: How does one use the Hansen method to predict activity coefficients? | Solution: Hansen is a solubility parameter model and is commonly used in the solvent selection process. It is based on regular solution theory and the Hansen solubility parameters. This model has no binary parameters, and its application merely follows the empirical guideline “like dissolves like.”
Theory
The Hansen model calculates liquid activity coefficients based on regular solution theory and the Hansen solubility parameters. The equation for the Hansen model is:

ln γi = Ai / (R T)

where:

Ai = Vi [ (δDi − δDm)² + 0.25 (δPi − δPm)² + 0.25 (δHi − δHm)² ]

Vi is the Hansen volume of component i; δDi, δPi, and δHi are its nonpolar, polar, and hydrogen-bonding solubility parameters; and δDm, δPm, and δHm are the corresponding mixture values, averaged using volume fractions calculated from the mole fractions and the Hansen volumes.
The Hansen model does not require binary parameters. For each component, it has four input parameters.
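The following Python sketch illustrates the calculation as written above. It is illustrative only: verify the formula against the Aspen Plus physical property documentation, and note that the parameter values are approximate literature values for an ethanol/n-hexane pair, not databank values.

import math

R = 8.314  # J/(mol*K); with V in cm3/mol and deltas in MPa**0.5, A comes out in J/mol

def hansen_gamma(x, V, dD, dP, dH, T):
    vtot = sum(xi * Vi for xi, Vi in zip(x, V))
    phi = [xi * Vi / vtot for xi, Vi in zip(x, V)]   # volume fractions
    dDm = sum(p * d for p, d in zip(phi, dD))        # volume-fraction averages
    dPm = sum(p * d for p, d in zip(phi, dP))
    dHm = sum(p * d for p, d in zip(phi, dH))
    gammas = []
    for Vi, d1, d2, d3 in zip(V, dD, dP, dH):
        A = Vi * ((d1 - dDm)**2 + 0.25*(d2 - dPm)**2 + 0.25*(d3 - dHm)**2)
        gammas.append(math.exp(A / (R * T)))
    return gammas

# Approximate ethanol / n-hexane pair at 25 C
print(hansen_gamma(x=[0.5, 0.5], V=[58.6, 131.6],
                   dD=[15.8, 14.9], dP=[8.8, 0.0], dH=[19.4, 0.0], T=298.15))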
Implementation
The Hansen model was added to Aspen Plus 12 (around 2004) as a built-in liquid activity coefficient model. There are four adjustable parameters for each component. It is now also available as a complete property method (HANSEN).
Parameters
DELTAD: Hansen solubility parameter of component i for the nonpolar effect
DELTAP: Hansen solubility parameter of component i for the polar effect
DELTAH: Hansen solubility parameter of component i for the hydrogen-bonding effect
HANVOL: Hansen volume parameter of component i
The Hansen volume is implemented as an input parameter; it can also be calculated by using Option Codes in Aspen Plus Interface. The table below lists the option codes.
Option Codes
0: Hansen volume parameter input by user
Any other value: Hansen volume parameter is automatically calculated at a given temperature
Step-by-Step Instructions
To use the Hansen property method in your simulation, go to the Properties environment, Methods | Specifications. Change the Method filter to ALL, then select HANSEN in the Base method list.
You can also use the Hansen activity coefficient model to replace the activity coefficient model of another property method, e.g. NRTL, using the following steps. The same steps can be followed if you want to change the option code for the estimation of the Hansen volume parameter.
1. Go to the Properties environment
2. Click Properties folder, click Specifications
3. On Specifications sheet, specify an activity coefficient property method as Base method; for instance NRTL
4. From Properties folder, click Property Methods
5. From Object manager, click New
6. In the Create New ID box, type a name for the new method, e.g. HANSEN
7. In the Base property method drop-down list, select NRTL
8. Click Models
9. Change Model name for GAMMA from GMRENON to HANSEN
10. (Change Option Codes for Hansen volume parameter, optional) Click GAMMA
11. Click Option codes (it shows 0 as the default value)
12. Replace 0 by 1 or any other number
13. Go to Properties | Parameters | Pure Components and click New to create a new scalar parameter object. You can then input the four pure component scalar parameters DELTAD, DELTAP, DELTAH, and HANVOL for each component.
Example
The attached bkp file is set up to perform a two-phase flash calculation using the HANSEN model. Click OK when prompted to update the databases.
Keywords: None
References:
Frank, T. C.; Downey, J. R.; Gupta, S. K. Quickly Screen Solvents for Organic Solids. Chemical Engineering Progress 1999, December, 41.
Hansen, C. M. Hansen Solubility Parameters: A User's Handbook; CRC Press, 2000. |
Problem Statement: How do I know whether a mixture is Vapor-Liquid (VL) or Vapor-Liquid-Liquid (VLL)? Is there any option available in Aspen Plus to check? | Solution: Generally, in Aspen Plus, the valid phases are Vapor-Liquid. If the mixture has two liquid phases, the flash results will be wrong if the valid phases are set to Vapor-Liquid. To avoid these incorrect results, there are options available on the Setup | Calculation Options sheet in the “Phase equilibrium results” section where users may turn on information, warning, or error messages for VLE or VLLE, as shown below. This checking is not turned on by default.
Check Phase Equilibrium Results checks for fugacity differences between phases in the results greater than the specified relative tolerance in components with greater than the specified minimum mole fraction.
Check for VLE checks for a possible vapor-liquid solution in streams set to a single phase.
Check for VLLE checks for a possible vapor-liquid-liquid solution when one or two phases are specified.
In V12.1, there are also options to check for solid equilibrium.
Check for SLE checks for a possible solid-liquid solution.
Check for SVE checks for a possible solid-vapor solution.
Solubility threshold is the value for solubility index above which the SLE and SVE checks report messages. (Default 1.001) The solubility index is the ratio of the liquid or vapor phase fugacity for a component to its solid phase fugacity.
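A tiny Python illustration of the solubility index test (hypothetical numbers; Aspen Plus computes the phase fugacities internally):

# solubility index = fluid-phase fugacity / solid-phase fugacity
def sle_sve_message(fluid_fugacity, solid_fugacity, threshold=1.001):
    index = fluid_fugacity / solid_fugacity
    return index, index > threshold   # True -> the check reports a message

print(sle_sve_message(1.05, 1.00))   # (1.05, True): possible solid phase
print(sle_sve_message(0.98, 1.00))   # (0.98, False)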
These options provide warning messages for VLE or VLLE:
Keywords: None
References: None |
Problem Statement: aspenONE Process Explorer Admin shows “A connection with the server could not be established (Error code: 500)” after upgrading to V12.
Another symptom is that the Solr page (http://localhost:8983/solr) is not reachable.
It was working before the upgrade. What could be the reason for this behavior? | Solution: In V12, Solr is independent from Tomcat. It is controlled by a new service called SolrWindowsService.
1- Make sure the Tomcat service is running.
2- Make sure the SolrWindowsService service is running.
3- If the SolrWindowsService service cannot be started, check the logs at C:\Program Files\AspenTech\aspenONE\Logs
4- Double-check the environment variables JAVA_HOME and JRE_HOME. They should point to the 64-bit Java 11 folder. If you installed AdoptOpenJDK, the settings should look like the following:
Keywords: Solr, SolrWindowsService, error 500, admin
References: None |
Problem Statement: How can we transfer Equipment from OptiPlant to S3D? | Solution: OptiPlant generates an XML file that any user can import into S3D. Please follow the steps below to generate the XML file for Equipment:
Go to Deliverables >> Intergraph Smart 3D
A window will open; go to the Equipment tab
Select the required equipment for which the XML file needs to be created
Finally, click the Generate XML button. The XML file will be saved in the Deliverables subfolder.
This XML file can be loaded into S3D using PDS Project and Data Translator.
Keywords: Smart 3D, interface, XML
References: None |
Problem Statement: How can we transfer Structures from OptiPlant to S3D? | Solution: OptiPlant generates a CIS/2 or STP file that can be imported into S3D. Please follow the steps below to generate the STP file for Structures:
Go to Deliverables >> Intergraph Smart 3D
A window will open; go to the Structure tab
Select the required structures
Finally, click the Generate STP File button. The STP file will be saved in the Deliverables subfolder.
Users can import this STP file into S3D.
Keywords: Smart 3D, interface, STP, CIS/2
References: None |
Problem Statement: How can we transfer Piping from OptiPlant to S3D? | Solution: OptiPlant generates an XML file that can be imported into S3D. Please follow the steps below to generate the XML file for Piping:
Go to Deliverables >> Intergraph Smart 3D
A window will open; go to the Piping tab
Select the required line IDs
Finally, click the Generate XML File button. The XML file will be saved in the Deliverables subfolder.
Users can import this XML into S3D using PDS Project and Data Translator.
Keywords: Smart 3D, interface, XML, Piping
References: None |
Problem Statement: Is there a list of OptiPlant interfaces? | Solution: OptiPlant shares interfaces with various tools. The list is as follows:
Keywords: interface
References: None |
Problem Statement: Is AI Training available in Aspen HYSYS Dynamics mode? | Solution: AI Training is not available in Dynamics mode.
Keywords: AI Training, Dynamics, etc.
References: None |
Problem Statement: Where can I get the AI Training user guide & sample examples in Aspen HYSYS V12.1? | Solution: See the Aspen HYSYS V12.1 Help topic AI Training for a more detailed, step-by-step guide to using AI Training in Aspen HYSYS V12.1.
AI Training sample examples are available at the path below, or can be accessed in Aspen HYSYS V12.1 from the Resources menu, Examples.
C:\Program Files\AspenTech\Aspen HYSYS V12.1\Samples\AI Training
Keywords: AI Training, samples, etc.
References: None |
Problem Statement: Choked flow is a fluid dynamic condition where a flowing fluid at a given pressure and temperature passes through a constriction (such as the throat of a convergent-divergent nozzle or a valve in a pipe) into a lower pressure environment and the fluid velocity increases.
Aspen Flare System Analyzer has the option to include or exclude choke analysis in the calculations for the entire pipeline; when this condition is reached, AFSA will display a warning.
This check box is selected by default under Calculation Settings | General | Choked flow check. | Solution: If selected, the program will check for choking at each pipe outlet. If the program determines that the pipe outlet is choked (i.e. Mach number = 1), choking pressure will be calculated at the pipe outlet conditions. The pipe outlet pressure will be set to this choking pressure. Note that the choking can occur only at a pipe outlet that has subsequent expansion.
If cleared, velocities will not be limited to the sonic condition. This is useful in sizing calculations, since the Mach number limitations will still be met by the time the final solution is reached. Calculation speed is greater, at the risk of numerical instability and convergence failure.
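The Mach-1 criterion itself can be illustrated with the ideal-gas speed of sound. This Python sketch shows only the criterion the check is based on, not AFSA's internal method; k, MW, T, and the velocity are assumed example values:

import math

R = 8314.0  # J/(kmol*K)

def mach_number(velocity, k, MW, T):
    a = math.sqrt(k * R * T / MW)  # ideal-gas sonic velocity, m/s
    return velocity / a

M = mach_number(velocity=350.0, k=1.3, MW=18.0, T=400.0)
print('choked' if M >= 1.0 else 'not choked', round(M, 2))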
Keywords: Choked flow, choked flow check.
References: None |
Problem Statement: Is it possible to set the minimum values for the displayed flows and fractions in streams?
Is it possible to have flows or fractions that are extremely small, as in a nuclear reactor? It seems that at some point the small fractions are clipped to a value of zero.
Alternatively, is it possible to remove the very small values for the flowrates and fractions of the components? | Solution: We drop trace components of the inlets (not the outlets) based on Minimum Flow (PACK-FLOW) and Minimum Fraction (PACK-FRAC) on the Setup | Calculation Options | Limits sheet. The default values are a mole flow of 1e-15 kmol/sec and a mole fraction of 1e-15. This value is global for the entire simulation and affects the stream and block calculations.
It is possible to change the minimum flow and fraction settings on the Setup | Simulation Options | Limits tab from the default value of 1E-15 to any value required. See the screenshot below:
In the Stream Summary, it is possible to report trace or 0 for any components below a threshold value. This affects reporting only; the specific value is retained and used in stream and block calculations.
Keywords: Minimum Flow, Minimum Fraction, setup
References: None |
Problem Statement: How to configure email alerts for license denials or license expiration? | Solution: To enable email alerts, users need to create a configuration file with information on the license strings from the license file. The configuration file should be named “lservrc.cnf”; it is a general-purpose configuration file and must reside in the same directory as the license file.
Defining Alerts:
Each alert type should have the following format:
<alert-type> = <reporting-type1> ON/OFF <reporting-type2> ON/OFF
Following alert-type are allowed:
Hardlimit => Hardlimit exceeded
Appstart => License issued
Appstop => License returned
Denied => License denied
Apptimeout => License time-out
Expired => License Expiration date
Following are the two reporting-type:
Email => Email will be sent to the recipients given after EMAIL=
Script => An external script given will be invoked after SCRIPT=
Users can use the lsmail.exe application to create this configuration file. Default location of this application: <C:\Program Files (x86)\Common Files\SafeNet Sentinel\Sentinel RMS License Manager\WinNT>
On launching the application, type the hostname or IP address of an MS Exchange Server
Once configured, users can create lservrc.cnf using notepad for their alert requirement.
Example configuration for license denial alert for Aspen Plus and Aspen Rate Based Distillation, with timeout alert for Aspen Rate Based Distillation:
[SLM_AspenPlus *]
softlimit = SCRIPT OFF EMAIL OFF
hardlimit = SCRIPT OFF EMAIL OFF
appstart = SCRIPT OFF EMAIL OFF
appstop = SCRIPT OFF EMAIL OFF
denied = SCRIPT OFF EMAIL ON
apptimeout = SCRIPT OFF EMAIL OFF
expired = SCRIPT OFF EMAIL OFF
[email protected]
[SLM_Aspen_RateSep *]
softlimit = SCRIPT OFF EMAIL OFF
hardlimit = SCRIPT OFF EMAIL OFF
appstart = SCRIPT OFF EMAIL OFF
appstop = SCRIPT OFF EMAIL OFF
denied = SCRIPT OFF EMAIL ON
apptimeout = SCRIPT OFF EMAIL ON
expired = SCRIPT OFF EMAIL OFF
[email protected]
Keywords: Lsmail.exe, configure email alert, alert configuration, license denial
References: None |
Problem Statement: Is there any limitation if we import Equipment into S3D from OptiPlant? | Solution: There are a few limitations in the OptiPlant to S3D interface for Equipment:
An ellipse or elliptical shape cannot be imported into S3D as an intelligent object because of a limitation of S3D. Users can replace it with a semi-elliptical head as required.
Pyramid equipment from OptiPlant cannot be imported into S3D.
Keywords: Smart 3D, interface, Equipment
References: None |
Problem Statement: Is there any limitation if we import piping into S3D from OptiPlant? | Solution: There are a few limitations in the OptiPlant to S3D interface for piping:
The orifice component from OptiPlant cannot be imported into S3D.
In S3D, a header and branch are considered one system, whereas in OptiPlant they are separate. When a header and branch from OptiPlant are imported into S3D, one extra pipe run is created at the third end of the TEE, which the user will have to remove after importing.
In the case of a reducer, S3D automatically creates a second pipe run after the reducer, named Split 1. Users can rename it or leave it as is.
The data for a CONTROL valve in OptiPlant is the same as the data for a BUTTERFLY valve in S3D. When a control valve is exported to S3D, it is therefore shown as a butterfly valve; this is only a graphical representation, and internally it is still read as a control valve. The S3D administrator has to add the butterfly valve to the piping specification sheet.
Keywords: Smart 3D, interface, Piping
References: None |
Problem Statement: How can we transfer Equipment from S3D to OptiPlant? | Solution: OptiPlant reads equipment in Excel format. The user has to export equipment from S3D into Excel format, and that Excel file can be imported into OptiPlant. There are a few key points to consider when exporting equipment from S3D:
The equipment list format from S3D should be the same as the OptiPlant equipment Excel template format. Users can customize the equipment list in S3D to match the OptiPlant Excel template.
The default template is placed under C:\Program Files (x86)\AspenTech\Aspen OptiPlant V12.1\Data
Keywords: Smart 3D, interface, Equipment
References: None |
Problem Statement: How can we transfer Structures from S3D to OptiPlant? | Solution: OptiPlant can read STP or CIS/2 files extracted from S3D. Structures exported from S3D in STP format can be read into OptiPlant by going to File >> CIS/2 button. However, there are a few prerequisites: users have to update the data files for structure member sizes and place them in the working project folder before importing into OptiPlant.
The default location of the data files is the C:\Program Files (x86)\AspenTech\Aspen OptiPlant V12.1\Data folder
Keywords: Smart 3D, interface, STP, CIS/2
References: None |
Problem Statement: How can we transfer Piping from S3D to OptiPlant? | Solution: OptiPlant can read PCF files extracted from S3D. Piping exported from S3D in PCF format can be read into OptiPlant by going to File >> PCF button. However, there are a few prerequisites: users have to update two data files, which are as follows:
PCFInputConfiguration.csv
PipeListInput.csv
Note: By default these data files are placed under the C:\Program Files (x86)\AspenTech\Aspen OptiPlant V12.1\Data folder. The user has to update these data files and copy them to the working project folder before importing the PCF.
Keywords: Smart 3D, interface, PCF, Piping
References: None |
Problem Statement: You may get BadSecurityChecksFailed 'Error received from remote host: Could not verify security on OpenSecureChannel request.' while using Aspen OPC UA Explorer to connect to an OPC UA server.
This article provides an example of using Aspen OPC UA Explorer to connect to the IP.21 OPC UA server. | Solution: You will need to trust the certificate.
1. Launch UA Configuration Tool
2. Click Find and browse to C:\Program Files\AspenTech\AspenOPCUAExplorer | open “AspenTech.UA.Client.AC.OPCUAExplorer” | click OK
3. Click Find and browse to C:\Program Files\AspenTech\InfoPlus.21\db21\code | open “IP21OpcUAServerHost” | click OK
4. Go to the “Manage Application” tab | select IP21OpcUAServerHost | click “View Application Certificate” | click Export and save the certificate somewhere
5. Go to the “Manage Application” tab | select AspenTech.UA.Client.AC.OPCUAExplorer | click “View Application Certificate” | click Export and save the certificate somewhere
6. Go to the “Manage Security” tab | select IP21OpcUAServerHost | click “Select Certificate to Trust” | select %CommonApplicationData%\OPC Foundation\CertificateStores\RejectedCertificates | double-click “AspenTech InfoPlus.21 OPC UA Explorer” | click OK | click OK
7. Go to the “Manage Security” tab | select AspenTech.UA.Client.AC.OPCUAExplorer | click “Select Certificate to Trust” | select %CommonApplicationData%\OPC Foundation\CertificateStores\UA Applications | double-click “AspenTech InfoPlus.21 OPC UA Server” | click OK | click OK
8. Go to “View Trusted Certificate”
This time, you should be able to connect.
Besides DA, you can also see some other definitions
Keywords: UA, Explorer, Certificate, BadSecurityChecksFailed
References: None |
Problem Statement: How do I change default SQL Server port 1433 connecting to Aspen Mtell server? | Solution: This article shows how to change the port that Aspen Mtell uses to connect to the SQL server. SQL Server uses TCP port 1433 by default that can be changed to other available ports. If SQL has been configured to use a non-default port, this article shows how Mtell can be configured to use the SQL Server non-default port.
Configure SQL Server to Use a Different Port
Close SQL Server Management Studio if it is open.
Open SQL Server Configuration Manager. Open the dropdown menu under SQL Server Network Configuration [1]. Select Protocols for Your Server Name [2]. Here, it is MSSQLSERVER.
Right click on TCP/IP [3] and select Properties [4].
Change Enabled [5] to Yes.
Click on IP Addresses [6]. Under each IP, change the TCP Port [7] to the port you want SQL to use. Here, we are changing it to 6000.
Check that there is not a 0 for TCP Dynamic Ports [8]. If there is, remove it. This field should be left blank.
Select OK [9] to confirm your changes.
Next, we need to restart SQL Server for our changes to take place.
Open the Services app.
Find SQL Server (Your Server Name) [10]. Once again, the server in this example is MSSQLSERVER. Right click and select Restart [11].
SQL Server should now be listening on the port you configured.
Configure Mtell to Connect to SQL Over a Different Port
Open System Manager, and go to the Configuration tab [1], and click Database Connection [2].
Under Data Source, type your server name (here it is APM), followed by a comma and the port number you wish to connect through [3].
Check that the Database Name and Authentication Mode are correct.
Click Test Connection [4].
Under Test Results, you should see “Database connection test successful” [5]. If the test failed, check your SQL Server settings. You may need to restart the SQL Server service.
Once the test is successful, select OK. You will be asked to restart the application. If you have Agent Builder open, you should close and reopen it as well.
Next, go to Agent Services [7] under the Configuration tab [6] in System Manager.
If Agent Services has already been configured, find Data Source under Database and add a comma and your desired port after the server name [10].
If Agent Services has not been configured, click Find Services [8] and select to add the service [9]. After it is added, check that Data Source contains a comma and the port number after the server name [10]. If it does not, add it.
Check that Agent Services is processing. You may need to Update Registration if it is not.
Repeat steps 8-10 for Training Services.
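As an optional sanity check, a short Python sketch can verify that SQL Server answers on the non-default port. This assumes the pyodbc package and the “ODBC Driver 17 for SQL Server” are installed; the server name APM and port 6000 match the example above:

import pyodbc

conn_str = ('DRIVER={ODBC Driver 17 for SQL Server};'
            'SERVER=APM,6000;'        # server name, comma, port
            'DATABASE=master;'
            'Trusted_Connection=yes;')
try:
    conn = pyodbc.connect(conn_str, timeout=5)
    print('Database connection test successful')
    conn.close()
except pyodbc.Error as exc:
    print('Connection failed - check the SQL port settings:', exc)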
Keywords: SQL port change
Change SQL port
Mtell port change
References: None |
Problem Statement: Since QueryDef records are not automatically saved as .sql files, it is not possible to open these queries in a text editor and simply update their contents. Two queries were written to solve that issue.
The first query is SaveQueryDefRecs.sql:
SaveQueryDefRecs prompts for a folder to save the source for the QueryDef files. The default location is 'C:\ProgramData\AspenTech\InfoPlus.21\db21\group200\sql\QueryDefSourceFiles\’.
Note: The path ‘C:\ProgramData\AspenTech\InfoPlus.21\db21\group200\sql\QueryDefSourceFiles\’ does not exist. You will need to create it prior to running the query.
It is highly recommended not to save the QueryDef records to the standard SQL folder. Also, it is recommended to create a backup folder like 'C:\ProgramData\AspenTech\InfoPlus.21\db21\group200\sql\QueryDefSourceFiles\backup' and copy the saved queries to this folder.
With a text editor (like Notepad++, for instance), change the tag names in the source files. Be aware that the source lines of the QueryDef records need to be kept under 80 characters, so it is important to make sure that the new tag names do not push any line over that limit; a quick check is sketched below.
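A hypothetical Python helper for that check (the folder path is the default used by SaveQueryDefRecs):

import glob, os

SRC = r'C:\ProgramData\AspenTech\InfoPlus.21\db21\group200\sql\QueryDefSourceFiles'

for path in glob.glob(os.path.join(SRC, '*.sql')):
    with open(path) as f:
        for lineno, line in enumerate(f, start=1):
            if len(line.rstrip('\r\n')) > 80:
                print('%s:%d exceeds 80 characters' % (os.path.basename(path), lineno))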
The second query is LoadQueryDefRecs.sql. This query loads the queries in 'C:\ProgramData\AspenTech\InfoPlus.21\db21\group200\sql\QueryDefSourceFiles\’ back into the QueryDef records in InfoPlus.21.
Note: Backup the InfoPlus.21 database before running LoadQueryDefRecs.
LoadQueryDefRecs first prompts for the location of the InfoPlus.21 code folder. The default is ‘C:\Program Files\AspenTech\InfoPlus.21\db21\code’. Next, the query prompts for the location of the QueryDef source files. The default is 'C:\ProgramData\AspenTech\InfoPlus.21\db21\group200\sql\QueryDefSourceFiles\’.
Then LoadQueryDefRecs loads the text files back into the InfoPlus.21 QueryDef records.
Keywords: Tag names
QueryDef
References: None |
Problem Statement: There may be a situation in which you want to use a scripting language other than VBScript to call the Aspen SQLplus ODBC driver. | Solution: This solution provides an example of how Python can be used to interface with the Aspen SQLplus ODBC driver through PyPyODBC, a pure-Python ODBC interface module based on ctypes.
There are a few points to note.
1. Knowledge of Python language is a MUST.
2. Python 2.7.3 should be used, as the attached script has been tested against this version of Python.
3. Python 3.x may or may not work with the script due to the breaking changes made in 3.x.
The attached Python script is provided AS IS as a sample and will not be supported by AspenTech Support.
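For orientation, here is a minimal sketch of the approach (this is not the attached script; 'IP21DSN' is a hypothetical ODBC data source you would first create for the Aspen SQLplus driver):

import pypyodbc

conn = pypyodbc.connect('DSN=IP21DSN')
cursor = conn.cursor()
cursor.execute('SELECT name FROM ip_analogdef')  # example SQLplus query
for row in cursor.fetchall():
    print(row[0])
conn.close()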
Keywords: Python
ODBC
References: None |
Problem Statement: A safety/relief valve is being specified for a Pipe bulk, but the estimate results show a different diameter, length, and quantities. Why is this?
User specified values:
Estimate results: | Solution: The safety/relief valve has a maximum pipe diameter value of 8 [200 MM], thus if the pipe diameter specified exceeds this maximum limit, ACCE will automatically adjust the system configuration to fit within the software ranges.
To to prevent this from happening, the user must specify pipe diameter values that fall within the software ranges.
User specified values:
Estimate results:
The attached project file showcases the adjustment made by ACCE when the specified pipe diameter falls outside the software ranges.
Keywords: Safety, relief, valve, length, diameter, quantity, modification, modified, different
References: None |
Problem Statement: The Icarus_User120 database cannot be connected successfully, so the reporter does not work. The following error message is displayed when evaluating the project:
Close all Economics V12 applications
Delete all *.mdf and *.ldf files from the cache directory of the project (path: C:\Users\<username>\Documents\AspenTech\My Economic Evaluation V12.0 Files\Cached Project)
Delete the Icarus_User120.mdf and Icarus_User120_LOG.ldf file from the shared libraries location (path: C:\Users\Public\Documents\AspenTech\Shared Economic Evaluation V12.0\Reporter\Database)
It might be the case that the Icarus_User120 database is still linked to ACCE, so it won't be possible to delete it directly. However, it can be deleted from SQL Server 2014 Management Studio. To do so, follow the steps below:
Open SQL Server 2014 Management Studio
Connect to the ACCE server. By default, it will be (LocalDB)\MSSQLLocalDB_EEV12, unless the company is using a custom server
Expand the Databases folder
Right click on the Icarus_User120 database and select Delete from the menu
The Delete Object window will be opened, click on OK
Run the Command Prompt on your machine to execute the commands to stop, delete, and recreate the LocalDB instance. Make sure to execute one command at a time and to run the Command Prompt as an administrator. The commands to run are the following; simply copy and paste them into the Command Prompt window:
sqllocaldb info MSSQLLocalDB_EEV12
sqllocaldb stop MSSQLLocalDB_EEV12
sqllocaldb delete MSSQLLocalDB_EEV12
sqllocaldb create MSSQLLocalDB_EEV12 12.0 -s
Open ACCE V12.0 and run an example project. The Icarus_User120 database will be recreated automatically and should work.
Note: Make sure that SQL Server is not running in the background before running the commands, as that may lead to failure. Check this in the Task Manager Details tab and look for sqlservr.
Keywords: SQL, localdb, server
References: None |
Problem Statement: Why is there a difference between the calculated properties of a defined stream and a virtual stream if apparently all base conditions are transferred correctly? | Solution: This problem happens when a petroleum assay is being used in the stream from which the virtual stream is created. A virtual stream should never be used simultaneously with a petroleum assay because properties are not transferred correctly.
As the properties are not transferred exactly as they are in the original stream, the results obtained from the virtual stream will be erroneous. At first sight, when checking the values of the data given as input in the virtual stream, they seem the same as in the original stream, but other calculated properties have different values. The reason behind this is that the compositions are different in the two streams. On a mass-fraction or mole-fraction basis, the differences cannot be seen because of the limited number of digits displayed. However, if the compositional basis is changed to mole or mass flow, the differences will be visible.
A solution for this is to use a balance block instead of a virtual stream. The balance block will transfer the properties of the source stream to the destination stream correctly.
Keywords: Different properties, virtual stream, enthalpy, entropy, heat flow.
References: None |