Problem Statement: How do you fix garbled strings (text shown as mixed characters, numbers that make no sense) after switching your system to another language and trying to use the localized version of APS/MBO?
Solution: Install the VC++ 2017 Redistributable for the desired language. For example, to use a Russian localized version of APS (or any other language you would like to use), take the following steps so that everything shows up correctly:
1. Make sure the region language is set to Russian and that UTF-8 is unchecked.
2. Install the VC++ Redistributable from https://support.microsoft.com/en-us/help/2977003/the-latest-supported-visual-c-downloads. The x64 package may not be required, but installing both x86 and x64 is safe. (The installation program should be localized to Russian characters; if it is not, double-check the region setting.)
3. Restart the machine after the installation. All APS UI elements will then display in Russian as designed.
Keywords: None References: None
Problem Statement: Having created a shortcut to a file or folder within the aspenONE Process Explorer (A1PE) document store, the shortcut is not displayed in the navigation pane: In the example above, the MyRemoteShareFolder shortcut was created to attempt to provide access within A1PE to a network shared folder: \\mscserver1\CJPDump\MyRemoteShareFolder . This does not show up under the Public folder for any user of A1PE. Can such a shortcut be added under the Public folder in order to give A1PE users access to a folder that may be somewhere else on the company network?
Solution: A1PE does not currently support the use of these shortcuts for navigation within the A1PE document store. There is a workaround, however: use a symbolic link. On the A1PE web server, open a command prompt window and change directory to the location of your Public folder under the A1PE root directory. The A1PE root directory is customizable but by default is C:\ProgramData\AspenTech\A1PE. You can then create your symbolic link pointing to the folder (or file) located elsewhere on the file system:

CD C:\ProgramData\AspenTech\A1PE\Files\Public
mklink /d MyRemoteShareFolder \\mscserver1\CJPDump\MyRemoteShareFolder

The name of your symbolic link will immediately appear in the A1PE navigation pane. You can create such symbolic links anywhere within the A1PE document store. As long as the permissions to access the folder are configured correctly, A1PE users will be presented with the contents of the remote folder. Keywords: missing tree References: None
Problem Statement: For the first time derivative, the user can add '$' as a prefix to a variable. For example:

Model A
  X as realvariable;
  Y as realvariable;
  $X = 2*Y;
End

For the second-order time derivative, however, the user cannot declare a variable like '$$X'.
Solution: Aspen Custom Modeler does not understand the '$$' syntax, but the second-order time derivative can be modeled with an intermediate variable. Declare an intermediate variable and equate it to the first time derivative; then take the first time derivative of that intermediate variable, which gives the second-order derivative of the original variable. For example, consider the simple pendulum equation of motion, declaring 'dXdt' as the intermediate variable:

Model SecondDerivative
  // variables
  X as realvariable(1,initial);
  dXdt as realvariable(0,initial); // intermediate variable
  m as realparameter(1);
  k as realparameter(10);
  // equations
  dXdt = $X;
  m*$dXdt = -k*X;
End

Keywords: 2nd order time derivative, Aspen Custom Modeler, pendulum References: None
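The same reduction can be sketched outside ACM. The following Python snippet (an illustrative sketch, not AspenTech code) integrates m*x'' = -k*x by introducing the intermediate velocity variable, exactly the trick used in the model above:

```python
import math

def pendulum(x0, v0, m, k, dt, t_end):
    """Integrate m*x'' = -k*x by rewriting it as two first-order ODEs,
    the same reduction the ACM model uses: dXdt = $X, m*$dXdt = -k*X."""
    def deriv(x, v):
        return v, -k * x / m
    x, v = x0, v0
    for _ in range(int(round(t_end / dt))):
        # classic 4th-order Runge-Kutta step on the (x, v) pair
        k1x, k1v = deriv(x, v)
        k2x, k2v = deriv(x + 0.5 * dt * k1x, v + 0.5 * dt * k1v)
        k3x, k3v = deriv(x + 0.5 * dt * k2x, v + 0.5 * dt * k2v)
        k4x, k4v = deriv(x + dt * k3x, v + dt * k3v)
        x += dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6
        v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
    return x, v
```

With m=1 and k=10 (the values in the ACM model), the exact solution is x(t) = cos(sqrt(10)*t), which the integration reproduces closely.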
Problem Statement: Which System, User, and Application Configuration Files should be Saved, Backed Up, and/or Transferred when Upgrading the MES Plant Operations Family of Product Installations for AORA and ATOMS to the aspenONE V11.0 Release.
Solution: The KB Article content documented below is intended to help you identify the important AORA and ATOMS product installation, application, and user-specific files that need to be saved and backed up during upgrades, and/or transferred to new server/client machines when migrating to new hardware, especially files located in the AspenTech installation, Program Files, or user-specific profile folders. In addition to referencing this KB Article for your V11.0 upgrades, it is highly advised that you download and review the V11.0 product installation, user, and configuration guides, which are posted for download on AspenTech's Support Website. When reviewing those documents, pay close attention to the chapters and sections on upgrades, including migration of the software from old to new machines, as well as any steps that should be completed prior to an upgrade. Important files to save, back up, and/or transfer when upgrading AspenTech installations on AORA and/or ATOMS servers and clients to the aspenONE V11.0 release: Keywords: None References: None
Problem Statement: Which System, User, and Application Configuration Files should be Saved, Backed Up, and/or Transferred when Upgrading the MES InfoPlus.21 Family of Products to the aspenONE V11.0 Release.
Solution: NOTE: For this KB Article, please download and refer to the attached PDF file named "Important Files to Save, Backup, and or Transfer for V11.0 MES InfoPlus.21 Family Upgrades.pdf". The content provided in the attached PDF is intended to help you identify the important product installation, application, and user-specific files that need to be saved and backed up during upgrades, and/or transferred to new server/client machines when migrating to new hardware, especially files located in the AspenTech installation or Program Files folders. Supporting information related to some of the more critical files is also provided in this article. In addition to referencing the content documented in the PDF for your V11.0 upgrades, it is highly advised that you download and review the V11.0 product installation, user, and configuration guides, which are posted for download on AspenTech's Support Website. When reviewing those documents, pay close attention to the chapters and sections on upgrades, including migration of the software from old to new machines. For any further questions related to the upgrades, including the files and supporting notes/instructions documented below, please contact AspenTech Support. Contact methods: By phone: https://esupport.aspentech.com/Contact_Phone By e-mail: [email protected] OR [email protected] Via a web case submission: https://esupport.aspentech.com/S_CaseSubmit Via a web chat: https://esupport.aspentech.com/S_ChatOpener Keywords: None References: None
Problem Statement: What settings and installation steps do you need to perform before opening your APS/MBO model in another language?
Solution: You do not have to switch the entire OS to the other language. For example, if you are currently on an English/US OS, you do not have to change the entire OS to Russian to run APS/MBO models in the localized Russian version. On W10E 64-bit US/UK English:
1. Install the x86 and x64 VC++ 2017 Redistributables from https://support.microsoft.com/en-us/help/2977003/the-latest-supported-visual-c-downloads
2. In W10 Settings > Time and Language, add a language; choose Russkiy.
3. Verify that a Russian keyboard was added (just in case, click ENG in the right-hand corner of the taskbar and check that the option to switch to "Russkiy" is displayed).
4. In Control Panel | Region: (1) on the "Formats" tab, change to Russian; (2) on the "Location" tab, leave the setting at United Kingdom (Cortana will keep functioning); (3) on the "Administrative" tab, change the system locale to Russian and restart (do not touch "Copy Settings" if you want to continue using the W10 OS displayed in English).
5. After rebooting, set the Aspen Language Translator to Russian.
6. Start APS and repeat all checks reported in the case documentation. Result: Good. All menus in Russian. The Russian equivalent of the Simulator | Schedule Property Bias… menu does not crash.
7. Test MBO Multi-Blend Optimization and the economic report. Result: Good, both translation-wise and regression-wise (the MBO V11 optimum is identical to the optimums obtained with MBO V8.8 and MBO V10.0).
Keywords: None References: None
Problem Statement: RadFrac (distillation column) is supported in Aspen Plus Dynamics, but the rigorous static head of a side draw is not considered in the simulation. This example illustrates how the user can apply the hydrostatic head of the side draw using a Flowsheet Constraint.
Solution: 1. After the steady-state column simulation is done, add a dummy 'Heater' block to represent the hydrostatic pressure of the side draw. As no heat duty is needed for this block, set the duty to '0' and give an estimated outlet pressure. 2. Run again, and export to Aspen Plus Dynamics as a flow-driven simulation. 3. The picture below indicates the height between the side-draw stage and the pump inlet (or the side draw can be fed to another unit operation). 4. The dummy heater block may have 'P_Drop' as a fixed variable. Change it to free, as an equation will be added for this variable in the next step. 5. Enter a flowsheet constraint using the code below. Note that the block and stream names need to be adjusted to the user's flowsheet; the side-draw stage is the 15th stage in this example.

CONSTRAINTS
  head as length(fixed, 12); // Height of side-draw stage - pump inlet
  Phead as pressure;
  uec1 as global realparameter; // acceleration of gravity
  ucf8 as global realparameter; // unit conversion factor between 'bar' and 'Pa'
  Phead = Blocks("COL").stage(15).rhoml*(head+Blocks("COL").stage(15).level)*uec1*ucf8; // density of liquid * g * h
equation
  Blocks("Static").P_drop = -Phead;
END

6. The hydrostatic head is now calculated for the side-draw stream. The user can adjust the head height in the LocalVariables table. Keywords: None References: None
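As a quick sanity check of the constraint equation, the following Python sketch (illustrative only; the water density and the 12 m height are assumptions, not values pulled from the example flowsheet) evaluates density * g * h with the same Pa-to-bar conversion that the ucf8 factor performs:

```python
G = 9.80665        # acceleration of gravity, m/s^2 (the "uec1" factor)
PA_TO_BAR = 1e-5   # unit conversion Pa -> bar (the "ucf8" factor)

def head_pressure(rho_kg_m3, height_m):
    """Hydrostatic head: P = rho * g * h, returned in bar."""
    return rho_kg_m3 * G * height_m * PA_TO_BAR
```

For example, a 12 m column of liquid at roughly water density (1000 kg/m3) adds about 1.18 bar to the side-draw stream.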
Problem Statement: The viscosity of liquid sulfur is unusual and shows a discontinuity around 320 °F (160 °C) because of the formation of allotropes. There is no viscosity correlation in Aspen Plus that can describe this phenomenon. https://en.wikipedia.org/wiki/Allotropes_of_sulfur#List_of_allotropes_and_forms Also, the Aspen Plus DB-Pure36 parameter for liquid sulfur viscosity is only suitable for temperatures above 368.42 °F. How can the user model the viscosity of liquid sulfur reasonably?
Solution: To describe the allotropes, enter two sulfur species in the component list, each with a different viscosity model, and name them S-ALPHA and S-BETA. Prepare a Chemistry that describes the sharp conversion from S-ALPHA to S-BETA, then run the liquid viscosity analysis. All the parameters are set in the attached Excel spreadsheet. For comparison, HYSYS Sulsim, which uses an empirical parameter, predicts the sulfur viscosity correctly. Keywords: None References: None
Problem Statement: How do I model blocks' elevation to apply hydrostatic pressure in Aspen Plus Dynamics?
Solution: In models with a liquid holdup, you have the option of representing the effect of the liquid static head on the liquid outlet pressure. This feature applies to these models: Decanter, Flash2, Flash3, Mixer, PetroFrac, RadFrac, RCSTR. For these models, in addition to the height of liquid in the vessel, you can include the height difference between the outlet of the block and the inlet of the downstream block by specifying the liquid outlet stream elevation change on the Configure form. You can switch this option on for individual blocks from their Configure form, or for all blocks at once. To include the height difference flowsheet-wide between the outlet of a block and the inlet of the downstream block: 1. In the Simulation Explorer, click the Globals folder, and in the Contents window, double-click DynamicsOptions to open the DynamicsOptions table. 2. Set the value of GlobalLiqHead to True. 3. Specify the height of the equipment in the "Veshead" variable. The "Phead" variable then indicates the hydrostatic pressure corresponding to "Veshead" + Level. By default, this option is turned off, for consistency with Aspen Plus, which cannot model this effect. You can also activate the option for an individual block. To do so, switch the block parameter LiquidHead to Yes for all models except RadFrac and PetroFrac; for RadFrac and PetroFrac use LiquidHeadRF and/or LiquidHeadRB, where static heads are counted only at the overhead system and the reboiler. For other models that do not have the parameter, or for a side stream from a column, you would have to use an extra mixer/heater block with a flowsheet constraint to account for elevation changes across the flowsheet. For this option, please refer to the following KB: How to apply hydrostatic head of side draw in Aspen Plus Dynamics? https://esupport.aspentech.com/S_Article?id=000048603 Keywords: Hydrostatic, Static Head, Elevation, Pressure effect, Height, Liquidhead References: None
Problem Statement: The Rankine cycle is commonly used to model thermal power plants. This example illustrates how the user can model the Rankine cycle of a 500 MW thermal power plant.
Solution: Please use this example for reference, and note that the operating conditions were not taken from real operating results; the user needs to modify the operating conditions based on the design data.
1) Property settings. Component: water. Property package: STEAMNBS. Flowrate: 179,754 kmol/hr of water.
2) P-H curve for STEAMNBS.
3) Main flowsheet. Pump (Pump): pressurizes the liquid water stream to 60 bar. Boiler (Heater): heats and vaporizes the inlet stream to 500 °C. Turbine (Compr): depressurizes the high-pressure steam and generates electric energy. Condenser (Heater): condenses the steam at vacuum pressure.

Stream            Units       1          2          3          4
Temperature       C           36.1668    36.56542   500        198.2379
Pressure          bar         0.06       60         60         2
Mass Flows        kg/hr       3238319    3238319    3238319    3238319
Molar Enthalpy    kcal/mol    -68.1095   -68.0793   -54.0355   -56.4273
Molar Entropy     cal/mol-K   -38.3055   -38.2917   -10.9405   -8.28201
Mass Enthalpy     kcal/kg     -3780.65   -3778.98   -2999.43   -3132.19
Mass Entropy      cal/gm-K    -2.12628   -2.12551   -0.60729   -0.45972
Molar Density     kmol/cum    55.15208   55.28994   0.979899   0.051583
Mass Density      kg/cum      993.5802   996.0638   17.65316   0.929281

Keywords: None References: None
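As a sanity check on the 500 MW target, the turbine power can be estimated by hand from the stream table above as mass flow times the enthalpy drop across the turbine (stream 3 to stream 4). A minimal Python sketch (illustrative only; it ignores mechanical efficiency):

```python
# Values copied from the stream table above; kcal -> kJ factor is 4.184.
m_dot = 3238319.0    # mass flow, kg/hr (constant around the cycle)
h_in = -2999.43      # mass enthalpy of stream 3 (turbine inlet), kcal/kg
h_out = -3132.19     # mass enthalpy of stream 4 (turbine outlet), kcal/kg

power_kj_per_hr = m_dot * (h_in - h_out) * 4.184
power_mw = power_kj_per_hr / 3600.0 / 1000.0   # kJ/hr -> MW
```

The result comes out close to 500 MW, consistent with the design target of the example.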
Problem Statement: With Aspen products already installed, a Windows major upgrade (for example, upgrading Windows 10 from version 1803 to 1809) may trigger Windows Installer resiliency (self-repair) when an Aspen application is launched on the new Windows. Root Cause: A Windows major upgrade backs up the old Windows to the C:\Windows.old folder, and as a result some files are not kept under the new Windows. The AspenTech installer delivers files under C:\Windows\Microsoft.NET\Framework\v2.0.50727\ and to the .NET 2.0 GAC, which are affected by this upgrade behavior.
Solution: Do a repair install with the original Aspen installer; this restores all required missing files to the new Windows. As a workaround, AspenTech also provides two zip files for different versions. Extract the zip file to a local folder, then run the batch file by right-clicking it and selecting "Run As Administrator". Retrieve Files From Windows.old for V8.4 to V10 products.zip — NOTE: for versions V8.4 to V10.x. Products affected: Aspen Production Record Manager, Aspen Administration Tools (64-bit), Aspen InfoPlus.21 Server (64-bit), Aspen Interfaces (64-bit), Aspen Process Explorer, Aspen Production Record Manager Server (64-bit), aspenONE Process Explorer (64-bit). Retrieve Files From Windows.old for V11 APRM.bat.zip — NOTE: for version V11 only. Product affected: Aspen Production Record Manager. Keywords: None References: None
Problem Statement: Instructions on how to model free-radical polymerization. The calculation of the moments (0th, 1st, and 2nd), MWN, MWW, and PDI should be a good reference for the user.
Solution: This example uses a very simple free-radical kinetic model to illustrate how polymerization kinetics work, assuming only a combination-termination reaction. The initiator decomposes to create two radicals; a radical then reacts with monomer to produce live polymer. Live polymer propagates and also turns into dead polymer by the termination reaction. A 'foreach' statement is used to calculate the moments, which are required to calculate polymer attributes such as MWN, MWW, and PDI.

Model FR
  N as integerparameter(400);
  I as conc_mole(initial,0.1); // initiator concentration
  R as conc_mole(initial,0); // radical concentration
  M as conc_mole(initial,20); // monomer concentration
  Pl([1:N]) as conc_mole(initial,0); // live polymer concentration
  Pd([1:N]) as conc_mole(initial,0); // dead polymer concentration
  P([1:N]) as conc_mole(0); // total polymer concentration
  Pd(1) : fixed, 0;
  Ki as realvariable(fixed,5,description:"Initiator decomposition");
  Kp as realvariable(fixed,5,description:"Propagation kinetic constant");
  Kt as realvariable(fixed,1,description:"Termination kinetic constant");
  $I = -Ki*I; // initiator decomposition
  $R = Ki*2*I - Kp*R*M; // 2*decomposition - propagation*radical*monomer
  $M = -Kp*sigma(R,Pl)*M; // monomer consumed by propagation
  $Pl(1) = Kp*R*M - Kp*Pl(1)*M - Kt*Pl(1)*sigma(Pl); // live polymer balance for P(1)
  for i in [2:N] do
    $Pl(i) = Kp*Pl(i-1)*M - Kp*Pl(i)*M - Kt*Pl(i)*sigma(Pl); // live polymer balance for P(2) and higher
    $Pd(i) = Kt*(sigma(foreach(j in [1:round(i/2)]) Pl(j)*Pl(i-j))); // dead polymer balance; e.g. dead P(5) is produced by combination termination of P(1)+P(4) and P(2)+P(3)
  endfor
  Z0 as realvariable; // 0th moment
  Z1 as realvariable; // 1st moment
  Z2 as realvariable; // 2nd moment
  MWN as realvariable(description:"Number average MW");
  MWW as realvariable(description:"Weight average MW");
  PDI as realvariable(description:"Polydispersity index");
  P = Pl + Pd;
  Z0 = sigma(P);
  Z1 = sigma(foreach(i in [1:N]) i*P(i));
  Z2 = sigma(foreach(i in [1:N]) i^2*P(i));
  MWN*Z0 = Z1;
  MWW*Z1 = Z2;
  PDI*MWN = MWW;
End

Keywords: Polymerization, Free-Radical, Moment, PDI References: None
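The moment calculations at the end of the model can be checked with a small standalone sketch. The following Python function (illustrative, not ACM code; degree of polymerization is converted to molecular weight with a user-supplied monomer weight) computes Z0, Z1, Z2 and the derived MWN, MWW, and PDI from an arbitrary chain-length distribution:

```python
def polymer_stats(conc, mw_monomer):
    """Moments of a chain-length distribution and the derived averages.
    conc[i] = concentration of chains of length i+1."""
    z0 = sum(conc)                                           # 0th moment
    z1 = sum((i + 1) * c for i, c in enumerate(conc))        # 1st moment
    z2 = sum((i + 1) ** 2 * c for i, c in enumerate(conc))   # 2nd moment
    dpn = z1 / z0    # number-average degree of polymerization
    dpw = z2 / z1    # weight-average degree of polymerization
    mwn = dpn * mw_monomer
    mww = dpw * mw_monomer
    return mwn, mww, mww / mwn   # PDI = MWW / MWN
```

For example, equal concentrations of 1-mer and 2-mer with a monomer weight of 100 give MWN = 150, MWW = 166.7, and PDI = 10/9.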
Problem Statement: Instructions on how to model a 3-phase flash calculation using a procedure.
Solution: Use the procedure "pFlash3" to calculate a 3-phase flash. Water and n-butanol are used in this example; the mixture separates into two liquid phases, a butanol-rich phase and a water-rich phase.

Model VLLE
  T as temperature(50,fixed);
  P as pressure(1,fixed);
  z(componentlist) as molefraction (fixed,description:"overall mixture mole fraction");
  y(componentlist) as molefraction (description:"vapor phase mole fraction");
  x1(componentlist) as molefraction (description:"liquid 1 phase mole fraction");
  x2(componentlist) as molefraction (description:"liquid 2 phase mole fraction");
  vf as vapfraction (description:"molar vapor phase fraction");
  liq2f as liqfraction (description:"molar liquid 2 phase fraction");
  hv as enth_mol_vap (description:"vapor phase molar enthalpy");
  hl1 as enth_mol_liq (description:"liquid 1 phase molar enthalpy");
  hl2 as enth_mol_liq (description:"liquid 2 phase molar enthalpy");
  call (y, x1, x2, vf, liq2f, hv, hl1, hl2) = pFlash3 (T, P, z, "Full");
END

Keywords: LLE, VLLE, Flash, 3-Phase, Procedure References: None
Problem Statement: How to draw "Absorption Equilibrium Line"?
Solution: To draw the absorption equilibrium line, which is used to decide the required number of stages and the feasibility of a design, the user can use Aspen Plus to draw "Equilibrium Lines". Note that the "Operating Line" must be set from the operating conditions (solvent flowrate, solute fraction). CO2 absorption into water is used as the example case. 1) Go to Property Sets, add RAT-MLFR, and choose Liquid, the component (usually the solute), and the base component (usually the solvent). RAT-MLFR is the ratio of mole fractions for the specified components; the following example uses CO2 as the solute and H2O as the solvent. RAT-MLFR gives the molar concentration on a solute-free basis. 2) Create another property set for the vapor mole fraction of the solute (CO2 in this example). 3) Go to Analysis | Create Generic Type and enter CO2, methane, and water. 4) Set the test conditions and vary the CO2 fraction. 5) Run the analysis, choose "Custom Plot", and set the X-axis to RAT-MLFR and the Y-axis to the CO2 fraction in the vapor phase. 6) The plot is created for the analysis. Keywords: Equilibrium Curve, Absorption, Absorption Curve References: None
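The idea behind the plot above can be sketched numerically. The following Python snippet (an illustrative sketch only; the Henry constant and pressure are assumed values, not Aspen Plus results) builds equilibrium-line points of y_CO2 versus the solute-free ratio X = x_CO2/x_H2O, i.e. the same quantity RAT-MLFR reports:

```python
# Assumed Henry's-law behaviour: y * P = H * x (dilute CO2 in water).
H = 1650.0   # assumed Henry constant for CO2 in water, bar (illustrative)
P = 10.0     # assumed system pressure, bar

def equilibrium_point(X):
    """X = x_CO2/x_H2O (solute-free liquid ratio, i.e. RAT-MLFR) -> y_CO2."""
    x = X / (1.0 + X)        # convert the ratio back to a mole fraction
    return H * x / P

# A few points of the equilibrium line (X, y*):
line = [(X, equilibrium_point(X)) for X in (0.0, 1e-4, 2e-4, 5e-4)]
```

Plotting these (X, y*) pairs against the operating line (set by the solvent flowrate and solute fractions) gives the familiar stage-stepping diagram.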
Problem Statement: Instructions on how to model a phase-equilibrium calculation with the NRTL model (the same as Aspen Properties). This can be adapted to other property methods if the user has sufficient understanding of each method.
Solution: This example model uses water and methanol for reference, using the same PLXANT vapor-pressure correlation as Aspen Plus. The vapor phase is ideal and the activity coefficients are calculated from the NRTL model.

Model NRTL2
  /* This example uses water and methanol for reference.
     To apply it to other components, change the vapor pressure parameters. */
  N as integerparameter(100); // number of points
  dX as hidden realparameter;
  dX : 1/N;
  T([0:N]) as hidden temperature;
  P as pressure(fixed,1); // system pressure
  X1([0:N]) as hidden molefraction(fixed);
  X2([0:N]) as hidden molefraction;
  Y1([0:N]), Y2([0:N]) as hidden molefraction;
  For i in [0:N] do
    X1(i) : dX*i;
  Endfor
  // Antoine vapor pressure; same as the PLXANT model in Aspen Plus
  PL1([0:N]) as hidden pressure;
  PL2([0:N]) as hidden pressure;
  PL1_1 as realvariable(62.136075,fixed);
  PL1_2 as realvariable(-7258,fixed);
  PL1_5 as realvariable(-7.3037,fixed);
  PL1_6 as realvariable(4.17e-6,fixed);
  PL1_7 as realvariable(2,fixed);
  PL2_1 as realvariable(61.7911,fixed);
  PL2_2 as realvariable(-7122.3,fixed);
  PL2_5 as realvariable(-7.1424,fixed);
  PL2_6 as realvariable(2.89e-6,fixed);
  PL2_7 as realvariable(2,fixed);
  PL1 = exp(PL1_1 + PL1_2/(T+273.15) + PL1_5*LOGe(T+273.15) + PL1_6*(T+273.15)^PL1_7);
  PL2 = exp(PL2_1 + PL2_2/(T+273.15) + PL2_5*LOGe(T+273.15) + PL2_6*(T+273.15)^PL2_7);
  Aij as realvariable(0,fixed);
  Aji as realvariable(0,fixed);
  Bij as realvariable(0,fixed);
  Bji as realvariable(0,fixed);
  C as realvariable(0.3,fixed);
  Tau1([0:N]) as hidden realvariable;
  Tau2([0:N]) as hidden realvariable;
  G1([0:N]) as hidden realvariable;
  G2([0:N]) as hidden realvariable;
  Gamma1([0:N]) as hidden realvariable;
  Gamma2([0:N]) as hidden realvariable;
  GXS([0:N]) as hidden enth_mol(-1);
  // Flash calculation
  Tau1 = Aij + Bij/(T+273.15);
  Tau2 = Aji + Bji/(T+273.15);
  G1 = exp(-(Tau1*C));
  G2 = exp(-(Tau2*C));
  X1 + X2 = 1;
  Gamma1 = exp(X2^2*(Tau2*(G2/(X1+X2*G2))^2 + G1*Tau1/(X1*G1+X2)^2));
  Gamma2 = exp(X1^2*(Tau1*(G1/(X2+X1*G1))^2 + G2*Tau2/(X2*G2+X1)^2));
  Y1 = X1*Gamma1*PL1/P;
  Y2 = X2*Gamma2*PL2/P;
  Y1 + Y2 = 1;
  GXS = 8.314*T*(X1*loge(X1*Gamma1+1e-8) + X2*loge(X2*Gamma2+1e-8)); // note: the second term uses Gamma2 (the original text had Gamma1, which appears to be a typo)
End

1. Adjust the NRTL binary parameters and the pure-component Antoine vapor pressure parameters. Aij, Aji, Bij, Bji, C: same as the Aspen Plus NRTL BIPs. PL1_1,2,5,6,7: 1st component's PLXANT parameters. PL2_1,2,5,6,7: 2nd component's PLXANT parameters. 2. Check the TXY, YX, and Gibbs energy of mixing diagrams. Keywords: Flash, VLE, TXY, Aspen Custom Modeler References: None
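The binary NRTL activity-coefficient equations from the model above can be reproduced in a few lines of standalone Python (an illustrative sketch mirroring the ACM equations, not AspenTech code), which is handy for spot-checking parameter sets:

```python
import math

def nrtl_gamma(x1, tau12, tau21, alpha):
    """Binary NRTL activity coefficients, same equations as the ACM model:
    tau12/tau21 correspond to Tau1/Tau2 = Aij + Bij/T, alpha to C."""
    x2 = 1.0 - x1
    G12 = math.exp(-alpha * tau12)
    G21 = math.exp(-alpha * tau21)
    ln_g1 = x2 ** 2 * (tau21 * (G21 / (x1 + x2 * G21)) ** 2
                       + tau12 * G12 / (x2 + x1 * G12) ** 2)
    ln_g2 = x1 ** 2 * (tau12 * (G12 / (x2 + x1 * G12)) ** 2
                       + tau21 * G21 / (x1 + x2 * G21) ** 2)
    return math.exp(ln_g1), math.exp(ln_g2)
```

Two quick consistency checks: with all taus zero the mixture is ideal (both gammas equal 1), and at infinite dilution ln(gamma1) reduces to tau21 + tau12*exp(-alpha*tau12).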
Problem Statement: Many asymmetric monomers create polymers with different structures (for example, polypropylene). This structure affects the properties of the polymer product. The structure is categorized as isotactic, syndiotactic, or atactic by its shape, a property often called 'tacticity'. Can we differentiate isotactic and syndiotactic polymer from the polymer attributes?
Solution: We do not have specific reactions or component attributes to track syndiotactic or isotactic polymer. Generally, specific types of catalyst result in either syndiotactic or isotactic polymer, with small amounts of atactic polymer produced by side reactions. For example, polypropylene by Ziegler catalyst forms isotactic polymer, while polystyrene by metallocene catalyst forms syndiotactic polymer. The reactions in Aspen Polymers track the total polymer, with the special reaction for Atactic-propagation (ATACT-PROP) and the atactic fraction / flow attributes (ATFRAC / ATFLOW) tracking the side reactions leading to atactic polymer. Keywords: None References: None
Problem Statement: What is the purpose of Table PrProcessCapacityLimits?
Solution: The new-format output database table PrProcessCapacityLimit contains process capacity IDs and related information. Description of the table: The process limits information from this table can be observed in the Process Limit Summary; for this example we used the Volume Sample Model Base Case. Keywords: None References: None
Problem Statement: This KB article explains how to expand Aspen Watch (AW) history retention to avoid losing data.
Solution: To increase history retention on the Aspen Watch server, add new file sets in InfoPlus.21 (IP.21). Aspen Watch uses custom records inside IP.21 to store the Advanced Process Control (APC) information. The records start with AW_ and there are several created for this purpose. The data is kept in the following repositories: TSK_DHIS, AW_AGGH, AW_EVTH. From an IP.21 standpoint, you can store additional data online by adding file sets to the appropriate repository and then stopping and starting the database. The system will then use those new file sets instead of going back and overwriting existing ones. You will need to increase the available disk space on the server beforehand. You can create the file sets with the AW collection running, but they will show in grey. Right-click the repository and select Add File Sets. While they are grey, they cannot be used; only after restarting IP.21 does the file set color change to green, making them available. Do this during a maintenance window agreed with operations. Keywords: Aspen Watch, Expand, History retention References: None
Problem Statement: What is the allowed length for variable/object names used in the Aspen Advanced Process Control (APC) software family?
Solution: The guidelines for naming variables/objects are detailed below by APC product. The AW limits in parentheses apply if the application needs to run with an Aspen Watch configuration.

DMCplus
- Controller name: 1 to 12 characters (AW limit: same as DMCplus limit).
- Subcontroller name: 1 to 16 characters (AW limit: same as DMCplus limit).
- Independent name: 1 to 12 characters (AW limit: same as DMCplus limit).
- Dependent name: 1 to 12 characters (AW limit: same as DMCplus limit).
- User-defined entry names: 1 to 16 characters (AW limit: same as DMCplus limit).
- TestGroup name: 1 to 16 characters (AW limit: same as DMCplus limit).
- DCS tagname: 1 to 56 characters.

DMC3 Builder
- Controller name: no limitation (AW limit: 1 to 16 characters).
- Subcontroller name: no limitation (AW limit: 1 to 16 characters).
- Independent name: 1 to 12 characters (AW limit: same as APC Builder limit).
- Dependent name: 1 to 12 characters (AW limit: same as APC Builder limit).
- User-defined entry names: no limitation (AW limit: 1 to 16 characters).
- TestGroup name: 1 to 16 characters (AW limit: same as APC Builder limit).
- DMC3 Builder calculation name: 1 to 32 characters.
- DCS tagname: 1 to 256 characters.

IQ applications
- IQ model name: no limitation on the number of characters.
- IQ application name: 1 to 14 characters.
- IQ independent name (in IQM file): 1 to 33 characters.
- IQ dependent name (in IQM file): 1 to 33 characters.
- DCS tagname: 1 to 56 characters.

Aspen Watch
- Controller name: 1 to 16 characters.
- Subcontroller name: 1 to 16 characters.
- Independent name: 1 to 16 characters.
- Dependent name: 1 to 16 characters.
- Miscellaneous tag name: 1 to 16 characters.
- PID loop name: 1 to 16 characters.
- DCS tagname: 1 to 39 characters for IoGetDef; 1 to 79 characters for IoLongTagGetDef.

The following common error message pop-ups indicate a discrepancy in the length of a name and can be used for troubleshooting: 1. "Name is too long" 2. "Index is out of bound of arrays" 3. "Bad assignment to subcontroller index 0" Keywords: Name length DMC3 Builder References: None
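When generating tag or controller names in bulk, it can help to validate them against these limits before deployment. The following Python helper is hypothetical (it is not part of any AspenTech tool) and encodes only the Aspen Watch limits quoted above:

```python
# Hypothetical helper: pre-check candidate names against the Aspen Watch
# limits listed above (all 1 to 16 characters for these object kinds).
AW_LIMITS = {
    "controller": 16,
    "subcontroller": 16,
    "independent": 16,
    "dependent": 16,
    "pid_loop": 16,
}

def check_name(kind, name):
    """Return (ok, message) for a proposed Aspen Watch object name."""
    limit = AW_LIMITS[kind]
    if not 1 <= len(name) <= limit:
        return False, f"'{name}' must be 1 to {limit} characters for a {kind}"
    return True, "ok"
```

Running such a check up front avoids the "Name is too long" style pop-ups listed above appearing only at deployment time.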
Problem Statement: How to handle nonlinear relationships for IQ models.
Solution: There are four model types in IQModel: Linear PLS, Fuzzy Non-Linear PLS, Hybrid PLS - Neural Network, and Monotonic Neural Network. Linear PLS creates linear relationships between the dependent and the independent(s), but a transformation can be used to build a nonlinear model with Linear PLS. The transformation works when the nonlinearity is univariate. For example, a valve position is a nonlinear univariate output: all the inputs are linear to flow, and flow is nonlinear to the valve. So when we transform the valve using its flow equivalent, the linear model works for all MVs. In other words, the output (valve position) follows the same nonlinear function across all inputs. The melting index and density of a polymer product (polyethylene or polypropylene) are multivariate nonlinear outputs, so the nonlinearity between the output and each input is different. In this case, simply transforming the output and fitting the transformed output to a linear model of the inputs will not work; that is why we use the BDN model to fit the MI and density outputs. We recommend Linear PLS for the majority of thermodynamic properties (boiling points, flash point, composition). If you think the nonlinearity is univariate, transform the data and use Linear PLS. If you think the nonlinearity is not univariate, use BDN (refer to this KB: https://esupport.aspentech.com/S_Article?id=000096506 to build it in DMC3 Builder as a MISO model and export the iqr file). We do not recommend any neural network model (Monotonic or Hybrid PLS) in IQModel right now; those are old neural net models and they are unreliable for predicting any output outside the training region. Keywords: IQ, Nonlinear, Model References: None
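The valve-position example can be demonstrated with a toy calculation. In the Python sketch below (illustrative only; the square-root valve characteristic and the flow model flow = 2u + 3 are assumptions, not plant data), transforming the measured valve position back to its flow equivalent makes an ordinary linear fit recover the underlying model exactly:

```python
import math

def valve_from_flow(flow):
    return math.sqrt(flow / 10.0)      # assumed univariate valve characteristic

def flow_equivalent(valve):
    return 10.0 * valve ** 2           # inverse transform applied to the output

# "Plant" data: flow is linear in the input u, valve is nonlinear in flow.
u = [0.0, 1.0, 2.0, 3.0, 4.0]
valve = [valve_from_flow(2.0 * ui + 3.0) for ui in u]

# After transforming the output, a plain least-squares line fits perfectly.
y = [flow_equivalent(v) for v in valve]
n = len(u)
mu, my = sum(u) / n, sum(y) / n
slope = (sum((ui - mu) * (yi - my) for ui, yi in zip(u, y))
         / sum((ui - mu) ** 2 for ui in u))
intercept = my - slope * mu
```

The fit recovers slope 2 and intercept 3, i.e. the linear flow model, which is exactly why the transform-then-Linear-PLS approach works when the nonlinearity is univariate.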
Problem Statement: When logging in to PCWS (APC Web Server), the following error appears: Retrieving the COM class factory for component with CLSID {0270B86F-F70A-443A-85C5-725DDFF3E452} failed due to the following error: 8007007e The specified module could not be found. (Exception from HRESULT: 0x8007007E).
Solution: This error is usually caused by corrupted or missing DLL files. Executing a repair install can fix this issue; please also apply all pending Windows Updates. If the issue is not fixed by the repair installation, please contact an AspenTech technical consultant. Keywords: APC PCWS COM class factory 0x8007007e Repair Install References: None
Problem Statement: Water and ethanol combine into an azeotropic mixture, which makes their separation process limited to the azeotropic point. In industry, an agent may be employed to alter the activity coefficient of the compounds of the mixture, in order to eliminate the azeotrope. This agent is referred to as an entrainer and may be modeled in Aspen Plus.
Solution: The example attached demonstrates the use of cyclohexane as a pseudo-binary component (entrainer) added to the water/ethanol mixture. For this case, Fig 1. shows the vapor-liquid equilibrium diagram (and the azeotropic point) for water/ethanol mixture with no entrainer. As we add cyclohexane to the mixture, it is possible to see the elimination of the azeotrope. Fig 1. y-x diagram for ethanol/water with no entrainer Fig 2. y-x diagram for ethanol/water with 50% cyclohexane as the entrainer. Note: it is important to select a proper thermodynamic package that can handle the non-ideality of the mixture in question. For this case, CPA was selected. Keywords: Pseudo-binary system; Entrainer; Cyclohexane; Ethanol; Water; References: None
Problem Statement: These error can be occurred for the issue: Error 1: Error : Input message does not match required format ( connection string Data Source=KDCVGDSQL03P;Initial Catalog=MesDataMart;Integrated Security=True, ) Error 2: Error: The '[2020-01-01]' member was not found in the cube when the string, [Calendar].[YearQuarterMonthDate].[Date].[2020-01-01], was parsed. MdxScript(SupplyChain) (17, 8) The dimension '[Today]' was not found in the cube when the string, [Today], was parsed. MdxScript(SupplyChain) (19, 8) The dimension '[Today]' was not found in the cube when the string, [Today], was parsed. MdxScript(SupplyChain) (21, 8) The dimension '[Today]' was not found in the cube when the string, [Today], was parsed.
Solution: This issue is caused by one of the tables in the ARF database, which cannot accept the year 2020 in its date format and throws an error. To fix this issue, please download the attached script and execute it as follows: 1. Download and copy the dateimedim.sql file to a temp folder. 2. Open SQL Server Management Studio and connect to the SQL Server instance of the ARF server. 3. With the ARF database in context, open this script and execute it. Keywords: Aspen Report Framework, ARF, SQL Server, AORA References: None
Problem Statement: A user's PIMS-EE role determines their access level to the model. There are typically four types of roles in PIMS-EE: 1. Administrator - performs all modeling tasks, such as adding a user, copying a model, renaming a model, granting access, etc. 2. Modeler - has full privileges to all models, cases, views, and modeling tasks, for example creating a model. 3. Planner - full privileges to models and cases for which he/she is the owner, and models and cases to which he/she has been granted access. In addition, the planner can perform a subset of modeling tasks, e.g. modifying a model. 4. Auditor - read-only privileges to models and cases to which he/she has been granted access. PIMS-EE Role Based Security is configured in Aspen Framework (AFW). Once all the roles are configured, an Administrator or Modeler has full access to the model, while a Planner or Auditor will not see the model unless granted the permissions.
Solution: Someone with the Administrator role will often act as the permissions controller. The best practice is to let the Administrator grant permissions to Planners/Auditors through the Case Manager in PIMS-EE. To do that, the user (as the Administrator of PIMSEE) has to log in and run PIMS-EE. On this screen, at the bottom right, it shows your role as PIMSEE Administrator. At the top middle of the screen, from the View drop down menu, select Case Manager. Near the bottom right, click the green arrow to expand the permissions window. All the users trying to access that model will be in the list. The administrator can change the permission levels for each user. Once the permissions are granted, the Planner/Auditor will be able to see the model or cases. The Auditor is only allowed to view the data, while the Planner is allowed to change the model or cases. Keywords: PIMS-EE Roles Permissions Security References: None
Problem Statement: How do I save a model from one database to another database over a server in PIMS-EE?
Solution: First, the user has to configure a data source pointing to the source database and a data source pointing to the target database. If the target database is the same as the source database, the user only needs to configure one data source; otherwise, two data sources are required. In this example, we assume there are two SQL Server databases. We need to copy a Volume Sample model from one database to the second database. The first step is to configure the data sources so that they point to the different databases on the network: 1. 'Pimsee isc' - data source pointing to the source database 2. 'Pimsee kar' - data source pointing to the target database. The following screen shows the two data sources already configured. For more details about how to configure the data sources, refer to solution document 127892. From the PIMS-EE menu, File | Open Database, in the drop down menu, select 'pimsee isc'. In the drop down menu field of Model, choose 'Volume Sample'. From the PIMS-EE menu, select Model | Copy. PIMS-EE will open a window called 'Copy a Model': choose Target Database 'pimsee kar', type the name 'test' under the column 'Target Folder', type the name 'Vol_test' under the column 'Target Name', and press the 'Copy' button. Once it has finished, PIMS-EE will be pointed to the target database with a new model Vol_Test opened. The 'Copy a Model' function can only copy to a new model name; it cannot overwrite an existing model. Keywords: Copy model PIMS-EE Datasource References: None
Problem Statement:
Solution: 128225 discusses Local Databases and describes how to create a 'Local Database' under a SQL instance named 'PIMSEE'. If the user does not have MSDE, this is a way to create a PIMS database under any instance. This example demonstrates how to create an Aspen PIMS Enterprise Edition (PIMSEE) database under any SQL instance name, such as 'SQLEXPRESS'. Solution 1. Create a new database. After logging into the SQL Server instance SQLEXPRESS using Windows Authentication, right-click Databases in the following window and select 'New Database'. 2. In the 'New Database' window, fill in the 'Database name' field. In this example it is 'pimsee_36', indicating that the database version is 1.36. Click 'OK'. 3. Open the SQL query from the PIMS installation directory, 'C:\Program Files\AspenTech\Aspen PIMS\Enterprise Configuration\Database\PIMSEE_SQLServer_1.36_Create.sql', make sure the database 'pimsee_36' is chosen in the database field on the menu bar, then execute it. 4. In order to use it as a PIMSEE database, the user needs to create a new login named 'pimsee' with password 'pimsee' using 'SQL Server and Windows Authentication Mode'. From SQL Server's main window, expand 'Security', right-click 'Logins', select 'New Login', and fill in the window as shown below. Enter the password 'pimsee' and default database 'master'. Click 'OK'. 5. Then grant database role ownership for 'pimsee_36' by choosing 'User Mapping' from the left side of the window. Check 'pimsee_36' in the top right window to bring the highlight to the bottom window, then check the box 'db_owner'. Click 'OK'. 6. Change the login mode to 'SQL Server and Windows Authentication Mode'. Right-click the instance name, select 'Properties' | 'Security', and check the box 'SQL Server and Windows Authentication Mode', shown below. Note: before running PIMSEE, make sure that the database version is compatible with the PIMS version. A detailed list of version compatibility can be found in solution 128220. Keywords: SQL Server Database PIMSEE References: None
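The login and role-mapping steps above can also be performed with T-SQL instead of the SSMS dialogs. This is a minimal sketch based only on the login name, password, and database name used in this article; it assumes SQL Server 2005-era syntax and is not an official AspenTech script.

```sql
-- Create the 'pimsee' login with the password PIMSEE expects (per the article)
CREATE LOGIN pimsee WITH PASSWORD = 'pimsee', DEFAULT_DATABASE = master;

-- Map the login into the pimsee_36 database and grant the db_owner role
USE pimsee_36;
CREATE USER pimsee FOR LOGIN pimsee;
EXEC sp_addrolemember 'db_owner', 'pimsee';
```

Running this in a Management Studio query window is equivalent to checking the 'db_owner' box under User Mapping in the login dialog.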
Problem Statement: Can OLI be used when a Petroleum Assay is present?
Solution: Yes, OLI does support petroleum assays in its code. Aspen Plus petroleum assays are also supported in the OLI code. There are two options: Option 1: OLI Assays The chemistry model can be generated in OLI using OLI's thermodynamic methods for petroleum assays. OLI uses the same API procedures as others. From the pseudocomponents created in this manner, the aqueous phase properties that are needed are generated. When interfacing to Aspen Plus, the user will need to provide alias names for each pseudocomponent cut. When using Aspen Plus, only the amount of the assay is entered for each stream, not the individual pseudocomponents. When solved, the assay is partitioned amongst its pseudocomponents. Option 2: Aspen Assays It is also possible to use the Aspen Plus generated pseudocomponents directly in the OLI code. To do this, the OLI Chemistry model must already be generated and included in the Aspen Plus flowsheet. Then, when the pseudocomponents are created in Aspen Plus, the thermodynamic data for the pseudocomponents is retrieved and the OLI model regenerated. This method seems simpler, but the thermodynamic bases for the two options are slightly different. Option 1 creates a more complete thermodynamic package, but Option 2 allows users to use more familiar control features. KeyWords oli interface assay Keywords: None References: None
Problem Statement: The Aspen PIMS Enterprise Edition data source is a linkage between the Aspen PIMS Enterprise Edition enterprise database and the Aspen PIMS application. Configuring this is one of the most important steps for the Aspen PIMS Enterprise Edition application. How do I configure the data source?
Solution: In this example, we use a SQL Server database. 1. From PIMS-EE, under menu Tools | Configure Enterprise Data Sources, the following screen will pop up. 2. Click the 'Add' button and the following screen appears. In the field under 'Data Source Configuration', type the name of your new data source. In this example, we typed 'PIMS isc'. Then click the button with three dots '...'. 3. Now the window 'Data Link Properties' appears. In the first field, enter a server name on the network. Note that the Aspen PIMS Enterprise Edition database server has to be configured with user name 'pimsee' and password 'pimsee'. Also, the checkbox 'Allow saving password' needs to be checked. In field 3, 'Select the database on the server', choose a database name created on the server. Make sure the database version matches the Aspen PIMS Enterprise Edition version. For example, database version 1.36 is for Aspen PIMS Enterprise Edition v7.1, while database version 1.35 is for Aspen PIMS Enterprise Edition 2006.5. 4. From the Aspen PIMS Enterprise Edition menu, choose File | Open Database and choose the data source name that you just created, i.e. 'PIMS isc'. Now, you should see the model on the server's database in the field 'Model'. Keywords: PIMSEE Datasource Database Create References: None
Problem Statement: I am trying to model a gas sweetening process and want to select Ucarsol as the Amine. This component is not compatible with the Amine fluid property package. How can I model this system?
Solution: Ucarsol (C5H13NO2) is a proprietary solvent from Dow. While it is available in Aspen HYSYS, it is only compatible with certain property packages, such as Peng-Robinson Stryjek-Vera (PRSV), Lee Kesler Plocker (LKP), Generalized Cubic EOS (GCEOS) and all the activity models. The latest findings by Oil Field DBR (Schlumberger) suggest that Ucarsol can be modeled with 35-40 wt% MDEA (Methyl Diethanolamine) and 5-10 wt% DEA. This mixture can be modeled in Aspen HYSYS using the Amines Property Package with MDEA (Methyl Diethanolamine) and DEA (Diethanolamine) as components. If desired, adjusting the H2S and CO2 efficiencies (or tray dimensions) will allow the user to match the results with available experimental or plant data. Key Words: Ucarsol, MDEA, DEA, Amines Keywords: None References: None
Problem Statement: Example with a mix of MDEA and piperazine as activator
Solution: Attached is an example file with Piperazine which is supported by the DBR Amine package in Aspen HYSYS. To add piperazine to the model do the following: 1. Go to Basis Environment | Fluid Package tab and click Add. 2. Select COM Thermo Radio button at the top 3. Under COM Thermo, select DBR amine package. 4. Then select the Thermodynamic Model for Aqueous Solutions (Select Li-Mather) and click OK. 5. Now next to the Component List Selection click the "View" button and add H2O, CO2, MDEA and Piperazine to the Component List. You should add H2O, CO2 (or H2S), and MDEA to the component list, otherwise you will get a warning. Keywords: Piperazine, MDEA, DBR amine package References: None
Problem Statement: How do I create a PIMSEE local database?
Solution: A PIMSEE local database is a PIMS input database created with MSDE (Microsoft SQL Server Desktop Engine) and maintained on a user's local machine. The user has to create a SQL instance named 'PIMSEE' for any local database. Refer to solution 128271, 'How to create a PIMSEE instance in SQL server', for details. This example demonstrates how to create a local database in SQL Server. First, in the PIMSEE interface menu, go to File | Local Database | Create. In the window 'Create Local Database', enter a name for the local database; in this example, it is 'pimsee_test'. Then click 'OK'. Now from menu File | Open Database, there is a database named 'pimsee_test (local)' shown in the drop down list. The user can also check this database from SQL Server by logging into the 'PIMSEE' instance. Open SQL Server Management Studio, entering in the server name field '<local machine name>\PIMSEE' (here, the local machine name is 'KEENANX10') with Windows Authentication. Expand the 'Databases' tree, and there is the local database 'pimsee_test' which was just created from the PIMSEE application. Keywords: SQL server database local database instance References: None
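As a quick alternative to browsing the tree in Management Studio, the presence of the new local database can be verified with a query against the PIMSEE instance. A minimal sketch, assuming the database name 'pimsee_test' used in this example:

```sql
-- List databases on the PIMSEE instance whose name matches the one just created;
-- one row returned means the local database was created successfully
SELECT name
FROM sys.databases
WHERE name = 'pimsee_test';
```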
Problem Statement: In Aspen PIMS Enterprise Edition (PIME-EE), when you try to point to a PIMS-EE database from SQL server, a message, 'The ConnectionString property has not been initialized', pops up. What does that mean? And how to resolve it?
Solution: In order to run PIMSEE, the PIMS version needs to match a compatible database version. Below is the list:

PIMS version - Database version
7.2 - 1.37
7.1 - 1.36
2006.5 - 1.35
2006 - 1.35
2004 - 1.35

The above warning message will pop up when the PIMS version and database version are incompatible. For example, suppose you are using PIMS v7.1 and the database version is 1.37. In PIMS-EE, from menu File | Open Database, choose a database that is version 1.37; the warning message 'The ConnectionString property has not been initialized' will show. Below, we will show you how to check which database version you need and what version your database has. How to check which database version you need: check the directory C:\Program Files\AspenTech\Aspen PIMS\Enterprise Configuration\Database\ and note the latest file version. In the next example, the window shows that the required database version is 1.36. Note: please do not execute these files unless there is a reason. How to check what version your database has: to check the version of an existing database, go to the SQL Server database and look for the table dbo.EC_DATABASEVERSION. Right-click on the table name and, from the drop down list, choose 'Open Table'. On the right side window, the entry in the first row under the column 'DATABASEVERSIONMINOR' shows the database version number. In the following screenshot, the database version is 1.36. Keywords: PIMSEE Compatible version database References: None
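The same check can be done with a query instead of opening the table in the designer. A minimal sketch, using only the table and column names given in this article:

```sql
-- Shows the database version number recorded for this PIMS-EE database
-- (table and column names from the article)
SELECT DATABASEVERSIONMINOR
FROM dbo.EC_DATABASEVERSION;
```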
Problem Statement: Is it always required to configure data source for Aspen PIMS Enterprise Edition (PIMSEE)?
Solution: A data source is not always needed for the PIMSEE application; it depends on where your database is located. The PIMSEE database can reside on either a server or the local machine. If it is located on a local machine, we call it a local database. We are using a SQL database for this example. If the database is on the server, it is necessary to configure a PIMS-EE data source. A PIMS-EE data source is a linkage between the server database and the PIMS application. For details of how to configure a data source, please refer to solution 127892. If it is a local database, you can skip the procedure of configuring a data source. In PIMSEE, from menu File | Open Database, there is a list of database names under the dropdown menu, shown in the next screenshot. In this list, any database ending with '(local)' indicates that it is a local database. The rest, such as 'pimsee isc' and 'pimsee kar', are the data source names configured to link server databases to the PIMS application. Keywords: PIMSEE Data source configuration SQL server References: None
Problem Statement: How to test Aspen Cim-IO Store & Forward (S&F).
Solution: During normal operation, S&F operates in the background by checking the connection between the Aspen Cim-IO client processes and the Aspen Cim-IO server processes. If the connection is broken, data is buffered in the store file on the Aspen Cim-IO server until the connection is re-established. Testing S&F 1. Select a tag or group of tags to monitor in Aspen InfoPlus.21 and observe normal data collection. 2. Use the Aspen InfoPlus.21 Manager to stop the Cim-IO asynchronous client task (TSK_A_XXX) or the Cim-IO unsolicited client task (TSK_U_XXX) if used. 3. Review the tag(s) selected above and make a note of the IP_Input_Value and IP_Input_Time. The values in these fields are the last real-time updates from Aspen Cim-IO. 4. On the Aspen Cim-IO server, run ..\Cim-IO\code\CimIOSFMonitor.exe to monitor the activity of the store file. 5. Wait for a few minutes, note the time, and restart TSK_A_XXX or TSK_U_XXX. 6. After the connection is re-established, the IO_FWD_ASYNC_STATUS or IO_FWD_UNSOL_STATUS fields in the Aspen Cim-IO device record should indicate "IN PROGRESS" or "COMPLETE". 7. After the fields above show "COMPLETE", review the tag(s) history. Data covering the period when the connection was down should have been forwarded to the tag(s), and normal data collection resumed. If the S&F test did not function as expected, review the configuration, which is outlined in the Aspen Cim-IO User's Guide. A complete clean start of Cim-IO per knowledge base ID 103176 may also be recommended. KeyWords: cimio store forward Keywords: None References: None
Problem Statement: How do I convert a model and connect to PIMS-EE model?
Solution: Procedure to convert a model and connect to a PIMSEE model 1. Back up the model before beginning this procedure. At the end of the procedure, the original spreadsheets will not be altered. However, the model file *.pimx will be altered, meaning that one will not be able to go back to the original state of the model. 2. Select menu Run | Save Model to Database o If a global model, each model will be saved to the database. 3. Select menu Run | Save Model to Extended Tag Format Spreadsheet 4. Do this individually for each model if a global model. o Each will produce an output spreadsheet file called PimsInputImage.xls. 5. On the Model Tree, right-click on the Tables node. Select Suppress Whole Workbook o Do this individually for each model if a global model. 6. On the Model Tree, right-click on the Tables node. Select Add Tables in a Workbook. In the dialog, select the PimsInputImage.xls. o Do this individually for each model if a global model. o If there are multiple CASE and/or EXPERT tables, one might have to attach these explicitly. 7. On the Model Tree, select General Model Settings | Input Database tab. o For Database Type select PIMS-EE Connection. o Select the desired connection from the list of connections. This should be the same connection used when saving the model to the database. o For the Model Id, click the ellipses ([...]) and in the dialog Select Database Model ID, select the model that was just converted. o For the PIMS-EE Spreadsheet Options, choose the second option, Keep all existing spreadsheets. o Click OK 8. Again on the Model Tree, select General Model Settings | Miscellaneous tab. o Use Extended Tags is already checked (and cannot be changed). Place a check in the Preserve Original Tags checkbox. We can do this since we know that this model uses standard tags in the extended tag form. Internally, PIMS converts extended tags to an alias that conforms to standard tags and uses that to build the matrix.
With the preserve checkmark, PIMS uses the original tag as the alias. This is beneficial for the user since some messages use the alias tag. To understand what a message is trying to say, one would otherwise have to look up the alias tag in a translation file, such as ExtendedTagMpsProb.Mps. This file can be as big as sixteen megabytes and is cumbersome to work with. o Click OK 9. The model is now connected as a PIMSEE (or database, sometimes called Mixed-Mode) model. One can get visual confirmation that this is a PIMSEE model by looking at the icon for the Tables on the model tree. It should be the PIMSEE icon and not the tables icon. Notes: o ...the database tables are in extended tag form and the attached spreadsheet must also be in extended tag form. In other words, one cannot mix standard tags with extended tags. o ...if a spreadsheet is enabled, PIMS will use the spreadsheet and not the database table. o ...if there is a database table that one would like to disable, attach an empty spreadsheet and enable it. Please refer to solutions 128277 and 128271 for creating a PIMSEE database in SQL, and to solution 127892 for creating a data source in PIMSEE. Keywords: PIMS-EE Partial PIMS-EE tables suppress procedure convert extended tags References: None
Problem Statement: How can the user set overhead stream purity as the end condition in a batch distillation process?
Solution: In one operation step, the user can go to the End Condition tab and select "Trigger value" as the step end condition. The example screenshot below demonstrates how the user can select the variable location (Distillate receiver), variable (overall mole fraction), and 99% mole basis methanol purity. Keywords: Purity, End condition, Trigger value, Overall Mole fraction. References: None
Problem Statement: What are Aspen PIMS Enterprise Edition (PIMSEE) local database features?
Solution: In PIMSEE, there is a special database type, called a 'Local Database', which should be located on your local machine under a special instance name called 'PIMSEE'. Any PIMSEE database created under this instance is a local database. If it is a local database, PIMSEE will recognize it; there is no need to configure any data source. For example, in the next screen, there are two PIMSEE databases, 'pims_db' and 'pimsee_test', under the instance 'PIMSEE' in SQL Server. Refer to solution 128225 for how to create a local database. Open PIMSEE; from menu File | Open Database, in the dropdown menu, those two databases are shown with '(local)' on their right side. Keywords: Local database features PIMSEE References: None
Problem Statement: How do I convert an Aspen PIMS model to PIMS-EE?
Solution: To convert an Aspen PIMS model to PIMS-EE, the first step is to configure a PIMS-EE data source if the Aspen PIMS database is not a local database. Refer to solution 128225 for how to create a local database. A PIMS-EE data source is a linkage between the PIMS-EE enterprise database and the Aspen PIMS application; please refer to solution 127892 for details. Once the data source is configured, follow the steps below to convert an Aspen PIMS model to PIMS-EE. 1. Open the desired model in Aspen PIMS 2. Select the menu option 'Run | Save Model to Database' 3. In the 'Copy a Model' dialog, select a 'Target Database' 4. Modify 'Target Folder', 'Target Name', and/or 'Target Description' if desired. 5. Click 'Copy' 6. Open PIMSEE 7. Select the appropriate database from the menu option 'File | Open Database' 8. Select the appropriate model from the Model dropdown control 9. Select the View (in PIMS these are called tables) to look at from the View dropdown control. This information assumes that the location is an Enterprise database and that all of the administrative tasks required to set up the database, its connection, and security have been completed. Keywords: PIMS-EE data source convert References: None
Problem Statement: In PIMS-EE, after creating a data source, the user wants to connect PIMS-EE to the database but there is an error message popped up saying 'The ConnectionString property has not been initialized'.
Solution: First, you must have PIMS 17.5.6 or higher installed; there is a known issue when creating the data source in prior versions. Second, you must have the correct version of the database created for PIMS-EE to work. Take a SQL database as an example. Make sure you have Microsoft SQL Server 2005 installed on your machine, and that it has these '.sql' scripts under C:\Program Files\AspenTech\Aspen PIMS\Enterprise Configuration\Database\: PIMSEE_SQLServer_1.xx_drop.sql - used to clean up the database. PIMSEE_SQLServer_1.xx_Create.sql - used to create the database. Here xx is the version number (not your PIMS version number), which should be the highest number installed on your machine. In the following screenshot, this xx is 36. Third, after you create the database, check that there is a table whose name ends with EC_DATABASEVERSION. Inside that table, there should be one row. The numbers under each column should match the version number xx and the PIMS version number. Then try to connect to the database. If all of these procedures are followed, the error message should not appear. Keywords: PIMS-EE Data source ConnectionString References: None
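The third check above can also be scripted. A minimal sketch, assuming the version table is dbo.EC_DATABASEVERSION as in the related articles:

```sql
-- The version table should contain exactly one row; a result other than 1
-- indicates the create script did not complete correctly
SELECT COUNT(*) AS VersionRowCount
FROM dbo.EC_DATABASEVERSION;
```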
Problem Statement: When the user is trying to import an Aspen Properties backup file (*.aprbkp) which contains regressed or estimated property parameters in Aspen Batch Modeler, the following warning messages are displayed: The importing of the Aspen Properties file will therefore not be possible. How can these warnings be avoided?
Solution: When a user is preparing an Aspen Properties file, sometimes missing property parameters need to be estimated or regressed. This can be done by changing the Analysis mode to Regression or Estimation mode, as shown in the screenshot below. If the user leaves the simulation file in Regression (or Estimation) mode, saves the file, and then imports it, Aspen Batch Modeler will show those error messages. The user can observe that when they run a simulation in regression mode, Aspen Properties will always ask about a regression case to run and about replacing existing properties. This causes an inconsistency when the user imports the file, as the file will try to run the regression/estimation case rather than the analysis case. To avoid this problem, the user should: (i) Run a simulation in Estimation or Regression mode to obtain the results from regression, then (ii) Change the Run Mode to Analysis, (iii) Save the file and then use it to import into Aspen Batch Modeler. Keywords: Aspen Batch Modeler, Aspen Batch Distillation, import properties, Aspen Properties backup file, import error References: None
Problem Statement: Why are the overhead Reflux drum specifications grayed out even when the “Reflux Drum present” check box is selected?
Solution: By default, the Reflux drum specification is grayed out even when the “Reflux Drum present” check box is selected. To specify the Reflux drum physical dimensions (e.g. head type, diameter, length, etc.), first enable the Pressure profile and holdups calculation mode. To enable this, select the Pressure/Holdups > Pressure > Calculated option: The selected “Calculated” mode will enable the Reflux drum specification options. Key Words Reflux Drum, Pressure Profile, Calculated, Geometry Keywords: None References: None
Problem Statement: Some users have reported issues with running the ICARUS reports where the reports will not run. This is caused by the Crystal Reports installation not working correctly with mixed Windows and Office installations.
Solution: If your configuration is Windows 2000 with Microsoft Office XP installed, this is not a supported configuration for the Aspen software. The supported configurations are: Windows 2000 with Microsoft Office 2000, and Windows XP with Microsoft Office XP. KeyWords: Crystal office xp windows xp windows 2000 office 2000 reports configuration Keywords: None References: None
Problem Statement: After relocating a project to another plant location, the Craft Rates seem very high, some almost 5 times higher than the default rates seen in the Icarus Reference Guide. What is causing these new rates to be so high?
Solution: Take a look at page 315 of the Decision Analyzer User Guide (8-15). There is an explanation of the Field Craft rates, which explains that the Field Craft Rates are a nearly all-in (loaded) rate, which means that each craft rate is a unique composite of the following rate contributions: 1. Craft worker base hourly rate 2. Health, welfare, pension 3. Fringe benefits 4. Hourly indirect rate for: temporary construction; consumables and small tools; FICA, unemployment, and workers compensation insurance; multi-level construction. This does not include: 1. Construction equipment rental including fuel, oil, lube and maintenance 2. Field supervision 3. Contractor home office costs This is why the rate looks so high: the rate is not just the craft worker base hourly rate. Keywords: craft rate, rates, wage, rate, relocate References: None
Problem Statement: The Advanced Editor is opened from within Bulkload. However, entries in GlobalData in the Advanced Editor are not being written to the SQL Database. When does the Bulkload program write to the database?
Solution: The Bulkload program only writes to the database when the Commit button is pushed. The Advanced Editor has two modes of operation. The first is if it is launched from the Website. In this mode any changes made are written to the database when the Apply or Ok buttons on the Edit dialog are selected. The second mode is when the Advanced Editor is opened from within Bulkload. In this mode any changes made by the Advanced Editor are held in memory until the Commit button on the Import tab of the Bulkload dialog is pushed. Please use the Commit button after making the changes and then check to see if the data has been written to the database. Keywords: References: None
Problem Statement: Perhaps you have two refineries between which you do transfers and they are distant from each other. If they are not connected by a pipeline but by a slower mode of transportation such as railway, you may encounter a situation where a transfer which is shipped from refinery A in one period reaches its destination only during the next period. This usually only applies to a certain portion of the transfer but can have the following implications: · Incorrect input for scheduling – e.g. lack of a received octane booster can have serious MOGAS blending implications · Material arriving after the price driver has vanished: e.g. VGO transferred to be processed at the FCC at the destination plant (and not at the origin plant's hydrocracker) arriving after a sharp and expected fall in MOGAS spreads and rise in diesel spreads (a phenomenon experienced almost every autumn). · Accounting differences in plan vs. actual, especially when plants operate as independent entities and material is sold under a long parity – there is a need to report material which is en route between two refineries separately.
Solution: Steps here need to be applied to the XPIMS Sample Model in order to represent the fact that certain amount of transfers might reach the destination only in the next period. Here we will be showing transfer of a finished good between plants B and A – material made at plant B and transferred to plant A. XPIMS Sample model is present in the Public Documents folder of every computer with PIMS installed. Example shown is for the transfer of finished diesel, material blended and sold as DSL at both plants. 1. In model X_VOLSAMP_B (one of the local models of the XPIMS Sample, other being X_VOLSAMP_A, from now on referred to as B and A) create a new submodel SXTR – transfer renaming (T. SUBMODS) 2. Attach the following spreadsheet to SXTR: * TABLE SXTR * Transfer renaming TEXT DST *** * VBALDSL Diesel from blending 1 VBALDST Diesel to transfer -1 *** 3. Add transfer code DST to local SELL table of local model B (do not forget to remove group DST from XPIMS sample) 4. In both local models (A and B) suppress ALTTAGS structure (to have all renaming of diesels in one place) 5. Create submodel SXPA (renaming of blending codes to sales codes) in both local models (A and B) * TABLE SXPA * Blending renaming TEXT DSX dsl *** * VBALDSL Diesel from blending 1 1 VBALDSX Diesel to export -1 VBALdsl Diesel to market -1 *** 6. In GLOBAL DEMAND table replace DSL demand with demand for code dsl (in order to create a renaming of transferred material which does not enter a blending code) – DEMALLOC needs not be adjusted in this Sample model but may need to be adjusted in any other 7. In GLOBAL table TRANSFER replace DSL code with DST for transfers towards export depot of Sample XPIMS, do the same with table DINV 8. In GLOBAL table TRANSFER insert transfer of DST from plant B to plant A (100 kbbl MAX and 0.5 USD/bbl cost) 9. 
In local model A create table SXTR in which incoming diesel is renamed to commercial codes (up to feasible capacity within one month) and also to cross-period en route inventory codes, and from them back to commercial codes in the next period (controlled by capacity rows and corresponding entries in table CAPS) * TABLE SXTR * Transfer manipulation TEXT dsl DSX DI1 DI2 DX1 DL1 DX2 DL2 *** * VBALDST Diesel from blending 1 1 1 1 VBALdsl Diesel to sales -1 -1 -1 VBALDSX Diesel to export -1 -1 -1 VBALDI1 Tr diesel 1-2 period -1 1 1 VBALDI2 Tr diesel 2-3 period -1 1 1 *** *** *Capacities CCAPDRI Initial diesel renaming 1 1 CCAPDS1 Surplus diesel 1st period 1 CCAPDS2 Surplus diesel 2nd period 1 CCAPDR2 Delayed diesel 2 per 1 1 CCAPDR3 Delayed diesel 3 per 1 1 *** Run any case. In table TRANSFER set the minimum diesel transfer between plants B and A to 11 kbbl/day in the first two periods and 9 kbbl/day in the 3rd period (to simulate an optimization result). In plant A, in table CAPS, limit renaming of transferred diesel in the first and second periods to MAX 10 kbbl/day. This is to reflect that only up to 310 kbbl (daily rate * number of days) of transfer can actually be accomplished within the same period. The rest has to go to the inventory which represents material on the way from plant B to plant A. * En route inventories TEXT MIN MIN1 MIN2 MIN3 MAX MAX1 MAX2 MAX3 REPORT *** CDRI Maximum within period 0.000 0.000 0.000 10.000 10.000 10.000 CDs1 En route diesel 1 0.000 0.000 0.000 5.000 0.001 0.001 CDs2 En route diesel 2 0.001 5.000 0.001 CDR2 Diesel from 1-2 period 0.000 0.000 0.000 0.001 5.000 0.001 CDR3 Diesel from 2-3 period 0.000 0.000 0.000 0.001 0.001 5.000 *** In tables CAPS and PINV you can verify that material was only on inventory between the first and second periods. For the same phenomenon between the second and third periods, the second inventory code DI2 can be used.
* TABLE PINV
* Inventories
                          TEXT  OPEN OCOST  MIN MIN1 MIN2 MIN3 TARG TARG3  MAX  MAX1   MAX2   MAX3 CPRICE
*
CFP   Cat Feed                    50    25   10             50   35    50  100                  50  24.99
RFT   Reformate                   50    25   10             50   35    50  100                  50  24.99
DI1   Tr dsl 1-2 period            0    25    0    0    0    0             100    50  0.001  0.001  24.99
DI2   Tr dsl 2-3 period            0    25    0    0    0    0             100 0.001     50  0.001  24.99
***

By inspecting the results you can verify that the structure works. Part of the material leaves one plant in the first period and reaches the second plant only in the second period. This is shown as a separate row in the inventory reports and is indeed not able to meet demand until the next period. The advantage is that optimization is also present: if the transfer is not at its maximum, PIMS may choose to maximize the part of the transfer which occurs early in the period and is completely accomplished within one period. Note: this is a simplified structure suitable mostly for finished goods and materials with properties which do not change between plants. For materials with recursed properties, a PINV-related structure using MIP may be built instead to reduce the number of codes and cascading, similar to the one described in KB 103919. Keywords: XPIMS TRANSFER PERIOD SHIP RECEIVE References: None
Problem Statement: When running the SynchEM tool you may get an error like the one below: Failed to load groups from Aspen Event Management database using connection string "Provider=SQLOLEDB.1;User ID=emsynch;Initial Catalog=SCEMDB;Data Source=blabla;Password=(pwd)". Invalid object name 'GROUPPROFILE'. :::: at AspenTech.Security.Synchronization.EMGroupList..ctor(EMSynchSettings emSynchSettings) at AspenTech.Security.Synchronization.SynchEM.LoadEmGroupList() at AspenTech.Security.Synchronization.SynchEM.Synchronize() at AspenTech.Security.Synchronization.Start.Main(String[] args)
Solution: The problem is in the way the synch tool references the object "GROUPPROFILE". The user account used in the connection string needs to be SCEM, since the unqualified object reference will then default to the SCEM database. Keywords: References: None
Problem Statement: The Aspen Event Management (EM) installation and configuration manuals have many references to using the Event Management Wizard during setup, configuration and usage. The Wizard can be started in two different ways:
a) Start | Programs | AspenTech | Operations Manager | Event Management | Event Management Wizard
b) http://<hostname.domain.com>/wizard/home.jsp
This document addresses the possibility that you might get the error "This Page Cannot be displayed" when trying to start the Wizard.
Solution: The process EM uses to render the initial web page can be described as follows:

1. A request is made to a URL like http://machine_name/wizard
2. IIS receives the request and looks up in its configuration to see if there is a default page to use.
3. IIS should find that default.htm is one of the pages to use by default. It should find this page and send it to the client browser.
4. The client browser receives the page and processes a meta tag in the page that instructs the browser to redirect to home.jsp.
5. IIS receives a request for home.jsp and the Tomcat ISAPI dll should intercept this request, because it is configured to process all .jsp pages in the wizard directory by virtue of the configuration in the PF/CF/AT Shared/Tomcat dir.
6. The Tomcat ISAPI dll sees that there should be a connector listening on port 8019 and forwards the request to this port.
7. The Aspen Event Management Server service is a Tomcat instance that has an AJP13 connector listening on port 8019. It receives the request, processes the JSP page and sends the results to the client browser.

As you can see from this description, there are several things to check. Now let's suppose that steps 1-4 are successful and that you have confirmed that the Event Management Server service is running. Try the following from a Windows Command Prompt:

netstat -ano | find "8019"

This should return a line similar to the following:

TCP 0.0.0.0:8019 0.0.0.0:0 LISTENING 7652

This last number, '7652', is the process id of the process that is listening on this port. Make sure this is the EM Tomcat. You can verify this by stopping the Event Management Server process and making sure that the command returns nothing. Then start the service and make sure there is a process called 'Tomcat' that has the same process id as the one returned from the netstat command. Keywords: References: None
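As a small programmatic illustration of the check above, the owning PID can be pulled out of a netstat line like the one shown (a sketch only; the helper name is hypothetical and not part of any AspenTech tool):

```python
# Hypothetical helper: extract the owning PID from a "netstat -ano" output
# line, so it can be compared with the Event Management Tomcat process id.
def pid_from_netstat_line(line):
    parts = line.split()
    return int(parts[-1])  # the last column of "netstat -ano" is the PID

sample = "TCP    0.0.0.0:8019    0.0.0.0:0    LISTENING    7652"
print(pid_from_netstat_line(sample))  # -> 7652
```

On the server itself you would feed this the real netstat output rather than the sample string shown here.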
Problem Statement: Why does the duty specified for the Pot under an Operating Step fall to zero before the end condition is met?
Solution: By default, Aspen Batch Modeler linearly ramps the duty down to 0 if the volume of liquid in the pot falls below 0.5% of the vessel volume. The pot content may fall to 0.5% of its volume if the heat duty rate is sufficiently high or if the initial liquid content in the pot is small. So even when the end condition is far from satisfied, the heat duty in the pot will be set to 0 by this default setting. The user can modify this by changing the parameter "B1.Pot.HeatTransfer.Vol_frac_min" via a Task or Constraint from .005 to any other desired fraction. However, this is not recommended due to the risk of overheating (a safety issue) in real equipment. Keywords: Pot, Heat Duty, Volume Fraction References: None
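The ramp-down behaviour described above can be sketched as follows (a minimal illustration of the documented behaviour, not the actual Aspen Batch Modeler implementation; the function name is hypothetical):

```python
# Hypothetical sketch: below Vol_frac_min (default 0.005, i.e. 0.5% of the
# vessel volume) the applied pot duty is ramped linearly down to zero.
def applied_duty(spec_duty, liquid_vol_frac, vol_frac_min=0.005):
    if liquid_vol_frac >= vol_frac_min:
        return spec_duty
    return spec_duty * max(liquid_vol_frac, 0.0) / vol_frac_min

print(applied_duty(100.0, 0.01))    # -> 100.0 (above the 0.5% threshold)
print(applied_duty(100.0, 0.0025))  # -> 50.0  (halfway down the ramp)
```

This is why a run with a large duty or a small charge can show the pot duty collapsing to zero well before the end condition is satisfied.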
Problem Statement: After running the Batch Modeler program I would like to create a plot from the plot templates for product composition. When I select the composition template, I get the following error message: "Unhandled exception has occurred in a component in your application. If you click Continue, the application will ignore this error and attempt to continue. Function Create2DPlot: Parameter 1. Plot names can contain alphabetical, numberic or _ characters only!"
Solution: The unhandled exception occurs because the name of the component selected in the list of components for the plot contains characters that are not valid in a plot name (only letters, digits and the _ character are allowed). To solve this issue, please follow these steps:

1. First of all, restart the simulation.
2. Go to the Species form and, for the Property calculation option, select Rigorous (click OK in the window that appears, which indicates that components must have identical names in both methods, Rigorous and Simple, to avoid losing information).
3. Click on "Edit Using Aspen Properties". In the Aspen Properties user interface, modify the component name using the "Rename" option so that the component name contains only valid characters. Keywords: Plots, Alphanumeric components References: None
Problem Statement: This knowledge base article describes how to use the maximum number of vehicles that can be loaded in a period as a constraint in the solution to the Aspen Distribution Scheduler model.
Solution: The standard Aspen Distribution Scheduler CAP as configured does not use a constraint on the maximum number of vehicles for a mode at a location in a time period. However, there are maximum loading constraints by mode at a location in a time period which could serve as a very close surrogate for maximum vehicles, especially if all vehicles for a mode/location have the same capacity. If you wish to activate a maximum number of vehicles constraint, do the following:

1. Create a new table (for example, CLDLIM with dimensions FLOC, MOD, PER) to hold limits on number of vehicles by mode/location/period
2. Write a new rule to convert the imported ICLDLIM table into the new table
3. Include a new generic row defined for FLOC/MOD/PER which would limit the number of vehicles loaded by location/mode/period
4. Populate COEF for this new row with a 1 on the appropriate VX columns (for VEH whose "From Location" attribute in VEHATTR matches the "From Location" for the constraint) and with the newly created table of maximum vehicles as the right hand side (this constraint would be less than). Keywords: modeling optimization linear program References: None
Problem Statement: How do I assign a scale value in an Optimization run?
Solution: The scale factor value is assigned to guide the optimization solver to put relative importance on different constraints. The iteration procedure will assign higher priority to solving the constraints whose scale values are large. Usually, the smaller (i.e. tighter) the constraint, the larger the scale value should be. One example of a tight constraint is a component composition. Assigning an appropriate scale helps toward creating a robust optimization. Keywords: Optimization, Constraint, Scale References: None
Problem Statement: How do I find tutorials or example files for Aspen Batch Distillation?
Solution: Tutorials can be found in the Help menu inside the Aspen Batch Distillation application.

Tutorials: Open the Aspen Batch Distillation application: Start / Programs / Process Modeling V7.1 / Aspen Batch Distillation. Then, on the toolbar, go to Help / Aspen Batch Distillation Contents, expand (+ sign) "Getting Started with Batch Distillation", then expand (+ sign) "Aspen Batch Distillation Example Simulations".

Example Files: The corresponding example files can be found in the following folder (assuming that you have installed the applications on the "C" drive): C:\Program Files\AspenTech\Aspen Batch Distillation V7.1\Examples Keywords: Tutorial, examples, Batch Distillation documentation, solution files References: None
Problem Statement: The Batch Distillation block in Aspen Plus V7.1 and V7.2 checks out multiple licenses when the model is re-run. In addition to checking out multiple licenses, it deducts 18 tokens for each license checkout instance instead of 9. This problem occurs only when BatchSep is used from within Aspen Plus and the BatchSep_in_ap entry is not available in the license file.
Solution: This problem is fixed for the Batch Distillation block in Aspen Plus V7.3. To solve the problem at this time, we encourage customers to get a new updated license file which will include the SLM_BatchSep_in_ap entry. Getting a new license file will solve this problem and reduce token usage by half for each license checkout. Keywords: BatchSep Distillation block Batch References: None
Problem Statement: Error Message "Input String was not in a correct format"
Solution: Go to Settings/Control Panel/Regional and Language Options, select English (Canada) and click OK. (There seems to be a bug in the internationalization features in .NET Framework 2.0 that is probably causing the failure that you see.) Keywords: ASPEN BATCHSEP 2006.5 INPUT STRING SYSTEM EXCEPTION FORMAT References: None
Problem Statement: The batch distillation column can be initialized properly without air or nitrogen, but if non-condensables like air are in the charge, the column will not initialize.
Solution: Typically, the column is initialized to total reflux. This means that the vapor coming off the top stage has to be totally condensed. When any non-condensable gas is present, this will not be possible. As a result, the initialization fails. If your charge does in fact contain non-condensables, the recommended approach is to specify the charge without the non-condensables, and add the non-condensables from an attached stream after the column is initialized. Keywords: non-condensable, initialization References: None
Problem Statement: During an Aspen Batch Distillation run, the Integrator sometimes fails during the transition between operating steps.
Solution: Convergence failure often occurs around a discontinuity (when the failure occurs around the time the output says "Restarting Integrator", etc.). A workaround for this problem is to go to the Solver Options | Integrator tab and uncheck the "Interpolate communication time" check box. When the Interpolate communication time check box is cleared, the simulation will cut an integration step to coincide with a communication time, giving the most accurate results possible with the current simulation settings. The penalty for selecting this option is slightly slower integration during the run. Keywords: Integrator failure Discontinuity Solver options References: None
Problem Statement: It can be difficult to figure out what determines the initial conditions for each batch in multi-batch distillation. On the first batch, the conditions specified in the Initial Conditions form are used. One may expect the system to go back to the conditions set in the Initial Conditions form before starting another batch. So the question remains: what determines the initial condition for each batch other than the first, and how can the initial condition for each batch after the first be set up?
Solution: In multi-batch operation, the initial condition of each subsequent batch is determined by the conditions at the end of the previous batch. Sometimes operating specifications like Condenser Pressure, Jacket Duty etc. are modified in an Operating Step, and those will remain at the modified value at the start of the next batch. If one wants to go back to the initial condition of the first batch, one can add an additional Operating Step (e.g. "NextBatchInitialCondition") at the end that sets things like Condenser Pressure to the values desired at the start of the next batch. Keywords: Multi batch, Operating Steps, Jacket Duty, Initial Condition References: None
Problem Statement: Aspen Batch Distillation V7.1 new features
Solution: This is a Japanese-language solution. Please see the attached PDF file. (Japanese) Keywords: V7.1 New features. Release References: None
Problem Statement: Initialization fails to converge at total reflux with fixed pressures and holdups.
Solution: When pressures and holdups are fixed, no hydraulic data is available. The key issue is then how to compute the flows. There are two methods implemented in Aspen BatchSep: Energy-balance and High-gain-controller. For the Energy-balance method, a quasi-steady-state energy balance equation is used to determine the vapor flow. The High-gain-controller method makes use of a fictitious high-gain controller that manipulates the vapor flow to maintain the pressure at the user-specified value. For columns at total reflux with fixed pressures and fixed holdups that fail to initialize using the default Energy-balance method, switch to the High-gain-controller method. We recommend changing the method only if using the default Energy-balance method leads to convergence problems. To change the method used, right mouse click on the Aspen BatchSep column and select Forms/Advanced. The parameter is called FixedPHoldupFlowMeth. Whenever you modify FixedPHoldupFlowMeth, please ensure you set the parameter ReInit to True. This parameter is also located on the Advanced table. The parameter FixedPHoldupFlowMeth was introduced in Cumulative Patch 5 for Aspen BatchSep 2004.1. Keywords: None References: None
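Conceptually, the two flow-computation methods can be contrasted with a rough sketch (this is only an illustration of the ideas described above, with hypothetical function names and simplified units; it is not AspenTech code):

```python
# Hypothetical sketch contrasting the two methods for computing vapor flow
# when pressures and holdups are fixed.

def vapor_flow_energy_balance(heat_input, heat_of_vaporization):
    # Quasi-steady-state energy balance: the heat input determines how much
    # liquid is vaporized per unit time, which fixes the vapor flow.
    return heat_input / heat_of_vaporization

def vapor_flow_high_gain(v_current, p_measured, p_spec, gain=1000.0):
    # Fictitious high-gain controller: the vapor flow is adjusted in
    # proportion to the pressure error until pressure matches the spec.
    return v_current + gain * (p_measured - p_spec)

print(vapor_flow_energy_balance(100.0, 25.0))  # -> 4.0
print(vapor_flow_high_gain(2.0, 1.0, 1.0))     # -> 2.0 (no error, no change)
```

The second method effectively replaces the energy-balance relation with a pressure-tracking one, which is why it can succeed where the energy balance fails at total reflux.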
Problem Statement: One can specify the initial condition as either Empty or Total reflux. In the Empty condition, the column is initially filled with nitrogen or air at the specified initial temperature and initial pressure. To use this option, the component list must include a component with one of the names: NITROGEN or AIR. What if both nitrogen and air are impurities for the process in question?
Solution: It is correct that the input GUI will be marked as incomplete if the initial condition is Empty and a component named "N2", "NITROGEN" or "AIR" is not in the component list. The Empty script sets the inert component by looking for "NITROGEN" or "AIR", in that order. A possible workaround is to create a component with the properties of another inert gas (e.g. helium), but give it a component ID of "N2". This should fool Aspen Batch Distillation into accepting other inert gases as the inert component. Keywords: NITROGEN, empty condition, inert component References: None
Problem Statement: How to Reduce Aspen Batch Distillation Run Time?
Solution: Aspen Batch Distillation run time depends significantly on the number of variables and equations being solved. The larger the number of equations or variables, the slower the run time will be. Here are some important points to consider:

1) The number of components in the simulation: For every component (even if the composition is zero), heat and material balance equations along with the VLE equations have to be solved. Try to minimize the number of components and, in particular, remove components from the component list that are not being used.

2) Number of stages in the column: Again, this has to do with the number of variables/equations for the problem. There is not much you can do here, especially if it is a difficult separation.

3) Column internals: Vendor correlations for packing and trays. The vendor correlation option for packing and tray holdup/pressure drop can significantly slow down the simulation. Simple trays or packing should be considered instead in order to reduce run time.

4) The speed of the simulation also depends on how robust the thermodynamic properties are. Inconsistent (user-supplied) data over a wide range of temperature and pressure may increase run time or even lead to convergence failure. Keywords: References: None
Problem Statement: How to avoid the error when changing the operating step
Solution: This is a Japanese-language solution. Please see the attached PDF file. (Japanese) Keywords: Operating step Error Converge References: None
Problem Statement: What composition is used when the column is empty?
Solution: When a column is started empty, it is filled up with nitrogen at the user-specified initial T and P (Initial Conditions\Main tab). As material is charged into the pot, it usually takes a while for some of the charged material to vaporize and then condense. Until condensation starts occurring, there is no liquid distillate. When there is really no liquid distillate, for robustness purposes, each mole fraction in the drum is set to 1.0/(number of components) while there is no material in the drum (which is the case before condensation starts). Keywords: Composition, Initial, Initial Composition, Drum, Pot References: None
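The fallback described above amounts to an equimolar placeholder composition, which can be sketched as follows (an illustration of the documented behaviour only; the function name is hypothetical):

```python
# Hypothetical sketch: while there is no liquid distillate, the drum
# composition is set to equal mole fractions of all components for
# robustness, rather than being left undefined.
def empty_drum_composition(components):
    n = len(components)
    return {comp: 1.0 / n for comp in components}

x = empty_drum_composition(["WATER", "ETHANOL", "N2", "ACETONE"])
print(x["N2"])          # -> 0.25
print(sum(x.values()))  # -> 1.0
```

So a reported drum composition of 1/N for every component early in a run is an artifact of this placeholder, not a physical result.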
Problem Statement: What is the difference between Aspen BatchFrac and Aspen BatchSep?
Solution: Both products are batch distillation simulation packages. Aspen BatchFrac is a discontinued product that runs as a layered product inside Aspen Plus. AspenTech is no longer selling any new Aspen BatchFrac licenses. Existing customers will still be allowed to use this product until the license expires. The icon for Aspen Batchfrac will still appear in the Aspen Plus COLUMNS model library even if you don't have a license for Aspen Batchfrac. Aspen BatchSep is the replacement product for Aspen BatchFrac. Aspen BatchSep is available in version 2004 and higher. Aspen BatchSep was developed using the Aspen Custom Modeler environment. Currently, Aspen BatchSep does not interface with any of the unit operations in Aspen Plus. Keywords: batch distillation, batchfrac, batchsep References: None
Problem Statement: I'm running a simple distillation. In this specific example, I'm setting the reflux ratio to nearly zero (evaporation step) with a constant heating duty on the reboiler. I'm using the fixed pressure option. After a while, the vapor and liquid flowrates show peaks, i.e. they increase suddenly with no external perturbation that could justify this. For confidentiality reasons, I'm not allowed to send you an example. What is causing this problem?
Solution: The default algorithm used to compute flows when pressures and holdups are fixed can sometimes give odd results. First, please make sure to install BatchSep 2004.1 cumulative patch 5 (see http://support.aspentech.com/webteamcgi/SolutionDisplay_view.cgi?key=119457). In this patch, we implemented an alternative algorithm (FixedPHoldupFlowMeth = "High-gain-controller", available on the Advanced table) to address this deficiency. Once you have installed CP5, do this:

- open your simulation file
- do a right mouse click on the block and select, under Forms, the Advanced form
- change the parameter FixedPHoldupFlowMeth to "High-gain-controller"
- close the form
- double click the block to open the BatchSep data browser
- go to the pressure definition sheet
- change the pressure specification by about 10%
- change the pressure back to its original value

This trick is needed to force the GUI to run a proper initialization. You should find the results no longer show these odd peaks. Keywords: Fmlout Fvout glitch References: None
Problem Statement: What is the best way to migrate an Aspen Plus model containing a BatchFrac block to Aspen BatchSep?
Solution: Aspen BatchFrac was discontinued in 2003 and was replaced by Aspen BatchSep (see solution 117767). The two products are not compatible and therefore there is no interface for the migration. The procedure below outlines the steps to transfer the property and component data from Aspen Plus to Aspen BatchSep. The Aspen BatchSep column data / sizes / configuration / operating steps will have to be input manually. Here is the procedure:

1) Open the model in Aspen Plus and review the Aspen BatchFrac setup.
2) Start a new Aspen Properties session using the BLANK template, and import the Aspen Plus model used in step 1).
3) Save the Aspen Properties file as a backup (*.aprbkp) file in the same folder where the Aspen BatchSep model will reside.
4) Open BatchSep.
5) In the Exploring - Simulation window, navigate to COMPONENT LISTS and then double click on "CONFIGURE PROPERTIES" in the CONTENTS OF COMPONENT LISTS frame. In the "Physical Properties Configuration" window choose the "Use Aspen property system" option and then click on the "Import Aspen Properties file" button. Use the pop-up dialogue to navigate to and find the aprbkp file created in step 3). Then, click on OK to close the "Physical Properties Configuration" window.
5A) To select the components, double click on the "Default" component list in the CONTENTS OF COMPONENT LISTS window. In the "Build Component List - Default" window, move the components needed for the simulation from the "Available Components:" frame to the "Components:" frame. To ensure the proper property method options are going to be used for this component list, click on the "Edit Physical Properties" button, review the options and make appropriate changes.
6) Add an Aspen BATCHSEP model to the flowsheet and double click on the block. The resulting forms should look similar to Aspen Plus' forms. Keep the Aspen Plus session open so you can review the setup on the input forms versus the setup in Aspen BatchSep. Keywords: None References: None
Problem Statement: I used the Edit Using Aspen Properties option to configure the properties in BatchSep. Everything appears to be fine, but when I return to BatchSep, I do not get a component list under Component List/Default.
Solution: The reason might be that File/Save As was used to save the Aspen Properties file. Once this option is used, the file is saved to a different location from the rest of the BatchSep files, and BatchSep will not be able to locate the properties file. Once the Save As option has been used, the problem cannot be corrected, no matter how many times you revisit Aspen Properties; the user has to start a new BatchSep case from scratch. To prevent this problem from happening, upon exiting Aspen Properties use either Save, or just click Close and, when prompted, answer Yes to save the file. Keywords: Component List Default References: None
Problem Statement: How to use the rigorous property calculation option in Aspen Batch Modeler.
Solution: Steps to use the rigorous property calculation option in Aspen Batch Distillation (Aspen Batch Modeler):

1. In Aspen Batch Distillation (Batch Modeler), click on the Species section. Check Rigorous as the property calculation option (as shown below).
2. Click on the Edit Using Aspen Properties button (refer to the above screen capture) to invoke the Aspen Properties user interface from within Aspen Batch Modeler (Batch Distillation).
3. Click on the Import Aspen Properties File button to load the species and properties information into Aspen Batch Modeler.
4. Click on the Edit Using Aspen Properties button to invoke the Aspen Properties user interface from within Aspen Batch Modeler.
5. Enter all the information in the newly opened Aspen Properties file. Close the Aspen Properties window once the inputs are entered. Keywords: Aspen Batch Modeler, properties, Rigorous References: None
Problem Statement: How can an AutoCAD symbol (.dwg or .dxf) be imported and compiled for use in ABE via the Graphics Definer interface?
Solution: In order to import and compile an AutoCAD symbol (.dwg or .dxf) for use in ABE via the Graphics Definer interface, the procedure below has to be followed:

1. From the ABE Graphics Definer, open the file with extension .dwg or .dxf, as the screenshot below shows.
2. After that, save the file as .SYM and compile it.
3. Reload the workspace under the Administration tool and it should be done.
4. In order to extract the attributes and class from AutoCAD into ABE, once the symbol is generated in ABE, one can edit them under the Graphics Definer.

There is a known issue already reported (see KB article # 000044372. Link: https://esupport.aspentech.com/S_Article?id=000044372) related to some file versions such as:
2004 (.dwg and .dxf)
2007 (.dwg and .dxf)
2010 (.dxf)
2013 (.dwg and .dxf)

The Graphics Definer only allows users to open AutoCAD drawings saved as either *.dwg or *.dxf files, in R14 and R12 format respectively. So, in AutoCAD, one must first save the drawing as either a *.dwg file in R14 format or as a *.dxf file in R12 format. This way, the AutoCAD drawing will be successfully opened in ABE | Graphics Definer. Keywords: AutoCAD symbol import, .dwg, .dxf. References: None
Problem Statement: How can one highlight a data modification within a revision? E.g., suppose that someone changes a temperature value coming from the material balance; one would like to highlight that change with something similar to a post-it note.
Solution: It can be done via a Demon which runs when certain attributes of a specified class change their values. Under the KBs folder, one can find the KB file called PPID.azkbs in the directory C:\AspenZyqadServer\Basic EngineeringXX.X\WorkspaceLibraries\KBs\ExampleScripts; add the following demon to it:

[?Class=MaterialFlowPhase?]
AZEvent ModifyAttribute()
    set Attr = EventData.Attribute ' Get the changed attribute
    if Attr.name = "Temperature" then
        Server.SendWarningMessage "Temperature Is being modified"
    end if
End AZEvent

[?Class=MaterialFlowBulk?]
AZEvent ModifyAttribute()
    set Attr = EventData.Attribute ' Get the changed attribute
    if Attr.name = "Temperature" then
        Server.SendWarningMessage "Temperature Is being modified"
    end if
End AZEvent

This will pop up a window with this message, as requested. Keywords: Revision, Demon, post-it. References: None
Problem Statement: Where can an attribute be found to map a set point on a safety valve's datasheet (see screenshot below for reference)?
Solution: There is a datasheet template coming OOTB (out-of-the-box) called AZ Safety Pressure Valve.xlsm (it can also be found under the Datasheets folder: C:\AspenZyqadServer\Basic EngineeringXX.X\WorkspaceLibraries\Datasheets). This template uses some attributes which can be useful for this case. For example, for the set point, it uses 'SetPressure', as the screenshot below shows. Hence, the same Class Views used on this datasheet may be used for the customised one. Just open both datasheets using the ABE Datasheet Definer and copy the 'Object Class View' found under Datasheet Properties | Object Class View. In this particular case it should be 'SafetyPressureValve', as the screenshot below shows. Keywords: Safety Pressure Valve, Class Views, 'SetPressure' attribute. References: None
Problem Statement: When I try to find an 'Allowable Pressure Drop' attribute in the Class Library Editor to create a class view for a filter's datasheet, there are many alternative classes to choose from.
Solution: This specific attribute can be found under 'Filter Design Criteria', so the right one to map and select is the 'PressureDrop' attribute of the 'FilterDesignCriteria' class of Filter. See the screenshot below as reference: Keywords: 'Allowable Pressure Drop' attribute, Filter, 'FilterDesignCriteria'. References: None
Problem Statement: Example of a rule to create a collection of global process equipment from a workspace or project, in order to show their item numbers.
Solution: This is the statement of a rule to create a collection of global process equipment from a workspace or project:

AZRule ListPipingSystems()
    set cPS = ClassStore.FindClass("PipingSystem")
    set mPS = cPS.Members
    mPS.Sort = "NameSort"
    mPS.refresh
    for each oPS in mPS
        server.trace 1, oPS.ItemNumber
    next
End AZRule

Keywords: Rule, process equipment, item number. References: None
Problem Statement: When trying to access to the symbols folder library in the Drawing Editor using a Citrix environment, the folder is not fully accessible.
Solution: This issue happens due to server configuration issues. Below is a procedure which might help. On the server machine where ABE Server is installed:

a. Define the whole path for symbols and datasheets.
b. Grant permission for users to read these folders. This can be done directly on the AspenZyqadServer folder, although possibly only the Symbols and Datasheets folders require permission.
c. Share the folder.

All this requires full admin rights. It may be necessary to involve the IT person because of the Citrix configuration. Keywords: Symbol's folder library, Server configuration, Citrix. References: None
Problem Statement: There are some cases when one wants to make the nozzles for a pump, or for other objects created, visible in the Drawing Editor's symbols library.
Solution: The nozzle symbol can be found in the ‘Pipelines and other connections’ folder (see screenshot below for reference), but it is hidden. In order to place nozzles manually, one just needs to unhide the file (in Windows Explorer). However, nozzles are added automatically when a pipe is drawn connecting to the equipment item. If nozzles are placed manually, then there is a risk that the equipment might end up with duplicate nozzles – one from the manual placement, and one created automatically when a pipe is drawn. Pipes do not connect to existing nozzles, they connect to the equipment item and the nozzle is created automatically: Keywords: Nozzles symbol, Pipelines, Symbol’s library. References: None
Problem Statement: How does one edit and compile a datasheet which contains two pieces of equipment in the same datasheet? For example, one of its pages includes something similar to ABE's equipment list and the nozzles within it.
Solution: An ABE datasheet can only relate to one piece of equipment. There are two cases: An equipment datasheet shows data from items that are strictly separate pieces of equipment, but only exist within the context of the parent. A vessel datasheet might show instruments, but the instruments are really part of the vessel and should be modelled as such. In some cases, the datasheet might need to show data from another piece of equipment that is genuinely a separate item with its own existence. One example is a compressor datasheet, which might have a page for data for the intercooler. The intercooler is a separate piece of equipment, it appears on the equipment list, so it cannot just be a child of the compressor, but it must have an association with the compressor so the compressor can “see” the intercooler during mapping. Here, the data model would need to be extended to include an association between the compressor and the intercooler, but once this is done, the compressor datasheet could be mapped to show data from both the compressor and the intercooler. As an alternative, pages could be added at run-time to a datasheet to show data from the other equipment. To use the compressor/intercooler example, the compressor datasheet would be defined ignoring the intercooler. Then, when the compressor datasheet is created, additional pages would be added from the intercooler datasheet. Keywords: Equipment datasheet, Nozzle, Association. References: None
Problem Statement: How can one copy object data via a Demon script in ABE?
Solution: You can copy object data via a Demon which runs when certain attributes of a specified class change. Under the KBs folder one can add the Demon .azkbs file under the directory: C:\AspenZyqadServer\Basic EngineeringXX.X\WorkspaceLibraries\KBs\ExampleScripts. A script code file is attached. Keywords: Copy object data, Demon. References: None
Problem Statement: Is there a way to make the instruments symbols folder visible in the Drawing Editor for P&ID?
Solution: In order to add a new folder inside the Symbols Library, for example the instruments folder for P&ID, one has to update the .xml file related to P&IDs (PPID.xml), found under the workspace’s templates folder. Add the folders by editing the PPID.xml file with Notepad or any text editor, but remember to make all the files and folders visible, since these are hidden by default. Once these folders are included, the workspaces should be reloaded in the Administration tool. Keywords: Instruments folder, P&ID, PPID.xml. References: None
Problem Statement: When one tries to use one of the symbols available in the Drawing Editor’s symbols library, the symbol does not connect properly with another object for some users, or cannot even be seen in its library folder.
Solution: This occurs because OS permissions have not been set properly for the user. In order to check the OS permission settings on a symbol's file, follow the procedure below: Right-click on a symbol file (.sym) which you can see in the Drawing Editor’s symbols library. Select Properties from the context menu. Select the user account in the Users section and make a note of your privileges on the Security tab of the Properties dialog. Repeat the steps on the symbol that is failing and compare the properties. If there are any differences, edit the user's permissions in the symbol’s properties. Keywords: Symbol’s library, OS permissions. References: None
Problem Statement: Column profile data can easily be extracted from Aspen using VBA. Unfortunately, some info is missing that may be used for definition of auxiliaries like instruments or analysers (viscosity, heat capacity, surface tension, density, etc.). Is there a capability in Aspen Plus similar to the one used in Pro-II called “CopyTrayToStream” which allows to retrieve all the tray stream’s variables information? Or, what would be the easiest way to get the full stream info for some column trays via VBA?
Solution: Aspen Plus VBA does not have the same functionality as Pro-II. However, one can connect a pseudo stream, move the feed tray using VBA, iteratively recalculate, and retrieve each tray's stream properties. Keywords: VBA, Tray’s Stream Variables, Pseudo stream. References: None
Problem Statement: How to set up a centralized SQL Server to host the Aspen Properties Database for V9 or later versions
Solution: Go to the Engineering suite installation media folder for v9 and right click the setup.exe and run as administrator: Select the Install AspenONE products option: Accept the terms of the agreement and click Next: Uncheck every option and search for the Aspen Properties Enterprise Database Server inside the Server products and tools option (note: This option will only appear if you are installing on a windows server OS and requires an SQL instance installed): After the installation finishes, reboot the server, this will conclude the APED server installation. Keywords: None References: None
Problem Statement: After installing Aspen EDR V10, the program becomes unresponsive and closes when accessing the menus for Drawings (e.g. "Setting plan" and "Tube Layout").
Solution: This is probably due to the fact that the newly installed version 10 is not registered, and other versions are installed on the same machine. In order to fix this, please use the "Set Version - Aspen EDR V10" utility and set the new version (36.1). Keywords: Set Version, Drawings, Crash References: None
Problem Statement: I prefer to run with the option Open Report Windows turned on so that all the reports which I want to check are open. However, when I am running many cases this makes it difficult to go back to the beginning and check the Execution Log, or to jump from one case to another if they are not adjacent. The scrolling option in the bottom right is slow, as it requires one click to move from each screen to the next.
Solution: One way of navigating more quickly through many open windows is to click on the Windows tab and then select "More Windows" as shown below. This way toggling is quick, there is no need to re-open windows, and thanks to the list the time spent looking up the report of interest is minimised. Keywords: None References: None
Problem Statement: When opening a spreadsheet template in Datasheet Definer without first loading the workspace, the Object Class View field is shown empty in the Datasheet Properties window:
Solution: The workflow to follow is: Open Datasheet Definer V10. Load and open a workspace under the ABE Datasheet Definer (ribbon) | Workspace. Then, open the spreadsheet template from File | Open. Once it is open one can see that the Object Class View field under the Datasheet Properties window shows the link properly. Keywords: Datasheet Definer, Object Class View, Datasheet Properties, Workflow. References: None
Problem Statement: How to disable the ABE naming DLL.
Solution: The naming DLL can be replaced with the naming script. Under the ‘StandardLibrarySet.cfg’ file that is shipped with ABE, one will see a section as follows: # KB Configuration KBScriptDirectory = "KBS" ManagedKBsDirectory= "KBS" ExcludedManagedKBs = "TEF.DataSvcImpl" This has to be modified to look as follows: # KB Configuration KBScriptDirectory = "KBS" KBScripts = "Naming" ManagedKBsDirectory= "KBS" ExcludedManagedKBs = "TEF.DataSvcImpl, Naming.DataSvcImpl" Keywords: ABE naming DLL, StandardLibrarySet.cfg. References: None
Problem Statement: How can values be calculated via a Demon script in ABE?
Solution: You can calculate values via a Demon which runs when certain attributes of a specified class change. Under the KBs folder one can add the Demon .azkbs file under the directory: C:\AspenZyqadServer\Basic EngineeringXX.X\WorkspaceLibraries\KBs\ExampleScripts. Below is the script code for a pressure vessel calculation:

[?Class=PressureVessel?]
AZDemon SampleDemon
  AZPattern()
    opPress = self.NormalOperatingCriteria.Pressure
    demon.PerformAction opPress
  End AZPattern
  AZAction(opPress)
    desPress = opPress * 1.2
    if (self.NormalDesignCriteria.Value is Nothing) then
      set des = self.NormalDesignCriteria.AssertValue(1, "DesignCriteria")
      des.Pressure = desPress
    else
      self.NormalDesignCriteria.Pressure = desPress
    end if
  End AZAction
End AZDemon

Keywords: Value calculation, Demon, Pressure vessel. References: None
Problem Statement: When trying to access to the Aspen Plus Online Help the following text appears: "Help landing page does not exist"
Solution: This issue occurs due to a corrupted Aspen Plus installation, since the help files could not be saved in the right folder. Hence, in order to solve this issue one has two alternatives: 1) Repair the installation of Aspen Plus. 2) Or, if one checks the directory C:\ProgramData\AspenTech\Aspen Plus VX.X\HtmlHelp and notices that the folder is empty, AspenTech Support can provide a copy of all the help topic files available for Aspen Plus, and these can be pasted into this folder. As soon as one opens Aspen Plus after pasting these files, the Online Help content will be available. Keywords: Help landing page does not exist, Online Help, HtmlHelp. References: None
Problem Statement: How to handle the Out of Memory Failure Imminent warning in DMC3 Builder?
Solution: The Out of Memory Failure Imminent warning typically occurs when you are using more than about 1000 MB of memory. You can monitor and reduce your memory usage as follows: Open the Windows Task Manager by right-clicking on the Task Bar at the bottom of the screen. On Windows 8 and later you may need to click "More details" so you can see the memory usage for individual programs. Locate DMC3 Builder and note the memory usage. (This number is not exactly the same as the "private memory bytes" monitored by the application, but it tracks that measure pretty well.) It is best to keep the memory usage below 800 MB if possible, but certainly below 1000 MB. To reduce memory usage, first close any view tabs you aren't using. To reduce memory further, export and delete any applications you are not using. (For example, delete old snapshots that are no longer needed.) Check the memory usage again. Because Windows recovers freed memory as a background operation, you may see the memory usage go up and down as the program runs. In a short while the number will decline if you have freed sufficient memory. Keywords: Out of memory DMC3 Builder References: None
Problem Statement: Aspen Properties Enterprise Database (APED) is needed on a remote server and user wants to access a database on that server from a client PC using the Aspen Properties Database Manager.
Solution: SQL installation and network settings. Install SQL Express 2012 or any later version (2012 is included on the installation media). Complete the installation with the defaults, but set the SQL Browser service to Automatic. After SQL is installed, check that it is enabled through the firewall: go to Control Panel, then Network and Sharing Center, then Windows Firewall, and select the option Allow an App or Feature through Windows Firewall. Select Allow another app…, click Browse..., and add the following programs: C:\Program Files\Microsoft SQL Server\MSSQL11.SQLEXPRESS\MSSQL\Binn\sqlservr.exe C:\Program Files (x86)\Microsoft SQL Server\90\Shared\sqlbrowser.exe It is also necessary to review certain SQL configuration settings. Launch the SQL Server Configuration Manager from Start | Programs | Microsoft SQL Server 2012 | SQL Server Configuration Manager. Under SQL Server Configuration Manager (Local) | SQL Server Network Configuration | Protocols for <instance name such as SQLEXPRESS>, enable the TCP/IP and Named Pipes protocols by right-clicking and selecting Enable on each one. Under SQL Server Configuration Manager (Local) | SQL Native Client Configuration | Client Protocols, enable all protocols. With this, SQL Express is set up to work as a central database server. Keywords: None References: None
Problem Statement: Example of how to expose Equipment Tag Numbers and Datasheet Document Numbers to a Bridge for Bulk Renaming.
Solution: The example helps to develop a class view to expose the ItemNumber attribute for an equipment item and the DocumentNumber attribute for a datasheet. The overall objective is to expose these data to a Bridge to rename equipment and documents in bulk to meet a client’s standards. The solution comprises the following parts:
Two extra attributes added to the ProcessPlantEquipment class. DatasheetDocumentNumber is where the class view for the Bridge reads the document number from and writes it to, rather than directly from the datasheet. It is synchronized with the DocumentNumber attribute on the datasheet by a pair of demons. ClassName contains a string that is the name of the class of the equipment item. It is useful to expose the class name to the Bridge, so one could use it to make intelligent choices about the prefix for the tag. Even if one does not need it, there is no need to remove it, as computing the value will not consume significant computing resource.
A class view called “ExposeDocumentNumberForBulkRename”. This exposes the equipment item’s ItemNumber, DatasheetDocumentNumber and ClassName attributes, and the Bridge should talk to this class view.
A composite view, also called “ExposeDocumentNumberForBulkRename”, to link the class view to the class.
A pair of demons which together synchronize the datasheet’s DocumentNumber attribute and the equipment item’s DatasheetDocumentNumber attribute. “TransferDatasheetDocumentNumberToItem” runs when a datasheet is added to the equipment item or if the DocumentNumber attribute of the datasheet is changed. It traverses the route to the datasheet for the equipment item and copies the datasheet’s DocumentNumber attribute into the equipment item’s DatasheetDocumentNumber attribute. “TransferDatasheetDocumentNumberToDatasheet” runs when the equipment item’s DatasheetDocumentNumber attribute is changed (when the Bridge writes data back to ABE). It traverses the route to the datasheet for the equipment item and copies the DatasheetDocumentNumber attribute from the equipment item to the DocumentNumber attribute of the datasheet. If the equipment item also appears on an equipment list, the demons select the datasheet and ignore the list. This is something that one could not achieve simply by creating a route in the class view from the equipment item to the datasheet’s document number. These demons could also be extended to handle the case where an equipment item has multiple datasheets.
A rule “InitializeEquipmentForBulkRenaming”. This has the same effect as the demon “TransferDatasheetDocumentNumberToItem”, but needs to be run manually. If one knows that one will never add or remove equipment from project to project, one can use the rule and will not need the demon. Once one has run the rule, all equipment items will have their DatasheetDocumentNumber attributes filled in, so when one re-uses the project, the data will already be complete. However, if there is any chance that the equipment inventory will change from project to project, use the demon. For example, one might have a future project where one adds an air cooler to reduce heat load on a water-cooled exchanger because cooling water is limited on a site. If the demon is loaded, it will take care of this situation automatically, whereas one would need to remember to run the rule after adding the air cooler. One will always need the “TransferDatasheetDocumentNumberToDatasheet” demon to update the datasheets after the Bridge returns data to ABE; the rule can only replace the “TransferDatasheetDocumentNumberToItem” demon.
The solution is packaged in a class library include file “ExposeDocumentNumberToBridge.azci” and a knowledge base file “DocumentNumber.azkbs”.
Installation instructions:
1. Add the two attributes to the ProcessPlantEquipment class (it is not possible to extend an existing class in an azci file, so this has to be done by hand-editing the class library): ClassName, type = string, multiplicity = 1, DefaultFixed = false, CaseFixed = True, CloneFixed = false; DatasheetDocumentNumber, type = string, multiplicity = 1, DefaultFixed = false, CaseFixed = True, CloneFixed = false
2. Add the .azci include file to the class library. This will add the class view and the composite view to the class library.
3. Recompile the class library.
4. Put the KB file in the KBs directory and add it to the workspace configuration file.
5. Run “Reset all demons” from the Rules Editor. This will initialize the demons and also cause them to execute to update the DatasheetDocumentNumber attribute on existing equipment items.
6. One can now build the Bridge, using the “ExposeDocumentNumberForBulkRename” class view. (In practice, one can do this any time after step 3, because all one needs for the Bridge is a class library.)
Keywords: Equipment Tag Numbers, Datasheet Document Numbers to a Bridge, Bulk Renaming. References: None
Problem Statement: Implementation of the functional group approach and step-growth kinetics for a novel biomass-derived polyester in Aspen Plus. Background Aspen Plus contains a database on step-growth polymers which includes several well-known, industrially-produced polymers, such as PET, PBT, PC or Nylon 6. It is possible however to model and simulate newly developed polymers by using the functional group approach and providing the necessary kinetic parameters. This Aspen Plus feature is very useful when designing new polymers and simulating their scale-up production. In this article, the implementation of the functional-group methodology along with the consequent kinetic scheme is presented for a biomass-derived polyester. This novel polyester is made of three monomers which are currently being produced from bioderived sources and are exciting candidates to replace oil-derived monomers: 2,5-furandicarboxylic acid (FDCA), succinic acid (SA) and 1,5-pentanediol (1,5-PDO).
Solution:
1. First, the main species are identified and the segments defined. The conventional species considered are the monomers and water. The diol and diacids polymerize to form the corresponding polyesters, which are composed of terminal (T-) and bound (B-) segments. These species are defined under the Properties Environment | Components. Note that the component types have been selected as segment and polymer accordingly.
2. Since Aspen Plus does not have these segments defined in the database, one should define their molecular structure, as it is used as a reference point for the calculation of the polymer attributes. This is done using the Van Krevelen group contribution method, which accounts for the different functional groups and their number of occurrences in a segment. The method and functional group number for each segment are specified under Components | Molecular Structure | Functional Group.
3. Under Components | Polymers | Characterization | Segments, one defines the segments accordingly as end (T) or repeat (B), while the step-growth kinetics mechanism and the attribute list to be calculated are defined in the next tab, Components | Polymers | Characterization | Polymers.
4. The property method is then defined, in this case POLYNRTL, under Methods | Specifications | Global.
5. To define the kinetic scheme, open the Simulation environment. A new reaction definition is created under Simulation | Reactions by choosing the step-growth kinetics option. Aspen Plus generates the reactions based on the functional groups involved, which the simulator classifies as nucleophilic (N-GRP, NN-GRP) or electrophilic (E-GRP, EE-GRP); these need to be defined under Simulation | Reactions | Species. Nucleophilic groups are electron-rich groups (diol and water) whereas electrophilic groups are electron-poor groups (acids and esters).
6. After classifying the groups, Aspen Plus automatically generates a set of reactions for the polyester under study (Simulation | Reactions | Reactions | Generate Reactions), which are classified into forward condensation with the diol and terminal diol segment, reverse condensation, forward ester interchange and reverse ester interchange (polymerization). There are then 5 types of reaction sets.
7. The corresponding kinetic parameters for each of the 5 sets of reactions are defined under Simulation | Reactions | Rate Constants, where one can introduce the pre-exponential factor and activation energy. The kinetic parameters are then finally assigned to each of the reactions generated previously, under the tab Simulation | Reactions | Assign Rate Constants. In this way, the definition of the kinetic scheme is complete and ready to be incorporated into the polyesterification process simulation.
Keywords: Step-growth kinetics, polyesters, biomass, group contribution References: None
Problem Statement: With the sunset of Aspen Batch Modeler, can I convert my Aspen Batch Modeler files to Aspen Plus V10 files?
Solution: The attached conversion utility allows you to automatically generate an Aspen Plus V10 equivalent file starting from a Batch Modeler file. Instructions: Please follow the instructions in the attached ABM to Aspen Plus V10 Converter Guide.pdf for using the converter. After performing the conversion, review the inputs in the converted Aspen Plus simulation for consistency with the original Aspen Batch Modeler simulation. Notes: The following restrictions and limitations apply to the conversion tool: The converter will output an Aspen Plus file compatible with V10. Earlier versions of Aspen Plus are not supported. Only Aspen Batch Modeler simulations created in V8.4 or later will be converted. If you have an older simulation, you must load it in Aspen Batch Modeler V8.4 or later and save it before attempting to convert to Aspen Plus. When names of charge streams, operating steps and controllers in Aspen Batch Modeler are incompatible with the naming restrictions in Aspen Plus (maximum 8 alphanumeric characters only), they will be renamed. In some instances, parameters that are modified in an operating step may not have the correct initial value in the converted Aspen Plus simulation. Please validate the inputs before using the new Aspen Plus file for production. Aspen Batch Modeler simulations that have one or more of the following configurations will NOT be converted: simulations with Configuration = Pot only; simulations where mass transfer was modeled; simulations set up for kinetic fitting; simulations with Vendor Trays or Vendor Packing. Keywords: Aspen Batch Modeler, ABM, Aspen Plus, Batch, Batch Model, Converter, Conversion References: None
Problem Statement: How do I increase the file size upload limit on the Auto-Upload Tool?
Solution: In order to increase the upload limit on the Auto-Upload Tool, go through the following steps on your license server: 1. Open the “C:\Program Files (x86)\AspenTech\ALC\Xml\HttpsXmlDoc.xml” file in Notepad. 2. Update the MaxRequestLength value to the desired file size limit in kilobytes (for example, for 250 MB, you would enter the value "250000"). 3. Save the file and close it. 4. Open Control Panel and open Windows Services. 5. Restart the Aspen ALC Auto-upload schedule service. Keywords: size limit, upload size, upload limit, Auto-Upload Tool References: None
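For administrators who prefer to script this change (for example, across several license servers), the edit can be automated. The snippet below is a hedged sketch in Python: it assumes MaxRequestLength appears as an XML element inside HttpsXmlDoc.xml, which may not match the file's actual layout, so verify the structure of your file before using it.

```python
# Hypothetical helper: update the MaxRequestLength value in an XML config
# file. The element name comes from the article; the surrounding structure
# of HttpsXmlDoc.xml is an assumption and may differ on your server.
import xml.etree.ElementTree as ET

def set_max_request_length(path, new_value):
    tree = ET.parse(path)
    # Look for MaxRequestLength anywhere below the root (assumed layout).
    node = tree.getroot().find(".//MaxRequestLength")
    if node is None:
        raise ValueError("MaxRequestLength element not found in %s" % path)
    node.text = str(new_value)
    tree.write(path)
```

Remember that the Aspen ALC Auto-upload schedule service still has to be restarted afterwards for the new limit to take effect.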
Problem Statement: When opening and running a simulation file in Aspen HYSYS or Aspen Plus in a Solo installation, then pressing the Datasheet button (new functionality incorporated in V10), the ABE Explorer cannot be accessed or seen in the simulator’s GUI.
Solution: This issue occurs due to a process which is not triggered to initialize when clicking on the Datasheet button. The process can be seen when opening the Task Manager under the Processes tab. When the OwinHost process is not initialized, it prevents connecting to ABE from Aspen HYSYS or Aspen Plus. Hence, ABE Explorer must be run for the first time after installation using the browser option; after that, it can be run from the simulator’s GUI without any issues. Keywords: Datasheet, OwinHost. References: None
Problem Statement: Where can I find a list of all the queries available to be used under ABE Query Editor?
Solution: Under the following folder: C:\Program Files\AspenTech\Basic Engineering VXX.X\UserServices\Help one can find the file AZDBQuery.chm. If one double-clicks and opens this file, the ABE Query Editor Help will be shown. Navigating through the topics, one can find information about the codes and queries used to create ABE’s KBs. Keywords: ABE Query Editor, AZDBQuery.chm, Help. References: None
Problem Statement: Is it possible to create a design specification with variable tolerance and limits?
Solution: Attached is an example of a design specification with variable tolerance and limits (see file Design-spec2.bkp). A design specification designates that the inlet and outlet entropies of a Heater block HX1 are equal. The temperature of HX1 is chosen as the manipulated variable. Temperature limits cannot be set a priori, but it is known that the isentropic temperature will be within 75°F of the inlet temperature. The tolerance for the specification is a function of the entropy. The inlet entropy, outlet entropy and inlet temperature of the block HX1 are the sampled variables. These are all Stream Variables. The entropy of the inlet stream HX1-IN is called SIN. The entropy of the outlet stream HX1-OUT is called SOUT. The temperature of stream HX1-IN is called TIN. The design specification sets the outlet entropy SOUT equal to the inlet entropy SIN. The tolerance is specified as the variable TOL. TOL is specified as 0.0001 times the absolute value of the entropy of the inlet stream SIN on the Fortran sheet of the design specification. The design specification is satisfied when |SOUT - SIN| < TOL. Fortran expressions such as TOL specified on the Fortran sheet can be calculated and used in any part of the specification expression: the spec, the target or the tolerance. The heater temperature is the manipulated variable. The design specification convergence block will find the heater temperature that makes SOUT = SIN. The manipulated variable is specified in the heater block just as if there were no design specification. The specified value is the initial estimate used by the design specification convergence block. The design specification convergence block will not try a temperature less than the inlet temperature TIN - 75°F or greater than TIN + 75°F, even if the solution to the objective function lies outside this range. The limits become alternative specifications if the design specification cannot be achieved.
The initial estimate entered in the heater block lies within these limits. You do not have to specify convergence of the design specification. Aspen Plus will automatically generate a convergence block to converge the specification. For more information see the Aspen Plus Help topic Simulation and Analysis Tools -> Sequential Modular Flowsheeting Tools -> Design Specifications: Feedback Control. Keywords: None References: None
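To make the convergence logic concrete, the sketch below reproduces it outside Aspen Plus. This is purely illustrative Python, not Aspen Plus code: s_out is a made-up linear stand-in for the entropy the Heater block would report at a given temperature, and the secant update is just one plausible way a convergence block might adjust the temperature within its limits.

```python
# Illustrative sketch of the design-spec logic: variable tolerance
# TOL = 0.0001*|SIN|, satisfied when |SOUT - SIN| < TOL, with the
# manipulated temperature clamped to [TIN - 75, TIN + 75].
def solve_design_spec(s_out, s_in, t_in, t_guess, band=75.0, max_iter=50):
    tol = 1e-4 * abs(s_in)             # variable tolerance, as in the example
    lo, hi = t_in - band, t_in + band  # manipulated-variable limits
    t0, t1 = t_guess, t_guess + 1.0    # two starting points for secant steps
    for _ in range(max_iter):
        f0, f1 = s_out(t0) - s_in, s_out(t1) - s_in
        if abs(f1) < tol:              # |SOUT - SIN| < TOL: spec satisfied
            return t1
        if f1 == f0:                   # flat objective; give up
            break
        # Secant update, clamped to the temperature limits.
        t_next = t1 - f1 * (t1 - t0) / (f1 - f0)
        t_next = min(max(t_next, lo), hi)
        t0, t1 = t1, t_next
    return t1  # the limits act as alternative specs if no solution is found

# Hypothetical entropy model, linear in temperature, for illustration only.
s_out = lambda T: 1.5 + 0.002 * (T - 100.0)
T = solve_design_spec(s_out, s_in=1.52, t_in=100.0, t_guess=100.0)
```

With this toy model the solver returns a temperature within the TIN ± 75 band at which the outlet entropy matches SIN to within the variable tolerance.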
Problem Statement: What do the column labels in the Raw Data tab of the Usage and Denials Transactional Report spreadsheet mean?
Solution: This KB describes the meaning of the different columns of “Raw Data” tab in the Monthly Report, which is available through the AspenTech Support website. See KB 145209 for instructions on how to access and download the usage reports from the AspenTech Support Website. Column Label Description Transaction ID Each license key check-out transaction is given a unique Transaction ID. Because each row of the raw data corresponds to a license key check-out and return transaction, each row has a unique Transaction ID. Log ID Each raw usage log file that is generated on license servers is given a unique Log ID when it is received and processed by AspenTech. Each row of the raw data shows a license key transaction and which raw log file by Log ID that it was recorded in. Company The company name that the raw log files were uploaded under and processed for will be in this column. Server The name of the license server that has the AspenTech license has on it and that each row’s license key transaction occurred on will be found in this column. System Name Each AspenTech license is given a unique identifier called the System Name. This identifier corresponding to the license being used for each license key transaction (row in the Raw Data tab) will be found in this column. License This column contains the license key for each product or product feature that is being checked out in each row (for each license transaction). For example, the license key for using Aspen HYSYS is SLM_HYSYS_Process. Type The Type of license transaction can be Regular Returned – a normal license check-out and return transaction, Commute Returned – a license check-out and return that occurred for a commuted license, or Denied – a license denial transaction, where a license could not be checked out. Start Timestamp This is the exact timestamp when each license key check-out on each row occurred. These timestamps correspond to times that are standardized to GMT/UTC time. 
The raw usage log files will record the times in local time, but when they are processed the times are standardized to GMT/UTC time.

End Timestamp: The exact timestamp when each license key return on each row occurred. These timestamps are standardized to GMT/UTC time. The raw usage log files record the times in local time, but when they are processed the times are standardized to GMT/UTC time.

Start Date: The exact date and time when each license key check-out on each row occurred. These times are standardized to GMT/UTC time. The raw usage log files record the times in local time, but when they are processed the times are standardized to GMT/UTC time.

End Date: The exact date and time when each license key return on each row occurred. These times are standardized to GMT/UTC time. The raw usage log files record the times in local time, but when they are processed the times are standardized to GMT/UTC time.

Duration: The duration of time (in seconds) that the license key on each row was checked out before being returned. This value is equal to the difference between the End Date and the Start Date.

Bucket: The bucket of the license that each license key transaction accessed. Each license could have multiple buckets (default, v2, v3, etc.).

User: The username of the user that executed the license key transaction.

User Region: The region of the world in which each user in the “User” column is located.

Machine Name: The name of the machine that the user in the “User” column was using when the license key transaction for that row was executed.

IP Address: The IP address of the machine that the user in the “User” column was using when the license key transaction for that row was executed.

Project: The project being worked on by the “User”. Extra usage information, if specified by the license user.

Department: The department that the “User” works in. Extra usage information, if specified by the license user.

Location: The location that the “User” works in. Extra usage information, if specified by the license user.

Tokens Consumed (raw): Total tokens used in the bucket of the license specified in the “Bucket” column at the exact time of the Start Date (specified in the “Start Date” column).

Tokens Consumed by License (raw): Token value for the license key in the “License” column if checked out. “-1” values will be seen for license keys that do not consume any tokens.

Simultaneous Products: Number of the specific license keys (specified in the “License” column) checked out in the bucket of the license (specified in the “Bucket” column) at the exact time of the Start Date (specified in the “Start Date” column).

Simultaneous Tokens: Determined by multiplying the value in the “Simultaneous Products” column by the “Tokens Consumed by License (raw)” value. This value states the number of simultaneous tokens that are checked out for all the license keys specified in the “License” column at the exact time specified in the “Start Date” column.

Product Token Seconds: Determined by multiplying the value in the “Duration” column by the “Tokens Consumed by License (raw)” value.

AspenONE Version: Version of the aspenONE software being used with this license transaction.

Licenses Handled: Number of licenses being checked out for this transaction.

Date: Date derived from the “Start Date” column.

Year: Year derived from the “Start Date” column.

Hour: Hour derived from the “Start Date” column.

Min: Minute derived from the “Start Date” column.

Day of Week: Day of the week, as a value from 1-7 representing Monday-Sunday.

Work Zone: Work zone, based on the Hour and the Region Lookup sheet.

SubNet: Subnet; the first two digits of the IP Address.

Bad Data Flag: A flag indicating whether there is bad data for this specific license transaction.

Licensed Feature: The licensed feature that is used in this specific license transaction.

Product: The product that is used in this specific license transaction.

Tokens Consumed: Total tokens used in the bucket of the license specified in the “Bucket” column at the exact time of the Start Date (specified in the “Start Date” column).

Tokens Consumed by License: Token value for the license key in the “License” column if checked out. “0” values will be seen for license keys that do not consume any tokens.

Std Tokens Consumed: Token value for the license key in the “License” column if checked out. “0” values will be seen for license keys that do not consume any tokens. However, this column is based on the Std Token Value from the TokenValue sheet.

Tokens Consumed by License (no denials): Only populated for license transactions that were license denials (“Denied” in the License Type column). The value represents the number of tokens that would have been consumed if this license transaction had occurred instead of being denied.

Duration (no denials): Only populated for license transactions that were license denials (“Denied” in the License Type column). The value represents the duration of time for which this license transaction would have occurred if it had not been denied.

Extra Tokens Needed: Only populated for license transactions that were license denials (“Denied” in the License Type column). The value represents the extra tokens that would have been needed for this license transaction to occur if it had not been denied.

Keywords: Raw data tab, Column labels, Usage and denials transactional report, Transactional report, Token usage report
References: None
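The derived columns described above follow simple arithmetic on each raw transaction row. The sketch below illustrates that arithmetic only; the field names and the dictionary layout are hypothetical placeholders, not AspenTech code or the actual report schema:

```python
# Sketch of the arithmetic behind the derived report columns.
# Field names are hypothetical; this is not AspenTech code.
from datetime import datetime

def derive_columns(row):
    derived = {}
    # Duration = End Date - Start Date, in seconds
    derived["Duration"] = (row["end_date"] - row["start_date"]).total_seconds()
    # Simultaneous Tokens = Simultaneous Products * Tokens Consumed by License (raw)
    derived["Simultaneous Tokens"] = (
        row["simultaneous_products"] * row["tokens_consumed_by_license_raw"]
    )
    # Product Token Seconds = Duration * Tokens Consumed by License (raw)
    derived["Product Token Seconds"] = (
        derived["Duration"] * row["tokens_consumed_by_license_raw"]
    )
    return derived

row = {
    "start_date": datetime(2023, 5, 1, 8, 0, 0),
    "end_date": datetime(2023, 5, 1, 9, 0, 0),   # checked out for one hour
    "simultaneous_products": 3,
    "tokens_consumed_by_license_raw": 12,
}
print(derive_columns(row))
```

For the sample row, the one-hour check-out gives a Duration of 3600 seconds, 3 × 12 = 36 simultaneous tokens, and 3600 × 12 = 43200 product token seconds.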
Problem Statement: After a significant change to some variables, an equation-oriented simulation fails to converge. Is there any way to improve the convergence?
Solution: Equation-Oriented process simulation models offer powerful and very fast solutions, even for large-scale plant models with heat integration and multiple recycles which could be difficult to converge. However, even state-of-the-art solvers can fail to converge when significant changes are made to specified variables. Homotopy is a powerful technique that helps achieve transitions from one set of conditions to another by breaking large variable moves into several smaller moves which are each easier to converge.

Homotopy can be applied when using the DMO or LSSQP solvers to solve Simulation or Parameter Estimation models. The homotopy option can be activated from the Convergence | EO Options | Solver form as shown below. Select one or more constant variables on the Homotopy Variables form and enter the target values for each of the variables to be manipulated.

There are several convergence factors that influence how the homotopy solver moves to the solution. The model starts from a converged solution at the initial variable values. At the first step, it adjusts the manipulated variables by taking a linear step from the initial condition towards the target using the initial step:

New value = Initial value + initial step * (Target value – Initial value)

If the step converges, the step size for the next step is increased by multiplying it by the iteration threshold value:

New value = Last value + initial step * iteration threshold * (Target value – Initial value)

The step size is limited to the maximum homotopy step. If a step fails to converge, the algorithm returns to the last converged values and makes a smaller step, using the step size decrement factor to reduce the step:

New value = Last good value + last step * step size decrement factor * (Target value – Initial value)

The algorithm applies minimum and maximum values to the steps (note: all factors below are based on fractions of the range between the initial value and target value of the individual variables).

Keywords: None References: None
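The homotopy stepping scheme described in the article above can be illustrated with a generic sketch. This is not the actual DMO/LSSQP implementation; the `solve()` callback, the default factor values, and the failure behavior are hypothetical placeholders chosen only to show the accept/grow and retreat/shrink logic:

```python
# Generic illustration of homotopy stepping: grow the step after a
# converged solve, shrink it after a failed one. NOT the actual
# DMO/LSSQP algorithm; solve() and the factor defaults are hypothetical.

def homotopy(initial, target, solve,
             initial_step=0.1, iteration_threshold=2.0,
             max_step=0.5, min_step=0.01, decrement=0.5):
    """Move a specified variable from `initial` toward `target` in steps.

    `solve(value)` should return True if the model converges with the
    variable fixed at `value`. All step sizes are fractions of the
    range between the initial and target values.
    """
    progress = 0.0          # fraction of the way from initial to target
    step = initial_step
    while progress < 1.0:
        trial = min(progress + step, 1.0)
        value = initial + trial * (target - initial)
        if solve(value):
            progress = trial                              # accept the step
            step = min(step * iteration_threshold, max_step)
        else:
            step = step * decrement                       # retreat, shrink step
            if step < min_step:
                raise RuntimeError("Homotopy failed: step below minimum")
    return initial + progress * (target - initial)
```

As a toy usage example, a "solver" that only converges for small moves from its last converged value still reaches the target, because failed large steps are retried with smaller ones:

```python
converged = [0.0]
def small_moves_only(v):
    if abs(v - converged[0]) <= 2.5:   # converges only for small moves
        converged[0] = v
        return True
    return False

homotopy(0.0, 10.0, small_moves_only)   # reaches 10.0 in several steps
```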
Problem Statement: You create an Aspen Custom Model and you want to access the computer time in a script - for example to display the time that an event occurs.
Solution: You can access the computer's time by using "now", as shown below and in the attached screenshot.

To get a timestamp with the date and time, simply use "now":

application.msg " Timestamp: " & now

To select individual elements of the timestamp, use the functions year, month, day, hour, minute and second:

application.msg " Year: " & year(now)
application.msg " Month: " & month(now)
application.msg " Day: " & day(now)
application.msg " Hour: " & hour(now)
application.msg " Minute: " & minute(now)
application.msg " Second: " & second(now)

These commands display the corresponding messages in the Simulation Messages window.

Keywords: None References: None