Dataset schema: qid (int64, 1 to 74.7M), question (string, 0 to 58.3k chars), date (string, 10 chars), metadata (list), response_j (string, 2 to 48.3k chars), response_k (string, 2 to 40.5k chars).
45,806,209
Trying to complete a simple node.js exercise, I have tried several variations on this. I suspect I am missing something very simple. The reason I created `var Calc` is that I wanted to export the 'calculator' function. The error: ``` /Users/alex/zdev/react-project/calc.js:4 var add = function(){ ^^^ SyntaxError: Unexpected identifier ``` file calc.js (shortened to stay concise): ``` var readline = require('readline-sync'); var Calc = { var add = function(){ var num1 = readline.question("num1: "); var num2 = readline.question("num2: "); console.log(num1 + num2); }; } module.export = Calc; ``` calling file: ``` var calc = require('./calc'); var Calc = new calc.Calc(); Calc.add(); Calc.sub(); Calc.divide(); ```
2017/08/21
[ "https://Stackoverflow.com/questions/45806209", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4231932/" ]
You define a new object `Calc` with a function `add`, but the syntax is incorrect. The correct syntax is: ``` var Calc = { add: function() { var num1 = readline.question("num1: "); var num2 = readline.question("num2: "); console.log(num1 + num2); } }; ```
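A minimal sketch of the corrected calc.js and its consumer, assuming the file names and the `readline-sync` package from the question (the `Number()` conversion is an addition, since `readline-sync` returns strings):

```js
// calc.js — object-literal syntax: `name: value` pairs separated by commas
var readline = require('readline-sync');

var Calc = {
  add: function () {
    var num1 = readline.question("num1: ");
    var num2 = readline.question("num2: ");
    console.log(Number(num1) + Number(num2)); // convert the strings before adding
  }
};

module.exports = Calc; // module.exports, with an s

// main.js — the export is a plain object, so no `new` is needed:
// var Calc = require('./calc');
// Calc.add();
```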
I suggest using [JavaScript classes introduced in ECMAScript 2015](https://developer.mozilla.org/en/docs/Web/JavaScript/Reference/Classes) ``` class Calculator { constructor() { console.log("[Calc] created!"); } static add(a, b) { return a+b; } } let Calc = new Calculator(); ```
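A brief usage note on the class sketch above: because `add` is declared `static`, it is called on the class itself, not on the instance (the values below are illustrative):

```js
console.log(Calculator.add(2, 3)); // 5
// Calc.add(2, 3) would fail: static methods do not exist on instances
```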
45,806,209
Trying to complete a simple node.js exercise, I have tried several variations on this. I suspect I am missing something very simple. The reason I created `var Calc` is that I wanted to export the 'calculator' function. The error: ``` /Users/alex/zdev/react-project/calc.js:4 var add = function(){ ^^^ SyntaxError: Unexpected identifier ``` file calc.js (shortened to stay concise): ``` var readline = require('readline-sync'); var Calc = { var add = function(){ var num1 = readline.question("num1: "); var num2 = readline.question("num2: "); console.log(num1 + num2); }; } module.export = Calc; ``` calling file: ``` var calc = require('./calc'); var Calc = new calc.Calc(); Calc.add(); Calc.sub(); Calc.divide(); ```
2017/08/21
[ "https://Stackoverflow.com/questions/45806209", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4231932/" ]
You define a new object `Calc` with a function `add`, but the syntax is incorrect. The correct syntax is: ``` var Calc = { add: function() { var num1 = readline.question("num1: "); var num2 = readline.question("num2: "); console.log(num1 + num2); } }; ```
The solution is as follows. Calling file: ``` var calc = require('./calc'); var Calc = calc.Calc; Calc.add(); ``` calc file: ``` var Calc = { add: function(){ var num1 = readline.question("num1: "); var num2 = readline.question("num2: "); console.log(num1 + num2); }, divide: function(){ var num1 = readline.question("num1: "); var num2 = readline.question("num2: "); console.log(num1 / num2); }, sub: function(){ var num1 = readline.question("num1: "); var num2 = readline.question("num2: "); console.log(num1 - num2); } } module.exports = {Calc:Calc} ``` The following lines are where the original mistakes were, shown here corrected: assigning my object after importing it from the other file ``` Calc = calc.Calc; ``` using commas to separate my object properties instead of semicolons ``` }, ``` and exporting a dictionary from module.exports (also, I originally wrote 'module.export', not 'module.exports') ``` module.exports = {Calc:Calc} ``` And I forgot to put parseInt() around my num1 and num2 values.
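A minimal sketch of the parseInt() fix mentioned at the end (illustrative; without the conversion, `+` on the strings returned by readline-sync concatenates rather than adds):

```js
add: function () {
  // parse the inputs so "1" + "2" yields 3, not "12"
  var num1 = parseInt(readline.question("num1: "), 10);
  var num2 = parseInt(readline.question("num2: "), 10);
  console.log(num1 + num2);
}
```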
19,316,687
I have a kendo grid with a date column in which I want to show both date and time. I am using the format below in the column definition: `format: "{0:dd-MMM-yyyy hh:mm:ss tt}"` In the model I used the date type: `Updated_Date: { type: "date" }` The output date comes out as '10-Oct-2013 12:00:00 AM', but the actual date returned via the ajax call is "Updated_Date":"2013-10-10T05:02:40.44" What should I do to show the correct time in the grid, like 10-Oct-2013 05:02:40 AM?
2013/10/11
[ "https://Stackoverflow.com/questions/19316687", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1235024/" ]
There are two fields that are commonly confused: * [format](http://docs.kendoui.com/api/web/datetimepicker#configuration-format): Specifies the format used to display the value of the DateTimePicker in the input. * [parseFormats](http://docs.kendoui.com/api/web/datetimepicker#configuration-parseFormats): Specifies the formats used to parse the value set with the value() method or by direct input. So you actually need to define a parseFormat because of the `T` between date and time, which means the format is not one of the defaults. Try: ``` columns : [ ... { field : "Date", title : "Date", format : "{0:dd-MMM-yyyy hh:mm:ss tt}", parseFormats: ["yyyy-MM-dd'T'HH:mm:ss.zz"] } ] ``` Running example here: <http://jsfiddle.net/OnaBai/Ahq6s/>
Just had the exact same problem. It is because the grid is not recognising the field as a date. You need to add the "type" as follows: ``` columns : [ ... { field : "Date", title : "Date", type : "date", format : "{0:dd-MMM-yyyy hh:mm:ss tt}", parseFormats: ["yyyy-MM-dd'T'HH:mm:ss.zz"] } ] ```
253,735
[![enter image description here](https://i.stack.imgur.com/2Ag15.jpg)](https://i.stack.imgur.com/2Ag15.jpg) I have an older outdoor GFCI outlet on a dedicated circuit that trips periodically. I am planning on replacing it with a regular outlet and a GFCI breaker in order to get the GFCI circuitry away from the weather. The circuit is landed on a subpanel, model CH12L125B, that takes Cutler Hammer/Eaton CH type breakers. My main panel is a Siemens G3040B1200. When the subpanel was installed (3" away from the main panel), the installer brought only the hots into the subpanel and left the neutrals in the main panel. Is this OK? If I put the GFCI breaker in the subpanel, I will need to find the neutral and bring it over at least (I suppose). Or should I just swap with another 15 A circuit in the main panel and buy a GFCI breaker for the main panel? I was looking for a 15 A single-pole GFCI breaker for the subpanel and was having a difficult time figuring out the current model number for the breaker I would need. I don't believe this circuit needs AFCI, self-test, or "equipment protection" (but I don't know). Any advice would be appreciated.
2022/07/28
[ "https://diy.stackexchange.com/questions/253735", "https://diy.stackexchange.com", "https://diy.stackexchange.com/users/126129/" ]
> > My main panel is a Siemens G3040B1200. When the subpanel was installed (3" away from the main panel), the installer brought only the hots into the subpanel and left the neutrals in the main panel. Is this OK? > > > No, that is not OK. This was very shabbily done and you should bring in a *competent* electrician to correct it, or DIY. * The supply breaker to the subpanel needs to be a Siemens QP250, not a GE, which does not belong in this panel. * The Square D HOM breaker also needs to go. What is wrong with people? Replace it with a Siemens QP230. * Circuits entering the main panel that have had their hots extended into the sub need to have their neutrals extended also. Grounds can stay where they are. And even when you do all that, you'll have panels that are completely full. No, this is not OK. You've been "living from one breaker space to the next" and look where it's gotten you. If it were me ------------- Look. I'm a CH super-fan. But just the same, I would tear the CH panel off the wall, throw it in the trash, and replace it with a 30-space Siemens panel. That way you can use the same breakers. 30-space because spaces are cheap, and obviously you go through breaker spaces really fast. 30 spaces, not 30 circuits. With an accessory ground bar. Same size as your existing panel, so all the ports will line up. I would use an RMC metal conduit nipple for the feeder and several 3/4" metal conduit nipples linking side ports up and down the unit (just to make thru-wiring more convenient; often the wires can make it to a new breaker). > > I have an older outdoor GFCI outlet on a dedicated circuit that trips periodically. > > > I have an older smoke detector that goes off every time I burn toast. The difference between my smoke detector and your GFCI is that you can see smoke but you can't see ground faults, so I presume my smoke detector is doing its job, and you presume your GFCI is defective. The GFCI is probably doing its job, and you're trying to shoot the messenger. **Ground faults on outdoor wiring *are really not a surprise*.** Try opening up all the boxes downline of the GFCI, cleaning out all the paper-wasp condominiums, and fixing the water getting into the boxes.
The part number for the breaker you're searching for is: Eaton/Cutler-Hammer CH115GFI 1 Pole Circuit Breaker. The "CH" is important. ETN makes a "BR" breaker and those will not work in your sub panel. As mentioned in the comments, there might be better ways to do this.
142,215
I have a double-bowl, stainless steel kitchen sink mounted to a granite countertop, with a faucet with a pull-out spray head (pictured). [![faucet](https://i.stack.imgur.com/fgfWe.jpg)](https://i.stack.imgur.com/fgfWe.jpg) The faucet is no longer tightly mounted, so it wobbles back and forth. I'd like to tighten it. Unfortunately, it's not obvious to me how to tighten it. Here's a picture of the underside: [![mounting bracket on underside of faucet](https://i.stack.imgur.com/R1QQb.jpg)](https://i.stack.imgur.com/R1QQb.jpg) There aren't any obvious screws. All I can guess is that the nut around that brass threaded pipe is how to tighten it? Is tightening that the right way to go? And do you think I can get it tight enough just with that? Or should I expect to have to do something extra (plumber's putty?)?
2018/07/13
[ "https://diy.stackexchange.com/questions/142215", "https://diy.stackexchange.com", "https://diy.stackexchange.com/users/87736/" ]
> > the nut around that brass threaded pipe is how to tighten it? Is tightening that the right way to go? > > > YES and YES > > do you think I can get it tight enough just with that? Or should I expect to have to do something extra (plumber's putty?) > > > YES and NO - go ahead and try tightening it up with the nut; it should work. Those types of faucets normally have a gasket under the upper portion, and no plumber's putty is required. NOTE: it will likely be difficult to tighten that nut; most of those faucets come with a manufacturer-supplied special tool to reach up and tighten the nut with. If you don't have that tool, you will need to improvise. [![enter image description here](https://i.stack.imgur.com/0Wq3l.jpg)](https://i.stack.imgur.com/0Wq3l.jpg)
That brass nut is what I'd try if you can reach it. It looks like some sort of makeshift clamp blocks access to the nut. Is that silicone rubber all over one of the fittings? That is non-standard and suggests either there was a leak or someone didn't know what they were doing in the first place. There are *two* corrugated hoses which probably have some sort of quick connect/disconnect. The disconnection process can be tricky. These should be easy to temporarily remove for access, but the presence of the silicone rubber and the makeshift clamp suggests there might be problems.
33,796,615
I have a table with a few fields, two of which are of Varchar and Blob type. When I'm retrieving them using queryForMap, I get a map instance with two keys (the names of the columns). I'm able to cast the varchar to String, but I get a ClassCastException when casting the other value to Blob. ``` file = new File(System.currentTimeMillis() + (String) map.get("SAMPLE_DOC_FILE_NAME")); blob = (Blob) map.get("SAMPLE_DOC"); ``` My DAO layer method is: ``` public Map<String, Object> getSampleDoc1(String docName) throws SQLException { String query = "select form.sample_doc_file_name as SAMPLE_DOC_FILE_NAME, form.sample_document as SAMPLE_DOC from forms form where form.document_name=?"; return localJdbcTemplateObject.queryForMap(query, docName); } ``` Exception - `java.lang.ClassCastException: [B cannot be cast to java.sql.Blob` What can I do to get this object back as a Blob?
2015/11/19
[ "https://Stackoverflow.com/questions/33796615", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5036536/" ]
A `Blob` is just a wrapper for a (possibly large) `byte[]`. What you're getting back from Spring here is the interesting raw data itself, the `byte[]` (`[B` in the exception's notation). So just use that instead; it'll be much easier to work with: ``` byte[] blob = (byte[]) map.get("SAMPLE_DOC"); ```
Please check what class the value actually is: ``` System.out.println(map.get("SAMPLE_DOC").getClass().getName()); ``` then cast to that type; you can then use that type's API to do something with it.
24,590,228
I just upgraded angularjs from 1.2.9 to 1.2.19, and angularjs bootstrap from version .7 to .11. I am now getting an error that I am not sure about. Here is the error: > > TypeError: ngModelCtrl.$render is not a function > > > The `$render` error appears to be due to a line in ui-bootstrap-0.11 on line 2201. ``` $scope.$watch('totalPages', function(value) { setNumPages($scope.$parent, value); // Readonly variable if ( $scope.page > value ) { $scope.selectPage(value); } else { ngModelCtrl.$render(); } }); }]) ``` This has got to be a bug due to my upgrade... anyone have any advice on how I should handle this? Thanks, Dave
2014/07/05
[ "https://Stackoverflow.com/questions/24590228", "https://Stackoverflow.com", "https://Stackoverflow.com/users/347644/" ]
In a similar case, I replaced the page attribute with ng-model on the pagination directive, and that fixed my issue. Can you try that too?
Jacob pointed out that there was a discussion that I overlooked on this very issue: <https://github.com/angular-ui/bootstrap/issues/2157> To fix my issue all I did was add an ng-model to the pagination directive. Thanks for your help, everyone.
24,590,228
I just upgraded angularjs from 1.2.9 to 1.2.19, and angularjs bootstrap from version .7 to .11. I am now getting an error that I am not sure about. Here is the error: > > TypeError: ngModelCtrl.$render is not a function > > > The `$render` error appears to be due to a line in ui-bootstrap-0.11 on line 2201. ``` $scope.$watch('totalPages', function(value) { setNumPages($scope.$parent, value); // Readonly variable if ( $scope.page > value ) { $scope.selectPage(value); } else { ngModelCtrl.$render(); } }); }]) ``` This has got to be a bug due to my upgrade... anyone have any advice on how I should handle this? Thanks, Dave
2014/07/05
[ "https://Stackoverflow.com/questions/24590228", "https://Stackoverflow.com", "https://Stackoverflow.com/users/347644/" ]
Jacob pointed out that there was a discussion that I overlooked on this very issue: <https://github.com/angular-ui/bootstrap/issues/2157> To fix my issue all I did was add an ng-model to the pagination directive. Thanks for your help, everyone.
I changed the directive name 'pagination' to a different name. That solved the problem.
24,590,228
I just upgraded angularjs from 1.2.9 to 1.2.19, and angularjs bootstrap from version .7 to .11. I am now getting an error that I am not sure about. Here is the error: > > TypeError: ngModelCtrl.$render is not a function > > > The `$render` error appears to be due to a line in ui-bootstrap-0.11 on line 2201. ``` $scope.$watch('totalPages', function(value) { setNumPages($scope.$parent, value); // Readonly variable if ( $scope.page > value ) { $scope.selectPage(value); } else { ngModelCtrl.$render(); } }); }]) ``` This has got to be a bug due to my upgrade... anyone have any advice on how I should handle this? Thanks, Dave
2014/07/05
[ "https://Stackoverflow.com/questions/24590228", "https://Stackoverflow.com", "https://Stackoverflow.com/users/347644/" ]
In a similar case, I replaced the page attribute with ng-model on the pagination directive, and that fixed my issue. Can you try that too?
I changed the directive name 'pagination' to a different name. That solved the problem.
60,579
After reading ["Is *everyone* singular or plural?"](https://english.stackexchange.com/questions/225/is-everyone-singular-or-plural), I would have refrained from asking this question, but the husband of a colleague of mine (an English professor and native speaker of British English) argued otherwise, so I am looking for further enlightenment. He maintains that *some* should be used only with plural forms. Thus, > > some non-existent towns > > > is perfectly correct, but > > some non-existent town > > > should be replaced by > > a non-existent town > > > Is that true?
2012/03/09
[ "https://english.stackexchange.com/questions/60579", "https://english.stackexchange.com", "https://english.stackexchange.com/users/18424/" ]
***Some*** can indeed be used in this general sense. If you visit [OneLook](http://www.onelook.com/?w=some&ls=a), you'll see several meanings of the word ***some***. One of them reads: > > **some** *used for referring to a person or thing without knowing or without saying exactly which one* > > > So, sentences like: > > *We'll go to some beach tomorrow.* > > > *We'll stop at some restaurant on the way home.* > > > are both perfectly acceptable. As a matter of fact, if you made the noun plural: > > *We'll stop at some restaurants on the way home.* > > > that would imply that we are stopping at *more than one* restaurant.
*[Some](http://dictionary.cambridge.org/dictionary/british/some_1)* is used to refer to a particular person or thing without stating which. For example, > > **Some** customer called yesterday. > > >
60,579
After reading ["Is *everyone* singular or plural?"](https://english.stackexchange.com/questions/225/is-everyone-singular-or-plural), I would have refrained from asking this question, but the husband of a colleague of mine (an English professor and native speaker of British English) argued otherwise, so I am looking for further enlightenment. He maintains that *some* should be used only with plural forms. Thus, > > some non-existent towns > > > is perfectly correct, but > > some non-existent town > > > should be replaced by > > a non-existent town > > > Is that true?
2012/03/09
[ "https://english.stackexchange.com/questions/60579", "https://english.stackexchange.com", "https://english.stackexchange.com/users/18424/" ]
***Some*** can indeed be used in this general sense. If you visit [OneLook](http://www.onelook.com/?w=some&ls=a), you'll see several meanings of the word ***some***. One of them reads: > > **some** *used for referring to a person or thing without knowing or without saying exactly which one* > > > So, sentences like: > > *We'll go to some beach tomorrow.* > > > *We'll stop at some restaurant on the way home.* > > > are both perfectly acceptable. As a matter of fact, if you made the noun plural: > > *We'll stop at some restaurants on the way home.* > > > that would imply that we are stopping at *more than one* restaurant.
You're right, he's right, they're right, everyone's right. Just incomplete. No one has mentioned the important detail. "A non-existent town" - CORRECT "Some non-existent town" - INFORMAL That's all there is to it. "Some" is being used EXACTLY the way "a" is supposed to be used. You can't use "some" in this way in proper speech or writing. It is informal, stressing the indefiniteness. For further emphasis (almost to the point of being silly), we may also say "Some random non-existent town."
60,579
After reading ["Is *everyone* singular or plural?"](https://english.stackexchange.com/questions/225/is-everyone-singular-or-plural), I would have refrained from asking this question, but the husband of a colleague of mine (an English professor and native speaker of British English) argued otherwise, so I am looking for further enlightenment. He maintains that *some* should be used only with plural forms. Thus, > > some non-existent towns > > > is perfectly correct, but > > some non-existent town > > > should be replaced by > > a non-existent town > > > Is that true?
2012/03/09
[ "https://english.stackexchange.com/questions/60579", "https://english.stackexchange.com", "https://english.stackexchange.com/users/18424/" ]
*[Some](http://dictionary.cambridge.org/dictionary/british/some_1)* is used to refer to a particular person or thing without stating which. For example, > > **Some** customer called yesterday. > > >
You're right, he's right, they're right, everyone's right. Just incomplete. No one has mentioned the important detail. "A non-existent town" - CORRECT "Some non-existent town" - INFORMAL That's all there is to it. "Some" is being used EXACTLY the way "a" is supposed to be used. You can't use "some" in this way in proper speech or writing. It is informal, stressing the indefiniteness. For further emphasis (almost to the point of being silly), we may also say "Some random non-existent town."
10,399,488
I'm having some issues upgrading my Sitefinity 4.1.1339.0 installation to 4.3 (and ultimately to Sitefinity 5). It is giving me the following error: > > Could not load file or assembly 'Telerik.Sitefinity, Version=4.1.1339.0, Culture=neutral, PublicKeyToken=b28c218413bdf563' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040) > > > I was thinking it was the /bin/Telerik.Sitefinity.dll file but everything seems to be updated in there. Any ideas? Thanks!
2012/05/01
[ "https://Stackoverflow.com/questions/10399488", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1245804/" ]
I'm not sure I understand what's going on here, but are they complaining about the data going from the form to "http://website/script.asp" without being checked? If so, that's just ridiculous, because even if you *chose* to do some JavaScript or other client-side checking, one should never rely on it. All the sanitisation should be done server side, which I presume it is? Could you post the original script for clarity?
It's a security risk because it's possible to type `<script>cross_site_scripting.nasl</script>`.asp into a form (or a hidden element) and have it appear on your pages. That lets other websites inject arbitrary code into your website. If you're *just* using a contact form and have no user login or cookies, then this is just a spam risk. (Post the HTML/PHP which generates the form, and this may become clearer)
10,399,488
I'm having some issues upgrading my Sitefinity 4.1.1339.0 installation to 4.3 (and ultimately to Sitefinity 5). It is giving me the following error: > > Could not load file or assembly 'Telerik.Sitefinity, Version=4.1.1339.0, Culture=neutral, PublicKeyToken=b28c218413bdf563' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040) > > > I was thinking it was the /bin/Telerik.Sitefinity.dll file but everything seems to be updated in there. Any ideas? Thanks!
2012/05/01
[ "https://Stackoverflow.com/questions/10399488", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1245804/" ]
I'm not sure I understand what's going on here, but are they complaining about the data going from the form to "http://website/script.asp" without being checked? If so, that's just ridiculous, because even if you *chose* to do some JavaScript or other client-side checking, one should never rely on it. All the sanitisation should be done server side, which I presume it is? Could you post the original script for clarity?
Your script is immediately at risk because you're blindly using $_POST for your email headers. This could allow a person to manipulate the header of the email to send to a wildcard email address, or simply to use the form on your server to send spam to others. With a content type of HTML, you should run `htmlentities()` on the user input (assuming you don't actually want user-supplied HTML to fire off in an email viewer, as mentioned). Additionally, look into `filter_input()` to ensure the email address they're entering is valid and not a header injection. <http://us2.php.net/manual/en/function.filter-input.php>
17,341
I'm looking for historical data of the CAC40 components. I looked at these previously asked questions: * [What data sources are available online?](https://quant.stackexchange.com/questions/141/what-data-sources-are-available-online) * [Finding historical data for indices](https://quant.stackexchange.com/questions/14687/finding-historical-data-for-indices) as well as Yahoo Finance and the official CAC40 site (on Euronext). The issue is that with Yahoo Finance it's easy to find: * the historical data for a CAC40 company * a list of **current** CAC40 components but **not** the list of CAC40 components at a given past date. Are there any sources from which I could download this information? Edit: If this kind of data is available for the S&P 500, I would be very interested as well!
2015/04/11
[ "https://quant.stackexchange.com/questions/17341", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/15910/" ]
I'm not sure if you are looking for the components only or if you want more data, like the weights in the index. Unfortunately, unlike most other data on the web, it's hard to get any good financial data for free. The only easy way is to pay for access through a financial data provider such as Bloomberg (with the MEMB function when you select an index). For the S&P 500, I found this website, where it's a lot cheaper than Bloomberg (USD 3 vs USD 24,000): <http://www.daytradingbias.com/?page_id=105159> For the CAC40 you are lucky: I found this website where it's free (I hope you have some knowledge of French), but there is no more data than the dates and names of components: <http://www.bnains.org/archives/histocac/histocac.htm> But free (or cheap) data actually comes at a price: I do not guarantee data correctness (actually, I'm pretty sure there are some errors, judging by how the lists were built).
You can try Quandl. They have a nice API for R and Python which you can use to do the data wrangling.
17,341
I'm looking for historical data of the CAC40 components. I looked at these previously asked questions: * [What data sources are available online?](https://quant.stackexchange.com/questions/141/what-data-sources-are-available-online) * [Finding historical data for indices](https://quant.stackexchange.com/questions/14687/finding-historical-data-for-indices) as well as Yahoo Finance and the official CAC40 site (on Euronext). The issue is that with Yahoo Finance it's easy to find: * the historical data for a CAC40 company * a list of **current** CAC40 components but **not** the list of CAC40 components at a given past date. Are there any sources from which I could download this information? Edit: If this kind of data is available for the S&P 500, I would be very interested as well!
2015/04/11
[ "https://quant.stackexchange.com/questions/17341", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/15910/" ]
I'm not sure if you are looking for the components only or if you want more data, like the weights in the index. Unfortunately, unlike most other data on the web, it's hard to get any good financial data for free. The only easy way is to pay for access through a financial data provider such as Bloomberg (with the MEMB function when you select an index). For the S&P 500, I found this website, where it's a lot cheaper than Bloomberg (USD 3 vs USD 24,000): <http://www.daytradingbias.com/?page_id=105159> For the CAC40 you are lucky: I found this website where it's free (I hope you have some knowledge of French), but there is no more data than the dates and names of components: <http://www.bnains.org/archives/histocac/histocac.htm> But free (or cheap) data actually comes at a price: I do not guarantee data correctness (actually, I'm pretty sure there are some errors, judging by how the lists were built).
As you said yourself, Yahoo Finance provides the historical stock data. The only thing left is to know the historical composition of the CAC40. This information can be extracted from the [French Wikipedia page about the CAC40](http://fr.wikipedia.org/wiki/CAC_40#Historique_de_la_composition), or from the source @jean-paul-sartre mentioned. In my answer I will concentrate on how to scrape the information. Some time ago I wrote an `R` function to scrape information from Wikipedia tables. Using the R package `RCurl`, one has to specify the URL of the Wikipedia page and a number $n$ indicating which table to scrape. I download the source code with ``` x <- getURL(url="http://fr.wikipedia.org/wiki/CAC_40") ``` and with the help of `gregexpr` I search for `<table` and `</table>` and take the $n$-th table. Then I iterate over the `<tr>` rows, get the headings from the `<th>` cells and the actual content from the `<td>` cells. Sometimes one has to tidy up a bit. Scraping the two tables from Wikipedia, combining them, and extracting the historical development of the CAC40 constituents is then straightforward.
17,341
I'm looking for historical data of the CAC40 components. I looked at these previously asked questions: * [What data sources are available online?](https://quant.stackexchange.com/questions/141/what-data-sources-are-available-online) * [Finding historical data for indices](https://quant.stackexchange.com/questions/14687/finding-historical-data-for-indices) as well as Yahoo Finance and the official CAC40 site (on Euronext). The issue is that with Yahoo Finance it's easy to find: * the historical data for a CAC40 company * a list of **current** CAC40 components but **not** the list of CAC40 components at a given past date. Are there any sources from which I could download this information? Edit: If this kind of data is available for the S&P 500, I would be very interested as well!
2015/04/11
[ "https://quant.stackexchange.com/questions/17341", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/15910/" ]
I'm not sure if you are looking for the components only or if you want more data, like the weights in the index. Unfortunately, unlike most other data on the web, it's hard to get any good financial data for free. The only easy way is to pay for access through a financial data provider such as Bloomberg (with the MEMB function when you select an index). For the S&P 500, I found this website, where it's a lot cheaper than Bloomberg (USD 3 vs USD 24,000): <http://www.daytradingbias.com/?page_id=105159> For the CAC40 you are lucky: I found this website where it's free (I hope you have some knowledge of French), but there is no more data than the dates and names of components: <http://www.bnains.org/archives/histocac/histocac.htm> But free (or cheap) data actually comes at a price: I do not guarantee data correctness (actually, I'm pretty sure there are some errors, judging by how the lists were built).
This data isn't free obviously, but Euronext (the index provider) might be inclined to give you this information if it's for academic purposes. It's advertised on their website here: <https://www.euronext.com/fr/market-data/products/end-day-index-data>
17,341
I'm looking for historical data of the CAC40 components. I looked at these previously asked questions: * [What data sources are available online?](https://quant.stackexchange.com/questions/141/what-data-sources-are-available-online) * [Finding historical data for indices](https://quant.stackexchange.com/questions/14687/finding-historical-data-for-indices) as well as Yahoo Finance and the official CAC40 site (on Euronext). The issue is that with Yahoo Finance it's easy to find: * the historical data for a CAC40 company * a list of **current** CAC40 components but **not** the list of CAC40 components at a given past date. Are there any sources from which I could download this information? Edit: If this kind of data is available for the S&P 500, I would be very interested as well!
2015/04/11
[ "https://quant.stackexchange.com/questions/17341", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/15910/" ]
I'm not sure if you are looking for the components only or if you want more data, like the weights in the index. Unfortunately, unlike most other data on the web, it's hard to get any good financial data for free. The only easy way is to pay for access through a financial data provider such as Bloomberg (with the MEMB function when you select an index). For the S&P 500, I found this website, where it's a lot cheaper than Bloomberg (USD 3 vs USD 24,000): <http://www.daytradingbias.com/?page_id=105159> For the CAC40 you are lucky: I found this website where it's free (I hope you have some knowledge of French), but there is no more data than the dates and names of components: <http://www.bnains.org/archives/histocac/histocac.htm> But free (or cheap) data actually comes at a price: I do not guarantee data correctness (actually, I'm pretty sure there are some errors, judging by how the lists were built).
Bloomberg or Datastream are the only possible sources.
17,341
I'm looking for historical data of the CAC40 components. I looked at these previously asked questions: * [What data sources are available online?](https://quant.stackexchange.com/questions/141/what-data-sources-are-available-online) * [Finding historical data for indices](https://quant.stackexchange.com/questions/14687/finding-historical-data-for-indices) as well as Yahoo Finance and the official CAC40 site (on Euronext). The issue is that with Yahoo Finance it's easy to find: * the historical data for a CAC40 company * a list of **current** CAC40 components but **not** the list of CAC40 components at a given past date. Are there any sources from which I could download this information? Edit: If this kind of data is available for the S&P 500, I would be very interested as well!
2015/04/11
[ "https://quant.stackexchange.com/questions/17341", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/15910/" ]
As you said yourself, Yahoo Finance provides the historical stock data. The only thing left is to know the historical composition of the CAC40. This information can be extracted from the [French Wikipedia page about the CAC40](http://fr.wikipedia.org/wiki/CAC_40#Historique_de_la_composition), or from the source @jean-paul-sartre mentioned. In my answer I will concentrate on how to scrape the information. Some time ago I wrote an `R` function to scrape information from Wikipedia tables. Using the R package `RCurl`, one has to specify the URL of the Wikipedia page and a number $n$ indicating which table to scrape. I download the source code with ``` x <- getURL(url="http://fr.wikipedia.org/wiki/CAC_40") ``` and with the help of `gregexpr` I search for `<table` and `</table>` and take the $n$-th table. Then I iterate over the `<tr>` rows, get the headings from the `<th>` cells and the actual content from the `<td>` cells. Sometimes one has to tidy up a bit. Scraping the two tables from Wikipedia, combining them, and extracting the historical development of the CAC40 constituents is then straightforward.
You can try Quandl. They have a nice API for R and Python which you can use to do the data wrangling.
17,341
I'm looking for historical data of the CAC40 components. I looked at these previously asked questions: * [What data sources are available online?](https://quant.stackexchange.com/questions/141/what-data-sources-are-available-online) * [Finding historical data for indices](https://quant.stackexchange.com/questions/14687/finding-historical-data-for-indices) as well as Yahoo Finance and the official CAC40 site (on Euronext). The issue is that with Yahoo Finance it's easy to find: * the historical data for a CAC40 company * a list of **current** CAC40 components but **not** the list of CAC40 components at a given past date. Are there any sources from which I could download this information? Edit: If this kind of data is available for the S&P 500, I would be very interested as well!
2015/04/11
[ "https://quant.stackexchange.com/questions/17341", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/15910/" ]
This data isn't free obviously, but Euronext (the index provider) might be inclined to give you this information if it's for academic purposes. It's advertised on their website here: <https://www.euronext.com/fr/market-data/products/end-day-index-data>
You can try Quandl. They have a nice API for R and Python which you can use to do the data wrangling.
17,341
I'm looking for historical data of the CAC40 components. I looked at these previously asked questions: * [What data sources are available online?](https://quant.stackexchange.com/questions/141/what-data-sources-are-available-online) * [Finding historical data for indices](https://quant.stackexchange.com/questions/14687/finding-historical-data-for-indices) as well as Yahoo Finance and the official CAC40 site (on Euronext). The issue is that with Yahoo Finance it's easy to find: * the historical data for a CAC40 company * a list of **current** CAC40 components but **not** the list of CAC40 components at a given past date. Are there any sources from which I could download this information? Edit: If this kind of data is available for the S&P 500, I would be very interested as well!
2015/04/11
[ "https://quant.stackexchange.com/questions/17341", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/15910/" ]
As you said yourself, Yahoo Finance provides the historical stock data. The only thing left is to know the historical composition of the CAC40. This information can be extracted from the [French Wikipedia page about the CAC40](http://fr.wikipedia.org/wiki/CAC_40#Historique_de_la_composition), or from the source @jean-paul-sartre mentioned. In my answer I will concentrate on how to scrape the information. Some time ago I wrote an `R` function to scrape information from Wikipedia tables. Using the R package `RCurl`, one has to specify the URL of the Wikipedia page and a number $n$ indicating which table to scrape. I download the source code with ``` x <- getURL(url="http://fr.wikipedia.org/wiki/CAC_40") ``` and with the help of `gregexpr` I search for `<table` and `</table>` and take the $n$-th table. Then I iterate over the `<tr>` rows, get the headings from the `<th>` cells and the actual content from the `<td>` cells. Sometimes one has to tidy up a bit. Scraping the two tables from Wikipedia, combining them, and extracting the historical development of the CAC40 constituents is then straightforward.
Bloomberg or Datastream are the only possible sources.
17,341
I'm looking for historical data of the CAC40 components. I looked at these previously asked questions: * [What data sources are available online?](https://quant.stackexchange.com/questions/141/what-data-sources-are-available-online) * [Finding historical data for indices](https://quant.stackexchange.com/questions/14687/finding-historical-data-for-indices) as well as Yahoo Finance and the official CAC40 site (on Euronext). The issue is that with Yahoo Finance it's easy to find: * the historical data for a CAC40 company * a list of **current** CAC40 components but **not** the list of CAC40 components at a given past date. Are there any sources from which I could download this information? Edit: If this kind of data is available for the S&P 500, I would be very interested as well!
2015/04/11
[ "https://quant.stackexchange.com/questions/17341", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/15910/" ]
This data isn't free obviously, but Euronext (the index provider) might be inclined to give you this information if it's for academic purposes. It's advertised on their website here: <https://www.euronext.com/fr/market-data/products/end-day-index-data>
Bloomberg or Datastream are the only possible sources.
30,834,806
One of the goals of Flux is to make the app more predictable by reducing crazy tangled dependencies. Using the Dispatcher you can define a strict order in which the Stores are updated. That creates a nice tree dependency hierarchy. That's the theory. Consider the following situation: I have a game. The store sitting at the top of the hierarchy is **StateStore**, which holds only the current game state, i.e. *playing*, *paused*, *over*. It is updated via actions like **PAUSE** or **RESUME**. All other stores depend on this one. So when a store handles some kind of update action (i.e. **MOVE_LEFT**), it first checks the StateStore, and if the game is *paused* or *over*, it ignores the action. Now let's say that there is an action that would cause game over. It updates some store, and the store decides that the game shouldn't continue (*"the game character moves left and falls into a trap"*). So the state in the StateStore should change to *over*. How do I do that? Theoretically, it should go like this: 1. The given store is updated first and reaches the point of game over 2. The StateStore is updated afterwards (it *waitsFor* the other store), checks the other store and switches the state to *over*. Unfortunately, the other store needs to access the StateStore as well, to check the current game state and see if it should be updated at all (i.e. the game is not paused). They clearly depend on each other. Possible solutions: 1. Merge such stores into one store. That would probably cause my whole app to collapse into a single store, which brings up the question whether Flux in this case is a good idea. 2. Distinguish *update order* and *read-only* dependencies. All stores would be updated in a strict order; however, they could read from each other arbitrarily. The StateStore would therefore, for every action, check all existing stores, and if any of them indicated game over, it would change the state to *over*, effectively preventing all other stores from updating. What do you think?
2015/06/14
[ "https://Stackoverflow.com/questions/30834806", "https://Stackoverflow.com", "https://Stackoverflow.com/users/576997/" ]
In [Flux](https://facebook.github.io/flux/docs/overview.html#content), stores should be as independent of each other as possible and should not read from each other. The only way to change their state is through actions. In your case, if some store decides that the game is over, you should update the StateStore from an ActionCreator. You can do that by calling a HaltGameActionCreator from the store, or by dispatching a HALT_GAME action from the ActionCreator that triggered the store change in the first place.
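A minimal sketch of that idea (the dispatcher wiring and names below are illustrative, not from the original answer; note that vanilla Flux forbids dispatching while a dispatch is in flight, so a store triggering this would have to defer the call, e.g. with setTimeout):

```js
// GameActions.js — hypothetical action creator for ending the game
var AppDispatcher = require('./AppDispatcher');

var GameActions = {
  haltGame: function () {
    // the StateStore listens for this action and flips its state to 'over',
    // so no store ever has to read another store's state directly
    AppDispatcher.dispatch({ actionType: 'HALT_GAME' });
  }
};

module.exports = GameActions;
```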
For those having the same issue, you can read [here](https://github.com/tobice/flux-lumines/wiki/Making-of-Lumines#circular-dependencies) about the actual application I had this problem with and how I approached it. Long story short, I allowed all the stores to arbitrarily read from each other (the suggested solution no. 2). Note that ES6 modules allow circular dependencies which simplifies the implementation. Nevertheless, looking back I'm not sure if it was a right decision. If a piece of business logic inherently contains a circular dependency, we should not try to apply a solution that doesn't really support it just because somebody says so. Flux is only one pattern, there are many other ways how to structure the code. So perhaps I would recommend collapsing the whole logic into a single store and use one of the other ways to implement the store itself (e.g. standard OOP techniques). I would also consider using [redux](https://github.com/reactjs/redux) with [reselect](https://github.com/reactjs/reselect) instead of Flux. The problem with the original example is with the StateStore that depends on two different inputs. It can be changed either by the user explicitly pausing/resuming the game, or by the *game situation* reaching game over. The advantage of this approach is that you need to check only one store to get the current game state. With redux/reselect, you'd have one reducer handling pause/resume actions and another reducer handling the *game situation*. Then you'd have a selector combining these two pieces of information into the final game state. Most of the business logic would be moved from the stores to action creators, i.e., in the `moveLeft()` action creator, you'd use this selector to check the game state and only then you'd dispatch `MOVE_LEFT` action. Note that this is just a rough idea and I don't know if it's viable.
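A rough sketch of the redux/reselect layout described in the last paragraph (reducer and action names are hypothetical):

```js
import { createSelector } from 'reselect';

// reducer 1: explicit pause/resume actions from the user
function paused(state = false, action) {
  switch (action.type) {
    case 'PAUSE':  return true;
    case 'RESUME': return false;
    default:       return state;
  }
}

// reducer 2: the game situation, e.g. whether the character hit a trap
function situation(state = { dead: false }, action) {
  switch (action.type) {
    case 'FELL_INTO_TRAP': return { ...state, dead: true };
    default:               return state;
  }
}

// selector combining the two inputs into the final game state
const selectGameState = createSelector(
  state => state.paused,
  state => state.situation,
  (paused, situation) =>
    situation.dead ? 'over' : paused ? 'paused' : 'playing'
);

// an action creator like moveLeft() would consult selectGameState(getState())
// and only dispatch MOVE_LEFT when the result is 'playing'
```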
30,108
I am on SLiM, and I don't like the default login screen. I want a login screen like the one shown below: ![enter image description here](https://i.stack.imgur.com/t9Jgs.png) Instead, I have a pretty minimal one that has just one textbox and nothing else on the screen. I can't find a screenshot of it, but that is what I got when I was done installing. Is changing to GDM the only way to get a login screen like this? Is there any other way?
2012/01/26
[ "https://unix.stackexchange.com/questions/30108", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4605/" ]
I found that this is possible by editing the `slim.conf` file available in `/etc`. You would need admin credentials to open this file. SLiM themes are placed in `/usr/share/slim/themes`: ![enter image description here](https://i.stack.imgur.com/34K7n.png) In the `slim.conf` file, there is a section that mentions the theme: ``` # current theme, use comma separated list to specify a set to # randomly choose from current_theme crunchbang ``` You can change this to any of the themes shown in the previous screenshot. Change the theme and exit the file. Try logging out and logging back in. That's it. Login screen is changed with immediate effect. More information available here: <http://slim.berlios.de/>
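For example, to switch to a theme named `archlinux-simplyblack` (a hypothetical name; use one of the directory names that actually appear under `/usr/share/slim/themes` on your system), the line would become:

```
current_theme archlinux-simplyblack
```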
The SLiM website has a page detailing how to [create your own theme](http://slim.berlios.de/themes_howto.php). You can find additional themes on a number of sites, including [Gnome Look](http://gnome-look.org/) and in the [Arch User Repository](https://aur.archlinux.org/packages.php?O=0&K=slim-theme&do_Search=Go)...
30,108
I am on SLiM, and I don't like the default login screen. I want a login screen like the one shown below: ![enter image description here](https://i.stack.imgur.com/t9Jgs.png) But instead I have a pretty minimal one which has just one textbox and nothing else on the screen. I can't find a screenshot of it, but that is what I got when I am done installing. Is changing to GDM the only way to get a login screen like this? Is there any other way?
2012/01/26
[ "https://unix.stackexchange.com/questions/30108", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4605/" ]
I found that this is possible by editing the `slim.conf` file available in `/etc`. You would need admin credentials to open this file. SLiM themes are placed in `/usr/share/slim/themes`: ![enter image description here](https://i.stack.imgur.com/34K7n.png) In the `slim.conf` file, there is a section that mentions the theme: ``` # current theme, use comma separated list to specify a set to # randomly choose from current_theme crunchbang ``` You can change this to any of the themes shown in the previous screenshot. Change the theme and exit the file. Try logging out and logging back in. That's it. Login screen is changed with immediate effect. More information available here: <http://slim.berlios.de/>
Probably the best way would be to install GDM, because it is a much better login manager than SLiM. SLiM has literally no scrollbars... or anything, and you cannot change your window manager or desktop environment from it. I prefer GDM, since it's more reliable and easier to use, and it does not require editing a configuration file after logging in to change the DE/WM.
30,108
I am on SLiM, and I don't like the default login screen. I want a login screen like the one shown below: ![enter image description here](https://i.stack.imgur.com/t9Jgs.png) But instead I have a pretty minimal one which has just one textbox and nothing else on the screen. I can't find a screenshot of it, but that is what I got when I am done installing. Is changing to GDM the only way to get a login screen like this? Is there any other way?
2012/01/26
[ "https://unix.stackexchange.com/questions/30108", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4605/" ]
The SLiM website has a page detailing how to [create your own theme](http://slim.berlios.de/themes_howto.php). You can find additional themes on a number of sites, including [Gnome Look](http://gnome-look.org/) and in the [Arch User Repository](https://aur.archlinux.org/packages.php?O=0&K=slim-theme&do_Search=Go)...
Probably the best way would be to install GDM, because it is a much better login manager than SLiM. SLiM has literally no scrollbars... or anything, and you cannot change your window manager or desktop environment from it. I prefer GDM, since it's more reliable and easier to use, and it does not require editing a configuration file after logging in to change the DE/WM.
36,903,952
I want to create a log file whilst my script extracts a 7z archive. I'm using the following: ``` 7z x "*.7z" >> logfile.log 2>&1 ``` But the terminal stops displaying output and the log file is blank...
2016/04/28
[ "https://Stackoverflow.com/questions/36903952", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6251051/" ]
`doors[k] % j` is nonsense (`"Closed" % 1`). `k % j == 0` will always be true for the 0-th door. Use `(k + 1) % j` so that, when `j` is `3`, it triggers indices 2, 5, 8 (i.e. doors #3, #6, #9). While it's not an error, it would be easier if you had just Boolean values in the array: say, `false` for closed and `true` for open. Then toggling is just `doors[value] = !doors[value]`. Transform to strings only at output, as `"Closed"` and `"Open"` are relevant only to humans.
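A minimal sketch of the Boolean-array approach just described (illustrative; 100 doors, all starting closed):

```js
// false = closed, true = open
const doors = new Array(100).fill(false);

for (let pass = 1; pass <= 100; pass++) {
  for (let k = 0; k < 100; k++) {
    // door k is the (k + 1)-th door; toggle it on every pass dividing its number
    if ((k + 1) % pass === 0) {
      doors[k] = !doors[k];
    }
  }
}

// convert to strings only at output time
console.log(doors.map(open => (open ? "Open" : "Closed")));
```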
This is how I solved this problem. I think you don't need "closed" or "open"; try using booleans (true or false) instead. ```js function getFinalOpenedDoors(numDoors) { const array=[]; for(let i=1; i<101;i++){ array[i]=false } let turn=1 while(turn<=100){ for(let i=turn; i<101;i++){ if(i%turn===0){ array[i]=!array[i] } } turn=turn+1 } const container=[] for(let j=0;j<array.length;j++){ if(array[j]===true) container.push(j) } return container } getFinalOpenedDoors(100) ```
57,219
I'm trying to pick out the best plants to attract birds to my yard, and have become very interested in serviceberry trees for this purpose. Should I avoid the hybrid varieties of this species (such as *Amelanchier × grandiflora*) in favor of versions like *A. canadensis* which birds and insects may be more familiar with? Or does that not matter, since both "parents" of the hybrid are also native species to my area? Does this apply more generally when selecting native plants to build a better wildlife habitat?
2021/04/07
[ "https://gardening.stackexchange.com/questions/57219", "https://gardening.stackexchange.com", "https://gardening.stackexchange.com/users/24416/" ]
*Amelanchier* × *grandiflora* is an example of a "nativar," that is, a cultivated variety of a native plant. While there's some range of opinions, reputable sources agree that it's best to plant the native species (eg, *A. canadensis*) if you can get them. If you can't get native species, nativars are at least better than non-native plants. If you do have to plant a nativar, at least make sure it's not sterile. Beware of varieties that advertise "double flowers." Double flowers are usually sterile because they make extra petals instead of the pollen- and seed-producing parts. (*Amelanchier* × *grandiflora* does make non-sterile flowers, so it's better than some nativars.) Further reading: * [Native vs. "Nativar"](https://extension.illinois.edu/blogs/garden-scoop/2019-12-28-native-vs-nativar), by Ryan Pankau, Horticulture Extension Educator; University of Illinois at Urbana-Champaign. * [Native, or Not So Much? Native plants transformed into flashy “nativars” may look pretty, but are they good for wildlife?](https://www.nwf.org/Magazines/National-Wildlife/2016/JuneJuly/Gardening/Cultivars), by Janet Marinelli, The National Wildlife Foundation. * [Citizen Scientists Help Parse the Native/Nativar Debate](https://www.ecolandscaping.org/07/designing-ecological-landscapes/native-plants/citizen-scientists-help-parse-the-native-nativar-debate/) by Jessamine (Jessa) Finch, Ecological Landscape Alliance.
While this is a natural question to ask, there is no one-size-fits-all answer: it depends on the particular hybrid. **Naturally-occurring Hybrids** Some hybrids are naturally occurring and even fertile. For example, in the Mid-Atlantic, the hybrid oak *Quercus x heterophylla*, which is a cross of willow oak (*Q. phellos*) and northern red oak (*Q. rubra*), is found in most counties from southeastern PA and northern DE, east through much of NJ. Not only are both parent species native in this region, but the hybrid itself is. Another, even more widespread native and naturally-occurring hybrid is *Apocynum x floribundum*, or "intermediate dogbane", a hybrid of the two native dogbane species. There is no reason to avoid most such hybrids, but in some cases they can be sterile. For example, the hybrid enchanter's nightshade *Circaea x sterilis*, although relatively widespread in the wild, cannot reproduce by seed, so it represents a genetic "dead end". If you wish to contribute genetic diversity to local populations of one of the parent species, you need to plant one of the parents. On the other hand, some gardeners might actively want sterile plants, as they will only reproduce vegetatively and not by seed. **Unnatural Hybrids of Native Species** *Amelanchier x grandiflora* is a slightly different example. Its parents are both native, but it does not occur in the wild. There might be some ecological consequences of planting such hybrids, but in general I think they are likely to cause less damage than a hybrid that contains non-native parents. **Hybrids with one or more non-native parents** The worst case would be a hybrid containing one or more non-native parents. An example of this would be *Quercus ×bimundorum*, which is a hybrid of English oak (*Q. robur*), native to Europe, and white oak (*Q. alba*), native to North America. There is potentially more harm that could come from such hybrids, the worst case being the introduction of new genetics that cause a plant to become invasive. Examples of this phenomenon include the invasiveness of mulberry (*Morus*) hybrids between the introduced white mulberry and the native red mulberry in North America, and, within a single species, the common reed, *Phragmites australis*. Anyone who has worked trying to control either of these plants will testify to the importance of avoiding the introduction of new genetic material through hybrids. **Cultivars that are also hybrids** It is also worth considering not just whether the plant is a hybrid but also whether or not it is a cultivar. Cultivars are specific, named varieties of plants, usually developed by the nursery industry, but some of them are simply wild plants that were selected for desirable properties and then propagated. Cultivars can have less ecological value in a long list of ways. Cultivars lack genetic diversity: in most cases all individuals of the cultivar are clones. In many cases, selective breeding has modified the plant in a way that makes it less adapted to survival in the wild, or less attractive to the insects or other animals that depend on it (such as being bred for insect resistance, or for a flower structure that may look pretty to humans but that pollinators cannot access). There is also a risk of outbreeding depression and/or maladaptive characteristics for local conditions when using cultivars derived from distant populations.
And lastly, there is the risk of creating new problem populations of weedy plants by introducing new genetic material into local plant populations. Which of these problems apply, if any, varies based on the cultivar, but if you don't know the answer to these questions and don't want to put in the time to research the particular cultivar in question, best practice is to avoid all cultivars. Does this all seem like a lot of information, perhaps more than you want to think about? Then stick to using wild-type, naturally-occurring hybrids or straight species. **In Summary** For a quick-and-dirty answer on best practices, I would say it is best to plant hybrids only if they occur naturally in the wild in your area, and then ideally to plant one that either occurred naturally in the wild or was propagated from such a plant, not one that was bred in a nursery. If you can't find these, just stick to straight species (not cultivars) of locally native plants. I find the best place to look up range maps in fine detail (to county level) is BONAP, which lists naturally-occurring hybrids. Hybrids are probably under-reported, so you're probably fine planting a hybrid found a few counties over, but I still think pure species are the better choice, especially when the hybrids are sterile.
57,219
I'm trying to pick out the best plants to attract birds to my yard, and have become very interested in serviceberry trees for this purpose. Should I avoid the hybrid varieties of this species (such as *Amelanchier × grandiflora*) in favor of versions like *A. canadensis* which birds and insects may be more familiar with? Or does that not matter, since both "parents" of the hybrid are also native species to my area? Does this apply more generally when selecting native plants to build a better wildlife habitat?
2021/04/07
[ "https://gardening.stackexchange.com/questions/57219", "https://gardening.stackexchange.com", "https://gardening.stackexchange.com/users/24416/" ]
*Amelanchier* × *grandiflora* is an example of a "nativar," that is, a cultivated variety of a native plant. While there's some range of opinions, reputable sources agree that it's best to plant the native species (e.g., *A. canadensis*) if you can get them. If you can't get native species, nativars are at least better than non-native plants. If you do have to plant a nativar, at least make sure it's not sterile. Beware of varieties that advertise "double flowers." Double flowers are usually sterile because they make extra petals instead of the pollen- and seed-producing parts. (*Amelanchier* × *grandiflora* does make non-sterile flowers, so it's better than some nativars.) Further reading: * [Native vs. "Nativar"](https://extension.illinois.edu/blogs/garden-scoop/2019-12-28-native-vs-nativar), by Ryan Pankau, Horticulture Extension Educator; University of Illinois at Urbana-Champaign. * [Native, or Not So Much? Native plants transformed into flashy “nativars” may look pretty, but are they good for wildlife?](https://www.nwf.org/Magazines/National-Wildlife/2016/JuneJuly/Gardening/Cultivars), by Janet Marinelli, the National Wildlife Federation. * [Citizen Scientists Help Parse the Native/Nativar Debate](https://www.ecolandscaping.org/07/designing-ecological-landscapes/native-plants/citizen-scientists-help-parse-the-native-nativar-debate/) by Jessamine (Jessa) Finch, Ecological Landscape Alliance.
Though this answer doesn't supplant the great and detailed ones that were given when I first asked the question, I recently came across a source which addresses the precise nativar cross I was looking into: <https://www.ecobeneficial.com/ask_ecobeneficial/is-autumn-brilliance-serviceberry-a-good-pollinator-bird-plant/> Turns out that in my particular case, Autumn Brilliance is still beloved by birds (and produces tasty fruit for humans, too). The article also illustrates some of the perils of nativars more generally, though - such as the lower genetic diversity you get compared to the straight species.
48,084,907
For example, I have a column with email addresses and I want to remove everything before the @ sign and everything after the '.' so I can attain the company names. Such as: ``` Emails [email protected] [email protected] [email protected] [email protected] [email protected] ``` I want to create a new column that looks like this: ``` Companies yahoo google espn apple ahmed ``` What is a function I can use to attain this new column?
2018/01/03
[ "https://Stackoverflow.com/questions/48084907", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5578650/" ]
You can use the MID and FIND functions. MID extracts a substring, taking three arguments: the text, the start position, and the number of characters to extract. FIND returns the position of a character within the text. With your string in cell A1, enter this formula in another cell:

```
=MID(A1,FIND("@",A1)+1,FIND(".",A1)-FIND("@",A1)-1)
```
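As a concrete trace, take `[email protected]` in A1: `FIND("@",A1)` returns 6 and `FIND(".",A1)` returns 12, so the formula reduces to `MID(A1,7,5)`, which extracts the 5 characters starting at position 7, i.e. `yahoo`.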
with data in **A2**, in **B2** enter: ``` =SUBSTITUTE(MID(A2,FIND("@",A2)+1,9999),".com","") ``` [![enter image description here](https://i.stack.imgur.com/nrYL9.png)](https://i.stack.imgur.com/nrYL9.png) This will work for all emails ending in *.com* and if there are some records that do not have *com* at the end, use: ``` =MID(A1,FIND("@",A1)+1,FIND(".",A1,FIND("@",A1)+1)-(FIND("@",A1)+1)) ``` This will handle records like: ``` [email protected] ``` in which a dot occurs before the **@**
48,084,907
For example, I have a column with email addresses and I want to remove everything before the @ sign and everything after the '.' so I can attain the company names. Such as: ``` Emails [email protected] [email protected] [email protected] [email protected] [email protected] ``` I want to create a new column that looks like this: ``` Companies yahoo google espn apple ahmed ``` What is a function I can use to attain this new column?
2018/01/03
[ "https://Stackoverflow.com/questions/48084907", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5578650/" ]
You can use the MID and FIND functions. MID extracts a substring, taking three arguments: the text, the start position, and the number of characters to extract. FIND returns the position of a character within the text. With your string in cell A1, enter this formula in another cell:

```
=MID(A1,FIND("@",A1)+1,FIND(".",A1)-FIND("@",A1)-1)
```
Select the column (or copy it into a new column) and press `Ctrl`+`H` to go to Find & Replace: 1. Find \*@ and replace with nothing (keep blank). 2. Find .\* and replace with nothing (keep blank). Here \* represents any sequence of characters.
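For example, with `[email protected]`: step 1 strips everything through the `@`, leaving `yahoo.com`, and step 2 strips the first `.` and everything after it, leaving `yahoo`.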
1,407,121
> > Let n be a positive integer with $f(n)= 1! +2! +3!+... +n!$ and P(x), > Q(x) be polynomials in $x$ such that $f(n+2)=P(n)f(n+1)+Q(n)f(n)$ for > all $n \geq 1$, then which of the options **is/are** correct? > > > 1. $P(x)= x+3$ > 2. $Q(x)= -x-2$ > 3. $P(x)= -x-2$ > 4. $Q(x)= x+3$ > > > I managed to get that options 1 and 2 are correct by calculating $f(1),f(2),f(3),f(4)$, substituting them in the equation and then trying out all the options. Is there any way to do the problem without using the options?
2015/08/23
[ "https://math.stackexchange.com/questions/1407121", "https://math.stackexchange.com", "https://math.stackexchange.com/users/201608/" ]
$$ \big[1!+2!+\dots+n!+(n+1)!+(n+2)!\big]=\\ \big[1!+2!+\dots+n!+(n+1)!\big]P(n)+\\ \big[1!+2!+\dots+n!\big]Q(n) $$ So an easy way to get this is to have $P(n)+Q(n)=1$ so that all terms $1!+2!+\dots+n!$ come out right. Then we want $P(n)$ so that $$ (n+1)!P(n) = (n+1)!+(n+2)! $$ Divide by $(n+1)!$ to get $$ P(n) = 1+(n+2)=n+3. $$ and, as noted $P(n)+Q(n)=1$, so $Q(n) = -n-2$.
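As a quick check against the values from the question ($f(1)=1$, $f(2)=3$, $f(3)=9$, $f(4)=33$): for $n=1$, $P(1)f(2)+Q(1)f(1)=4\cdot 3-3\cdot 1=9=f(3)$, and for $n=2$, $P(2)f(3)+Q(2)f(2)=5\cdot 9-4\cdot 3=33=f(4)$.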
If $P(x)$ has degree greater than one, or $Q(x)$ degree greater than two, you have a delicate cancellation which cannot be achieved. You have to have $Q$ one degree higher than $P$. Now look at lots of points for large $n$ and argue the difference cannot have that many roots. I am handwaving, but I believe it. Given that, we have five constants to determine, so we can just take five values and solve the equations $$9=3P(1)+Q(1)\\33=9P(2)+3Q(2)$$ and so on. Now let $P(n)=an+b$, $Q(n)=cn^2+dn+e$ and you get five equations in five unknowns. In this case, the high-order term comes only from $P(n)$, which forces $P(n)=n+b$ and $Q(n)=dn+3$, so you only need three equations. I also don't know how to prove that.
5,836,329
How many bytes is `unsigned long long`? Is it the same as `unsigned long long int` ?
2011/04/29
[ "https://Stackoverflow.com/questions/5836329", "https://Stackoverflow.com", "https://Stackoverflow.com/users/731638/" ]
Executive summary: it's 64 bits, or larger. `unsigned long long` is the same as `unsigned long long int`. Its size is platform-dependent, but guaranteed by the C standard (ISO C99) to be at least 64 bits. There was no `long long` in C89, but apparently even MSVC supports it, so it's quite portable. In the current C++ standard (issued in 2003), there is no `long long`, though many compilers support it as an extension. The upcoming C++0x standard will support it and its size will be the same as in C, so at least 64 bits. You can get the exact size, in bytes (8 bits on typical platforms) with the expression `sizeof(unsigned long long)`. If you want exactly 64 bits, use `uint64_t`, which is defined in the header `<stdint.h>` along with a bunch of related types (available in C99, C++11 and some current C++ compilers).
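If it helps, here is a minimal sketch of both suggestions above, assuming a compiler that provides `<cstdint>` (the C++ counterpart of C99's `<stdint.h>`):

```
#include <cstdint>   // std::uint64_t
#include <iostream>

int main()
{
    // sizeof reports bytes, not bits; both spellings name the same type.
    std::cout << sizeof(unsigned long long) << "\n";      // e.g. 8
    std::cout << sizeof(unsigned long long int) << "\n";  // same value

    // Exactly 64 bits, regardless of how wide unsigned long long is here.
    std::uint64_t mask = 0xFFFFFFFFFFFFFFFFu;
    std::cout << mask << "\n";
    return 0;
}
```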
The beauty of C++, like C, is that the sizes of these things are implementation-defined, so there's no correct answer without specifying your compiler. Are those two the same? Yes. "long long" is a synonym for "long long int", for any compiler that will accept both.
5,836,329
How many bytes is `unsigned long long`? Is it the same as `unsigned long long int` ?
2011/04/29
[ "https://Stackoverflow.com/questions/5836329", "https://Stackoverflow.com", "https://Stackoverflow.com/users/731638/" ]
The beauty of C++, like C, is that the sizes of these things are implementation-defined, so there's no correct answer without specifying your compiler. Are those two the same? Yes. "long long" is a synonym for "long long int", for any compiler that will accept both.
Use the `sizeof` operator; it gives you the size of a type expressed in bytes. One byte is eight bits on typical platforms. See the following program:

```
#include <iostream>

int main(int, char**)
{
    std::cout << "unsigned long long " << sizeof(unsigned long long) << "\n";
    std::cout << "unsigned long long int " << sizeof(unsigned long long int) << "\n";
    return 0;
}
```
5,836,329
How many bytes is `unsigned long long`? Is it the same as `unsigned long long int` ?
2011/04/29
[ "https://Stackoverflow.com/questions/5836329", "https://Stackoverflow.com", "https://Stackoverflow.com/users/731638/" ]
Executive summary: it's 64 bits, or larger. `unsigned long long` is the same as `unsigned long long int`. Its size is platform-dependent, but guaranteed by the C standard (ISO C99) to be at least 64 bits. There was no `long long` in C89, but apparently even MSVC supports it, so it's quite portable. In the current C++ standard (issued in 2003), there is no `long long`, though many compilers support it as an extension. The upcoming C++0x standard will support it and its size will be the same as in C, so at least 64 bits. You can get the exact size, in bytes (8 bits on typical platforms) with the expression `sizeof(unsigned long long)`. If you want exactly 64 bits, use `uint64_t`, which is defined in the header `<stdint.h>` along with a bunch of related types (available in C99, C++11 and some current C++ compilers).
It must be at least 64 bits. Other than that it's implementation defined. Strictly speaking, `unsigned long long` isn't standard in C++ until the C++0x standard. `unsigned long long` is a 'simple-type-specifier' for the type `unsigned long long int` (so they're synonyms). The `long long` set of types is also in C99 and was a common extension to C++ compilers even before being standardized.
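For what it's worth, both claims can be sanity-checked at compile time; a sketch assuming C++11 for `static_assert` and `<type_traits>`:

```
#include <climits>      // CHAR_BIT
#include <type_traits>  // std::is_same

// The standard guarantees at least 64 bits here.
static_assert(sizeof(unsigned long long) * CHAR_BIT >= 64,
              "unsigned long long narrower than 64 bits?");

// The two spellings name the same type.
static_assert(std::is_same<unsigned long long,
                           unsigned long long int>::value,
              "these are synonyms");

int main() { return 0; }
```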
5,836,329
How many bytes is `unsigned long long`? Is it the same as `unsigned long long int` ?
2011/04/29
[ "https://Stackoverflow.com/questions/5836329", "https://Stackoverflow.com", "https://Stackoverflow.com/users/731638/" ]
Executive summary: it's 64 bits, or larger. `unsigned long long` is the same as `unsigned long long int`. Its size is platform-dependent, but guaranteed by the C standard (ISO C99) to be at least 64 bits. There was no `long long` in C89, but apparently even MSVC supports it, so it's quite portable. In the current C++ standard (issued in 2003), there is no `long long`, though many compilers support it as an extension. The upcoming C++0x standard will support it and its size will be the same as in C, so at least 64 bits. You can get the exact size, in bytes (8 bits on typical platforms) with the expression `sizeof(unsigned long long)`. If you want exactly 64 bits, use `uint64_t`, which is defined in the header `<stdint.h>` along with a bunch of related types (available in C99, C++11 and some current C++ compilers).
Use the `sizeof` operator; it gives you the size of a type expressed in bytes. One byte is eight bits on typical platforms. See the following program:

```
#include <iostream>

int main(int, char**)
{
    std::cout << "unsigned long long " << sizeof(unsigned long long) << "\n";
    std::cout << "unsigned long long int " << sizeof(unsigned long long int) << "\n";
    return 0;
}
```
5,836,329
How many bytes is `unsigned long long`? Is it the same as `unsigned long long int` ?
2011/04/29
[ "https://Stackoverflow.com/questions/5836329", "https://Stackoverflow.com", "https://Stackoverflow.com/users/731638/" ]
It must be at least 64 bits. Other than that it's implementation defined. Strictly speaking, `unsigned long long` isn't standard in C++ until the C++0x standard. `unsigned long long` is a 'simple-type-specifier' for the type `unsigned long long int` (so they're synonyms). The `long long` set of types is also in C99 and was a common extension to C++ compilers even before being standardized.
Use the `sizeof` operator; it gives you the size of a type expressed in bytes. One byte is eight bits on typical platforms. See the following program:

```
#include <iostream>

int main(int, char**)
{
    std::cout << "unsigned long long " << sizeof(unsigned long long) << "\n";
    std::cout << "unsigned long long int " << sizeof(unsigned long long int) << "\n";
    return 0;
}
```
193,787
I'm building a compiler for fun, and this is my first pass at the lexer for it. It should handle all tokens for C99, with the exception of the preprocessor tokens. It's very minimal, only grabbing the token and lexeme. Specific areas where I'd like feedback: * I rarely get to use anything beyond C++98. Are there any more recent language features I could be taking advantage of? Anything that could be more idiomatic? * My error-handling seems a little ad-hoc. Is there a smarter approach? * I'm not sure about the best way to choose tokens. The C99 standard just breaks things down into punctuators, keywords, identifiers, and constants. But the compilers I've looked at tend to be more granular. Is there a better approach to choosing tokens? Why? * I think there has to be a more maintainable way of structuring this. Things I don't care about: * I know there are more efficient ways to handle input. My aim here is to just do the simplest thing and worry about fancy stuff/performance tuning later. * You probably don't like my brace style. Sorry. :/ Lexer.h ``` #ifndef __JMCOMP_LEXER_H__ #define __JMCOMP_LEXER_H__ #include <iostream> #include <string> enum class Token { PUNCTUATOR, KEYWORD, IDENTIFIER, INT_CONST, FLOAT_CONST, CHAR_CONST, ERROR }; struct LexResult { Token token = Token::ERROR; std::string lexeme = ""; }; class Lexer { public: explicit Lexer(std::istream& stream); LexResult next(); private: std::istream& mStream; char nextChar(); char skipWhitespace(); char peekChar(); bool peekWord(const std::string& word); std::string nextIdentifier(char character, bool& isValid); std::string lexUniversalCharacter(char character, bool& isValid); std::string lexHexQuad(bool& isValid); LexResult lexIdentifier(char character); LexResult lexConstant(char character); std::string readIntConstant(bool predicate(char)); std::string readIntSuffix(); std::string readLongSuffix(); LexResult checkInvalidToken(const LexResult& token); LexResult readFloatConstant(const std::string& prefix, bool predicate(char)); LexResult lexCharConst(const std::string& initial, char delimiter); std::string lexEscapeSequence(bool& isValid); void skipLineComment(); void skipBlockComment(); }; #endif ``` Lexer.cpp ``` #include <set> #include "Lexer.h" namespace { bool startsIdentifier(char character) { return std::isalpha(character) || character == '_'); } bool isInIdentifier(char character) { return startsIdentifier(character) || std::isdigit(character); } bool isHexChar(char character) { return std::isdigit(character) || (character >= 'A' && character <= 'F') || (character >= 'a' && character <= 'f'); } bool isOctal(char character) { return std::isdigit(character) && '8' - character > 0; } bool isDigit(char character) { return std::isdigit(character); } bool isWhitespace(char character) { return character == ' ' || character == '\n' || character == '\t'; } bool isExponentChar(char character) { return (character == 'e' || character == 'E' || character == 'p' || character == 'P'); } static std::set<std::string> keywords = { "auto", "break", "case", "char", "const", "continue", "default", "do", "double", "else", "enum", "extern", "float", "for", "goto", "if", "inline", "int", "long", "register", "restrict", "return", "short", "signed", "sizeof", "static", "struct", "switch", "typedef", "union", "unsigned", "void", "volatile", "while", "_Bool", "_Complex", "_Imaginary" }; } Lexer::Lexer(std::istream& stream): mStream(stream) { } char Lexer::nextChar() { char c; mStream.read(&c, 1); return c; } char Lexer::skipWhitespace() { char c; 
do c = nextChar(); while (isWhitespace(c)); return c; } char Lexer::peekChar() { return mStream.peek(); } bool Lexer::peekWord(const std::string& word) { std::string lexedWord; // We've already read the first character, so set that in advance lexedWord.resize(word.size()); lexedWord[0] = word[0]; const size_t bytesToRead = lexedWord.size() - 1; mStream.read(&lexedWord[1], bytesToRead); if (lexedWord == word) { return true; } else { mStream.seekg((-bytesToRead), std::ios::cur); return false; } } std::string Lexer::lexHexQuad(bool& isValid) { std::string hexQuad; for (size_t i = 0; i < 4; ++i) { char character = nextChar(); // First check if the character is okay if (!isHexChar(character)) isValid = false; // Get a better error message (I hope?) if we keep reading // until a "logical" break, so no matter what keep going unless // there's whitespace if (!isWhitespace(character) && mStream) hexQuad += character; // If we hit whitespace, there's no way it's valid // We're at a logical boundary, so just return here else return hexQuad; } return hexQuad; } std::string Lexer::lexUniversalCharacter(char character, bool& isValid) { std::string universalCharacter(1, character); character = nextChar(); universalCharacter += character; if (character == 'u') universalCharacter += lexHexQuad(isValid); else if (character == 'U') { universalCharacter += lexHexQuad(isValid); universalCharacter += lexHexQuad(isValid); } else isValid = false; return universalCharacter; } std::string Lexer::nextIdentifier(char character, bool& isValid) { std::string identifier; while (isInIdentifier(character)) { if (character == '\\') { identifier += lexUniversalCharacter(character, isValid); if (!isValid) return identifier; } else identifier += character; character = nextChar(); } mStream.seekg(-1, std::ios::cur); return identifier; } LexResult Lexer::checkInvalidToken(const LexResult& token) { std::string lexeme = token.lexeme; if (!isWhitespace(peekChar()) && peekChar() != EOF) { while (!isWhitespace(peekChar()) && peekChar() != EOF) lexeme += nextChar(); return LexResult{Token::ERROR, lexeme}; } return token; } LexResult Lexer::lexIdentifier(char character) { bool isValid = true; const std::string identifier = nextIdentifier(character, isValid); if (!isValid) return LexResult{Token::ERROR, identifier}; if (keywords.count(identifier)) return LexResult{Token::KEYWORD, identifier}; return LexResult{Token::IDENTIFIER, identifier}; } std::string Lexer::readIntConstant(bool predicate(char)) { std::string lexeme; while (predicate(peekChar())) lexeme += nextChar(); return lexeme; } std::string Lexer::readLongSuffix() { std::string suffix; suffix += nextChar(); if (peekChar() == 'L' || peekChar() == 'l') suffix += nextChar(); return suffix; } std::string Lexer::readIntSuffix() { std::string suffix; if (peekChar() == 'L' || peekChar() == 'l') { suffix += readLongSuffix(); if (peekChar() == 'u' || peekChar() == 'U') suffix += nextChar(); } else if (peekChar() == 'u' || peekChar() == 'U') { suffix += nextChar(); if (peekChar() == 'l' || peekChar() == 'L') suffix += readLongSuffix(); } return suffix; } LexResult Lexer::readFloatConstant(const std::string& prefix, bool predicate(char)) { std::string result = prefix; if (peekChar() == '.') { result += nextChar(); result += readIntConstant(predicate); } if (isExponentChar(peekChar())) { result += nextChar(); if (peekChar() == '+' || peekChar() == '-') result += nextChar(); const std::string exponentSequence = readIntConstant(predicate); if (exponentSequence.empty()) return 
checkInvalidToken(LexResult{Token::ERROR, result}); result += exponentSequence; } const char suffix = peekChar(); if (suffix == 'f' || suffix == 'F' || suffix == 'l' || suffix == 'L') result += nextChar(); return checkInvalidToken(LexResult{Token::FLOAT_CONST, result}); } LexResult Lexer::lexConstant(char character) { std::string lexeme(1, character); Token token = Token::INT_CONST; auto predicate = isDigit; if (peekChar() == 'x' || peekChar() == 'X') { predicate = isHexChar; lexeme += nextChar(); } else if (character == '0') predicate = isOctal; lexeme += readIntConstant(predicate); if (peekChar() == '.' || isExponentChar(peekChar())) return readFloatConstant(lexeme, predicate); lexeme += readIntSuffix(); return checkInvalidToken(LexResult{token, lexeme}); } std::string Lexer::lexEscapeSequence(bool& isValid) { std::string lexeme(1, nextChar()); const char c = peekChar(); if (c == '\'' || c == '"' || c == '?' || c == '\\' || c == 'a' || c == 'b' || c == 'f' || c == 'n' || c == 'r' || c == 't' || c == 'v' || c == 'x' || isHexChar(c)) return lexeme + nextChar(); isValid = false; return lexeme; } LexResult Lexer::lexCharConst(const std::string& initial, char delimiter) { bool isValid = true; std::string lexeme = initial + delimiter; while (peekChar() != delimiter && peekChar() != EOF) { if (peekChar() == '\n') isValid = false; if (peekChar() == '\\') { lexeme += lexEscapeSequence(isValid); continue; } lexeme += nextChar(); } lexeme += nextChar(); return LexResult{isValid ? Token::CHAR_CONST : Token::ERROR, lexeme}; } void Lexer::skipLineComment() { nextChar(); nextChar(); while (peekChar() != '\n') nextChar(); } void Lexer::skipBlockComment() { nextChar(); // Skip the first * char c; do c = nextChar(); while (!(c == '*' && peekChar() == '/')); nextChar(); // Pass the last / } LexResult Lexer::next() { char character = skipWhitespace(); while (character == '/' && peekChar() == '/') { skipLineComment(); character = skipWhitespace(); } while (character == '/' && peekChar() == '*') { skipBlockComment(); character = skipWhitespace(); } if (isdigit(character)) return lexConstant(character); if (character == '\'' || character == '"') return lexCharConst("", character); if (character == 'L') if (peekChar() == '\'' || peekChar() == '"') { return lexCharConst("L", nextChar()); } if (startsIdentifier(character)) return lexIdentifier(character); switch (character) { case '+': if (peekWord("++")) return LexResult{Token::PUNCTUATOR, "++"}; if (peekWord("+=")) return LexResult{Token::PUNCTUATOR, "+="}; return LexResult{Token::PUNCTUATOR, "+"}; case '-': if (peekWord("->")) return LexResult{Token::PUNCTUATOR, "->"}; if (peekWord("--")) return LexResult{Token::PUNCTUATOR, "--"}; if (peekWord("-=")) return LexResult{Token::PUNCTUATOR, "-="}; return LexResult{Token::PUNCTUATOR, "-"}; case '*': if (peekWord("*=")) return LexResult{Token::PUNCTUATOR, "*="}; return LexResult{Token::PUNCTUATOR, "*"}; case '/': if (peekWord("/=")) return LexResult{Token::PUNCTUATOR, "/="}; return LexResult{Token::PUNCTUATOR, "/"}; case '=': if (peekWord("==")) return LexResult{Token::PUNCTUATOR, "=="}; return LexResult{Token::PUNCTUATOR, "="}; case '[': return LexResult{Token::PUNCTUATOR, "["}; case ']': return LexResult{Token::PUNCTUATOR, "]"}; case '(': return LexResult{Token::PUNCTUATOR, "("}; case ')': return LexResult{Token::PUNCTUATOR, ")"}; case '.': if (peekWord("...")) return LexResult{Token::PUNCTUATOR, "..."}; return LexResult{Token::PUNCTUATOR, "."}; case '&': if (peekWord("&&")) return LexResult{Token::PUNCTUATOR, 
"&&"}; if (peekWord("&=")) return LexResult{Token::PUNCTUATOR, "&="}; return LexResult{Token::PUNCTUATOR, "&"}; case '~': return LexResult{Token::PUNCTUATOR, "~"}; case '!': if (peekWord("!=")) return LexResult{Token::PUNCTUATOR, "!="}; return LexResult{Token::PUNCTUATOR, "!"}; case '%': if (peekWord("%:%:")) return LexResult{Token::PUNCTUATOR, "%:%:"}; if (peekWord("%:")) return LexResult{Token::PUNCTUATOR, "%:"}; if (peekWord("%=")) return LexResult{Token::PUNCTUATOR, "%="}; if (peekWord("%>")) return LexResult{Token::PUNCTUATOR, "%>"}; return LexResult{Token::PUNCTUATOR, "%"}; case '<': if (peekWord("<<=")) return LexResult{Token::PUNCTUATOR, "<<="}; if (peekWord("<:")) return LexResult{Token::PUNCTUATOR, "<:"}; if (peekWord("<%")) return LexResult{Token::PUNCTUATOR, "<%"}; if (peekWord("<<")) return LexResult{Token::PUNCTUATOR, "<<"}; if (peekWord("<=")) return LexResult{Token::PUNCTUATOR, "<="}; return LexResult{Token::PUNCTUATOR, "<"}; case '>': if (peekWord(">>=")) return LexResult{Token::PUNCTUATOR, ">>="}; if (peekWord(">>")) return LexResult{Token::PUNCTUATOR, ">>"}; if (peekWord(">=")) return LexResult{Token::PUNCTUATOR, ">="}; return LexResult{Token::PUNCTUATOR, ">"}; case '^': if (peekWord("^=")) return LexResult{Token::PUNCTUATOR, "^="}; return LexResult{Token::PUNCTUATOR, "^"}; case '|': if (peekWord("||")) return LexResult{Token::PUNCTUATOR, "||"}; if (peekWord("|=")) return LexResult{Token::PUNCTUATOR, "|="}; return LexResult{Token::PUNCTUATOR, "|"}; case '?': return LexResult{Token::PUNCTUATOR, "?"}; case ':': if (peekWord(":>")) return LexResult{Token::PUNCTUATOR, ":>"}; return LexResult{Token::PUNCTUATOR, ":"}; case ';': return LexResult{Token::PUNCTUATOR, ";"}; case ',': return LexResult{Token::PUNCTUATOR, ","}; case '#': if (peekWord("##")) return LexResult{Token::PUNCTUATOR, "##"}; return LexResult{Token::PUNCTUATOR, "#"}; default: return LexResult{Token::ERROR, std::string(character, 1)}; } } ```
2018/05/06
[ "https://codereview.stackexchange.com/questions/193787", "https://codereview.stackexchange.com", "https://codereview.stackexchange.com/users/41400/" ]
```
#ifndef __JMCOMP_LEXER_H__
```

That is a symbol [reserved for use by the implementation](https://timsong-cpp.github.io/cppwp/n4659/lex.name#3). Writing yours in a way that mimics what you see in the compiler-supplied headers is exactly wrong! For this and other reasons I suggest leaving that out completely and just using `#pragma once`. If you *ever* find a platform where this pragma is not supported, a script can add the one-time-include symbols automatically and *correctly*.

---

> I rarely get to use anything beyond C++98. Are there any more recent language features I could be taking advantage of? Anything that could be more idiomatic?

Aye, scribing ye olde language doest appeareth que’r. C++11 *is like a whole new language*.

> I think there has to be a more maintainable way of structuring this.

Have you *studied* parsing? LR(k), Chomsky Normal Form, pumping lemmings, … any of that sound familiar? Lexing only, not *parsing* — OK, so you just want to classify tokens. BTW, have you looked at Boost.Spirit.lex? That would make quick work of it, if you just wanted a lexer. If you are doing this as an experience in itself, carry on!

---

I would (do) start with a comment block that gives the grammar in EBNF pseudo-code. That gives the relationship of the hierarchy of symbols and which ones are terminal or non-terminal. The names in this doc will match the names in the code. I’m doing something not entirely dissimilar, hearing that `string_view` is particularly good for writing parsers, as you can whack pieces off the front end efficiently and pass them around as lightweight un-owning containers. So I have functions like this:

```
std::optional<std::string> read_identifier (std::string_view&);
std::optional<Value_type> read_number (std::string_view&);
```

Each function will return the thing it read. In a pure lexer, you can either make that a `string_view` holding just the chars comprising that token, or a token structure containing information such as its kind and “value” which will be used by the parsing stage. Even if you don’t want to use it, read the tutorial for Boost.Spirit’s lex class to get ideas on that. You might want to track the original *position* of the token in the source, too. In my functions shown above, if the thing was read, the `string_view` is updated to remove it from the beginning of the line. If not successful, the `optional` returns no-value and the view is *not changed*. This lets me write non-terminals by calling one and, if that didn’t work, calling the next possibility. For more complex cases, the `string_view` can be saved in a temporary and restored, cheaply — that allows for backing up to try something else.

---

> The C99 standard just breaks things down into punctuators, keywords, identifiers, and constants. But the compilers I've looked at tend to be more granular. Is there a better approach to choosing tokens? Why?

You can get a hint from my comments above. Consider: you can chop up the source into individual “words” only, but when you feed this to the next stage the only thing it saves you is not having to skip white space and mess with comments getting in the middle of grammatical productions. If the tokens have a rough type, like number, identifier, punctuation, it saves the parser *some* work. The parser grammar uses these, so starting with that dovetails nicely. If the tokens have a type and a value, it can save the next stage from having to figure that out.
But *someone* has to figure it out, and if you do it in the lexer you’re guessing at what the parser will find useful. If you need to figure out something *to do the lexical analysis*, then preserve that information and pass it along. If you write the lexer and parser grammars together, the lexer is just the bottom end of the whole grammar and it is clear what you need, because you are choosing which terminals to handle (or partially handle) in the lexing step.

Why two phases? Well, the designer of Perl 6 patterns says “why indeed?”. The tutorial and overview of Boost.Spirit goes into it, and lets you use a separate lexer or not. One good reason is to get rid of comments. Eating whitespace after each nonterminal in a monolithic parser isn’t that bad, but consider comments and backslash continuation lines and other stuff that lives in the *text*; that makes it hard to deal with when mixed in with the parser grammar. Other reasons, historically, are memory usage and machine capacity; that matters less today. And there is the adage that “If you put a team of three people onto a compiler-writing project, you’ll get a three-pass compiler.”

---

```
enum class Token { PUNCTUATOR, KEYWORD, IDENTIFIER, INT_CONST, FLOAT_CONST, CHAR_CONST, ERROR };
```

See [⧺ES.9](https://github.com/isocpp/CppCoreGuidelines/blob/master/CppCoreGuidelines.md#es9-avoid-all_caps-names): *Avoid `ALL_CAPS` names*. I see you *are* using the new language feature `enum class`, so your code might not be as out-of-date as you feared. You might consider reserving a value of 0 for ERROR or empty or something like that.

---

```
std::string lexeme = "";
```

Don’t, in general, assign `""` to clear out a `std::string`. In this case, there is no need to say anything at all because `string` has a default constructor. So just leave the default initializer off of the member.

```
Token token = Token::ERROR;
```

Since that is what you want for the default, I would make ERROR the first thing in the enumeration list so it has the value zero. It doesn’t matter ideally, but it’s nice and might be helpful later.

---

```
return std::isalpha(character) || character == '_');
```

See [cppreference](https://en.cppreference.com/w/cpp/string/byte/isalpha) for the kosher way to call these ancient functions:

```
⋯ isalpha(static_cast<unsigned char>(character)) ⋯
```

Since you already wrapped these calls in your own helpers, it will only appear within those helpers, once.

```
bool isWhitespace(char character)
```

Not using `std::isspace`? You are not covering as many characters as it does.

---

```
static std::set<std::string> keywords = { "auto", "break", "case", "char", "const", "continue", "default", ⋯
```

Two things: first, the `set` is rather slow for lookup! A sorted `vector` would be faster! Boost.Container has a `flat_set`, etc. Second, you are copying the statically-allocated lexical string literals into string objects. Do you really need `string` here? I would ([did](https://codereview.stackexchange.com/questions/192030/fizzbuzz-17-style), actually) just use a plain (pre-sorted) array of `const char*`, and make the whole thing `constexpr`. If [`std::string` literals](https://en.cppreference.com/w/cpp/string/basic_string/operator%22%22s) were `constexpr` you could at least save the redundant copy and run-time copying, but pondering why it isn’t `constexpr` will show you why I’m mentioning this — it needs to allocate memory, and do run-time work to set up the set.
Keeping the entire table in one contiguous lump of memory will not only save memory for all those pointers and nodes, but will be *much* faster.

---

```
Lexer::Lexer(std::istream& stream): mStream(stream) { }
```

Use *uniform initialization*. So now you write curly braces instead of parens:

```
: mStream{stream}
```

and this short one-liner could go inline in the header.

---

```
char Lexer::nextChar() { char c; mStream.read(&c, 1); return c; }
```

What if no character was read? The stream could be `bad`, you know. Maybe you hit the end of the file, or the network glitched.

---

```
char c; do c = nextChar(); while (isWhitespace(c)); return c;
```

You *can* write this in a way that avoids defining `c` outside the loop. More importantly, deal with errors from `nextChar`. It would be cleaner if it *only* skipped whitespace and did not also read (and return) the next char beyond that! (Hint: see the next function in your file.)

```
// We've already read the first character, so set that in advance
lexedWord.resize(word.size());
lexedWord[0] = word[0];
```

Exactly! Don’t read one ahead. The whitespace skipper should do its one job only. It is especially confusing since `peekWord` does not do the skip-whitespace call. It seems that it is called when you already know that the first character of the word does match the input? This is going to lead to maintenance problems, believe me. (Later: I see a lot of the functions take the read-ahead character as another parameter and have special code to deal with that first. Get rid of all that. The function should be called with the input set to the first position of the thing it wants to read. You already know how to peek and to rewind, so there is no reason to have this one-off getting in the way.)

Did you know that a stream can give you an input iterator? So rather than allocating a string, reading into it, and then comparing (oh, and you didn’t check to see if it read as many bytes as you asked for), you can use something like `std::equal` directly between `word` and the input stream. If/when you *do* need to access the contents of a `std::string` as a buffer for writing into, use `.data()`.

---

```
std::string Lexer::lexUniversalCharacter(char character, bool& isValid)
```

Two return values, no problem. Don’t use an “out” parameter for the second one! ([⧺F.21](https://github.com/isocpp/CppCoreGuidelines/blob/master/CppCoreGuidelines.md#f21-to-return-multiple-out-values-prefer-returning-a-tuple-or-struct)). However, I think you don’t have two values here, but an `optional` result.

---

```
return LexResult{Token::IDENTIFIER, identifier};
```

You don’t need to name the type, as it is automatically picked up from the function’s return type:

```
return {Token::IDENTIFIER, identifier};
```

This can be much nicer.

---

```
if (peekChar() == 'L' || peekChar() == 'l') {
    suffix += readLongSuffix();
    if (peekChar() == 'u' || peekChar() == 'U')
        suffix += nextChar();
}
else if (peekChar() == 'u' || peekChar() == 'U') {
    suffix += nextChar();
    if (peekChar() == 'l' || peekChar() == 'L')
```

Avoid calling `peekChar` twice on the same character. You can use a new feature in `if` statements here:

```
if (auto ch = peekChar(); ch == 'L' || ch == 'l') {
```

or (since you are checking a lot of cap/lower pairs) define a helper function:

```
if (mAtCh(peekChar(), 'L')) {
```

but later you need something more general:

```
if (match_any_of(peekChar(), "+-")) {
if (match_any_of(peekChar(), "fFlL")) {
```

(and the long condition in `lexEscapeSequence` is another candidate), and to write that, use `std::any_of` in a one-line wrapper.
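For concreteness, one way that wrapper could look (a sketch; `match_any_of` is the name suggested above, not something from the original code):

```
#include <algorithm>
#include <string>

// One-line wrapper over std::any_of: true when ch is in the candidate set.
bool match_any_of(char ch, const std::string& candidates)
{
    return std::any_of(candidates.begin(), candidates.end(),
                       [ch](char c) { return c == ch; });
}

// Usage, replacing chains like (c == 'l' || c == 'L'):
//     if (match_any_of(peekChar(), "lL")) { ... }
```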
---

```
if (peekWord("<<=")) return LexResult{Token::PUNCTUATOR, "<<="};
if (peekWord("<:")) return LexResult{Token::PUNCTUATOR, "<:"};
if (peekWord("<%")) return LexResult{Token::PUNCTUATOR, "<%"};
if (peekWord("<<")) return LexResult{Token::PUNCTUATOR, "<<"};
if (peekWord("<=")) return LexResult{Token::PUNCTUATOR, "<="};
```

---

There are a lot of these. Notice how the thing you peek always matches the parameter to the return value? What is the `if` even doing? You need a function that *returns* the peeked word. In fact, all of this reads one char, switches on it, then peeks on the rest of the word. Why does it need to do it in two parts like that? You are just classifying the “word” as a PUNCTUATOR. Make a list (static array, as discussed with keywords) and treat it in the same way. Just as there are keywords spelled with letters, these are a list of legal words spelled with non-letter/non-digit characters. The trick is you have to take the longest sequence that is a valid token. But **use a table**, *not* dozens of duplicated snippets of code.

```
peek a character and append to the token
is that on the list?
    no: return with what you had.
    yes: advance the input and append to the token.
```

Good!
=====

I see you used an anonymous namespace for the helpers in the CPP file. Using `std::istream` for reading the text: most people just use a string.
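To make the table idea concrete, here is a hedged sketch of a longest-match loop; the table is only an excerpt, `lexPunctuator` is a hypothetical helper, and the `seekg` back-off mirrors the rewinding the original code already does:

```
#include <algorithm>
#include <istream>
#include <string>
#include <vector>

// Excerpt of the C99 punctuator list; the real table holds every entry.
static const std::vector<std::string> punctuators = {
    "<<=", ">>=", "...", "->", "++", "--", "<<", ">>", "<=", ">=",
    "==", "!=", "&&", "||", "+=", "-=",
    "+", "-", "<", ">", "=", "!", "&", "|", ".",
};

bool isPunctuator(const std::string& s)
{
    return std::find(punctuators.begin(), punctuators.end(), s)
           != punctuators.end();
}

// True if some table entry starts with the given prefix.
bool isPunctuatorPrefix(const std::string& prefix)
{
    return std::any_of(punctuators.begin(), punctuators.end(),
                       [&](const std::string& p)
                       { return p.compare(0, prefix.size(), prefix) == 0; });
}

// Greedy longest match with back-off: consume while the text can still
// grow into a punctuator, remember the last exact match, and rewind past
// any overrun (e.g. two dots that never became "...").
std::string lexPunctuator(std::istream& in)
{
    std::string token, longest;
    while (in.peek() != std::char_traits<char>::eof()) {
        token += static_cast<char>(in.get());
        if (!isPunctuatorPrefix(token)) break;
        if (isPunctuator(token)) longest = token;
    }
    in.seekg(static_cast<std::streamoff>(longest.size()) -
             static_cast<std::streamoff>(token.size()),
             std::ios::cur);
    return longest;  // empty string: not a punctuator at all
}
```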
### Include what you use You use `isdigit`, `isalpha`, etc. -- but you haven't included `ctype.h` or `cctype` to assure that they're declared. ### Broken Code ``` bool startsIdentifier(char character) { return std::isalpha(character) || character == '_'); } ``` I'm not sure if you accidentally messed this up while cutting and pasting to the browser, but this won't even compile as it is right now. You have mis-matched parentheses. ### hex characters Checking for hexadecimal characters. ``` bool isHexChar(char character) { return std::isdigit(character) || (character >= 'A' && character <= 'F') || (character >= 'a' && character <= 'f'); } ``` The standard library provides `std::isxdigit` to check for hexadecimal characters, so you could call that instead of inventing your own like this. ### Conversion to unsigned char As @JDługosz pointed out, you want to convert the argument to unsigned char before calling. Since you're writing a wrapper anyway, you can pretty easily avoid casting though--just change the argument to your function to unsigned char: ``` bool isHexChar(unsigned char ch) { return isxdigit(ch); } ``` ### Ignore: pure pedanticism On a purely pedantic note, your code to check whether a character is between 'A' and 'F' inclusive isn't actually required to work. Digits are required to be contiguous, but letters aren't. That is purely pedantic though--the only character encoding I know of that has non-contiguous characters is EBCDIC, which is contiguous from A to F (its first non-contiguous section is after `i`)--and even if it was a problem with EBCDIC, you probably wouldn't care anyway. ### Language Compliance As it stands right now, your lexer doesn't implement a number of things the way the C and C++ standards say it should. For example, adjacent string literals: ``` "char literal 1" " char literal 2" ``` ...should be spliced into a single string literal. There are phases of translation, however, so you have to do things in the right order. Splicing string literals happens at phase 6, so it's only done *after* conversion of universal character names (to give only one example). Removal of trigraphs from the language makes the order *somewhat* less important than it used to be though. For example, `"??!"` contains a trigraph that needs to be converted to a single character--but `"?" "?!"` does not contain a trigraph, and must remain as three separate characters. ### Clumsy Usage The interface to the code looks pretty clumsy and uninformative, at least to me. For example, if there's an error, you'd typically want to report its location to the user--but the `LexResult` doesn't contain the error location, so that's essentially impossible to do. In fact, there doesn't even seem to be a way to distinguish between an error, and simply the end of the input. Either way, I seem to just get a `LexResult` with its `Token` set to `ERROR`. At least to me, it *seems* like when you tried to use a lexer, it would be fairly important to know whether you'd encountered an error, or successfully lexed the entire input.
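To make the last point concrete, here is a minimal, hypothetical sketch of a result type that records the source location and separates end-of-input from genuine errors (none of these names are in the original code):

```
#include <string>

enum class Token {
    Error,       // reserve the zero value for errors
    EndOfInput,  // distinct from Error, so callers can tell them apart
    Punctuator, Keyword, Identifier,
    IntConst, FloatConst, CharConst
};

struct SourcePos {
    int line   = 1;  // 1-based, matching how editors display positions
    int column = 1;
};

struct LexResult {
    Token       token = Token::Error;
    std::string lexeme;
    SourcePos   pos;  // where the lexeme started, for diagnostics
};
```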
193,787
I'm building a compiler for fun, and this is my first pass at the lexer for it. It should handle all tokens for C99, with the exception of the preprocessor tokens. It's very minimal, only grabbing the token and lexeme. Specific areas where I'd like feedback: * I rarely get to use anything beyond C++98. Are there any more recent language features I could be taking advantage of? Anything that could be more idiomatic? * My error-handling seems a little ad-hoc. Is there a smarter approach? * I'm not sure about the best way to choose tokens. The C99 standard just breaks things down into punctuators, keywords, identifiers, and constants. But the compilers I've looked at tend to be more granular. Is there a better approach to choosing tokens? Why? * I think there has to be a more maintainable way of structuring this. Things I don't care about: * I know there are more efficient ways to handle input. My aim here is to just do the simplest thing and worry about fancy stuff/performance tuning later. * You probably don't like my brace style. Sorry. :/ Lexer.h ``` #ifndef __JMCOMP_LEXER_H__ #define __JMCOMP_LEXER_H__ #include <iostream> #include <string> enum class Token { PUNCTUATOR, KEYWORD, IDENTIFIER, INT_CONST, FLOAT_CONST, CHAR_CONST, ERROR }; struct LexResult { Token token = Token::ERROR; std::string lexeme = ""; }; class Lexer { public: explicit Lexer(std::istream& stream); LexResult next(); private: std::istream& mStream; char nextChar(); char skipWhitespace(); char peekChar(); bool peekWord(const std::string& word); std::string nextIdentifier(char character, bool& isValid); std::string lexUniversalCharacter(char character, bool& isValid); std::string lexHexQuad(bool& isValid); LexResult lexIdentifier(char character); LexResult lexConstant(char character); std::string readIntConstant(bool predicate(char)); std::string readIntSuffix(); std::string readLongSuffix(); LexResult checkInvalidToken(const LexResult& token); LexResult readFloatConstant(const std::string& prefix, bool predicate(char)); LexResult lexCharConst(const std::string& initial, char delimiter); std::string lexEscapeSequence(bool& isValid); void skipLineComment(); void skipBlockComment(); }; #endif ``` Lexer.cpp ``` #include <set> #include "Lexer.h" namespace { bool startsIdentifier(char character) { return std::isalpha(character) || character == '_'); } bool isInIdentifier(char character) { return startsIdentifier(character) || std::isdigit(character); } bool isHexChar(char character) { return std::isdigit(character) || (character >= 'A' && character <= 'F') || (character >= 'a' && character <= 'f'); } bool isOctal(char character) { return std::isdigit(character) && '8' - character > 0; } bool isDigit(char character) { return std::isdigit(character); } bool isWhitespace(char character) { return character == ' ' || character == '\n' || character == '\t'; } bool isExponentChar(char character) { return (character == 'e' || character == 'E' || character == 'p' || character == 'P'); } static std::set<std::string> keywords = { "auto", "break", "case", "char", "const", "continue", "default", "do", "double", "else", "enum", "extern", "float", "for", "goto", "if", "inline", "int", "long", "register", "restrict", "return", "short", "signed", "sizeof", "static", "struct", "switch", "typedef", "union", "unsigned", "void", "volatile", "while", "_Bool", "_Complex", "_Imaginary" }; } Lexer::Lexer(std::istream& stream): mStream(stream) { } char Lexer::nextChar() { char c; mStream.read(&c, 1); return c; } char Lexer::skipWhitespace() { char c; 
do c = nextChar(); while (isWhitespace(c)); return c; } char Lexer::peekChar() { return mStream.peek(); } bool Lexer::peekWord(const std::string& word) { std::string lexedWord; // We've already read the first character, so set that in advance lexedWord.resize(word.size()); lexedWord[0] = word[0]; const size_t bytesToRead = lexedWord.size() - 1; mStream.read(&lexedWord[1], bytesToRead); if (lexedWord == word) { return true; } else { mStream.seekg((-bytesToRead), std::ios::cur); return false; } } std::string Lexer::lexHexQuad(bool& isValid) { std::string hexQuad; for (size_t i = 0; i < 4; ++i) { char character = nextChar(); // First check if the character is okay if (!isHexChar(character)) isValid = false; // Get a better error message (I hope?) if we keep reading // until a "logical" break, so no matter what keep going unless // there's whitespace if (!isWhitespace(character) && mStream) hexQuad += character; // If we hit whitespace, there's no way it's valid // We're at a logical boundary, so just return here else return hexQuad; } return hexQuad; } std::string Lexer::lexUniversalCharacter(char character, bool& isValid) { std::string universalCharacter(1, character); character = nextChar(); universalCharacter += character; if (character == 'u') universalCharacter += lexHexQuad(isValid); else if (character == 'U') { universalCharacter += lexHexQuad(isValid); universalCharacter += lexHexQuad(isValid); } else isValid = false; return universalCharacter; } std::string Lexer::nextIdentifier(char character, bool& isValid) { std::string identifier; while (isInIdentifier(character)) { if (character == '\\') { identifier += lexUniversalCharacter(character, isValid); if (!isValid) return identifier; } else identifier += character; character = nextChar(); } mStream.seekg(-1, std::ios::cur); return identifier; } LexResult Lexer::checkInvalidToken(const LexResult& token) { std::string lexeme = token.lexeme; if (!isWhitespace(peekChar()) && peekChar() != EOF) { while (!isWhitespace(peekChar()) && peekChar() != EOF) lexeme += nextChar(); return LexResult{Token::ERROR, lexeme}; } return token; } LexResult Lexer::lexIdentifier(char character) { bool isValid = true; const std::string identifier = nextIdentifier(character, isValid); if (!isValid) return LexResult{Token::ERROR, identifier}; if (keywords.count(identifier)) return LexResult{Token::KEYWORD, identifier}; return LexResult{Token::IDENTIFIER, identifier}; } std::string Lexer::readIntConstant(bool predicate(char)) { std::string lexeme; while (predicate(peekChar())) lexeme += nextChar(); return lexeme; } std::string Lexer::readLongSuffix() { std::string suffix; suffix += nextChar(); if (peekChar() == 'L' || peekChar() == 'l') suffix += nextChar(); return suffix; } std::string Lexer::readIntSuffix() { std::string suffix; if (peekChar() == 'L' || peekChar() == 'l') { suffix += readLongSuffix(); if (peekChar() == 'u' || peekChar() == 'U') suffix += nextChar(); } else if (peekChar() == 'u' || peekChar() == 'U') { suffix += nextChar(); if (peekChar() == 'l' || peekChar() == 'L') suffix += readLongSuffix(); } return suffix; } LexResult Lexer::readFloatConstant(const std::string& prefix, bool predicate(char)) { std::string result = prefix; if (peekChar() == '.') { result += nextChar(); result += readIntConstant(predicate); } if (isExponentChar(peekChar())) { result += nextChar(); if (peekChar() == '+' || peekChar() == '-') result += nextChar(); const std::string exponentSequence = readIntConstant(predicate); if (exponentSequence.empty()) return 
checkInvalidToken(LexResult{Token::ERROR, result}); result += exponentSequence; } const char suffix = peekChar(); if (suffix == 'f' || suffix == 'F' || suffix == 'l' || suffix == 'L') result += nextChar(); return checkInvalidToken(LexResult{Token::FLOAT_CONST, result}); } LexResult Lexer::lexConstant(char character) { std::string lexeme(1, character); Token token = Token::INT_CONST; auto predicate = isDigit; if (peekChar() == 'x' || peekChar() == 'X') { predicate = isHexChar; lexeme += nextChar(); } else if (character == '0') predicate = isOctal; lexeme += readIntConstant(predicate); if (peekChar() == '.' || isExponentChar(peekChar())) return readFloatConstant(lexeme, predicate); lexeme += readIntSuffix(); return checkInvalidToken(LexResult{token, lexeme}); } std::string Lexer::lexEscapeSequence(bool& isValid) { std::string lexeme(1, nextChar()); const char c = peekChar(); if (c == '\'' || c == '"' || c == '?' || c == '\\' || c == 'a' || c == 'b' || c == 'f' || c == 'n' || c == 'r' || c == 't' || c == 'v' || c == 'x' || isHexChar(c)) return lexeme + nextChar(); isValid = false; return lexeme; } LexResult Lexer::lexCharConst(const std::string& initial, char delimiter) { bool isValid = true; std::string lexeme = initial + delimiter; while (peekChar() != delimiter && peekChar() != EOF) { if (peekChar() == '\n') isValid = false; if (peekChar() == '\\') { lexeme += lexEscapeSequence(isValid); continue; } lexeme += nextChar(); } lexeme += nextChar(); return LexResult{isValid ? Token::CHAR_CONST : Token::ERROR, lexeme}; } void Lexer::skipLineComment() { nextChar(); nextChar(); while (peekChar() != '\n') nextChar(); } void Lexer::skipBlockComment() { nextChar(); // Skip the first * char c; do c = nextChar(); while (!(c == '*' && peekChar() == '/')); nextChar(); // Pass the last / } LexResult Lexer::next() { char character = skipWhitespace(); while (character == '/' && peekChar() == '/') { skipLineComment(); character = skipWhitespace(); } while (character == '/' && peekChar() == '*') { skipBlockComment(); character = skipWhitespace(); } if (isdigit(character)) return lexConstant(character); if (character == '\'' || character == '"') return lexCharConst("", character); if (character == 'L') if (peekChar() == '\'' || peekChar() == '"') { return lexCharConst("L", nextChar()); } if (startsIdentifier(character)) return lexIdentifier(character); switch (character) { case '+': if (peekWord("++")) return LexResult{Token::PUNCTUATOR, "++"}; if (peekWord("+=")) return LexResult{Token::PUNCTUATOR, "+="}; return LexResult{Token::PUNCTUATOR, "+"}; case '-': if (peekWord("->")) return LexResult{Token::PUNCTUATOR, "->"}; if (peekWord("--")) return LexResult{Token::PUNCTUATOR, "--"}; if (peekWord("-=")) return LexResult{Token::PUNCTUATOR, "-="}; return LexResult{Token::PUNCTUATOR, "-"}; case '*': if (peekWord("*=")) return LexResult{Token::PUNCTUATOR, "*="}; return LexResult{Token::PUNCTUATOR, "*"}; case '/': if (peekWord("/=")) return LexResult{Token::PUNCTUATOR, "/="}; return LexResult{Token::PUNCTUATOR, "/"}; case '=': if (peekWord("==")) return LexResult{Token::PUNCTUATOR, "=="}; return LexResult{Token::PUNCTUATOR, "="}; case '[': return LexResult{Token::PUNCTUATOR, "["}; case ']': return LexResult{Token::PUNCTUATOR, "]"}; case '(': return LexResult{Token::PUNCTUATOR, "("}; case ')': return LexResult{Token::PUNCTUATOR, ")"}; case '.': if (peekWord("...")) return LexResult{Token::PUNCTUATOR, "..."}; return LexResult{Token::PUNCTUATOR, "."}; case '&': if (peekWord("&&")) return LexResult{Token::PUNCTUATOR, 
"&&"}; if (peekWord("&=")) return LexResult{Token::PUNCTUATOR, "&="}; return LexResult{Token::PUNCTUATOR, "&"}; case '~': return LexResult{Token::PUNCTUATOR, "~"}; case '!': if (peekWord("!=")) return LexResult{Token::PUNCTUATOR, "!="}; return LexResult{Token::PUNCTUATOR, "!"}; case '%': if (peekWord("%:%:")) return LexResult{Token::PUNCTUATOR, "%:%:"}; if (peekWord("%:")) return LexResult{Token::PUNCTUATOR, "%:"}; if (peekWord("%=")) return LexResult{Token::PUNCTUATOR, "%="}; if (peekWord("%>")) return LexResult{Token::PUNCTUATOR, "%>"}; return LexResult{Token::PUNCTUATOR, "%"}; case '<': if (peekWord("<<=")) return LexResult{Token::PUNCTUATOR, "<<="}; if (peekWord("<:")) return LexResult{Token::PUNCTUATOR, "<:"}; if (peekWord("<%")) return LexResult{Token::PUNCTUATOR, "<%"}; if (peekWord("<<")) return LexResult{Token::PUNCTUATOR, "<<"}; if (peekWord("<=")) return LexResult{Token::PUNCTUATOR, "<="}; return LexResult{Token::PUNCTUATOR, "<"}; case '>': if (peekWord(">>=")) return LexResult{Token::PUNCTUATOR, ">>="}; if (peekWord(">>")) return LexResult{Token::PUNCTUATOR, ">>"}; if (peekWord(">=")) return LexResult{Token::PUNCTUATOR, ">="}; return LexResult{Token::PUNCTUATOR, ">"}; case '^': if (peekWord("^=")) return LexResult{Token::PUNCTUATOR, "^="}; return LexResult{Token::PUNCTUATOR, "^"}; case '|': if (peekWord("||")) return LexResult{Token::PUNCTUATOR, "||"}; if (peekWord("|=")) return LexResult{Token::PUNCTUATOR, "|="}; return LexResult{Token::PUNCTUATOR, "|"}; case '?': return LexResult{Token::PUNCTUATOR, "?"}; case ':': if (peekWord(":>")) return LexResult{Token::PUNCTUATOR, ":>"}; return LexResult{Token::PUNCTUATOR, ":"}; case ';': return LexResult{Token::PUNCTUATOR, ";"}; case ',': return LexResult{Token::PUNCTUATOR, ","}; case '#': if (peekWord("##")) return LexResult{Token::PUNCTUATOR, "##"}; return LexResult{Token::PUNCTUATOR, "#"}; default: return LexResult{Token::ERROR, std::string(character, 1)}; } } ```
2018/05/06
[ "https://codereview.stackexchange.com/questions/193787", "https://codereview.stackexchange.com", "https://codereview.stackexchange.com/users/41400/" ]
```
#ifndef __JMCOMP_LEXER_H__
```

That is a symbol [reserved for use by the implementation](https://timsong-cpp.github.io/cppwp/n4659/lex.name#3). Writing yours in a way that mimics what you see in the compiler-supplied headers is exactly wrong! For this and other reasons I suggest leaving that out completely and just using `#pragma once`. If you *ever* find a platform where this pragma is not supported, a script can add the one-time-include symbols automatically and *correctly*.

---

> I rarely get to use anything beyond C++98. Are there any more recent language features I could be taking advantage of? Anything that could be more idiomatic?

Aye, scribing ye olde language doest appeareth que’r. C++11 *is like a whole new language*.

> I think there has to be a more maintainable way of structuring this.

Have you *studied* parsing? LR(k), Chomsky Normal Form, pumping lemmings, … any of that sound familiar? Lexing only, not *parsing* — OK, so you just want to classify tokens. BTW, have you looked at Boost.Spirit.lex? That would make quick work of it, if you just wanted a lexer. If you are doing this as an experience in itself, carry on!

---

I would (do) start with a comment block that gives the grammar in EBNF pseudo-code. That gives the relationship of the hierarchy of symbols and which ones are terminal or non-terminal. The names in this doc will match the names in the code. I’m doing something not entirely dissimilar, hearing that `string_view` is particularly good for writing parsers, as you can whack pieces off the front end efficiently and pass them around as lightweight un-owning containers. So I have functions like this:

```
std::optional<std::string> read_identifier (std::string_view&);
std::optional<Value_type> read_number (std::string_view&);
```

Each function will return the thing it read. In a pure lexer, you can either make that a `string_view` holding just the chars comprising that token, or a token structure containing information such as its kind and “value” which will be used by the parsing stage. Even if you don’t want to use it, read the tutorial for Boost.Spirit’s lex class to get ideas on that. You might want to track the original *position* of the token in the source, too. In my functions shown above, if the thing was read, the `string_view` is updated to remove it from the beginning of the line. If not successful, the `optional` returns no-value and the view is *not changed*. This lets me write non-terminals by calling one and, if that didn’t work, calling the next possibility. For more complex cases, the `string_view` can be saved in a temporary and restored, cheaply — that allows for backing up to try something else.

---

> The C99 standard just breaks things down into punctuators, keywords, identifiers, and constants. But the compilers I've looked at tend to be more granular. Is there a better approach to choosing tokens? Why?

You can get a hint from my comments above. Consider: you can chop up the source into individual “words” only, but when you feed this to the next stage the only thing it saves you is not having to skip white space and mess with comments getting in the middle of grammatical productions. If the tokens have a rough type, like number, identifier, punctuation, it saves the parser *some* work. The parser grammar uses these, so starting with that dovetails nicely. If the tokens have a type and a value, it can save the next stage from having to figure that out.
But *someone* has to figure it out, and if you do it in the lexer you’re guessing at what the parser will find useful. If you need to figure out something *to do the lexical analysis*, then preserve that information and pass it along. If you write the lexer and parser grammars together, the lexer is just the bottom end of the whole grammar and it is clear what you need, because you are choosing which terminals to handle (or partially handle) in the lexing step.

Why two phases? Well, the designer of Perl 6 patterns says “why indeed?”. The tutorial and overview of Boost.Spirit goes into it, and lets you use a separate lexer or not. One good reason is to get rid of comments. Eating whitespace after each nonterminal in a monolithic parser isn’t that bad, but consider comments and backslash continuation lines and other stuff that lives in the *text*; that is hard to deal with when mixed in with the parser grammar. Other reasons historically are memory usage and machine capacity; that matters less today. And there is the adage that “If you put a team of three people onto a compiler-writing project, you’ll get a three-pass compiler.”

---

```
enum class Token {
    PUNCTUATOR,
    KEYWORD,
    IDENTIFIER,
    INT_CONST,
    FLOAT_CONST,
    CHAR_CONST,
    ERROR
};
```

See [⧺ES.9](https://github.com/isocpp/CppCoreGuidelines/blob/master/CppCoreGuidelines.md#es9-avoid-all_caps-names): *Avoid `ALL_CAPS` names*. I see you *are* using a new language feature of `enum class`, so your code might not be as out-of-date as you feared. You might consider reserving a value of 0 for ERROR or empty or something like that.

---

```
std::string lexeme = "";
```

Don’t, in general, assign `""` to clear out a `std::string`. In this case, there is no need to say anything at all because `string` has a default constructor. So just leave the default initializer off of the member.

```
Token token = Token::ERROR;
```

Since that is what you want for the default, I would make ERROR the first thing in the enumeration list so it has the value zero. It doesn’t matter ideally, but it’s nice and might be helpful later.

---

```
return std::isalpha(character) || character == '_');
```

See [cppreference](https://en.cppreference.com/w/cpp/string/byte/isalpha) for the Kosher way to call these ancient functions:

```
⋯ isalpha(static_cast<unsigned char>(character)) ⋯
```

Since you already wrapped these calls in your own helpers, it will only appear within those helpers, once.

```
bool isWhitespace(char character)
```

Not using `std::isspace`? You are not covering as many characters as it does.

---

```
static std::set<std::string> keywords = {
    "auto", "break", "case", "char", "const", "continue", "default",
    ⋯
```

Two things: the `set` is rather slow for lookup! A sorted `vector` would be faster! Boost.Container has a `flat_set` etc.

Second, you are copying the statically-allocated lexical string literals into string objects. Do you really need `string` here? I would ([did](https://codereview.stackexchange.com/questions/192030/fizzbuzz-17-style), actually) just use a plain (pre-sorted) array of `const char*`, and make the whole thing `constexpr`. If [`std::string` literals](https://en.cppreference.com/w/cpp/string/basic_string/operator%22%22s) were `constexpr` you could at least save the redundant copy and run-time copying, but pondering why it isn’t `constexpr` will show you why I’m mentioning this — it needs to allocate memory, and do run-time work to set up the set.
Keeping the entire table in one contiguous lump of memory will not only save memory for all those pointers and nodes, but will be *much* faster.

---

```
Lexer::Lexer(std::istream& stream): mStream(stream)
{ }
```

Use *uniform initialization*. So now you write curly braces instead of parens:

```
: mStream{stream}
```

and this short one-liner could go inline in the header.

---

```
char Lexer::nextChar()
{
    char c;
    mStream.read(&c, 1);
    return c;
}
```

What if no character was read? The stream could be `bad`, you know. Maybe you hit the end of the file, or the network glitched.

---

```
char c;
do c = nextChar();
while (isWhitespace(c));
return c;
```

You *can* write this in a way that avoids defining `c` outside the loop. More importantly, deal with errors from `nextChar`. It would be cleaner if it *only* skipped whitespace and did not also read (and return) the next char beyond that! (Hint: see the next function in your file.)

```
// We've already read the first character, so set that in advance
lexedWord.resize(word.size());
lexedWord[0] = word[0];
```

Exactly! Don’t read one ahead. `skipWhitespace` should do its one job only. It is especially confusing since `peekWord` does not do the skip-whitespace call. It seems that it is called when you already know that the first character of the word does match the input? This is going to lead to maintenance problems, believe me.

(Later: I see a lot of the functions take the read-ahead character as another parameter and have special code to deal with that first. Get rid of all that. The function should be called with the input set to the first position of the thing it wants to read. You already know to peek and to rewind, so there is no reason to have this one-off getting in the way.)

Did you know that a stream can give you an input iterator? So rather than allocating a string and reading into it and then comparing (oh, and you didn’t check to see if it read as many bytes as you asked for), you can use something like `std::equal` directly between `word` and the input stream. If/when you *do* need to access the contents of a `std::string` as a buffer for writing into, use `.data()`.

---

```
std::string Lexer::lexUniversalCharacter(char character, bool& isValid)
```

Two return values, no problem. Don’t use an “out” parameter for the second one! ([⧺F.21](https://github.com/isocpp/CppCoreGuidelines/blob/master/CppCoreGuidelines.md#f21-to-return-multiple-out-values-prefer-returning-a-tuple-or-struct)). However, I think you don’t have two values here, but an `optional` result.

---

```
return LexResult{Token::IDENTIFIER, identifier};
```

You don’t need to name the type, as it is automatically picked up from the function’s return type:

```
return {Token::IDENTIFIER, identifier};
```

This can be much nicer.

---

```
if (peekChar() == 'L' || peekChar() == 'l') {
    suffix += readLongSuffix();
    if (peekChar() == 'u' || peekChar() == 'U')
        suffix += nextChar();
}
else if (peekChar() == 'u' || peekChar() == 'U') {
    suffix += nextChar();
    if (peekChar() == 'l' || peekChar() == 'L')
```

Avoid calling `peekChar` twice on the same character. You can use a new feature in `if` statements here:

```
if (auto ch = peekChar(); ch == 'L' || ch == 'l') {
```

or (since you are checking a lot of cap/lower pairs) define a helper function:

```
if (mAtCh(peekChar(), 'L')) {
```

but later you need something more general:

```
if (match_any_of(peekChar(), "+-")) {
if (match_any_of(peekChar(), "fFlL")) {
// ... and the long character list in lexEscapeSequence
```

and to write that, use `std::any_of` in a one-line wrapper.
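To make that concrete, here is a minimal sketch of such a wrapper (the name `match_any_of` is carried over from the snippet above; the body is my own):

```cpp
#include <algorithm>
#include <string_view>

// True when `ch` is any of the characters in `set`: the one-line
// std::any_of wrapper suggested above.
bool match_any_of(char ch, std::string_view set)
{
    return std::any_of(set.begin(), set.end(),
                       [ch](char c) { return c == ch; });
}
```

With that in hand, `match_any_of(peekChar(), "fFlL")` replaces the four-way comparisons scattered through the suffix readers.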
---

```
if (peekWord("<<=")) return LexResult{Token::PUNCTUATOR, "<<="};
if (peekWord("<:"))  return LexResult{Token::PUNCTUATOR, "<:"};
if (peekWord("<%"))  return LexResult{Token::PUNCTUATOR, "<%"};
if (peekWord("<<"))  return LexResult{Token::PUNCTUATOR, "<<"};
if (peekWord("<="))  return LexResult{Token::PUNCTUATOR, "<="};
```

---

There are a lot of these. Notice how the thing you peek always matches the parameter to the return value? What is the `if` even doing? You need a function that *returns* the peeked word.

In fact, all of this reads one char, switches on it, then peeks on the rest of the word. Why does it need to do it in two parts like that? You are just classifying the “word” as a PUNCTUATOR. Make a list (static array, as discussed with keywords) and treat it in the same way. Just as there are keywords spelled with letters, these are a list of legal words spelled with non-letter/non-digit characters. The trick is you have to take the longest sequence that is a valid token. But **use a table**, *not* dozens of duplicated snippets of code:

```
peek a character and append to the token
is that on the list?
    no:  return with what you had.
    yes: advance the input and append to the token.
```

(A concrete sketch of this loop follows at the end of this answer.)

Good!
=====

I see you used an anonymous namespace for the helpers in the CPP file.

Using `std::istream` for reading the text: most people just use a string.
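As promised above, a sketch of the table-driven punctuator loop. The table shown is only an illustrative subset, the names are mine, and the comment points out the one wrinkle in the pseudo-code:

```cpp
#include <algorithm>
#include <istream>
#include <iterator>
#include <string>
#include <string_view>

// Illustrative subset; the real table holds every C99 punctuator,
// kept sorted so binary_search works.
constexpr const char* punctuators[] = {
    "!", "!=", "%", "%=", "%>", "<", "<%", "<:", "<<", "<<=", "<="
};

bool isPunctuator(std::string_view tok)
{
    return std::binary_search(std::begin(punctuators), std::end(punctuators),
                              tok, std::less<>{});
}

// Maximal munch, as in the pseudo-code: extend the token while the
// extension is still on the list.  (One wrinkle: "%:%:" needs two
// characters of lookahead, since "%:%" is not itself a punctuator.)
std::string lexPunctuator(std::istream& in)
{
    std::string tok(1, static_cast<char>(in.get()));
    for (;;) {
        const char next = static_cast<char>(in.peek());
        if (in.eof() || !isPunctuator(tok + next))
            return tok;
        tok += next;
        in.get();   // accept the character
    }
}
```

The lookup itself then answers the “what is the `if` even doing?” question: one table probe both classifies the word and decides whether to keep munching.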
Same Lexer written in Lex.

Lexer.l
-------

```
startsIdentifier        [a-zA-Z_]
isInIdentifier          {startsIdentifier}|[0-9]
isHexChar               [0-9A-Fa-f]
isOctal                 [0-7]
isDigit                 [0-9]
isWhitespace            [ \n\t]
skipWhitespace          {isWhitespace}*
isHexNumber             0x{isHexChar}+
isOctalNumber           0{isOctal}*
isDecNumber             [1-9]{isDigit}*
isIntegerNumber         {isHexNumber}|{isOctalNumber}|{isDecNumber}
isFloatLeadingDigit     {isDigit}+\.{isDigit}*
isFloatTrailingDigit    \.{isDigit}+
isExponentDec           [eE][+-]?{isDecNumber}
isExponentHex           [pP][+-]?{isDecNumber}
isExponent              {isExponentHex}|{isExponentDec}
isFloatNumber           {isFloatLeadingDigit}{isExponent}?|{isFloatTrailingDigit}{isExponent}?|{isIntegerNumber}{isExponent}
escapeCharacter         ['"\?\\abfnrtvx{isHexChar}]
isCharCharacter         \\{escapeCharacter}|[^'\n\\]
isStringCharacter       \\{escapeCharacter}|[^"\n\\]
isCharLiteral           '{isCharCharacter}*'
isStringLiteral         "{isStringCharacter}*"
isLiteralChar           L?{isCharLiteral}|L?{isStringLiteral}
Identifier              {startsIdentifier}{isInIdentifier}*
LineComment             \/\/[^\n]*
BlockCommentStart       \/\*
BlockCommentEnd         \*\/

%x COMMENT_BLOCK
%%

<COMMENT_BLOCK>{BlockCommentEnd}    {BEGIN(INITIAL);}
<COMMENT_BLOCK>\*[^/]               {/* Ignore Star followed by anything except slash */}
<COMMENT_BLOCK>[^*]+                {/* Ignore blocks of text */}

{skipWhitespace}        {/* Ignore */}
{LineComment}           {/* Ignore */}
{BlockCommentStart}     {BEGIN(COMMENT_BLOCK);}

{isIntegerNumber}       {return CONST_INT;}
{isFloatNumber}         {return CONST_FLOAT;}
{isLiteralChar}         {return CONST_CHAR;}

auto                    {return KEYWORD_AUTO;}
break                   {return KEYWORD_BREAK;}
case                    {return KEYWORD_CASE;}
char                    {return KEYWORD_CHAR;}
const                   {return KEYWORD_CONST;}
continue                {return KEYWORD_CONTINUE;}
default                 {return KEYWORD_DEFAULT;}
do                      {return KEYWORD_DO;}
double                  {return KEYWORD_DOUBLE;}
else                    {return KEYWORD_ELSE;}
enum                    {return KEYWORD_ENUM;}
extern                  {return KEYWORD_EXTERN;}
float                   {return KEYWORD_FLOAT;}
for                     {return KEYWORD_FOR;}
goto                    {return KEYWORD_GOTO;}
if                      {return KEYWORD_IF;}
inline                  {return KEYWORD_INLINE;}
int                     {return KEYWORD_INT;}
long                    {return KEYWORD_LONG;}
register                {return KEYWORD_REGISTER;}
restrict                {return KEYWORD_RESTRICT;}
return                  {return KEYWORD_RETURN;}
short                   {return KEYWORD_SHORT;}
signed                  {return KEYWORD_SIGNED;}
sizeof                  {return KEYWORD_SIZEOF;}
static                  {return KEYWORD_STATIC;}
struct                  {return KEYWORD_STRUCT;}
switch                  {return KEYWORD_SWITCH;}
typedef                 {return KEYWORD_TYPEDEF;}
union                   {return KEYWORD_UNION;}
unsigned                {return KEYWORD_UNSIGNED;}
void                    {return KEYWORD_VOID;}
volatile                {return KEYWORD_VOLATILE;}
while                   {return KEYWORD_WHILE;}
_Bool                   {return KEYWORD_BOOL;}
_Complex                {return KEYWORD_COMPLEX;}
_Imaginary              {return KEYWORD_IMAGINARY;}

{Identifier}            {return IDENTIFIER;}

\+                      {return '+';}
\-                      {return '-';}
\*                      {return '*';}
\/                      {return '/';}
\%                      {return '%';}
\=                      {return '=';}
\[                      {return '[';}
\]                      {return ']';}
\(                      {return '(';}
\)                      {return ')';}
\.                      {return '.';}
\^                      {return '^';}
\|                      {return '|';}
\&                      {return '&';}
\~                      {return '~';}
\!                      {return '!';}
\<                      {return '<';}
\>                      {return '>';}
\?                      {return '?';}
\:                      {return ':';}
\;                      {return ';';}
\,                      {return ',';}
\#                      {return '#';}

\+\+                    {return PUNCT_INC;}
\-\-                    {return PUNCT_DEC;}
\-\>                    {return PUNCT_POINT;}
\<\<                    {return PUNCT_SHIFT_LEFT;}
\>\>                    {return PUNCT_SHIFT_RIGHT;}
\+\=                    {return PUNCT_ASSIGN_PLUS;}
\-\=                    {return PUNCT_ASSIGN_MINUS;}
\*\=                    {return PUNCT_ASSIGN_MUL;}
\/\=                    {return PUNCT_ASSIGN_DIV;}
\%\=                    {return PUNCT_ASSIGN_MOD;}
\|\=                    {return PUNCT_ASSIGN_OR;}
\&\=                    {return PUNCT_ASSIGN_AND;}
\^\=                    {return PUNCT_ASSIGN_NOT;}
\<\<\=                  {return PUNCT_ASSIGN_SHIFT_LEFT;}
\>\>\=                  {return PUNCT_ASSIGN_SHIFT_RIGHT;}
\<\=                    {return PUNCT_COMP_LESS_EQUAL;}
\>\=                    {return PUNCT_COMP_GREAT_EQUAL;}
\=\=                    {return PUNCT_COMP_EQUAL;}
\!\=                    {return PUNCT_COMP_NOT_EQUAL;}
\&\&                    {return PUNCT_BOOL_AND;}
\|\|                    {return PUNCT_BOOL_OR;}
\%\>                    {return PUNCT_PERCENT_GREAT;}
\%\:                    {return PUNCT_PERCENT_COLON;}
\<\:                    {return PUNCT_LESS_COLON;}
\<\%                    {return PUNCT_LESS_PERCENT;}
\:\>                    {return PUNCT_COLON_PERCENT;}
\#\#                    {return PUNCT_HASH_HASH;}
\.\.\.                  {return PUNCT_DOT_DOT_DOT;}
\%\:\%\:                {return PUNCT_PERCENT_COLON_PERCENT_COLON;}

.                       {/* No other rule matched. ERROR */}
%%
```

Built with:

```
> flex Lexer.l
```

Note: Not tested.
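For completeness, a minimal driver sketch for the generated scanner. Everything here is an assumption: flex only emits the rule actions above, so the token constants (`CONST_INT`, `KEYWORD_AUTO`, …) must be defined elsewhere — typically in a parser-generated header — before `lex.yy.c` will compile.

```cpp
#include <cstdio>

extern "C" int yylex();     // generated by flex; returns 0 at end of input
extern "C" char* yytext;    // lexeme of the most recent match

int main()
{
    // Print each token's numeric code and its lexeme until end of input.
    while (int token = yylex())
        std::printf("token %d: '%s'\n", token, yytext);
}
```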
193,787
I'm building a compiler for fun, and this is my first pass at the lexer for it. It should handle all tokens for C99, with the exception of the preprocessor tokens. It's very minimal, only grabbing the token and lexeme. Specific areas where I'd like feedback: * I rarely get to use anything beyond C++98. Are there any more recent language features I could be taking advantage of? Anything that could be more idiomatic? * My error-handling seems a little ad-hoc. Is there a smarter approach? * I'm not sure about the best way to choose tokens. The C99 standard just breaks things down into punctuators, keywords, identifiers, and constants. But the compilers I've looked at tend to be more granular. Is there a better approach to choosing tokens? Why? * I think there has to be a more maintainable way of structuring this. Things I don't care about: * I know there are more efficient ways to handle input. My aim here is to just do the simplest thing and worry about fancy stuff/performance tuning later. * You probably don't like my brace style. Sorry. :/ Lexer.h ``` #ifndef __JMCOMP_LEXER_H__ #define __JMCOMP_LEXER_H__ #include <iostream> #include <string> enum class Token { PUNCTUATOR, KEYWORD, IDENTIFIER, INT_CONST, FLOAT_CONST, CHAR_CONST, ERROR }; struct LexResult { Token token = Token::ERROR; std::string lexeme = ""; }; class Lexer { public: explicit Lexer(std::istream& stream); LexResult next(); private: std::istream& mStream; char nextChar(); char skipWhitespace(); char peekChar(); bool peekWord(const std::string& word); std::string nextIdentifier(char character, bool& isValid); std::string lexUniversalCharacter(char character, bool& isValid); std::string lexHexQuad(bool& isValid); LexResult lexIdentifier(char character); LexResult lexConstant(char character); std::string readIntConstant(bool predicate(char)); std::string readIntSuffix(); std::string readLongSuffix(); LexResult checkInvalidToken(const LexResult& token); LexResult readFloatConstant(const std::string& prefix, bool predicate(char)); LexResult lexCharConst(const std::string& initial, char delimiter); std::string lexEscapeSequence(bool& isValid); void skipLineComment(); void skipBlockComment(); }; #endif ``` Lexer.cpp ``` #include <set> #include "Lexer.h" namespace { bool startsIdentifier(char character) { return std::isalpha(character) || character == '_'); } bool isInIdentifier(char character) { return startsIdentifier(character) || std::isdigit(character); } bool isHexChar(char character) { return std::isdigit(character) || (character >= 'A' && character <= 'F') || (character >= 'a' && character <= 'f'); } bool isOctal(char character) { return std::isdigit(character) && '8' - character > 0; } bool isDigit(char character) { return std::isdigit(character); } bool isWhitespace(char character) { return character == ' ' || character == '\n' || character == '\t'; } bool isExponentChar(char character) { return (character == 'e' || character == 'E' || character == 'p' || character == 'P'); } static std::set<std::string> keywords = { "auto", "break", "case", "char", "const", "continue", "default", "do", "double", "else", "enum", "extern", "float", "for", "goto", "if", "inline", "int", "long", "register", "restrict", "return", "short", "signed", "sizeof", "static", "struct", "switch", "typedef", "union", "unsigned", "void", "volatile", "while", "_Bool", "_Complex", "_Imaginary" }; } Lexer::Lexer(std::istream& stream): mStream(stream) { } char Lexer::nextChar() { char c; mStream.read(&c, 1); return c; } char Lexer::skipWhitespace() { char c; 
do c = nextChar(); while (isWhitespace(c)); return c; } char Lexer::peekChar() { return mStream.peek(); } bool Lexer::peekWord(const std::string& word) { std::string lexedWord; // We've already read the first character, so set that in advance lexedWord.resize(word.size()); lexedWord[0] = word[0]; const size_t bytesToRead = lexedWord.size() - 1; mStream.read(&lexedWord[1], bytesToRead); if (lexedWord == word) { return true; } else { mStream.seekg((-bytesToRead), std::ios::cur); return false; } } std::string Lexer::lexHexQuad(bool& isValid) { std::string hexQuad; for (size_t i = 0; i < 4; ++i) { char character = nextChar(); // First check if the character is okay if (!isHexChar(character)) isValid = false; // Get a better error message (I hope?) if we keep reading // until a "logical" break, so no matter what keep going unless // there's whitespace if (!isWhitespace(character) && mStream) hexQuad += character; // If we hit whitespace, there's no way it's valid // We're at a logical boundary, so just return here else return hexQuad; } return hexQuad; } std::string Lexer::lexUniversalCharacter(char character, bool& isValid) { std::string universalCharacter(1, character); character = nextChar(); universalCharacter += character; if (character == 'u') universalCharacter += lexHexQuad(isValid); else if (character == 'U') { universalCharacter += lexHexQuad(isValid); universalCharacter += lexHexQuad(isValid); } else isValid = false; return universalCharacter; } std::string Lexer::nextIdentifier(char character, bool& isValid) { std::string identifier; while (isInIdentifier(character)) { if (character == '\\') { identifier += lexUniversalCharacter(character, isValid); if (!isValid) return identifier; } else identifier += character; character = nextChar(); } mStream.seekg(-1, std::ios::cur); return identifier; } LexResult Lexer::checkInvalidToken(const LexResult& token) { std::string lexeme = token.lexeme; if (!isWhitespace(peekChar()) && peekChar() != EOF) { while (!isWhitespace(peekChar()) && peekChar() != EOF) lexeme += nextChar(); return LexResult{Token::ERROR, lexeme}; } return token; } LexResult Lexer::lexIdentifier(char character) { bool isValid = true; const std::string identifier = nextIdentifier(character, isValid); if (!isValid) return LexResult{Token::ERROR, identifier}; if (keywords.count(identifier)) return LexResult{Token::KEYWORD, identifier}; return LexResult{Token::IDENTIFIER, identifier}; } std::string Lexer::readIntConstant(bool predicate(char)) { std::string lexeme; while (predicate(peekChar())) lexeme += nextChar(); return lexeme; } std::string Lexer::readLongSuffix() { std::string suffix; suffix += nextChar(); if (peekChar() == 'L' || peekChar() == 'l') suffix += nextChar(); return suffix; } std::string Lexer::readIntSuffix() { std::string suffix; if (peekChar() == 'L' || peekChar() == 'l') { suffix += readLongSuffix(); if (peekChar() == 'u' || peekChar() == 'U') suffix += nextChar(); } else if (peekChar() == 'u' || peekChar() == 'U') { suffix += nextChar(); if (peekChar() == 'l' || peekChar() == 'L') suffix += readLongSuffix(); } return suffix; } LexResult Lexer::readFloatConstant(const std::string& prefix, bool predicate(char)) { std::string result = prefix; if (peekChar() == '.') { result += nextChar(); result += readIntConstant(predicate); } if (isExponentChar(peekChar())) { result += nextChar(); if (peekChar() == '+' || peekChar() == '-') result += nextChar(); const std::string exponentSequence = readIntConstant(predicate); if (exponentSequence.empty()) return 
checkInvalidToken(LexResult{Token::ERROR, result}); result += exponentSequence; } const char suffix = peekChar(); if (suffix == 'f' || suffix == 'F' || suffix == 'l' || suffix == 'L') result += nextChar(); return checkInvalidToken(LexResult{Token::FLOAT_CONST, result}); } LexResult Lexer::lexConstant(char character) { std::string lexeme(1, character); Token token = Token::INT_CONST; auto predicate = isDigit; if (peekChar() == 'x' || peekChar() == 'X') { predicate = isHexChar; lexeme += nextChar(); } else if (character == '0') predicate = isOctal; lexeme += readIntConstant(predicate); if (peekChar() == '.' || isExponentChar(peekChar())) return readFloatConstant(lexeme, predicate); lexeme += readIntSuffix(); return checkInvalidToken(LexResult{token, lexeme}); } std::string Lexer::lexEscapeSequence(bool& isValid) { std::string lexeme(1, nextChar()); const char c = peekChar(); if (c == '\'' || c == '"' || c == '?' || c == '\\' || c == 'a' || c == 'b' || c == 'f' || c == 'n' || c == 'r' || c == 't' || c == 'v' || c == 'x' || isHexChar(c)) return lexeme + nextChar(); isValid = false; return lexeme; } LexResult Lexer::lexCharConst(const std::string& initial, char delimiter) { bool isValid = true; std::string lexeme = initial + delimiter; while (peekChar() != delimiter && peekChar() != EOF) { if (peekChar() == '\n') isValid = false; if (peekChar() == '\\') { lexeme += lexEscapeSequence(isValid); continue; } lexeme += nextChar(); } lexeme += nextChar(); return LexResult{isValid ? Token::CHAR_CONST : Token::ERROR, lexeme}; } void Lexer::skipLineComment() { nextChar(); nextChar(); while (peekChar() != '\n') nextChar(); } void Lexer::skipBlockComment() { nextChar(); // Skip the first * char c; do c = nextChar(); while (!(c == '*' && peekChar() == '/')); nextChar(); // Pass the last / } LexResult Lexer::next() { char character = skipWhitespace(); while (character == '/' && peekChar() == '/') { skipLineComment(); character = skipWhitespace(); } while (character == '/' && peekChar() == '*') { skipBlockComment(); character = skipWhitespace(); } if (isdigit(character)) return lexConstant(character); if (character == '\'' || character == '"') return lexCharConst("", character); if (character == 'L') if (peekChar() == '\'' || peekChar() == '"') { return lexCharConst("L", nextChar()); } if (startsIdentifier(character)) return lexIdentifier(character); switch (character) { case '+': if (peekWord("++")) return LexResult{Token::PUNCTUATOR, "++"}; if (peekWord("+=")) return LexResult{Token::PUNCTUATOR, "+="}; return LexResult{Token::PUNCTUATOR, "+"}; case '-': if (peekWord("->")) return LexResult{Token::PUNCTUATOR, "->"}; if (peekWord("--")) return LexResult{Token::PUNCTUATOR, "--"}; if (peekWord("-=")) return LexResult{Token::PUNCTUATOR, "-="}; return LexResult{Token::PUNCTUATOR, "-"}; case '*': if (peekWord("*=")) return LexResult{Token::PUNCTUATOR, "*="}; return LexResult{Token::PUNCTUATOR, "*"}; case '/': if (peekWord("/=")) return LexResult{Token::PUNCTUATOR, "/="}; return LexResult{Token::PUNCTUATOR, "/"}; case '=': if (peekWord("==")) return LexResult{Token::PUNCTUATOR, "=="}; return LexResult{Token::PUNCTUATOR, "="}; case '[': return LexResult{Token::PUNCTUATOR, "["}; case ']': return LexResult{Token::PUNCTUATOR, "]"}; case '(': return LexResult{Token::PUNCTUATOR, "("}; case ')': return LexResult{Token::PUNCTUATOR, ")"}; case '.': if (peekWord("...")) return LexResult{Token::PUNCTUATOR, "..."}; return LexResult{Token::PUNCTUATOR, "."}; case '&': if (peekWord("&&")) return LexResult{Token::PUNCTUATOR, 
"&&"}; if (peekWord("&=")) return LexResult{Token::PUNCTUATOR, "&="}; return LexResult{Token::PUNCTUATOR, "&"}; case '~': return LexResult{Token::PUNCTUATOR, "~"}; case '!': if (peekWord("!=")) return LexResult{Token::PUNCTUATOR, "!="}; return LexResult{Token::PUNCTUATOR, "!"}; case '%': if (peekWord("%:%:")) return LexResult{Token::PUNCTUATOR, "%:%:"}; if (peekWord("%:")) return LexResult{Token::PUNCTUATOR, "%:"}; if (peekWord("%=")) return LexResult{Token::PUNCTUATOR, "%="}; if (peekWord("%>")) return LexResult{Token::PUNCTUATOR, "%>"}; return LexResult{Token::PUNCTUATOR, "%"}; case '<': if (peekWord("<<=")) return LexResult{Token::PUNCTUATOR, "<<="}; if (peekWord("<:")) return LexResult{Token::PUNCTUATOR, "<:"}; if (peekWord("<%")) return LexResult{Token::PUNCTUATOR, "<%"}; if (peekWord("<<")) return LexResult{Token::PUNCTUATOR, "<<"}; if (peekWord("<=")) return LexResult{Token::PUNCTUATOR, "<="}; return LexResult{Token::PUNCTUATOR, "<"}; case '>': if (peekWord(">>=")) return LexResult{Token::PUNCTUATOR, ">>="}; if (peekWord(">>")) return LexResult{Token::PUNCTUATOR, ">>"}; if (peekWord(">=")) return LexResult{Token::PUNCTUATOR, ">="}; return LexResult{Token::PUNCTUATOR, ">"}; case '^': if (peekWord("^=")) return LexResult{Token::PUNCTUATOR, "^="}; return LexResult{Token::PUNCTUATOR, "^"}; case '|': if (peekWord("||")) return LexResult{Token::PUNCTUATOR, "||"}; if (peekWord("|=")) return LexResult{Token::PUNCTUATOR, "|="}; return LexResult{Token::PUNCTUATOR, "|"}; case '?': return LexResult{Token::PUNCTUATOR, "?"}; case ':': if (peekWord(":>")) return LexResult{Token::PUNCTUATOR, ":>"}; return LexResult{Token::PUNCTUATOR, ":"}; case ';': return LexResult{Token::PUNCTUATOR, ";"}; case ',': return LexResult{Token::PUNCTUATOR, ","}; case '#': if (peekWord("##")) return LexResult{Token::PUNCTUATOR, "##"}; return LexResult{Token::PUNCTUATOR, "#"}; default: return LexResult{Token::ERROR, std::string(character, 1)}; } } ```
2018/05/06
[ "https://codereview.stackexchange.com/questions/193787", "https://codereview.stackexchange.com", "https://codereview.stackexchange.com/users/41400/" ]
```
#ifndef __JMCOMP_LEXER_H__
```

That is a symbol [reserved for use by the implementation](https://timsong-cpp.github.io/cppwp/n4659/lex.name#3). Writing yours in a way that mimics what you see in the compiler-supplied headers is exactly wrong! For this and other reasons I suggest leaving that out completely and just using `#pragma once`. If you *ever* find a platform where this pragma is not supported, a script can add the one-time-include symbols automatically and *correctly*.

---

> I rarely get to use anything beyond C++98. Are there any more recent language features I could be taking advantage of? Anything that could be more idiomatic?

Aye, scribing ye olde language doest appeareth que’r. C++11 *is like a whole new language*.

> I think there has to be a more maintainable way of structuring this.

Have you *studied* parsing? LR(k), Chomsky Normal Form, pumping lemmings, … any of that sound familiar? Lexing only, not *parsing* — OK, so you just want to classify tokens. BTW, have you looked at Boost.Spirit.lex? That would make quick work of it, if you just wanted a lexer. If you are doing this as an experience in itself, carry on!

---

I would (do) start with a comment block that gives the grammar in EBNF pseudo-code. That gives the relationship of the hierarchy of symbols and which ones are terminal or non-terminal. The names in this doc will match the names in the code.

I’m doing something not entirely dissimilar, having heard that `string_view` is particularly good for writing parsers, as you can whack pieces off the front end efficiently and pass them around as lightweight un-owning containers. So I have functions like this:

```
std::optional<std::string> read_identifier (std::string_view&);
std::optional<Value_type>  read_number (std::string_view&);
```

Each function will return the thing it read. In a pure lexer, you can either make that a `string_view` holding just the chars comprising that token, or a token structure containing information such as its kind and “value” which will be used by the parsing stage. Even if you don’t want to use it, read the tutorial for Boost.Spirit’s lex class to get ideas on that. You might want to track the original *position* of the token in the source, too.

In my functions shown above, if the thing was read, the `string_view` is updated to remove it from the beginning of the line. If not successful, the `optional` returns no-value and the view is *not changed*. This lets me write non-terminals by calling one reader and, if that didn’t work, trying the next possibility. For more complex cases, the `string_view` can be saved in a temporary and restored, cheaply — that allows for backing up to try something else.

---

> The C99 standard just breaks things down into punctuators, keywords, identifiers, and constants. But the compilers I've looked at tend to be more granular. Is there a better approach to choosing tokens? Why?

You can get a hint from my comments above. Consider: you can chop up the source into individual “words” only, but when you feed this to the next stage the only thing it saves you is not having to skip white space and mess with comments getting in the middle of grammatical productions. If the tokens have a rough type, like number, identifier, or punctuation, it saves the parser *some* work. The parser grammar uses these, so starting with that dovetails nicely. If the tokens have a type and a value, it can save the next stage from having to figure that out.
But *someone* has to figure it out, and if you do it in the lexer you’re guessing at what the parser will find useful. If you need to figure out something *to do the lexical analysis*, then preserve that information and pass it along. If you write the lexer and parser grammars together, the lexer is just the bottom end of the whole grammar and it is clear what you need, because you are choosing which terminals to handle (or partially handle) in the lexing step.

Why two phases? Well, the designer of Perl 6 patterns says “why indeed?”. The tutorial and overview of Boost.Spirit goes into it, and lets you use a separate lexer or not. One good reason is to get rid of comments. Eating whitespace after each nonterminal in a monolithic parser isn’t that bad, but consider comments and backslash continuation lines and other stuff that lives in the *text*; that is hard to deal with when mixed in with the parser grammar. Other reasons historically are memory usage and machine capacity; that matters less today. And there is the adage that “If you put a team of three people onto a compiler-writing project, you’ll get a three-pass compiler.”

---

```
enum class Token {
    PUNCTUATOR,
    KEYWORD,
    IDENTIFIER,
    INT_CONST,
    FLOAT_CONST,
    CHAR_CONST,
    ERROR
};
```

See [⧺ES.9](https://github.com/isocpp/CppCoreGuidelines/blob/master/CppCoreGuidelines.md#es9-avoid-all_caps-names): *Avoid `ALL_CAPS` names*. I see you *are* using a new language feature of `enum class`, so your code might not be as out-of-date as you feared. You might consider reserving a value of 0 for ERROR or empty or something like that.

---

```
std::string lexeme = "";
```

Don’t, in general, assign `""` to clear out a `std::string`. In this case, there is no need to say anything at all because `string` has a default constructor. So just leave the default initializer off of the member.

```
Token token = Token::ERROR;
```

Since that is what you want for the default, I would make ERROR the first thing in the enumeration list so it has the value zero. It doesn’t matter ideally, but it’s nice and might be helpful later.

---

```
return std::isalpha(character) || character == '_');
```

See [cppreference](https://en.cppreference.com/w/cpp/string/byte/isalpha) for the Kosher way to call these ancient functions:

```
⋯ isalpha(static_cast<unsigned char>(character)) ⋯
```

Since you already wrapped these calls in your own helpers, it will only appear within those helpers, once.

```
bool isWhitespace(char character)
```

Not using `std::isspace`? You are not covering as many characters as it does.

---

```
static std::set<std::string> keywords = {
    "auto", "break", "case", "char", "const", "continue", "default",
    ⋯
```

Two things: the `set` is rather slow for lookup! A sorted `vector` would be faster! Boost.Container has a `flat_set` etc.

Second, you are copying the statically-allocated lexical string literals into string objects. Do you really need `string` here? I would ([did](https://codereview.stackexchange.com/questions/192030/fizzbuzz-17-style), actually) just use a plain (pre-sorted) array of `const char*`, and make the whole thing `constexpr`. If [`std::string` literals](https://en.cppreference.com/w/cpp/string/basic_string/operator%22%22s) were `constexpr` you could at least save the redundant copy and run-time copying, but pondering why it isn’t `constexpr` will show you why I’m mentioning this — it needs to allocate memory, and do run-time work to set up the set.
Keeping the entire table in one contiguous lump of memory will not only save memory for all those pointers and nodes, but will be *much* faster.

---

```
Lexer::Lexer(std::istream& stream): mStream(stream)
{ }
```

Use *uniform initialization*. So now you write curly braces instead of parens:

```
: mStream{stream}
```

and this short one-liner could go inline in the header.

---

```
char Lexer::nextChar()
{
    char c;
    mStream.read(&c, 1);
    return c;
}
```

What if no character was read? The stream could be `bad`, you know. Maybe you hit the end of the file, or the network glitched.

---

```
char c;
do c = nextChar();
while (isWhitespace(c));
return c;
```

You *can* write this in a way that avoids defining `c` outside the loop. More importantly, deal with errors from `nextChar`. It would be cleaner if it *only* skipped whitespace and did not also read (and return) the next char beyond that! (Hint: see the next function in your file.)

```
// We've already read the first character, so set that in advance
lexedWord.resize(word.size());
lexedWord[0] = word[0];
```

Exactly! Don’t read one ahead. `skipWhitespace` should do its one job only. It is especially confusing since `peekWord` does not do the skip-whitespace call. It seems that it is called when you already know that the first character of the word does match the input? This is going to lead to maintenance problems, believe me.

(Later: I see a lot of the functions take the read-ahead character as another parameter and have special code to deal with that first. Get rid of all that. The function should be called with the input set to the first position of the thing it wants to read. You already know to peek and to rewind, so there is no reason to have this one-off getting in the way.)

Did you know that a stream can give you an input iterator? So rather than allocating a string and reading into it and then comparing (oh, and you didn’t check to see if it read as many bytes as you asked for), you can use something like `std::equal` directly between `word` and the input stream. If/when you *do* need to access the contents of a `std::string` as a buffer for writing into, use `.data()`.

---

```
std::string Lexer::lexUniversalCharacter(char character, bool& isValid)
```

Two return values, no problem. Don’t use an “out” parameter for the second one! ([⧺F.21](https://github.com/isocpp/CppCoreGuidelines/blob/master/CppCoreGuidelines.md#f21-to-return-multiple-out-values-prefer-returning-a-tuple-or-struct)). However, I think you don’t have two values here, but an `optional` result.

---

```
return LexResult{Token::IDENTIFIER, identifier};
```

You don’t need to name the type, as it is automatically picked up from the function’s return type:

```
return {Token::IDENTIFIER, identifier};
```

This can be much nicer.

---

```
if (peekChar() == 'L' || peekChar() == 'l') {
    suffix += readLongSuffix();
    if (peekChar() == 'u' || peekChar() == 'U')
        suffix += nextChar();
}
else if (peekChar() == 'u' || peekChar() == 'U') {
    suffix += nextChar();
    if (peekChar() == 'l' || peekChar() == 'L')
```

Avoid calling `peekChar` twice on the same character. You can use a new feature in `if` statements here:

```
if (auto ch = peekChar(); ch == 'L' || ch == 'l') {
```

or (since you are checking a lot of cap/lower pairs) define a helper function:

```
if (mAtCh(peekChar(), 'L')) {
```

but later you need something more general:

```
if (match_any_of(peekChar(), "+-")) {
if (match_any_of(peekChar(), "fFlL")) {
// ... and the long character list in lexEscapeSequence
```

and to write that, use `std::any_of` in a one-line wrapper.
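Back to the keyword table from earlier in this answer: a sketch of what the contiguous, pre-sorted `constexpr` version could look like (the lookup helper `isKeyword` is my own name for it):

```cpp
#include <algorithm>
#include <iterator>
#include <string_view>

// Contiguous, pre-sorted (ASCII order) table of the C99 keywords;
// no allocation, no nodes, usable at compile time.
constexpr const char* keywords[] = {
    "_Bool", "_Complex", "_Imaginary", "auto", "break", "case", "char",
    "const", "continue", "default", "do", "double", "else", "enum",
    "extern", "float", "for", "goto", "if", "inline", "int", "long",
    "register", "restrict", "return", "short", "signed", "sizeof",
    "static", "struct", "switch", "typedef", "union", "unsigned",
    "void", "volatile", "while"
};

// Binary search over the sorted table replaces the std::set lookup.
bool isKeyword(std::string_view word)
{
    return std::binary_search(std::begin(keywords), std::end(keywords),
                              word, std::less<>{});
}
```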
---

```
if (peekWord("<<=")) return LexResult{Token::PUNCTUATOR, "<<="};
if (peekWord("<:"))  return LexResult{Token::PUNCTUATOR, "<:"};
if (peekWord("<%"))  return LexResult{Token::PUNCTUATOR, "<%"};
if (peekWord("<<"))  return LexResult{Token::PUNCTUATOR, "<<"};
if (peekWord("<="))  return LexResult{Token::PUNCTUATOR, "<="};
```

---

There are a lot of these. Notice how the thing you peek always matches the parameter to the return value? What is the `if` even doing? You need a function that *returns* the peeked word.

In fact, all of this reads one char, switches on it, then peeks on the rest of the word. Why does it need to do it in two parts like that? You are just classifying the “word” as a PUNCTUATOR. Make a list (static array, as discussed with keywords) and treat it in the same way. Just as there are keywords spelled with letters, these are a list of legal words spelled with non-letter/non-digit characters. The trick is you have to take the longest sequence that is a valid token. But **use a table**, *not* dozens of duplicated snippets of code:

```
peek a character and append to the token
is that on the list?
    no:  return with what you had.
    yes: advance the input and append to the token.
```

Good!
=====

I see you used an anonymous namespace for the helpers in the CPP file.

Using `std::istream` for reading the text: most people just use a string.
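And to tie this back to the `read_*` signatures shown at the top of this answer, here is a sketch of one such reader. The body is an illustration of the shape, not the code I actually use:

```cpp
#include <cctype>
#include <optional>
#include <string>
#include <string_view>

// Consume an identifier from the front of `input` and return it;
// on failure, return no value and leave `input` untouched.
std::optional<std::string> read_identifier(std::string_view& input)
{
    auto is_start = [](unsigned char c) { return std::isalpha(c) || c == '_'; };
    auto is_body  = [&](unsigned char c) { return is_start(c) || std::isdigit(c); };

    if (input.empty() || !is_start(input.front()))
        return std::nullopt;

    std::size_t len = 1;
    while (len < input.size() && is_body(input[len]))
        ++len;

    std::string result{input.substr(0, len)};
    input.remove_prefix(len);   // whack the piece off the front of the view
    return result;
}
```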
Spotted a bug:

```
while (character == '/' && peekChar() == '/') {
    skipLineComment();
    character = skipWhitespace();
}
while (character == '/' && peekChar() == '*') {
    skipBlockComment();
    character = skipWhitespace();
}
```

If you get a block comment followed by a line comment, then you will hit the rest of the code with a `/` as the `character`, which will look like a division rather than a comment.

To fix I would do this:

```
while (character == '/') {
    if (peekChar() == '/') {
        skipLineComment();
        character = skipWhitespace();
    }
    else if (peekChar() == '*') {
        skipBlockComment();
        character = skipWhitespace();
    }
    else {
        break;
    }
}
```
193,787
I'm building a compiler for fun, and this is my first pass at the lexer for it. It should handle all tokens for C99, with the exception of the preprocessor tokens. It's very minimal, only grabbing the token and lexeme. Specific areas where I'd like feedback: * I rarely get to use anything beyond C++98. Are there any more recent language features I could be taking advantage of? Anything that could be more idiomatic? * My error-handling seems a little ad-hoc. Is there a smarter approach? * I'm not sure about the best way to choose tokens. The C99 standard just breaks things down into punctuators, keywords, identifiers, and constants. But the compilers I've looked at tend to be more granular. Is there a better approach to choosing tokens? Why? * I think there has to be a more maintainable way of structuring this. Things I don't care about: * I know there are more efficient ways to handle input. My aim here is to just do the simplest thing and worry about fancy stuff/performance tuning later. * You probably don't like my brace style. Sorry. :/ Lexer.h ``` #ifndef __JMCOMP_LEXER_H__ #define __JMCOMP_LEXER_H__ #include <iostream> #include <string> enum class Token { PUNCTUATOR, KEYWORD, IDENTIFIER, INT_CONST, FLOAT_CONST, CHAR_CONST, ERROR }; struct LexResult { Token token = Token::ERROR; std::string lexeme = ""; }; class Lexer { public: explicit Lexer(std::istream& stream); LexResult next(); private: std::istream& mStream; char nextChar(); char skipWhitespace(); char peekChar(); bool peekWord(const std::string& word); std::string nextIdentifier(char character, bool& isValid); std::string lexUniversalCharacter(char character, bool& isValid); std::string lexHexQuad(bool& isValid); LexResult lexIdentifier(char character); LexResult lexConstant(char character); std::string readIntConstant(bool predicate(char)); std::string readIntSuffix(); std::string readLongSuffix(); LexResult checkInvalidToken(const LexResult& token); LexResult readFloatConstant(const std::string& prefix, bool predicate(char)); LexResult lexCharConst(const std::string& initial, char delimiter); std::string lexEscapeSequence(bool& isValid); void skipLineComment(); void skipBlockComment(); }; #endif ``` Lexer.cpp ``` #include <set> #include "Lexer.h" namespace { bool startsIdentifier(char character) { return std::isalpha(character) || character == '_'); } bool isInIdentifier(char character) { return startsIdentifier(character) || std::isdigit(character); } bool isHexChar(char character) { return std::isdigit(character) || (character >= 'A' && character <= 'F') || (character >= 'a' && character <= 'f'); } bool isOctal(char character) { return std::isdigit(character) && '8' - character > 0; } bool isDigit(char character) { return std::isdigit(character); } bool isWhitespace(char character) { return character == ' ' || character == '\n' || character == '\t'; } bool isExponentChar(char character) { return (character == 'e' || character == 'E' || character == 'p' || character == 'P'); } static std::set<std::string> keywords = { "auto", "break", "case", "char", "const", "continue", "default", "do", "double", "else", "enum", "extern", "float", "for", "goto", "if", "inline", "int", "long", "register", "restrict", "return", "short", "signed", "sizeof", "static", "struct", "switch", "typedef", "union", "unsigned", "void", "volatile", "while", "_Bool", "_Complex", "_Imaginary" }; } Lexer::Lexer(std::istream& stream): mStream(stream) { } char Lexer::nextChar() { char c; mStream.read(&c, 1); return c; } char Lexer::skipWhitespace() { char c; 
do c = nextChar(); while (isWhitespace(c)); return c; } char Lexer::peekChar() { return mStream.peek(); } bool Lexer::peekWord(const std::string& word) { std::string lexedWord; // We've already read the first character, so set that in advance lexedWord.resize(word.size()); lexedWord[0] = word[0]; const size_t bytesToRead = lexedWord.size() - 1; mStream.read(&lexedWord[1], bytesToRead); if (lexedWord == word) { return true; } else { mStream.seekg((-bytesToRead), std::ios::cur); return false; } } std::string Lexer::lexHexQuad(bool& isValid) { std::string hexQuad; for (size_t i = 0; i < 4; ++i) { char character = nextChar(); // First check if the character is okay if (!isHexChar(character)) isValid = false; // Get a better error message (I hope?) if we keep reading // until a "logical" break, so no matter what keep going unless // there's whitespace if (!isWhitespace(character) && mStream) hexQuad += character; // If we hit whitespace, there's no way it's valid // We're at a logical boundary, so just return here else return hexQuad; } return hexQuad; } std::string Lexer::lexUniversalCharacter(char character, bool& isValid) { std::string universalCharacter(1, character); character = nextChar(); universalCharacter += character; if (character == 'u') universalCharacter += lexHexQuad(isValid); else if (character == 'U') { universalCharacter += lexHexQuad(isValid); universalCharacter += lexHexQuad(isValid); } else isValid = false; return universalCharacter; } std::string Lexer::nextIdentifier(char character, bool& isValid) { std::string identifier; while (isInIdentifier(character)) { if (character == '\\') { identifier += lexUniversalCharacter(character, isValid); if (!isValid) return identifier; } else identifier += character; character = nextChar(); } mStream.seekg(-1, std::ios::cur); return identifier; } LexResult Lexer::checkInvalidToken(const LexResult& token) { std::string lexeme = token.lexeme; if (!isWhitespace(peekChar()) && peekChar() != EOF) { while (!isWhitespace(peekChar()) && peekChar() != EOF) lexeme += nextChar(); return LexResult{Token::ERROR, lexeme}; } return token; } LexResult Lexer::lexIdentifier(char character) { bool isValid = true; const std::string identifier = nextIdentifier(character, isValid); if (!isValid) return LexResult{Token::ERROR, identifier}; if (keywords.count(identifier)) return LexResult{Token::KEYWORD, identifier}; return LexResult{Token::IDENTIFIER, identifier}; } std::string Lexer::readIntConstant(bool predicate(char)) { std::string lexeme; while (predicate(peekChar())) lexeme += nextChar(); return lexeme; } std::string Lexer::readLongSuffix() { std::string suffix; suffix += nextChar(); if (peekChar() == 'L' || peekChar() == 'l') suffix += nextChar(); return suffix; } std::string Lexer::readIntSuffix() { std::string suffix; if (peekChar() == 'L' || peekChar() == 'l') { suffix += readLongSuffix(); if (peekChar() == 'u' || peekChar() == 'U') suffix += nextChar(); } else if (peekChar() == 'u' || peekChar() == 'U') { suffix += nextChar(); if (peekChar() == 'l' || peekChar() == 'L') suffix += readLongSuffix(); } return suffix; } LexResult Lexer::readFloatConstant(const std::string& prefix, bool predicate(char)) { std::string result = prefix; if (peekChar() == '.') { result += nextChar(); result += readIntConstant(predicate); } if (isExponentChar(peekChar())) { result += nextChar(); if (peekChar() == '+' || peekChar() == '-') result += nextChar(); const std::string exponentSequence = readIntConstant(predicate); if (exponentSequence.empty()) return 
checkInvalidToken(LexResult{Token::ERROR, result}); result += exponentSequence; } const char suffix = peekChar(); if (suffix == 'f' || suffix == 'F' || suffix == 'l' || suffix == 'L') result += nextChar(); return checkInvalidToken(LexResult{Token::FLOAT_CONST, result}); } LexResult Lexer::lexConstant(char character) { std::string lexeme(1, character); Token token = Token::INT_CONST; auto predicate = isDigit; if (peekChar() == 'x' || peekChar() == 'X') { predicate = isHexChar; lexeme += nextChar(); } else if (character == '0') predicate = isOctal; lexeme += readIntConstant(predicate); if (peekChar() == '.' || isExponentChar(peekChar())) return readFloatConstant(lexeme, predicate); lexeme += readIntSuffix(); return checkInvalidToken(LexResult{token, lexeme}); } std::string Lexer::lexEscapeSequence(bool& isValid) { std::string lexeme(1, nextChar()); const char c = peekChar(); if (c == '\'' || c == '"' || c == '?' || c == '\\' || c == 'a' || c == 'b' || c == 'f' || c == 'n' || c == 'r' || c == 't' || c == 'v' || c == 'x' || isHexChar(c)) return lexeme + nextChar(); isValid = false; return lexeme; } LexResult Lexer::lexCharConst(const std::string& initial, char delimiter) { bool isValid = true; std::string lexeme = initial + delimiter; while (peekChar() != delimiter && peekChar() != EOF) { if (peekChar() == '\n') isValid = false; if (peekChar() == '\\') { lexeme += lexEscapeSequence(isValid); continue; } lexeme += nextChar(); } lexeme += nextChar(); return LexResult{isValid ? Token::CHAR_CONST : Token::ERROR, lexeme}; } void Lexer::skipLineComment() { nextChar(); nextChar(); while (peekChar() != '\n') nextChar(); } void Lexer::skipBlockComment() { nextChar(); // Skip the first * char c; do c = nextChar(); while (!(c == '*' && peekChar() == '/')); nextChar(); // Pass the last / } LexResult Lexer::next() { char character = skipWhitespace(); while (character == '/' && peekChar() == '/') { skipLineComment(); character = skipWhitespace(); } while (character == '/' && peekChar() == '*') { skipBlockComment(); character = skipWhitespace(); } if (isdigit(character)) return lexConstant(character); if (character == '\'' || character == '"') return lexCharConst("", character); if (character == 'L') if (peekChar() == '\'' || peekChar() == '"') { return lexCharConst("L", nextChar()); } if (startsIdentifier(character)) return lexIdentifier(character); switch (character) { case '+': if (peekWord("++")) return LexResult{Token::PUNCTUATOR, "++"}; if (peekWord("+=")) return LexResult{Token::PUNCTUATOR, "+="}; return LexResult{Token::PUNCTUATOR, "+"}; case '-': if (peekWord("->")) return LexResult{Token::PUNCTUATOR, "->"}; if (peekWord("--")) return LexResult{Token::PUNCTUATOR, "--"}; if (peekWord("-=")) return LexResult{Token::PUNCTUATOR, "-="}; return LexResult{Token::PUNCTUATOR, "-"}; case '*': if (peekWord("*=")) return LexResult{Token::PUNCTUATOR, "*="}; return LexResult{Token::PUNCTUATOR, "*"}; case '/': if (peekWord("/=")) return LexResult{Token::PUNCTUATOR, "/="}; return LexResult{Token::PUNCTUATOR, "/"}; case '=': if (peekWord("==")) return LexResult{Token::PUNCTUATOR, "=="}; return LexResult{Token::PUNCTUATOR, "="}; case '[': return LexResult{Token::PUNCTUATOR, "["}; case ']': return LexResult{Token::PUNCTUATOR, "]"}; case '(': return LexResult{Token::PUNCTUATOR, "("}; case ')': return LexResult{Token::PUNCTUATOR, ")"}; case '.': if (peekWord("...")) return LexResult{Token::PUNCTUATOR, "..."}; return LexResult{Token::PUNCTUATOR, "."}; case '&': if (peekWord("&&")) return LexResult{Token::PUNCTUATOR, 
"&&"}; if (peekWord("&=")) return LexResult{Token::PUNCTUATOR, "&="}; return LexResult{Token::PUNCTUATOR, "&"}; case '~': return LexResult{Token::PUNCTUATOR, "~"}; case '!': if (peekWord("!=")) return LexResult{Token::PUNCTUATOR, "!="}; return LexResult{Token::PUNCTUATOR, "!"}; case '%': if (peekWord("%:%:")) return LexResult{Token::PUNCTUATOR, "%:%:"}; if (peekWord("%:")) return LexResult{Token::PUNCTUATOR, "%:"}; if (peekWord("%=")) return LexResult{Token::PUNCTUATOR, "%="}; if (peekWord("%>")) return LexResult{Token::PUNCTUATOR, "%>"}; return LexResult{Token::PUNCTUATOR, "%"}; case '<': if (peekWord("<<=")) return LexResult{Token::PUNCTUATOR, "<<="}; if (peekWord("<:")) return LexResult{Token::PUNCTUATOR, "<:"}; if (peekWord("<%")) return LexResult{Token::PUNCTUATOR, "<%"}; if (peekWord("<<")) return LexResult{Token::PUNCTUATOR, "<<"}; if (peekWord("<=")) return LexResult{Token::PUNCTUATOR, "<="}; return LexResult{Token::PUNCTUATOR, "<"}; case '>': if (peekWord(">>=")) return LexResult{Token::PUNCTUATOR, ">>="}; if (peekWord(">>")) return LexResult{Token::PUNCTUATOR, ">>"}; if (peekWord(">=")) return LexResult{Token::PUNCTUATOR, ">="}; return LexResult{Token::PUNCTUATOR, ">"}; case '^': if (peekWord("^=")) return LexResult{Token::PUNCTUATOR, "^="}; return LexResult{Token::PUNCTUATOR, "^"}; case '|': if (peekWord("||")) return LexResult{Token::PUNCTUATOR, "||"}; if (peekWord("|=")) return LexResult{Token::PUNCTUATOR, "|="}; return LexResult{Token::PUNCTUATOR, "|"}; case '?': return LexResult{Token::PUNCTUATOR, "?"}; case ':': if (peekWord(":>")) return LexResult{Token::PUNCTUATOR, ":>"}; return LexResult{Token::PUNCTUATOR, ":"}; case ';': return LexResult{Token::PUNCTUATOR, ";"}; case ',': return LexResult{Token::PUNCTUATOR, ","}; case '#': if (peekWord("##")) return LexResult{Token::PUNCTUATOR, "##"}; return LexResult{Token::PUNCTUATOR, "#"}; default: return LexResult{Token::ERROR, std::string(character, 1)}; } } ```
2018/05/06
[ "https://codereview.stackexchange.com/questions/193787", "https://codereview.stackexchange.com", "https://codereview.stackexchange.com/users/41400/" ]
### Include what you use You use `isdigit`, `isalpha`, etc. -- but you haven't included `ctype.h` or `cctype` to assure that they're declared. ### Broken Code ``` bool startsIdentifier(char character) { return std::isalpha(character) || character == '_'); } ``` I'm not sure if you accidentally messed this up while cutting and pasting to the browser, but this won't even compile as it is right now. You have mis-matched parentheses. ### hex characters Checking for hexadecimal characters. ``` bool isHexChar(char character) { return std::isdigit(character) || (character >= 'A' && character <= 'F') || (character >= 'a' && character <= 'f'); } ``` The standard library provides `std::isxdigit` to check for hexadecimal characters, so you could call that instead of inventing your own like this. ### Conversion to unsigned char As @JDługosz pointed out, you want to convert the argument to unsigned char before calling. Since you're writing a wrapper anyway, you can pretty easily avoid casting though--just change the argument to your function to unsigned char: ``` bool isHexChar(unsigned char ch) { return isxdigit(ch); } ``` ### Ignore: pure pedanticism On a purely pedantic note, your code to check whether a character is between 'A' and 'F' inclusive isn't actually required to work. Digits are required to be contiguous, but letters aren't. That is purely pedantic though--the only character encoding I know of that has non-contiguous characters is EBCDIC, which is contiguous from A to F (its first non-contiguous section is after `i`)--and even if it was a problem with EBCDIC, you probably wouldn't care anyway. ### Language Compliance As it stands right now, your lexer doesn't implement a number of things the way the C and C++ standards say it should. For example, adjacent string literals: ``` "char literal 1" " char literal 2" ``` ...should be spliced into a single string literal. There are phases of translation, however, so you have to do things in the right order. Splicing string literals happens at phase 6, so it's only done *after* conversion of universal character names (to give only one example). Removal of trigraphs from the language makes the order *somewhat* less important than it used to be though. For example, `"??!"` contains a trigraph that needs to be converted to a single character--but `"?" "?!"` does not contain a trigraph, and must remain as three separate characters. ### Clumsy Usage The interface to the code looks pretty clumsy and uninformative, at least to me. For example, if there's an error, you'd typically want to report its location to the user--but the `LexResult` doesn't contain the error location, so that's essentially impossible to do. In fact, there doesn't even seem to be a way to distinguish between an error, and simply the end of the input. Either way, I seem to just get a `LexResult` with its `Token` set to `ERROR`. At least to me, it *seems* like when you tried to use a lexer, it would be fairly important to know whether you'd encountered an error, or successfully lexed the entire input.
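To sketch what a more informative result could look like (the added fields and the `END_OF_INPUT` value are illustrations, not something from the original code):

```cpp
#include <cstddef>
#include <string>

enum class Token {
    PUNCTUATOR, KEYWORD, IDENTIFIER,
    INT_CONST, FLOAT_CONST, CHAR_CONST,
    END_OF_INPUT,   // distinct from ERROR, so callers can tell them apart
    ERROR
};

struct LexResult {
    Token token = Token::ERROR;
    std::string lexeme;
    std::size_t line = 0;     // where the lexeme began, for error reports
    std::size_t column = 0;
};
```

With that shape, a caller can loop until `END_OF_INPUT` and report `line:column` on `ERROR` instead of guessing what an empty error result means.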
Spotted a bug:

```
while (character == '/' && peekChar() == '/') {
    skipLineComment();
    character = skipWhitespace();
}
while (character == '/' && peekChar() == '*') {
    skipBlockComment();
    character = skipWhitespace();
}
```

If you get a block comment followed by a line comment, then you will hit the rest of the code with a `/` as the `character`, which will look like a division rather than a comment.

To fix I would do this:

```
while (character == '/') {
    if (peekChar() == '/') {
        skipLineComment();
        character = skipWhitespace();
    }
    else if (peekChar() == '*') {
        skipBlockComment();
        character = skipWhitespace();
    }
    else {
        break;
    }
}
```
193,787
I'm building a compiler for fun, and this is my first pass at the lexer for it. It should handle all tokens for C99, with the exception of the preprocessor tokens. It's very minimal, only grabbing the token and lexeme. Specific areas where I'd like feedback: * I rarely get to use anything beyond C++98. Are there any more recent language features I could be taking advantage of? Anything that could be more idiomatic? * My error-handling seems a little ad-hoc. Is there a smarter approach? * I'm not sure about the best way to choose tokens. The C99 standard just breaks things down into punctuators, keywords, identifiers, and constants. But the compilers I've looked at tend to be more granular. Is there a better approach to choosing tokens? Why? * I think there has to be a more maintainable way of structuring this. Things I don't care about: * I know there are more efficient ways to handle input. My aim here is to just do the simplest thing and worry about fancy stuff/performance tuning later. * You probably don't like my brace style. Sorry. :/ Lexer.h ``` #ifndef __JMCOMP_LEXER_H__ #define __JMCOMP_LEXER_H__ #include <iostream> #include <string> enum class Token { PUNCTUATOR, KEYWORD, IDENTIFIER, INT_CONST, FLOAT_CONST, CHAR_CONST, ERROR }; struct LexResult { Token token = Token::ERROR; std::string lexeme = ""; }; class Lexer { public: explicit Lexer(std::istream& stream); LexResult next(); private: std::istream& mStream; char nextChar(); char skipWhitespace(); char peekChar(); bool peekWord(const std::string& word); std::string nextIdentifier(char character, bool& isValid); std::string lexUniversalCharacter(char character, bool& isValid); std::string lexHexQuad(bool& isValid); LexResult lexIdentifier(char character); LexResult lexConstant(char character); std::string readIntConstant(bool predicate(char)); std::string readIntSuffix(); std::string readLongSuffix(); LexResult checkInvalidToken(const LexResult& token); LexResult readFloatConstant(const std::string& prefix, bool predicate(char)); LexResult lexCharConst(const std::string& initial, char delimiter); std::string lexEscapeSequence(bool& isValid); void skipLineComment(); void skipBlockComment(); }; #endif ``` Lexer.cpp ``` #include <set> #include "Lexer.h" namespace { bool startsIdentifier(char character) { return std::isalpha(character) || character == '_'); } bool isInIdentifier(char character) { return startsIdentifier(character) || std::isdigit(character); } bool isHexChar(char character) { return std::isdigit(character) || (character >= 'A' && character <= 'F') || (character >= 'a' && character <= 'f'); } bool isOctal(char character) { return std::isdigit(character) && '8' - character > 0; } bool isDigit(char character) { return std::isdigit(character); } bool isWhitespace(char character) { return character == ' ' || character == '\n' || character == '\t'; } bool isExponentChar(char character) { return (character == 'e' || character == 'E' || character == 'p' || character == 'P'); } static std::set<std::string> keywords = { "auto", "break", "case", "char", "const", "continue", "default", "do", "double", "else", "enum", "extern", "float", "for", "goto", "if", "inline", "int", "long", "register", "restrict", "return", "short", "signed", "sizeof", "static", "struct", "switch", "typedef", "union", "unsigned", "void", "volatile", "while", "_Bool", "_Complex", "_Imaginary" }; } Lexer::Lexer(std::istream& stream): mStream(stream) { } char Lexer::nextChar() { char c; mStream.read(&c, 1); return c; } char Lexer::skipWhitespace() { char c; 
do c = nextChar(); while (isWhitespace(c)); return c; } char Lexer::peekChar() { return mStream.peek(); } bool Lexer::peekWord(const std::string& word) { std::string lexedWord; // We've already read the first character, so set that in advance lexedWord.resize(word.size()); lexedWord[0] = word[0]; const size_t bytesToRead = lexedWord.size() - 1; mStream.read(&lexedWord[1], bytesToRead); if (lexedWord == word) { return true; } else { mStream.seekg((-bytesToRead), std::ios::cur); return false; } } std::string Lexer::lexHexQuad(bool& isValid) { std::string hexQuad; for (size_t i = 0; i < 4; ++i) { char character = nextChar(); // First check if the character is okay if (!isHexChar(character)) isValid = false; // Get a better error message (I hope?) if we keep reading // until a "logical" break, so no matter what keep going unless // there's whitespace if (!isWhitespace(character) && mStream) hexQuad += character; // If we hit whitespace, there's no way it's valid // We're at a logical boundary, so just return here else return hexQuad; } return hexQuad; } std::string Lexer::lexUniversalCharacter(char character, bool& isValid) { std::string universalCharacter(1, character); character = nextChar(); universalCharacter += character; if (character == 'u') universalCharacter += lexHexQuad(isValid); else if (character == 'U') { universalCharacter += lexHexQuad(isValid); universalCharacter += lexHexQuad(isValid); } else isValid = false; return universalCharacter; } std::string Lexer::nextIdentifier(char character, bool& isValid) { std::string identifier; while (isInIdentifier(character)) { if (character == '\\') { identifier += lexUniversalCharacter(character, isValid); if (!isValid) return identifier; } else identifier += character; character = nextChar(); } mStream.seekg(-1, std::ios::cur); return identifier; } LexResult Lexer::checkInvalidToken(const LexResult& token) { std::string lexeme = token.lexeme; if (!isWhitespace(peekChar()) && peekChar() != EOF) { while (!isWhitespace(peekChar()) && peekChar() != EOF) lexeme += nextChar(); return LexResult{Token::ERROR, lexeme}; } return token; } LexResult Lexer::lexIdentifier(char character) { bool isValid = true; const std::string identifier = nextIdentifier(character, isValid); if (!isValid) return LexResult{Token::ERROR, identifier}; if (keywords.count(identifier)) return LexResult{Token::KEYWORD, identifier}; return LexResult{Token::IDENTIFIER, identifier}; } std::string Lexer::readIntConstant(bool predicate(char)) { std::string lexeme; while (predicate(peekChar())) lexeme += nextChar(); return lexeme; } std::string Lexer::readLongSuffix() { std::string suffix; suffix += nextChar(); if (peekChar() == 'L' || peekChar() == 'l') suffix += nextChar(); return suffix; } std::string Lexer::readIntSuffix() { std::string suffix; if (peekChar() == 'L' || peekChar() == 'l') { suffix += readLongSuffix(); if (peekChar() == 'u' || peekChar() == 'U') suffix += nextChar(); } else if (peekChar() == 'u' || peekChar() == 'U') { suffix += nextChar(); if (peekChar() == 'l' || peekChar() == 'L') suffix += readLongSuffix(); } return suffix; } LexResult Lexer::readFloatConstant(const std::string& prefix, bool predicate(char)) { std::string result = prefix; if (peekChar() == '.') { result += nextChar(); result += readIntConstant(predicate); } if (isExponentChar(peekChar())) { result += nextChar(); if (peekChar() == '+' || peekChar() == '-') result += nextChar(); const std::string exponentSequence = readIntConstant(predicate); if (exponentSequence.empty()) return 
checkInvalidToken(LexResult{Token::ERROR, result}); result += exponentSequence; } const char suffix = peekChar(); if (suffix == 'f' || suffix == 'F' || suffix == 'l' || suffix == 'L') result += nextChar(); return checkInvalidToken(LexResult{Token::FLOAT_CONST, result}); } LexResult Lexer::lexConstant(char character) { std::string lexeme(1, character); Token token = Token::INT_CONST; auto predicate = isDigit; if (peekChar() == 'x' || peekChar() == 'X') { predicate = isHexChar; lexeme += nextChar(); } else if (character == '0') predicate = isOctal; lexeme += readIntConstant(predicate); if (peekChar() == '.' || isExponentChar(peekChar())) return readFloatConstant(lexeme, predicate); lexeme += readIntSuffix(); return checkInvalidToken(LexResult{token, lexeme}); } std::string Lexer::lexEscapeSequence(bool& isValid) { std::string lexeme(1, nextChar()); const char c = peekChar(); if (c == '\'' || c == '"' || c == '?' || c == '\\' || c == 'a' || c == 'b' || c == 'f' || c == 'n' || c == 'r' || c == 't' || c == 'v' || c == 'x' || isHexChar(c)) return lexeme + nextChar(); isValid = false; return lexeme; } LexResult Lexer::lexCharConst(const std::string& initial, char delimiter) { bool isValid = true; std::string lexeme = initial + delimiter; while (peekChar() != delimiter && peekChar() != EOF) { if (peekChar() == '\n') isValid = false; if (peekChar() == '\\') { lexeme += lexEscapeSequence(isValid); continue; } lexeme += nextChar(); } lexeme += nextChar(); return LexResult{isValid ? Token::CHAR_CONST : Token::ERROR, lexeme}; } void Lexer::skipLineComment() { nextChar(); nextChar(); while (peekChar() != '\n') nextChar(); } void Lexer::skipBlockComment() { nextChar(); // Skip the first * char c; do c = nextChar(); while (!(c == '*' && peekChar() == '/')); nextChar(); // Pass the last / } LexResult Lexer::next() { char character = skipWhitespace(); while (character == '/' && peekChar() == '/') { skipLineComment(); character = skipWhitespace(); } while (character == '/' && peekChar() == '*') { skipBlockComment(); character = skipWhitespace(); } if (isdigit(character)) return lexConstant(character); if (character == '\'' || character == '"') return lexCharConst("", character); if (character == 'L') if (peekChar() == '\'' || peekChar() == '"') { return lexCharConst("L", nextChar()); } if (startsIdentifier(character)) return lexIdentifier(character); switch (character) { case '+': if (peekWord("++")) return LexResult{Token::PUNCTUATOR, "++"}; if (peekWord("+=")) return LexResult{Token::PUNCTUATOR, "+="}; return LexResult{Token::PUNCTUATOR, "+"}; case '-': if (peekWord("->")) return LexResult{Token::PUNCTUATOR, "->"}; if (peekWord("--")) return LexResult{Token::PUNCTUATOR, "--"}; if (peekWord("-=")) return LexResult{Token::PUNCTUATOR, "-="}; return LexResult{Token::PUNCTUATOR, "-"}; case '*': if (peekWord("*=")) return LexResult{Token::PUNCTUATOR, "*="}; return LexResult{Token::PUNCTUATOR, "*"}; case '/': if (peekWord("/=")) return LexResult{Token::PUNCTUATOR, "/="}; return LexResult{Token::PUNCTUATOR, "/"}; case '=': if (peekWord("==")) return LexResult{Token::PUNCTUATOR, "=="}; return LexResult{Token::PUNCTUATOR, "="}; case '[': return LexResult{Token::PUNCTUATOR, "["}; case ']': return LexResult{Token::PUNCTUATOR, "]"}; case '(': return LexResult{Token::PUNCTUATOR, "("}; case ')': return LexResult{Token::PUNCTUATOR, ")"}; case '.': if (peekWord("...")) return LexResult{Token::PUNCTUATOR, "..."}; return LexResult{Token::PUNCTUATOR, "."}; case '&': if (peekWord("&&")) return LexResult{Token::PUNCTUATOR, 
"&&"}; if (peekWord("&=")) return LexResult{Token::PUNCTUATOR, "&="}; return LexResult{Token::PUNCTUATOR, "&"}; case '~': return LexResult{Token::PUNCTUATOR, "~"}; case '!': if (peekWord("!=")) return LexResult{Token::PUNCTUATOR, "!="}; return LexResult{Token::PUNCTUATOR, "!"}; case '%': if (peekWord("%:%:")) return LexResult{Token::PUNCTUATOR, "%:%:"}; if (peekWord("%:")) return LexResult{Token::PUNCTUATOR, "%:"}; if (peekWord("%=")) return LexResult{Token::PUNCTUATOR, "%="}; if (peekWord("%>")) return LexResult{Token::PUNCTUATOR, "%>"}; return LexResult{Token::PUNCTUATOR, "%"}; case '<': if (peekWord("<<=")) return LexResult{Token::PUNCTUATOR, "<<="}; if (peekWord("<:")) return LexResult{Token::PUNCTUATOR, "<:"}; if (peekWord("<%")) return LexResult{Token::PUNCTUATOR, "<%"}; if (peekWord("<<")) return LexResult{Token::PUNCTUATOR, "<<"}; if (peekWord("<=")) return LexResult{Token::PUNCTUATOR, "<="}; return LexResult{Token::PUNCTUATOR, "<"}; case '>': if (peekWord(">>=")) return LexResult{Token::PUNCTUATOR, ">>="}; if (peekWord(">>")) return LexResult{Token::PUNCTUATOR, ">>"}; if (peekWord(">=")) return LexResult{Token::PUNCTUATOR, ">="}; return LexResult{Token::PUNCTUATOR, ">"}; case '^': if (peekWord("^=")) return LexResult{Token::PUNCTUATOR, "^="}; return LexResult{Token::PUNCTUATOR, "^"}; case '|': if (peekWord("||")) return LexResult{Token::PUNCTUATOR, "||"}; if (peekWord("|=")) return LexResult{Token::PUNCTUATOR, "|="}; return LexResult{Token::PUNCTUATOR, "|"}; case '?': return LexResult{Token::PUNCTUATOR, "?"}; case ':': if (peekWord(":>")) return LexResult{Token::PUNCTUATOR, ":>"}; return LexResult{Token::PUNCTUATOR, ":"}; case ';': return LexResult{Token::PUNCTUATOR, ";"}; case ',': return LexResult{Token::PUNCTUATOR, ","}; case '#': if (peekWord("##")) return LexResult{Token::PUNCTUATOR, "##"}; return LexResult{Token::PUNCTUATOR, "#"}; default: return LexResult{Token::ERROR, std::string(character, 1)}; } } ```
2018/05/06
[ "https://codereview.stackexchange.com/questions/193787", "https://codereview.stackexchange.com", "https://codereview.stackexchange.com/users/41400/" ]
The same lexer written in Lex. Lexer.l ------- ``` startsIdentifier [a-zA-Z_] isInIdentifier {startsIdentifier}|[0-9] isHexChar [0-9A-Fa-f] isOctal [0-7] isDigit [0-9] isWhitespace [ \n\t] skipWhitespace {isWhitespace}* isHexNumber 0x{isHexChar}+ isOctalNumber 0{isOctal}* isDecNumber [1-9]{isDigit}* isIntegerNumber {isHexNumber}|{isOctalNumber}|{isDecNumber} isFloatLeadingDigit {isDigit}+\.{isDigit}* isFloatTrailingDigit \.{isDigit}+ isExponentDec [eE][+-]?{isDecNumber} isExponentHex [xX][+-]?{isHexNumber} isExponent {isExponentHex}|{isExponentDec} isFloatNumber {isFloatLeadingDigit}{isExponent}?|{isFloatTrailingDigit}{isExponent}?|{isIntegerNumber}{isExponent} escapeCharacter ['"\?\\abfnrtvx{isHexChar}] isCharCharacter \\{escapeCharacter}|[^'\n\\] isStringCharacter \\{escapeCharacter}|[^"\n\\] isCharLiteral '{isCharCharacter}*' isStringLiteral "{isStringCharacter}*" isLiteralChar L?{isCharLiteral}|L?{isStringLiteral} Identifier {startsIdentifier}{isInIdentifier}* LineComment \/\/[^\n]* BlockCommentStart \/\* BlockCommentEnd \*\/ %x COMMENT_BLOCK %% <COMMENT_BLOCK>{BlockCommentEnd} {BEGIN(INITIAL);} <COMMENT_BLOCK>\*[^/] {/* Ignore Star followed by anything except slash */} <COMMENT_BLOCK>[^*]+ {/* Ignore blocks of text */} {skipWhitespace} {/* Ignore */} {LineComment} {/* Ignore */} {BlockCommentStart} {BEGIN(COMMENT_BLOCK);} {isIntegerNumber} {return CONST_INT;} {isFloatNumber} {return CONST_FLOAT;} {isLiteralChar} {return CONST_CHAR;} auto {return KEYWORD_AUTO;} break {return KEYWORD_BREAK;} case {return KEYWORD_CASE;} char {return KEYWORD_CHAR;} const {return KEYWORD_CONST;} continue {return KEYWORD_CONTINUE;} default {return KEYWORD_DEFAULT;} do {return KEYWORD_DO;} double {return KEYWORD_DOUBLE;} else {return KEYWORD_ELSE;} enum {return KEYWORD_ENUM;} extern {return KEYWORD_EXTERN;} float {return KEYWORD_FLOAT;} for {return KEYWORD_FOR;} goto {return KEYWORD_GOTO;} if {return KEYWORD_IF;} inline {return KEYWORD_INLINE;} int {return KEYWORD_INT;} long {return KEYWORD_LONG;} register {return KEYWORD_REGISTER;} restrict {return KEYWORD_RESTRICT;} return {return KEYWORD_RETURN;} short {return KEYWORD_SHORT;} signed {return KEYWORD_SIGNED;} sizeof {return KEYWORD_SIZEOF;} static {return KEYWORD_STATIC;} struct {return KEYWORD_STRUCT;} switch {return KEYWORD_SWITCH;} typedef {return KEYWORD_TYPEDEF;} union {return KEYWORD_UNION;} unsigned {return KEYWORD_UNSIGNED;} void {return KEYWORD_VOID;} volatile {return KEYWORD_VOLATILE;} while {return KEYWORD_WHILE;} _Bool {return KEYWORD_BOOL;} _Complex {return KEYWORD_COMPLEX;} _Imaginary {return KEYWORD_IMAGINARY;} {Identifier} {return IDENTIFIER;} \+ {return '+';} \- {return '-';} \* {return '*';} \/ {return '/';} \% {return '%';} \= {return '=';} \[ {return '[';} \] {return ']';} \( {return '(';} \) {return ')';} \. {return '.';} \^ {return '^';} \| {return '|';} \& {return '&';} \~ {return '~';} \! {return '!';} \< {return '<';} \> {return '>';} \? 
{return '?';} \: {return ':';} \; {return ';';} \, {return ',';} \# {return '#';} \+\+ {return PUNCT_INC;} \-\- {return PUNCT_DEC;} \-\> {return PUNCT_POINT;} \<\< {return PUNCT_SHIFT_LEFT;} \>\> {return PUNCT_SHIFT_RIGHT;} \+\= {return PUNCT_ASSIGN_PLUS;} \-\= {return PUNCT_ASSIGN_MINUS;} \*\= {return PUNCT_ASSIGN_MUL;} \/\= {return PUNCT_ASSIGN_DIV;} \%\= {return PUNCT_ASSIGN_MOD;} \|\= {return PUNCT_ASSIGN_OR;} \&\= {return PUNCT_ASSIGN_AND;} \^\= {return PUNCT_ASSIGN_NOT;} \<\<\= {return PUNCT_ASSIGN_SHIFT_LEFT;} \>\>\= {return PUNCT_ASSIGN_SHIFT_RIGHT;} \<\= {return PUNCT_COMP_LESS_EQUAL;} \>\= {return PUNCT_COMP_GREAT_EQUAL;} \=\= {return PUNCT_COMP_EQUAL;} \!\= {return PUNCT_COMP_NOT_EQUAL;} \&\& {return PUNCT_BOOL_AND;} \|\| {return PUNCT_BOOL_OR;} \%\> {return PUNCT_PERCENT_GREAT;} \%\: {return PUNCT_PERCENT_COLON;} \<\: {return PUNCT_LESS_COLON;} \<\% {return PUNCT_LESS_PERCENT;} \:\> {return PUNCT_COLON_PERCENT;} \#\# {return PUNCT_HASH_HASH;} \.\.\. {return PUNCT_DOT_DOT_DOT;} \%\:\%\: {return PUNCT_PERCENT_COLON_PERCENT_COLON;} . {/* No other rule matched. ERROR */} %% ``` Built with: ``` > flex Lexer.l ``` Note: Not tested.
Spotted a bug: ``` while (character == '/' && peekChar() == '/') { skipLineComment(); character = skipWhitespace(); } while (character == '/' && peekChar() == '*') { skipBlockComment(); character = skipWhitespace(); } ``` If you get a block comment followed by a line comment then you will hit the rest of the code with a `/` as the `character` which will look like a division rather than a comment. To fix I would do this: ``` while (character == '/') { if (peekChar() == '/') { skipLineComment(); character = skipWhitespace(); } else if (peekChar() == '*') { skipBlockComment(); character = skipWhitespace(); } else { break; } } ```
51,851,535
I'd like to pass other arguments to my on\_failure\_callback function but it only seems to want "context". How do I pass other arguments to that function...especially since I'd like to define that function in a separate module so it can be used in all my DAGs. My current default\_args looks like this: ``` default_args = { 'owner': 'Me', 'depends_on_past': True, 'start_date': datetime(2016,01,01), 'email': ['[email protected]'], 'email_on_failure': False, 'email_on_retry': False, 'retries': 1, 'retry_delay': timedelta(minutes=1), 'on_failure_callback': notify_failure, 'max_active_runs': 1 } ``` If I try something like this, Airflow complains: ``` default_args = { 'owner': 'Me', 'depends_on_past': True, 'start_date': datetime(2016,01,01), 'email': ['[email protected]'], 'email_on_failure': False, 'email_on_retry': False, 'retries': 1, 'retry_delay': timedelta(minutes=1), 'on_failure_callback': notify_failure(context,arg1,arg2), 'max_active_runs': 1 } ``` so I'm not sure how to pass arg1 and arg2 to my notify\_failure function, which I would like to define in a separate module that I can simply import into my DAG.
2018/08/15
[ "https://Stackoverflow.com/questions/51851535", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5428968/" ]
Assuming the args are something you can define at the DAG level, then you can use the partials package. ie: ``` from functools import partial def generic_failure(arg1, arg2, context): # do whatever default_args = { 'owner': 'Me', 'depends_on_past': True, 'start_date': datetime(2016,01,01), 'email': ['[email protected]'], 'email_on_failure': False, 'email_on_retry': False, 'retries': 1, 'retry_delay': timedelta(minutes=1), 'on_failure_callback': partial(generic_failure, arg1, arg2), 'max_active_runs': 1 } ``` Calling `partial(generic_failure, arg1, arg2)` will return a function expecting however many arguments are remaining in `generic_failure`, which in the above example is just the single param `context`
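To make the mechanics concrete, here is a minimal standalone sketch of the same pattern (the argument values and the fake `context` dict below are illustrative placeholders, not Airflow's real callback payload): ``` from functools import partial def generic_failure(arg1, arg2, context): # arg1/arg2 were frozen in by partial; context arrives at call time print(f"failure callback: arg1={arg1}, arg2={arg2}, context={context}") # partial() pre-binds the first two positional parameters, # leaving a one-argument callable -- exactly what Airflow expects callback = partial(generic_failure, "my_dag", "#alerts") # Airflow would later invoke callback(context); simulating that here: callback({"task_instance": "example_task"}) ```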
You can use a nested function (a closure) for this: ``` def generic_failure(arg1, arg2): def failure(context): message = 'we have a function that failed with args : {ARG1}, {ARG2}'.format(ARG1=arg1,ARG2=arg2) print(message) return message return failure arg1 = 'arg1' arg2 = 'arg2' default_args = { 'owner': 'Me', 'on_failure_callback': generic_failure(arg1, arg2), } ```
69,252,761
I just made the change from Qt 5.12.3 to 6.1.2. Having done this I went ahead and compiled a very simple QML App: ``` import QtQuick 2.12 import QtQuick.Window 2.12 import QtQuick.Controls 2.0 ApplicationWindow { id: mainWindow visible: true //visibility: Window.Maximized width: 1000 height: 800 title: qsTr("Hello World") Button { id: myTestButton width: 100 height: 50 text: "Click Me!" anchors.centerIn: parent //hoverEnabled: false onClicked: { console.log("Button Was clicked") } } } ``` When I hover over the button now, it slowly gets covered by a slight blue transparent overlay while the mouse is over the button. I can disable this by setting hoverEnabled to false, but I'd much rather change it to something that I can use. How can I change the color of this hover overlay?
2021/09/20
[ "https://Stackoverflow.com/questions/69252761", "https://Stackoverflow.com", "https://Stackoverflow.com/users/792589/" ]
> > whenever this workflow runs on a PR that was issued by Dependabot - it fails as Dependabot PRs don't have the same secret access as other pull requests do. > > > This should no longer (Nov./Dec. 2021) be the case: > > GitHub Actions: Workflows triggered by Dependabot receive **[dependabot secrets](https://docs.github.com/en/code-security/supply-chain-security/keeping-your-dependencies-updated-automatically/managing-encrypted-secrets-for-dependabot)**. > --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- > > > GitHub Actions workflows triggered by Dependabot will now be sent the Dependabot secrets. > > > This change will enable you to pull from private package registries in your CI using the same secrets you have configured for Dependabot to use and will improve how Actions and Dependabot work together. > > > Learn more about [using Actions and Dependabot together](https://docs.github.com/en/code-security/supply-chain-security/keeping-your-dependencies-updated-automatically/automating-dependabot-with-github-actions). > > >
[This issue](https://github.com/dependabot/dependabot-core/issues/3253) describes some options to solve your problem
36,684,006
I checked all around the internet to find out how to check if a tree is a subset of another. By subset, I mean the `issubset` function should `return 1` if all the elements of the first tree appear in the second tree and `0` otherwise. Note that depending on the insertion order of elements, two trees with the same set of elements can have very different shapes. Given the following examples as trees: ``` Elements of the first tree 4 / \ 2 6 / \ / \ 1 2 5 7 Elements of the second Tree 6 / \ 4 7 / \ 2 5 / \ 1 2 ``` The following code traverses the trees then checks the values: ``` int issubset(nodeT **tree1, nodeT **tree2) { if( *tree2 == NULL) return TRUE; if(*tree1 == NULL) return FALSE; if(are_identical(&(*tree1),&(*tree2))) return TRUE; return issubset(&(*tree1)->pLeft, &(*tree2)) || issubset(&(*tree2)->pRight, &(*tree2)); } int are_identical(nodeT **tree1, nodeT **tree2) { nodeT **temp; int iFound = 0, i, r; if( *tree2 == NULL) return TRUE; if( (*tree1) ==NULL && (*tree2) ==NULL) { return FALSE; } if( (*tree1)->iValue != (*tree2)->iValue) { if(iFound = 0) return TRUE; i = issubset(&(*tree1)->pLeft, &(*tree2)); if( i ==0) { r = issubset(&(*tree1)->pRight, &(*tree2)); return r; } return i; } return((*tree1)->iValue == (*tree2)->iValue && are_identical(&(*tree1)->pLeft, &(*tree2)->pLeft) && are_identical(&(*tree1)->pRight, &(*tree2)->pRight) ); } ``` After running my code with the given examples, it reports that the first tree is not a subset of the second, when it actually is a subset.
2016/04/18
[ "https://Stackoverflow.com/questions/36684006", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5890251/" ]
I'm not sure I understand your question, but I will still try to give you my answer. From your example I assume that you're working with binary search trees. But since I do not know what kind of binary tree implementation you're using, I will assume a general scheme. I guess if the tree is somewhat balanced then maybe you can get a better algorithm. Since you have binary search trees, you could assume a function `search(root, key)` which returns a valid pointer to a node containing `key` if this one is found or `NULL` otherwise. Also, I assume that you know the number of nodes of each tree. So you could return `0` right away if `tree1` has more nodes than `tree2`. Otherwise the approach is as follows: ``` int tree1_contained_in_tree2(node * tree1, node * tree2) { if (tree1 == NULL) // am I visiting an empty tree? return 1; // so I am sure tree1 is not NULL ==> I search the contained key in tree2 if (search(tree2, tree1->key) == NULL) return 0; // tree1->key does not belong to tree2 // here tree1->key belongs to tree2, so I test the subtrees of the root return tree1_contained_in_tree2(tree1->left, tree2) && tree1_contained_in_tree2(tree1->right, tree2); } ``` I prefer to use simple pointers to nodes instead of double pointers. I think you can adapt my approach to yours. The algorithm is `O(n log m)` if `tree2` is balanced (`O(n m)` otherwise) where `n` is the number of nodes of `tree1` and `m` the number of nodes of `tree2`.
Binary search trees support efficient sorted iteration over the elements. Maintain an iterator over the elements of each tree in nondecreasing order. Repeat the following until a result is determined: * If the first iterator has no more elements, return `TRUE`. * If the second iterator has no more elements, return `FALSE`. * If the current element of the first iterator is less than that of the second, return `FALSE`. * If the current element of the first iterator is equal to that of the second, update both iterators. * If the current element of the first tree is greater than that of the second, update the second iterator. (You can optimize this by skipping some elements.) The basic implementation is worst case `O(n + m)` where `n` and `m` are the respective sizes of the two trees. With the optimization mentioned you can additionally bound this by `O(n log m)` if the larger tree is balanced, which is useful if the second tree is much larger than the first. (Whether or not the trees are balanced, the `O(n + m)` bound still applies.)
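A minimal C sketch of this merge-style check, added for illustration: instead of true incremental iterators it flattens each tree into a caller-provided sorted array via an in-order traversal (trading the answer's constant-space iterators for O(n + m) extra space), and the `nodeT` layout is assumed from the question. ``` #include <stddef.h> typedef struct nodeT { /* node layout assumed from the question */ int iValue; struct nodeT *pLeft; struct nodeT *pRight; } nodeT; /* In-order traversal into a caller-provided array; returns the new count. */ static size_t flatten(const nodeT *node, int *out, size_t n) { if (node == NULL) return n; n = flatten(node->pLeft, out, n); out[n++] = node->iValue; return flatten(node->pRight, out, n); } /* Two-pointer walk over the sorted sequences, mirroring the steps above. */ int issubset_sorted(const int *a, size_t na, const int *b, size_t nb) { size_t i = 0, j = 0; while (i < na) { if (j == nb) return 0; /* second iterator exhausted */ if (a[i] < b[j]) return 0; /* a[i] can no longer appear in b */ if (a[i] == b[j]) { ++i; ++j; } /* equal: advance both iterators */ else ++j; /* advance the second iterator only */ } return 1; /* every element of a was found */ } ```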
40,697
Is condensed milk the same as sweetened condensed milk? I have a fudge recipe that calls for condensed milk and I can not find strictly condensed milk, only sweetened condensed milk or evaporated milk.
2013/12/30
[ "https://cooking.stackexchange.com/questions/40697", "https://cooking.stackexchange.com", "https://cooking.stackexchange.com/users/22232/" ]
Just to add to Marti's answer: If the recipe was written in the US within the past 30-40 (maybe more) years, "condensed" almost *certainly* means sweetened condensed. Sweetened is just assumed if the milk is described as "condensed". At least in the US, unsweetened condensed milk is never called "condensed", it is called "evaporated". To boost my confidence in this answer before I posted it, I searched Amazon for "Condensed Milk". In 22 pages of results, I was not able to find a *SINGLE* product described as "condensed milk" that was unsweetened. I did however find several that used "condensed milk" without the word sweetened in the name of the product page, but without fail, these *ALL* turned out to be sweetened. ![1](https://i.stack.imgur.com/bHfZZm.jpg) ![2](https://i.stack.imgur.com/GFKDRm.jpg) ![3](https://i.stack.imgur.com/KGEh3m.jpg) ![4](https://i.stack.imgur.com/fxY0fm.jpg) ![5](https://i.stack.imgur.com/dEEDgm.jpg) ![6](https://i.stack.imgur.com/MEB6Qm.jpg) ![7](https://i.stack.imgur.com/Dqmmgm.jpg) ![8](https://i.stack.imgur.com/DIMk4m.jpg) If the recipe is old or if its origins are outside of the US, I can't be absolutely positive what the author intended, but I have *never* seen "unsweetened condensed milk".
No, the two products are different. As the names imply: * *Condensed milk* is strictly reduced milk * *Sweetened condensed milk* is reduced milk with considerable sugar added See [Can evaporated milk be converted to sweetened condensed?](https://cooking.stackexchange.com/questions/4600/can-evaporated-milk-be-converted-to-sweetened-condensed) You can easily modify the condensed milk with additional sugar.
40,697
Is condensed milk the same as sweetened condensed milk? I have a fudge recipe that calls for condensed milk and I can not find strictly condensed milk, only sweetened condensed milk or evaporated milk.
2013/12/30
[ "https://cooking.stackexchange.com/questions/40697", "https://cooking.stackexchange.com", "https://cooking.stackexchange.com/users/22232/" ]
Technically no, it is not the same thing. Sweetened condensed milk has a very high sugar content, something like 40%, while plain condensed milk has no sugar at all. But this still doesn't tell us what the recipe author meant. The availability of different types of condensed and evaporated milks seems to differ a lot in different parts of the world. This being a fudge recipe, I can imagine that it is an American one, because fudge is not as common in other places. If unsweetened condensed milk is unusual in the States, I can also imagine that the recipe author was not aware of the difference and just shortened it to "condensed milk" without knowing that it has a difference in meaning. Your best strategy is finding a different recipe, which uses a different dairy product. Not only will there be no doubt about what the author meant, it will also be much easier for you to make it as written, instead of having to mess around with substitutes. Candy recipes are generally sensitive when it comes to small differences in ingredients. If you are very attached to your recipe, you can try looking online for non-sweetened condensed milk; it is possible that you will find products your brick-and-mortar stores don't carry.
No, the two products are different. As the names imply: * *Condensed milk* is strictly reduced milk * *Sweetened condensed milk* is reduced milk with considerable sugar added See [Can evaporated milk be converted to sweetened condensed?](https://cooking.stackexchange.com/questions/4600/can-evaporated-milk-be-converted-to-sweetened-condensed) You can easily modify the condensed milk with additional sugar.
40,697
Is condensed milk the same as sweetened condensed milk? I have a fudge recipe that calls for condensed milk and I can not find strictly condensed milk, only sweetened condensed milk or evaporated milk.
2013/12/30
[ "https://cooking.stackexchange.com/questions/40697", "https://cooking.stackexchange.com", "https://cooking.stackexchange.com/users/22232/" ]
Technically no, it is not the same thing. Sweetened condensed milk has a very high sugar content, something like 40%, while plain condensed milk has no sugar at all. But this still doesn't tell us what the recipe author meant. The availability of different types of condensed and evaporated milks seems to differ a lot in different parts of the world. This being a fudge recipe, I can imagine that it is an American one, because fudge is not as common in other places. If unsweetened condensed milk is unusual in the States, I can also imagine that the recipe author was not aware of the difference and just shortened it to "condensed milk" without knowing that it has a difference in meaning. Your best strategy is finding a different recipe, which uses a different dairy product. Not only will there be no doubt about what the author meant, it will also be much easier for you to make it as written, instead of having to mess around with substitutes. Candy recipes are generally sensitive when it comes to small differences in ingredients. If you are very attached to your recipe, you can try looking online for non-sweetened condensed milk; it is possible that you will find products your brick-and-mortar stores don't carry.
As a recipe developer / chef... 'Sweetened Condensed Milk' and 'Condensed Milk' are exactly the same product - just labeled differently by different manufacturers for different markets. Condensed milk is ALWAYS sweetened. Here's the lowdown: Evaporated milk is just that: milk that has been evaporated, with 60% of the water removed. This product has NO sugar added. 'Sweetened Condensed Milk' and 'Condensed Milk' take this evaporation process a step further and add up to 40% sugar by volume. So to sum up: * 'Sweetened Condensed Milk' and 'Condensed Milk' are exactly the same product. * 'Sweetened Condensed Milk' and 'Condensed Milk' are both up to 40% sugar by volume. * Evaporated milk contains no added sugar. Really hard to believe that the other answers here are so wrong...
40,697
Is condensed milk the same as sweetened condensed milk? I have a fudge recipe that calls for condensed milk and I can not find strictly condensed milk, only sweetened condensed milk or evaporated milk.
2013/12/30
[ "https://cooking.stackexchange.com/questions/40697", "https://cooking.stackexchange.com", "https://cooking.stackexchange.com/users/22232/" ]
Just to add to Marti's answer: If the recipe was written in the US within the past 30-40 (maybe more) years, "condensed" almost *certainly* means sweetened condensed. Sweetened is just assumed if the milk is described as "condensed". At least in the US, unsweetened condensed milk is never called "condensed", it is called "evaporated". To boost my confidence in this answer before I posted it, I searched Amazon for "Condensed Milk". In 22 pages of results, I was not able to find a *SINGLE* product described as "condensed milk" that was unsweetened. I did however find several that used "condensed milk" without the word sweetened in the name of the product page, but without fail, these *ALL* turned out to be sweetened. ![1](https://i.stack.imgur.com/bHfZZm.jpg) ![2](https://i.stack.imgur.com/GFKDRm.jpg) ![3](https://i.stack.imgur.com/KGEh3m.jpg) ![4](https://i.stack.imgur.com/fxY0fm.jpg) ![5](https://i.stack.imgur.com/dEEDgm.jpg) ![6](https://i.stack.imgur.com/MEB6Qm.jpg) ![7](https://i.stack.imgur.com/Dqmmgm.jpg) ![8](https://i.stack.imgur.com/DIMk4m.jpg) If the recipe is old or if its origins are outside of the US, I can't be absolutely positive what the author intended, but I have *never* seen "unsweetened condensed milk".
As a recipe developer / chef... 'Sweetened Condensed Milk' and 'Condensed Milk' are exactly the same product - just labeled differently by different manufacturers for different markets. Condensed milk is ALWAYS sweetened. Here's the lowdown: Evaporated milk is just that: milk that has been evaporated, with 60% of the water removed. This product has NO sugar added. 'Sweetened Condensed Milk' and 'Condensed Milk' take this evaporation process a step further and add up to 40% sugar by volume. So to sum up: * 'Sweetened Condensed Milk' and 'Condensed Milk' are exactly the same product. * 'Sweetened Condensed Milk' and 'Condensed Milk' are both up to 40% sugar by volume. * Evaporated milk contains no added sugar. Really hard to believe that the other answers here are so wrong...
40,697
Is condensed milk the same as sweetened condensed milk? I have a fudge recipe that calls for condensed milk and I can not find strictly condensed milk, only sweetened condensed milk or evaporated milk.
2013/12/30
[ "https://cooking.stackexchange.com/questions/40697", "https://cooking.stackexchange.com", "https://cooking.stackexchange.com/users/22232/" ]
In my experience, "condensed" milk refers to the sweetened product, and "evaporated" milk refers to the unsweetened product. People will often say "sweetened condensed" for clarity, but this is not strictly necessary: if your recipe calls for condensed milk, use the syrupy stuff. However, if this is an older recipe, all bets are off: older casual usage had "condensed" for both meanings. (Hence using the "sweetened condensed" phrasing, even though it's a bit of a tautology.)
As a recipe developer / chef... 'Sweetened Condensed Milk' and 'Condensed Milk' are exactly the same product - just labeled differently by different manufacturers for different markets. Condensed milk is ALWAYS sweetened. Here's the lowdown: Evaporated milk is just that: milk that has been evaporated, with 60% of the water removed. This product has NO sugar added. 'Sweetened Condensed Milk' and 'Condensed Milk' take this evaporation process a step further and add up to 40% sugar by volume. So to sum up: * 'Sweetened Condensed Milk' and 'Condensed Milk' are exactly the same product. * 'Sweetened Condensed Milk' and 'Condensed Milk' are both up to 40% sugar by volume. * Evaporated milk contains no added sugar. Really hard to believe that the other answers here are so wrong...
40,697
Is condensed milk the same as sweetened condensed milk? I have a fudge recipe that calls for condensed milk and I can not find strictly condensed milk, only sweetened condensed milk or evaporated milk.
2013/12/30
[ "https://cooking.stackexchange.com/questions/40697", "https://cooking.stackexchange.com", "https://cooking.stackexchange.com/users/22232/" ]
Technically no, it is not the same thing. Sweetened condensed milk has a very high sugar content, something like 40%, while plain condensed milk has no sugar at all. But this still doesn't tell us what the recipe author meant. The availability of different types of condensed and evaporated milks seems to differ a lot in different parts of the world. This being a fudge recipe, I can imagine that it is an American one, because fudge is not as common in other places. If unsweetened condensed milk is unusual in the States, I can also imagine that the recipe author was not aware of the difference and just shortened it to "condensed milk" without knowing that it has a difference in meaning. Your best strategy is finding a different recipe, which uses a different dairy product. Not only will there be no doubt about what the author meant, it will also be much easier for you to make it as written, instead of having to mess around with substitutes. Candy recipes are generally sensitive when it comes to small differences in ingredients. If you are very attached to your recipe, you can try looking online for non-sweetened condensed milk; it is possible that you will find products your brick-and-mortar stores don't carry.
I've found when I run across the phrase "condensed milk" vs. the more specific "sweetened condensed milk," the recipe has its origins in the UK, such as in this recipe: <http://www.saveur.com/article/Recipes/Banoffee-Pie-Classic>. First, look for clues in the recipe: it's not likely 1/2 cup of brown sugar would create a sufficiently sweet toffee layer, although I'm sure that could be argued by people with a less sweet tooth than mine. Next, look up similar recipes. I looked up other banoffee pie recipes and each US based site referenced *sweetened* condensed milk. Finally, this UK site for the Carnation brand product seems to verify it is indeed what we (in the US) call sweetened condensed milk. Check this out: <http://www.carnation.co.uk/recipes/8/Classic-Banoffee-Pie>. Hope that helps.
40,697
Is condensed milk the same as sweetened condensed milk? I have a fudge recipe that calls for condensed milk and I can not find strictly condensed milk, only sweetened condensed milk or evaporated milk.
2013/12/30
[ "https://cooking.stackexchange.com/questions/40697", "https://cooking.stackexchange.com", "https://cooking.stackexchange.com/users/22232/" ]
Just to add to Marti's answer: If the recipe was written in the US within the past 30-40 (maybe more) years, "condensed" almost *certainly* means sweetened condensed. Sweetened is just assumed if the milk is described as "condensed". At least in the US, unsweetened condensed milk is never called "condensed", it is called "evaporated". To boost my confidence in this answer before I posted it, I searched Amazon for "Condensed Milk". In 22 pages of results, I was not able to find a *SINGLE* product described as "condensed milk" that was unsweetened. I did however find several that used "condensed milk" without the word sweetened in the name of the product page, but without fail, these *ALL* turned out to be sweetened. ![1](https://i.stack.imgur.com/bHfZZm.jpg) ![2](https://i.stack.imgur.com/GFKDRm.jpg) ![3](https://i.stack.imgur.com/KGEh3m.jpg) ![4](https://i.stack.imgur.com/fxY0fm.jpg) ![5](https://i.stack.imgur.com/dEEDgm.jpg) ![6](https://i.stack.imgur.com/MEB6Qm.jpg) ![7](https://i.stack.imgur.com/Dqmmgm.jpg) ![8](https://i.stack.imgur.com/DIMk4m.jpg) If the recipe is old or if its origins are outside of the US, I can't be absolutely positive what the author intended, but I have *never* seen "unsweetened condensed milk".
I've found when I run across the phrase "condensed milk" vs. the more specific "sweetened condensed milk," the recipe has its origins in the UK, such as in this recipe: <http://www.saveur.com/article/Recipes/Banoffee-Pie-Classic>. First, look for clues in the recipe: it's not likely 1/2 cup of brown sugar would create a sufficiently sweet toffee layer, although I'm sure that could be argued by people with a less sweet tooth than mine. Next, look up similar recipes. I looked up other banoffee pie recipes and each US based site referenced *sweetened* condensed milk. Finally, this UK site for the Carnation brand product seems to verify it is indeed what we (in the US) call sweetened condensed milk. Check this out: <http://www.carnation.co.uk/recipes/8/Classic-Banoffee-Pie>. Hope that helps.
40,697
Is condensed milk the same as sweetened condensed milk? I have a fudge recipe that calls for condensed milk and I can not find strictly condensed milk, only sweetened condensed milk or evaporated milk.
2013/12/30
[ "https://cooking.stackexchange.com/questions/40697", "https://cooking.stackexchange.com", "https://cooking.stackexchange.com/users/22232/" ]
I've found when I run across the phrase "condensed milk" vs. the more specific "sweetened condensed milk," the recipe has its origins in the UK, such as in this recipe: <http://www.saveur.com/article/Recipes/Banoffee-Pie-Classic>. First, look for clues in the recipe: it's not likely 1/2 cup of brown sugar would create a sufficiently sweet toffee layer, although I'm sure that could be argued by people with a less sweet tooth than mine. Next, look up similar recipes. I looked up other banoffee pie recipes and each US based site referenced *sweetened* condensed milk. Finally, this UK site for the Carnation brand product seems to verify it is indeed what we (in the US) call sweetened condensed milk. Check this out: <http://www.carnation.co.uk/recipes/8/Classic-Banoffee-Pie>. Hope that helps.
No, the two products are different. As the names imply: * *Condensed milk* is strictly reduced milk * *Sweetened condensed milk* is reduced milk with considerable sugar added See [Can evaporated milk be converted to sweetened condensed?](https://cooking.stackexchange.com/questions/4600/can-evaporated-milk-be-converted-to-sweetened-condensed) You can easily modify the condensed milk with additional sugar.
40,697
Is condensed milk the same as sweetened condensed milk? I have a fudge recipe that calls for condensed milk and I can not find strictly condensed milk, only sweetened condensed milk or evaporated milk.
2013/12/30
[ "https://cooking.stackexchange.com/questions/40697", "https://cooking.stackexchange.com", "https://cooking.stackexchange.com/users/22232/" ]
In my experience, "condensed" milk refers to the sweetened product, and "evaporated" milk refers to the unsweetened product. People will often say "sweetened condensed" for clarity, but this is not strictly necessary: if your recipe calls for condensed milk, use the syrupy stuff. However, if this is an older recipe, all bets are off: older casual usage had "condensed" for both meanings. (Hence using the "sweetened condensed" phrasing, even though it's a bit of a tautology.)
No, the two products are different. As the names imply: * *Condensed milk* is strictly reduced milk * *Sweetened condensed milk* is reduced milk with considerable sugar added See [Can evaporated milk be converted to sweetened condensed?](https://cooking.stackexchange.com/questions/4600/can-evaporated-milk-be-converted-to-sweetened-condensed) You can easily modify the condensed milk with additional sugar.
40,697
Is condensed milk the same as sweetened condensed milk? I have a fudge recipe that calls for condensed milk and I can not find strictly condensed milk, only sweetened condensed milk or evaporated milk.
2013/12/30
[ "https://cooking.stackexchange.com/questions/40697", "https://cooking.stackexchange.com", "https://cooking.stackexchange.com/users/22232/" ]
In my experience, "condensed" milk refers to the sweetened product, and "evaporated" milk refers to the unsweetened product. People will often say "sweetened condensed" for clarity, but this is not strictly necessary: if your recipe calls for condensed milk, use the syrupy stuff. However, if this is an older recipe, all bets are off: older casual usage had "condensed" for both meanings. (Hence using the "sweetened condensed" phrasing, even though it's a bit of a tautology.)
I've found when I run across the phrase "condensed milk" vs. the more specific "sweetened condensed milk," the recipe has its origins in the UK, such as in this recipe: <http://www.saveur.com/article/Recipes/Banoffee-Pie-Classic>. First, look for clues in the recipe: it's not likely 1/2 cup of brown sugar would create a sufficiently sweet toffee layer, although I'm sure that could be argued by people with a less sweet tooth than mine. Next, look up similar recipes. I looked up other banoffee pie recipes and each US based site referenced *sweetened* condensed milk. Finally, this UK site for the Carnation brand product seems to verify it is indeed what we (in the US) call sweetened condensed milk. Check this out: <http://www.carnation.co.uk/recipes/8/Classic-Banoffee-Pie>. Hope that helps.
4,360,845
If we look at the equation \begin{align} z = \sqrt{ 8 - 6 i }, \end{align} we will find the solutions \begin{align} z\_1 = -3+i \end{align} \begin{align} z\_2 = 3 - i \end{align} How can they be both correct if we can substitute \begin{align} \sqrt{ 8 - 6 i } = -3+i \end{align} \begin{align} \sqrt{ 8 - 6 i } = 3-i \end{align} and if \begin{align} \sqrt{ 8 - 6 i } = \sqrt{ 8 - 6 i } \end{align} then \begin{align} 3-i = -3+i \end{align} and that is of course false. I can't work out where I'm going wrong.
2022/01/19
[ "https://math.stackexchange.com/questions/4360845", "https://math.stackexchange.com", "https://math.stackexchange.com/users/1016571/" ]
This is because squaring is not injective over the complex numbers, so the square root cannot be defined as a single-valued function without choosing a branch. In the case of positive real numbers, the square root function is uniquely defined as the positive square root of the number. Here is a quick explanation of branches in the complex plane: <https://plus.maths.org/content/maths-minute-choosing-square-roots>
Every nonzero complex number $w$ has exactly two distinct square roots (say $z\_1$ and $z\_2$, so that $z\_1^2 = z\_2^2 = w$), and they are opposites (i.e., $z\_2 = -z\_1$). These roots certainly aren't equal; what are equal are their **squares**. Your mistake is calling them both $\sqrt{w}$. It might be more correct to speak of "a square root" instead of "the square root", since there is more than one square root for each nonzero complex number. With complex numbers, you have to be careful about what you mean by $\sqrt{w}$; it will be one of $z\_1$ or $z\_2$, but you have to decide which is which. If you choose to say that $\sqrt{w} = z\_1$, then you have immediately also determined that $z\_2 = -\sqrt{w}$. What is really going on is that you are choosing a *branch of the logarithm*, and using the definition that $\sqrt{w} = e^{\frac12\log w}$. This means that you choose a definition of the argument of $w$, so then $\log w = \ln|w| +i\arg w$ (where $\ln$ is the ordinary real-valued function of a positive real variable). There are several commonly used ranges for $\arg w$: * $(-\pi,\pi]$, so $\sqrt{w}$ has argument in $(-\pi/2,\pi/2]$ * $[0,2\pi)$, so $\sqrt{w}$ has argument in $[0,\pi)$ In the first case, $\sqrt{w}$ would lie in the right open half-plane or on the positive $y$-axis, so its opposite $-\sqrt{w}$ would lie in the left open half-plane or on the negative $y$-axis. In the second case, $\sqrt{w}$ would lie in the upper open half-plane or on the positive $x$-axis, so its opposite $-\sqrt{w}$ would lie in the lower open half-plane or on the negative $x$-axis. Other choices for defining $\arg w$ would lead to other results for which of the two roots is called $\sqrt{w}$ and which is called $-\sqrt{w}$.
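As a worked check of the first convention, added here for illustration: for the question's $w = 8 - 6i$ we have $|w| = 10$, $\cos(\arg w) = \tfrac{4}{5}$, and $\arg w \in (-\tfrac{\pi}{2}, 0)$, so the half-angle formulas give \begin{align} \cos\tfrac{\arg w}{2} = \sqrt{\tfrac{1 + 4/5}{2}} = \tfrac{3}{\sqrt{10}}, \qquad \sin\tfrac{\arg w}{2} = -\sqrt{\tfrac{1 - 4/5}{2}} = -\tfrac{1}{\sqrt{10}}, \end{align} \begin{align} \sqrt{w} = \sqrt{10}\left(\tfrac{3}{\sqrt{10}} - \tfrac{1}{\sqrt{10}}\,i\right) = 3 - i, \qquad (3 - i)^2 = 8 - 6i. \end{align} Under that branch the principal root is $3 - i$, and the other root $-3 + i$ is then written $-\sqrt{w}$.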
4,360,845
If we look at the equation \begin{align} z = \sqrt{ 8 - 6 i }, \end{align} we will find the solutions \begin{align} z\_1 = -3+i \end{align} \begin{align} z\_2 = 3 - i \end{align} How can they be both correct if we can substitute \begin{align} \sqrt{ 8 - 6 i } = -3+i \end{align} \begin{align} \sqrt{ 8 - 6 i } = 3-i \end{align} and if \begin{align} \sqrt{ 8 - 6 i } = \sqrt{ 8 - 6 i } \end{align} then \begin{align} 3-i = -3+i \end{align} and that is of course false. I can't work out where I'm going wrong.
2022/01/19
[ "https://math.stackexchange.com/questions/4360845", "https://math.stackexchange.com", "https://math.stackexchange.com/users/1016571/" ]
This is because squaring is not injective over the complex numbers, so the square root cannot be defined as a single-valued function without choosing a branch. In the case of positive real numbers, the square root function is uniquely defined as the positive square root of the number. Here is a quick explanation of branches in the complex plane: <https://plus.maths.org/content/maths-minute-choosing-square-roots>
This is precisely why the use of the radical sign is restricted to *positive real numbers*. When looking for the square **rootS** (notice the plural) of a complex number $a$, you actually look for the solutionS of the following equation : $$ \boxed{z^2 = a}$$ For $a \neq 0$, there are exactly $2$ distinct solutions to the equation. Notice that if you already found a solution $z\_1$, then $z\_2 := - z\_1$ is a solution as well since it satisfies ${z\_2}^2 = (-z\_1)^2 = {z\_1}^2 = a$. This is why the equation above does **not** simply reduce to $z = \sqrt{a}$. > > Writing "$z^2 = a \Rightarrow z = \sqrt{a}$" is hence **not a valid argument**. > > > For example : * $a > 0$ a positive real, eg $a = 2$ The equation $z^2 = 2$ has two solutions : $\sqrt{2}$ **and** $- \sqrt{2}$. * $a < 0$ a negative real, eg $a = - 2$ The equation $z^2 = - 2$ has two solutions : $\sqrt{2} \cdot i $ **and** $- \sqrt{2} \cdot i$. * $a \notin \mathbb{R}$ a strictly complex number, eg $a = 8 - 6 \cdot i$ The equation $z^2 = 8 - 6 \cdot i$ has two solutions : $$\boxed{-3 + i \textbf{ and } 3 - i}$$ All in all, remember that $\sqrt{a}$ only "works" if $a$ is a positive real number, and even then it provides only one of the two solutions of $z^2 = a$, namely the positive one.
4,360,845
If we look at the equation \begin{align} z = \sqrt{ 8 - 6 i }, \end{align} we will find the solutions \begin{align} z\_1 = -3+i \end{align} \begin{align} z\_2 = 3 - i \end{align} How can they be both correct if we can substitute \begin{align} \sqrt{ 8 - 6 i } = -3+i \end{align} \begin{align} \sqrt{ 8 - 6 i } = 3-i \end{align} and if \begin{align} \sqrt{ 8 - 6 i } = \sqrt{ 8 - 6 i } \end{align} then \begin{align} 3-i = -3+i \end{align} and that is of course false. I can't work out where I'm going wrong.
2022/01/19
[ "https://math.stackexchange.com/questions/4360845", "https://math.stackexchange.com", "https://math.stackexchange.com/users/1016571/" ]
It isn't right to write $\sqrt{8-6i}=3-i$ (the same for the other solution). When you ask $\sqrt{z}=?$ you mean to say "what **thing** multiplied by itself gives $z$". It just so happens that this question does not yield a single number as its answer; the "thing" is either element of a set of two numbers. So what is meant as the solution for this problem is $$z=\sqrt{8-6i} \implies z=\pm(3-i)$$ Some problems even have an infinite number of solutions (even in the real domain), such as trigonometric equations. For example, $\sin{x}=0 $ has an infinite number of solutions on the number line, $x=k\pi$, $k\in \mathbb{Z}$, but it isn't true that they're all the same just because their output through the function "$\sin{x}$" is the same number. Moreover you can see that no contradiction arises, because $(\pm(3-i))^2=8-6i$ (which doesn't mean $3-i=-3+i$).
Every nonzero complex number $w$ has exactly two distinct square roots (say $z\_1$ and $z\_2$, so that $z\_1^2 = z\_2^2 = w$), and they are opposites (i.e., $z\_2 = -z\_1$). These roots certainly aren't equal; what are equal are their **squares**. Your mistake is calling them both $\sqrt{w}$. It might be more correct to speak of "a square root" instead of "the square root", since there is more than one square root for each nonzero complex number. With complex numbers, you have to be careful about what you mean by $\sqrt{w}$; it will be one of $z\_1$ or $z\_2$, but you have to decide which is which. If you choose to say that $\sqrt{w} = z\_1$, then you have immediately also determined that $z\_2 = -\sqrt{w}$. What is really going on is that you are choosing a *branch of the logarithm*, and using the definition that $\sqrt{w} = e^{\frac12\log w}$. This means that you choose a definition of the argument of $w$, so then $\log w = \ln|w| +i\arg w$ (where $\ln$ is the ordinary real-valued function of a positive real variable). There are several commonly used ranges for $\arg w$: * $(-\pi,\pi]$, so $\sqrt{w}$ has argument in $(-\pi/2,\pi/2]$ * $[0,2\pi)$, so $\sqrt{w}$ has argument in $[0,\pi)$ In the first case, $\sqrt{w}$ would lie in the right open half-plane or on the positive $y$-axis, so its opposite $-\sqrt{w}$ would lie in the left open half-plane or on the negative $y$-axis. In the second case, $\sqrt{w}$ would lie in the upper open half-plane or on the positive $x$-axis, so its opposite $-\sqrt{w}$ would lie in the lower open half-plane or on the negative $x$-axis. Other choices for defining $\arg w$ would lead to other results for which of the two roots is called $\sqrt{w}$ and which is called $-\sqrt{w}$.
4,360,845
If we look at the equation \begin{align} z = \sqrt{ 8 - 6 i }, \end{align} we will find the solutions \begin{align} z\_1 = -3+i \end{align} \begin{align} z\_2 = 3 - i \end{align} How can they be both correct if we can substitute \begin{align} \sqrt{ 8 - 6 i } = -3+i \end{align} \begin{align} \sqrt{ 8 - 6 i } = 3-i \end{align} and if \begin{align} \sqrt{ 8 - 6 i } = \sqrt{ 8 - 6 i } \end{align} then \begin{align} 3-i = -3+i \end{align} and that is of course false. I can't work out where I'm going wrong.
2022/01/19
[ "https://math.stackexchange.com/questions/4360845", "https://math.stackexchange.com", "https://math.stackexchange.com/users/1016571/" ]
It isn't right to write $\sqrt{8-6i}=3-i$ (the same for the other solution). When you ask $\sqrt{z}=?$ you mean to say "what **thing** multiplied by itself gives $z$". It just so happens that this question does not yield a single number as its answer; the "thing" is either element of a set of two numbers. So what is meant as the solution for this problem is $$z=\sqrt{8-6i} \implies z=\pm(3-i)$$ Some problems even have an infinite number of solutions (even in the real domain), such as trigonometric equations. For example, $\sin{x}=0 $ has an infinite number of solutions on the number line, $x=k\pi$, $k\in \mathbb{Z}$, but it isn't true that they're all the same just because their output through the function "$\sin{x}$" is the same number. Moreover you can see that no contradiction arises, because $(\pm(3-i))^2=8-6i$ (which doesn't mean $3-i=-3+i$).
This is precisely why the use of the radical sign is restricted to *positive real numbers*. When looking for the square **rootS** (notice the plural) of a complex number $a$, you actually look for the solutionS of the following equation : $$ \boxed{z^2 = a}$$ For $a \neq 0$, there are exactly $2$ distinct solutions to the equation. Notice that if you already found a solution $z\_1$, then $z\_2 := - z\_1$ is a solution as well since it satisfies ${z\_2}^2 = (-z\_1)^2 = {z\_1}^2 = a$. This is why the equation above does **not** simply reduce to $z = \sqrt{a}$. > > Writing "$z^2 = a \Rightarrow z = \sqrt{a}$" is hence **not a valid argument**. > > > For example : * $a > 0$ a positive real, eg $a = 2$ The equation $z^2 = 2$ has two solutions : $\sqrt{2}$ **and** $- \sqrt{2}$. * $a < 0$ a negative real, eg $a = - 2$ The equation $z^2 = - 2$ has two solutions : $\sqrt{2} \cdot i $ **and** $- \sqrt{2} \cdot i$. * $a \notin \mathbb{R}$ a strictly complex number, eg $a = 8 - 6 \cdot i$ The equation $z^2 = 8 - 6 \cdot i$ has two solutions : $$\boxed{-3 + i \textbf{ and } 3 - i}$$ All in all, remember that $\sqrt{a}$ only "works" if $a$ is a positive real number, and even then it provides only one of the two solutions of $z^2 = a$, namely the positive one.
4,360,845
If we look at the equation \begin{align} z = \sqrt{ 8 - 6 i }, \end{align} we will find the solutions \begin{align} z\_1 = -3+i \end{align} \begin{align} z\_2 = 3 - i \end{align} How can they be both correct if we can substitute \begin{align} \sqrt{ 8 - 6 i } = -3+i \end{align} \begin{align} \sqrt{ 8 - 6 i } = 3-i \end{align} and if \begin{align} \sqrt{ 8 - 6 i } = \sqrt{ 8 - 6 i } \end{align} then \begin{align} 3-i = -3+i \end{align} and that is of course false. I can't work out where I'm going wrong.
2022/01/19
[ "https://math.stackexchange.com/questions/4360845", "https://math.stackexchange.com", "https://math.stackexchange.com/users/1016571/" ]
This is precisely why the use of the radical sign is restricted to *positive real numbers*. When looking for the square **rootS** (notice the plural) of a complex number $a$, you actually look for the solutionS of the following equation : $$ \boxed{z^2 = a}$$ For $a \neq 0$, there are exactly $2$ distinct solutions to the equation. Notice that if you already found a solution $z\_1$, then $z\_2 := - z\_1$ is a solution as well since it satisfies ${z\_2}^2 = (-z\_1)^2 = {z\_1}^2 = a$. This is why the equation above does **not** simply reduce to $z = \sqrt{a}$. > > Writing "$z^2 = a \Rightarrow z = \sqrt{a}$" is hence **not a valid argument**. > > > For example : * $a > 0$ a positive real, eg $a = 2$ The equation $z^2 = 2$ has two solutions : $\sqrt{2}$ **and** $- \sqrt{2}$. * $a < 0$ a negative real, eg $a = - 2$ The equation $z^2 = - 2$ has two solutions : $\sqrt{2} \cdot i $ **and** $- \sqrt{2} \cdot i$. * $a \notin \mathbb{R}$ a strictly complex number, eg $a = 8 - 6 \cdot i$ The equation $z^2 = 8 - 6 \cdot i$ has two solutions : $$\boxed{-3 + i \textbf{ and } 3 - i}$$ All in all, remember that $\sqrt{a}$ only "works" if $a$ is a positive real number, and even then it provides only one of the two solutions of $z^2 = a$, namely the positive one.
Every nonzero complex number $w$ has exactly two distinct square roots (say $z\_1$ and $z\_2$, so that $z\_1^2 = z\_2^2 = w$), and they are opposites (i.e., $z\_2 = -z\_1$). These roots certainly aren't equal; what are equal are their **squares**. Your mistake is calling them both $\sqrt{w}$. It might be more correct to speak of "a square root" instead of "the square root", since there is more than one square root for each nonzero complex number. With complex numbers, you have to be careful about what you mean by $\sqrt{w}$; it will be one of $z\_1$ or $z\_2$, but you have to decide which is which. If you choose to say that $\sqrt{w} = z\_1$, then you have immediately also determined that $z\_2 = -\sqrt{w}$. What is really going on is that you are choosing a *branch of the logarithm*, and using the definition that $\sqrt{w} = e^{\frac12\log w}$. This means that you choose a definition of the argument of $w$, so then $\log w = \ln|w| +i\arg w$ (where $\ln$ is the ordinary real-valued function of a positive real variable). There are several commonly used ranges for $\arg w$: * $(-\pi,\pi]$, so $\sqrt{w}$ has argument in $(-\pi/2,\pi/2]$ * $[0,2\pi)$, so $\sqrt{w}$ has argument in $[0,\pi)$ In the first case, $\sqrt{w}$ would lie in the right open half-plane or on the positive $y$-axis, so its opposite $-\sqrt{w}$ would lie in the left open half-plane or on the negative $y$-axis. In the second case, $\sqrt{w}$ would lie in the upper open half-plane or on the positive $x$-axis, so its opposite $-\sqrt{w}$ would lie in the lower open half-plane or on the negative $x$-axis. Other choices for defining $\arg w$ would lead to other results for which of the two roots is called $\sqrt{w}$ and which is called $-\sqrt{w}$.
27,633,864
I have two tables T1 and T2. I want to select all rows from T2 table where t1\_id not equal to any id in T1 table. This is not working: ``` SELECT t2.id, t2.t1_id, t2.data FROM T2.t2, T1.t1 WHERE t2.t1_id != t1.id ```
2014/12/24
[ "https://Stackoverflow.com/questions/27633864", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4159100/" ]
You could use subquery with NOT IN clause as below: ``` SELECT t2.id, t2.t1_id, t2.data FROM T2 t2 WHERE t2.t1_id NOT IN (SELECT DISTINCT id FROM T1) ```
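One caveat worth adding: if `T1.id` can ever be `NULL`, the `NOT IN` form returns no rows at all, so a `NOT EXISTS` variant (table names as in the question) is often the safer sketch:

```
SELECT t2.id, t2.t1_id, t2.data
FROM T2 t2
WHERE NOT EXISTS (SELECT 1 FROM T1 t1 WHERE t1.id = t2.t1_id)
```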
``` -- keep only the t2 rows with no matching id in T1
SELECT t2.id, t2.t1_id, t2.data FROM T2.t2 LEFT JOIN T1.t1 ON t2.t1_id = t1.id GROUP BY t2.id, t2.t1_id, t2.data HAVING sum(t1.id is not null) = 0 ```
27,633,864
I have two tables T1 and T2. I want to select all rows from T2 table where t1\_id not equal to any id in T1 table. This is not working: ``` SELECT t2.id, t2.t1_id, t2.data FROM T2.t2, T1.t1 WHERE t2.t1_id != t1.id ```
2014/12/24
[ "https://Stackoverflow.com/questions/27633864", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4159100/" ]
Use a **LEFT JOIN** to fetch all records from the left table, then add a WHERE condition that checks for a NULL value from the second table to find the unmatched records of the left table. Try this: ``` SELECT t2.id, t2.t1_id, t2.data FROM T2.t2 LEFT OUTER JOIN T1.t1 ON t2.t1_id = t1.id WHERE t1.id IS NULL ```
``` -- keep only the t2 rows with no matching id in T1
SELECT t2.id, t2.t1_id, t2.data FROM T2.t2 LEFT JOIN T1.t1 ON t2.t1_id = t1.id GROUP BY t2.id, t2.t1_id, t2.data HAVING sum(t1.id is not null) = 0 ```
27,633,864
I have two tables T1 and T2. I want to select all rows from T2 table where t1\_id not equal to any id in T1 table. This is not working: ``` SELECT t2.id, t2.t1_id, t2.data FROM T2.t2, T1.t1 WHERE t2.t1_id != t1.id ```
2014/12/24
[ "https://Stackoverflow.com/questions/27633864", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4159100/" ]
You could use subquery with NOT IN clause as below: ``` SELECT t2.id, t2.t1_id, t2.data FROM T2 t2 WHERE t2.t1_id NOT IN (SELECT DISTINCT id FROM T1) ```
Use a **LEFT JOIN** to fetch all records from the left table, then add a WHERE condition that checks for a NULL value from the second table to find the unmatched records of the left table. Try this: ``` SELECT t2.id, t2.t1_id, t2.data FROM T2.t2 LEFT OUTER JOIN T1.t1 ON t2.t1_id = t1.id WHERE t1.id IS NULL ```
26,781
I came across this video, in which the Falcon Heavy boosters' landing burn seems to decelerate the 1st stage until their speed is constant right before touchdown. Can the 1st stage maintain a constant speed, and eventually hover? Did they tweak the Merlin 1D enough to throttle deeper than before?
2018/04/19
[ "https://space.stackexchange.com/questions/26781", "https://space.stackexchange.com", "https://space.stackexchange.com/users/18770/" ]
The Falcon 9 first stage **cannot** hover, because even at its lowest throttle setting the thrust of one Merlin engine exceeds the weight of the almost-empty first stage. The stage appears to decelerate very slowly in the final moments of the landing as a result of the engine firing pattern SpaceX uses. Per Elon Musk: > > Thanks! 3 of 9 engines are lit initially, dropping to 1 near ground. Even w 1 lit, it can't hover, so always land at high g > - Elon Musk (@elonmusk) April 15, 2015 > > > So as soon as the 3 engines are fired, the stage decelerates rapidly, but when the two outer ring engines are shut down for the final approach, the deceleration slows sharply, giving the illusion that the stage is hovering. Although this tweet is from early 2015, the engines on the Falcon 9 have only gotten stronger since, so it still has no chance of hovering. In order to land, SpaceX uses a hoverslam (or "suicide burn"): the stage reaches zero velocity at the moment it reaches the ground/ASDS. Blue Origin's [New Shepard](https://en.m.wikipedia.org/wiki/New_Shepard) rocket, however, can hover above the ground and does so to travel laterally before landing on the pad.
The Merlin 1D is claimed to be able to throttle down to 40%, which would be about 350 kN of thrust at sea level. The dry mass of a Falcon 9 first stage is about 24 tons, so it can't hover, but it can get pretty close. Thrust to weight would be about 1.3. All data from Wikipedia.
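As a rough back-of-the-envelope check of that figure (a sketch using only the numbers quoted above, not official data; the gap between the raw ~1.5 here and the ~1.3 above presumably comes from landing legs and residual propellant at touchdown):

```python
g = 9.81              # m/s^2
min_thrust = 350e3    # N, ~40% throttle at sea level (quoted above)
dry_mass = 24e3       # kg, approximate stage dry mass (quoted above)

ratio = min_thrust / (dry_mass * g)
print(round(ratio, 2))  # ~1.49 > 1, so no hover even at minimum throttle
```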
6,587,098
What is the best way to reuse code across projects? Let's say I implemented a UI element and I want it to be used in both my iPhone and iPad application without having to copy the code over and keep 2 copies of it.
2011/07/05
[ "https://Stackoverflow.com/questions/6587098", "https://Stackoverflow.com", "https://Stackoverflow.com/users/242769/" ]
Just create a project that includes all your shared code in Xcode, and reference this project in your iPhone and iPad application projects. Plain and simple.
I would make a static library project which contains the shared code (the UI element in your example) in Xcode. Then when I need to use the library in the iPhone or iPad app project, I can just reference the static library project by dragging and dropping the project into the Project Navigator and configuring the correct dependency, linked library and header search path. In this way you always have a single copy of the library source code, which makes maintenance and modification easier. Certainly you can also compile the static library into a binary and link it to your project, but that is not as flexible: when you find a bug in the static library you have to switch to the other project to correct it, then recompile and copy the binary library file again. I have just written an article (link below) on how to link a static library project to an iOS project in Xcode 4.3.2. It may be useful for solving the header-not-found problem you might encounter. Hope this helps. <http://vicidi.wordpress.com/2012/04/11/linking-an-ios-static-library-project-in-another-project-in-xcode-4-3-2/>
63,454,223
I had an old Symfony 3.1 site that I upgraded to Symfony 3.4.x then to Symfony 4.4.11 but I didn't upgrade it to symfony flex. I fixed many things and the public sites seem working. I had to rebuild the authentication because the old one was not compatible with sf4. I followed this <https://symfony.com/doc/4.4/security/form_login_setup.html> and this: <https://symfonycasts.com/screencast/symfony-security/make-user> I ended up in a situation that after a successful authentication when it redirects to the admin area then it always checks the LoginFormAuthenticator again which obviously doesn't support the admin area and it redirects back to the login page with anonyous user. There are many discussions about this issue and tried out all what I found but I didn't find the solution. Not even with debugging it. The session saved in the defined path. Its id is same like the PHPSESSID in the browser. Site runs HTTP protocol. security.yml ``` security: encoders: AppBundle\Entity\User: algorithm: bcrypt cost: 12 providers: user_provider: entity: class: AppBundle:User property: email firewalls: dev: pattern: ^/(_(profiler|wdt|error)|css|images|js)/ security: false main: stateless: true pattern: ^/ anonymous: true logout_on_user_change: true guard: authenticators: - AppBundle\Security\LoginFormAuthenticator form_login: provider: user_provider username_parameter: email csrf_token_generator: security.csrf.token_manager login_path: app_login logout: path: app_logout access_control: - { path: ^/admin, roles: ROLE_ADMIN } - { path: ^/, roles: IS_AUTHENTICATED_ANONYMOUSLY } ``` routing: ``` app_login: path: /login defaults: { _controller: AppBundle\Controller\BackendController:loginAction } app_logout: path: /logout defaults: { _controller: AppBundle\Controller\BackendController:logoutAction } app_admin: path: /admin/{page}/{entry} defaults: { _controller: AppBundle\Controller\BackendController:showAction, entry: null } ``` User.php ``` <?php namespace AppBundle\Entity; use Doctrine\ORM\Mapping as ORM; use Symfony\Component\Security\Core\User\EquatableInterface; use Symfony\Component\Security\Core\User\UserInterface; /** * User * * @ORM\Table(name="user") * @ORM\Entity(repositoryClass="AppBundle\Repository\UserRepository") */ class User implements UserInterface, \Serializable, EquatableInterface { private $id; // and so on public function serialize() { return serialize(array( $this->id, $this->email, $this->password )); } public function unserialize($serialized) { list ( $this->id, $this->email, $this->password, ) = unserialize($serialized); } public function getRoles() { return array('ROLE_ADMIN'); } public function getUsername() { return $this->getEmail(); } public function isEqualTo(UserInterface $user) { if (!$user instanceof User) { return false; } if ($this->password !== $user->getPassword()) { return false; } if ($this->salt !== $user->getSalt()) { return false; } if ($this->email !== $user->getUsername()) { return false; } return true; } } ``` backend controller: ``` class BackendController extends AbstractController { public function loginAction(AuthenticationUtils $authenticationUtils) { return $this->render('AppBundle:Backend:page.html.twig', array( 'email' => $authenticationUtils->getLastUsername(), 'error' => $authenticationUtils->getLastAuthenticationError() )); } public function logoutAction() { $this->container->get('security.token_storage')->setToken(null); $this->container->get('request')->getSession()->invalidate(); } public function showAction(Request $request, $page, $entry) { 
$this->denyAccessUnlessGranted('ROLE_ADMIN', null, 'Unable to access this page!'); // some logic } } ``` LoginFormAuthentication.php looks like the same in the example and it works. It successfully reaches the onAuthenticationSuccess() and redirects to the admin area. dev.log ``` request.INFO: Matched route "app_login". {"route":"app_login"..} security.DEBUG: Checking for guard authentication credentials. {"firewall_key":"main","authenticators":1} [] security.DEBUG: Checking support on guard authenticator. {"firewall_key":"main","authenticator":"AppBundle\\Security\\LoginFormAuthenticator"} [] security.DEBUG: Calling getCredentials() on guard authenticator. {"firewall_key":"main","authenticator":"AppBundle\\Security\\LoginFormAuthenticator"} [] security.DEBUG: Passing guard token information to the GuardAuthenticationProvider {"firewall_key":"main","authenticator":"AppBundle\\Security\\LoginFormAuthenticator"} [] doctrine.DEBUG: SELECT t0.* FROM user t0 WHERE t0.email = ? LIMIT 1 ["[email protected]"] [] security.INFO: Guard authentication successful! {"token":"[object] (Symfony\\Component\\Security\\Guard\\Token\\PostAuthenticationGuardToken: PostAuthenticationGuardToken(user=\"[email protected]\", authenticated=true, roles=\"ROLE_ADMIN\"))","authenticator":"AppBundle\\Security\\LoginFormAuthenticator"} [] security.DEBUG: Guard authenticator set success response. Redirect response security.DEBUG: Remember me skipped: it is not configured for the firewall. security.DEBUG: The "AppBundle\Security\LoginFormAuthenticator" authenticator set the response. Any later authenticator will not be called {"authenticator":"AppBundle\\Security\\LoginFormAuthenticator"} [] ``` after the redirection: ``` request.INFO: Matched route "app_admin". {"route":"app_admin" ..} security.DEBUG: Checking for guard authentication credentials. {"firewall_key":"main","authenticators":1} [] security.DEBUG: Checking support on guard authenticator. {"firewall_key":"main","authenticator":"AppBundle\\Security\\LoginFormAuthenticator"} [] security.DEBUG: Guard authenticator does not support the request. {"firewall_key":"main","authenticator":"AppBundle\\Security\\LoginFormAuthenticator"} [] security.INFO: Populated the TokenStorage with an anonymous Token. [] [] security.DEBUG: Access denied, the user is not fully authenticated; redirecting to authentication entry point. security.DEBUG: Calling Authentication entry point. [] [] ```
2020/08/17
[ "https://Stackoverflow.com/questions/63454223", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5356216/" ]
In HQL you must use properties instead of database column names. Change your HQL to ``` String queryString = "select \r\n" + "cms.myStatusId as 'myStatusId',\r\n" + "cms.statusLabel as 'statusLabel',\r\n" + "csl.status_id as 'companyStatusLabel'\r\n" + "from "+client+".corresponding_status cms \r\n" + "join "+client+".company_status_label csl with csl.status_id = cms.myStatusId"; ``` EDIT: You probably need to change company\_status\_label entity accordingly EDIT2: Changed to WITH
Instead of building JPA queries by hand, I would suggest the [criteria API](https://www.baeldung.com/hibernate-criteria-queries). Your query above would change from: ``` String queryString = "select \r\n" + "cms.my_status_id as 'myStatusId',\r\n" + "cms.status_label as 'statusLabel',\r\n" + "csl.status_id as 'companyStatusLabel'\r\n" + "from "+client+".corresponding_status cms \r\n" + "join "+client+".company_status_label csl on csl.status_id = cms.my_status_id"; ``` to something akin to: ``` Session session = HibernateUtil.getHibernateSession(); CriteriaBuilder cb = session.getCriteriaBuilder(); CriteriaQuery<Entity> cq = cb.createQuery(Entity.class); Root<Entity> root = cq.from(Entity.class); cq.select(root); Query<Entity> query = session.createQuery(cq); List<Entity> results = query.getResultList(); ```
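The sketch above only selects a single entity type; to mirror the original multi-column query you would need a tuple-style criteria query. A hedged extension, assuming hypothetical mapped entities `CorrespondingStatus` and `CompanyStatusLabel` with the attribute names below (adjust to your actual mappings):

```java
// Hypothetical entity and attribute names -- not from the original code.
CriteriaBuilder cb = session.getCriteriaBuilder();
CriteriaQuery<Object[]> cq = cb.createQuery(Object[].class);
Root<CorrespondingStatus> cms = cq.from(CorrespondingStatus.class);
Root<CompanyStatusLabel> csl = cq.from(CompanyStatusLabel.class);
cq.multiselect(cms.get("myStatusId"), cms.get("statusLabel"), csl.get("statusId"))
  .where(cb.equal(csl.get("statusId"), cms.get("myStatusId")));  // the join condition
List<Object[]> rows = session.createQuery(cq).getResultList();
```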
18,347,586
**Update: The code works correctly when the `<auth-constraint>` element is removed completely. Can anyone explain why it doesn't work when present?** I'm writing some code to practice securing a servlet in the deployment descriptor, and I'm getting the following in the browser: ``` HTTP Status 403 - Access to the requested resource has been denied type Status report message Access to the requested resource has been denied description Access to the specified resource has been forbidden. Apache Tomcat/7.0.42 ``` Any thoughts as to what I'm doing wrong? I've done some searching through prior posts, and it seems as though there may have been updates to the role names in Tomcat 7 - I've played with this, but with no success so far. (Code below). **web.xml** ``` <?xml version="1.0" ?> <web-app version="2.4" xmlns="http://java.sun.com/xml/ns/j2ee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd"> <servlet> <servlet-name>CheckedServlet</servlet-name> <servlet-class>webcert.ch05.ex0502J.CheckedServlet</servlet-class> <security-role-ref> <role-name>MGR</role-name> <role-link>manager</role-link> </security-role-ref> </servlet> <servlet-mapping> <servlet-name>CheckedServlet</servlet-name> <url-pattern>/CheckedServlet</url-pattern> </servlet-mapping> <security-constraint> <web-resource-collection> <web-resource-name>CheckedServletConstraint</web-resource-name> <url-pattern>/CheckedServlet</url-pattern> </web-resource-collection> <auth-constraint> <role-name>*</role-name> </auth-constraint> </security-constraint> <security-role> <role-name>manager</role-name> </security-role> ``` **CheckedServlet.java** ``` package webcert.ch05.ex0502J; import java.io.*; import javax.servlet.*; import javax.servlet.http.*; import java.security.*; public class CheckedServlet extends HttpServlet{ protected void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { doGet(request, response); } protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException{ response.setContentType("text/html"); PrintWriter out = response.getWriter(); out.write("<html><head><title>CheckedServlet</title></head><body>"); String userMessage; Principal user = request.getUserPrincipal(); if(user == null) userMessage = "Access denied."; else userMessage = "Access granted."; out.write("<br>" + userMessage + " Principal name is " + user + "<br>If authorized, you should see some more text below:"); if(request.isUserInRole("manager")) out.write("<br>Here's some super secret extra text since your " + "role is manager."); out.write("</body></html>"); out.flush(); out.close(); } } ```
2013/08/21
[ "https://Stackoverflow.com/questions/18347586", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2488991/" ]
If you enable security for your web application (as you have done by adding the `<security-constraint>` clause to web.xml), you also need to define the corresponding users/roles/passwords in tomcat-users.xml. This file is usually located in the conf folder of the Tomcat installation. Here are sample lines that you can add to your installation's tomcat-users.xml file: ``` <role rolename="MGR"/> <user password="mypassword" roles="MGR" username="user1"/> <user password="mypassword2" roles="MGR" username="user2"/> ``` Now, when you access your application, instead of getting the status 403 error message, your application will prompt you to enter a userid / password using HTTP Basic Auth. You should then be able to log in successfully if you use one of the above userids.
The two remarks from the other members were correct. To summarize the points of adding a role, a login-config and a corresponding user config in Tomcat, you may have a look at this post: [How to fix Tomcat HTTP Status 403: Access to the requested resource has been denied?](https://stackoverflow.com/questions/5808206/how-to-fix-tomcat-http-status-403-access-to-the-requested-resource-has-been-den/36615902#36615902)
18,347,586
**Update: The code works correctly when the `<auth-constraint>` element is removed completely. Can anyone explain why it doesn't work when present?** I'm writing some code to practice securing a servlet in the deployment descriptor, and I'm getting the following in the browser: ``` HTTP Status 403 - Access to the requested resource has been denied type Status report message Access to the requested resource has been denied description Access to the specified resource has been forbidden. Apache Tomcat/7.0.42 ``` Any thoughts as to what I'm doing wrong? I've done some searching through prior posts, and it seems as though there may have been updates to the role names in Tomcat 7 - I've played with this, but with no success so far. (Code below). **web.xml** ``` <?xml version="1.0" ?> <web-app version="2.4" xmlns="http://java.sun.com/xml/ns/j2ee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd"> <servlet> <servlet-name>CheckedServlet</servlet-name> <servlet-class>webcert.ch05.ex0502J.CheckedServlet</servlet-class> <security-role-ref> <role-name>MGR</role-name> <role-link>manager</role-link> </security-role-ref> </servlet> <servlet-mapping> <servlet-name>CheckedServlet</servlet-name> <url-pattern>/CheckedServlet</url-pattern> </servlet-mapping> <security-constraint> <web-resource-collection> <web-resource-name>CheckedServletConstraint</web-resource-name> <url-pattern>/CheckedServlet</url-pattern> </web-resource-collection> <auth-constraint> <role-name>*</role-name> </auth-constraint> </security-constraint> <security-role> <role-name>manager</role-name> </security-role> ``` **CheckedServlet.java** ``` package webcert.ch05.ex0502J; import java.io.*; import javax.servlet.*; import javax.servlet.http.*; import java.security.*; public class CheckedServlet extends HttpServlet{ protected void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { doGet(request, response); } protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException{ response.setContentType("text/html"); PrintWriter out = response.getWriter(); out.write("<html><head><title>CheckedServlet</title></head><body>"); String userMessage; Principal user = request.getUserPrincipal(); if(user == null) userMessage = "Access denied."; else userMessage = "Access granted."; out.write("<br>" + userMessage + " Principal name is " + user + "<br>If authorized, you should see some more text below:"); if(request.isUserInRole("manager")) out.write("<br>Here's some super secret extra text since your " + "role is manager."); out.write("</body></html>"); out.flush(); out.close(); } } ```
2013/08/21
[ "https://Stackoverflow.com/questions/18347586", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2488991/" ]
You need to add the following in your web.xml: ``` <login-config> <auth-method>BASIC</auth-method> </login-config> ``` and then also edit your `tomcat-users.xml` to add the role and user name. Re-deploy the application after these changes and this problem should be gone.
The two remarks from the other members were correct. To summarize the points of adding a role, a login-config and a corresponding user config in Tomcat, you may have a look at this post: [How to fix Tomcat HTTP Status 403: Access to the requested resource has been denied?](https://stackoverflow.com/questions/5808206/how-to-fix-tomcat-http-status-403-access-to-the-requested-resource-has-been-den/36615902#36615902)
27,080,260
I'm working on a python script to convert full uppercase addresses to Title Case. The issue I'm facing is that when I apply `.title()` to a string like *SOUTH 16TH STREET*, I get *South 16**Th** Street*. The desired conversion would be *South 16**th** Street*, where the abbreviation to the ordinal is lowercase. *What is a simple way in python to accomplish this?* I was thinking about using some kind of regex.
2014/11/22
[ "https://Stackoverflow.com/questions/27080260", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2105596/" ]
It might be easiest to split the string into a list of separate words, capitalize each word and then join them back together: ``` >>> address = "SOUTH 16TH STREET" >>> " ".join([word.capitalize() for word in address.split()]) 'South 16th Street' ``` The `capitalize()` method sets the first character of a string to uppercase and the remaining characters to lowercase. Since numbers don't have upper/lowercase forms, "16TH" and similar tokens are transformed as required.
Use this regex-based solution: ``` import re convert = lambda s: " ".join([x.lower() if re.match(r"^\d+(ST|ND|RD|TH)$", x) is not None else x.title() for x in s.split()]) ``` Basically, I split the string, check each word to see whether it is an ordinal, and then apply the appropriate action.
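A quick check of the lambda against the example from the question:

```python
>>> convert("SOUTH 16TH STREET")
'South 16th Street'
```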
27,080,260
I'm working on a python script to convert full uppercase addresses to Title Case. The issue I'm facing is that when I apply `.title()` to a string like *SOUTH 16TH STREET*, I get *South 16**Th** Street*. The desired conversion would be *South 16**th** Street*, where the abbreviation to the ordinal is lowercase. *What is a simple way in python to accomplish this?* I was thinking about using some kind of regex.
2014/11/22
[ "https://Stackoverflow.com/questions/27080260", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2105596/" ]
Use this regex-based solution: ``` import re convert = lambda s: " ".join([x.lower() if re.match(r"^\d+(ST|ND|RD|TH)$", x) is not None else x.title() for x in s.split()]) ``` Basically, I split the string, check each word to see whether it is an ordinal, and then apply the appropriate action.
``` >>> str_='SOUTH 16TH STREET' >>> ' '.join([i.title() if i.isalpha() else i.lower() for i in str_.split()]) 'South 16th Street' ```
27,080,260
I'm working on a python script to convert full uppercase addresses to Title Case. The issue I'm facing is that when I apply `.title()` to a string like *SOUTH 16TH STREET*, I get *South 16**Th** Street*. The desired conversion would be *South 16**th** Street*, where the abbreviation to the ordinal is lowercase. *What is a simple way in python to accomplish this?* I was thinking about using some kind of regex.
2014/11/22
[ "https://Stackoverflow.com/questions/27080260", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2105596/" ]
To solve your stated problem narrowly, I think you may find `string.capwords()` useful. It encapsulates the split -> capitalize -> join sequence into a single command. ``` >>> from string import capwords >>> address = "SOUTH 16TH STREET" >>> capwords(address) 'South 16th Street' ``` See more info on that command in Python 3.4 at... <https://docs.python.org/3.4/library/string.html#string-functions> It also exists in earlier versions of Python. However, broadening your question to address formatting generally, you may run into trouble with this simplistic approach. More complex (e.g. regex-based) approaches may be required. Using an example from my locale: ``` >>> address = "Highway 99N" # Wanting 'Highway 99N' >>> capwords(address) 'Highway 99n' ``` Address parsing (and formatting) is a wicked problem due to the amount of variation in legitimate addresses as well as the different ways people will write them (abbreviations, etc.). The [pyparsing](http://pyparsing.wikispaces.com/) module might also be a way to go if you don't like the regex approach.
Use this regex-based solution: ``` import re convert = lambda s: " ".join([x.lower() if re.match(r"^\d+(ST|ND|RD|TH)$", x) is not None else x.title() for x in s.split()]) ``` Basically, I split the string, check each word to see whether it is an ordinal, and then apply the appropriate action.
27,080,260
I'm working on a python script to convert full uppercase addresses to Title Case. The issue I'm facing is that when I apply `.title()` to a string like *SOUTH 16TH STREET*, I get *South 16**Th** Street*. The desired conversion would be *South 16**th** Street*, where the abbreviation to the ordinal is lowercase. *What is a simple way in python to accomplish this?* I was thinking about using some kind of regex.
2014/11/22
[ "https://Stackoverflow.com/questions/27080260", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2105596/" ]
It might be easiest to split the string into a list of separate words, capitalize each word and then join them back together: ``` >>> address = "SOUTH 16TH STREET" >>> " ".join([word.capitalize() for word in address.split()]) 'South 16th Street' ``` The `capitalize()` method sets the first character of a string to uppercase and the remaining characters to lowercase. Since numbers don't have upper/lowercase forms, "16TH" and similar tokens are transformed as required.
``` >>> str_='SOUTH 16TH STREET' >>> ' '.join([i.title() if i.isalpha() else i.lower() for i in str_.split()]) 'South 16th Street' ```
27,080,260
I'm working on a python script to convert full uppercase addresses to Title Case. The issue I'm facing is that when I apply `.title()` to a string like *SOUTH 16TH STREET*, I get *South 16**Th** Street*. The desired conversion would be *South 16**th** Street*, where the abbreviation to the ordinal is lowercase. *What is a simple way in python to accomplish this?* I was thinking about using some kind of regex.
2014/11/22
[ "https://Stackoverflow.com/questions/27080260", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2105596/" ]
To solve your stated problem narrowly, I think you may find `string.capwords()` useful. It encapsulates the split -> capitalize -> join sequence into a single command. ``` >>> from string import capwords >>> address = "SOUTH 16TH STREET" >>> capwords(address) 'South 16th Street' ``` See more info on that command in Python 3.4 at... <https://docs.python.org/3.4/library/string.html#string-functions> It also exists in earlier versions of Python. However, broadening your question to address formatting generally, you may run into trouble with this simplistic approach. More complex (e.g. regex-based) approaches may be required. Using an example from my locale: ``` >>> address = "Highway 99N" # Wanting 'Highway 99N' >>> capwords(address) 'Highway 99n' ``` Address parsing (and formatting) is a wicked problem due to the amount of variation in legitimate addresses as well as the different ways people will write them (abbreviations, etc.). The [pyparsing](http://pyparsing.wikispaces.com/) module might also be a way to go if you don't like the regex approach.
``` >>> str_='SOUTH 16TH STREET' >>> ' '.join([i.title() if i.isalpha() else i.lower() for i in str_.split()]) 'South 16th Street' ```
24,319,082
Add.cpp ``` int add(int x, int y) { return x + y; } ``` Main.cpp ``` #include <iostream> int main() { using namespace std; cout << "The sum of 3 and 4 is: " << add(3, 4) << endl; return 0; } ``` When I try to compile this program I get an error message for line 6 of main.cpp that states: "error: 'add' was not declared in this scope".
2014/06/20
[ "https://Stackoverflow.com/questions/24319082", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3705661/" ]
Create a header file with contents: ``` int add(int x, int y); ``` Include that file in main.cpp, i.e. #include "headerfile.h" Then the rest is up to the compiler environment. Basically you need to compile each .cpp to object code and then link them. You need to read up about this as it differs between environments. Also read up on header guards and on build tools like Gradle, SCons and Makefiles. It is also good to learn about version control systems, e.g. Mercurial. Guess you are going to have a busy day
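As a concrete sketch of the header-guard idea mentioned above (file and macro names are just examples):

```cpp
// Add.h -- declaration only; the definition stays in Add.cpp
#ifndef ADD_H
#define ADD_H

int add(int x, int y);

#endif // ADD_H
```

With `#include "Add.h"` at the top of Main.cpp, a typical build would then be `g++ -c Add.cpp Main.cpp` followed by `g++ Add.o Main.o -o main`.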
You need an `Add.h` file and to include it in your `Main.cpp`: **Add.h** ``` int add(int x, int y); ``` **Main.cpp** ``` #include <iostream> #include "Add.h" ... ```
24,319,082
Add.cpp ``` int add(int x, int y) { return x + y; } ``` Main.cpp ``` #include <iostream> int main() { using namespace std; cout << "The sum of 3 and 4 is: " << add(3, 4) << endl; return 0; } ``` When I try to compile this program I get an error message for line 6 of main.cpp that states: "error: 'add' was not declared in this scope".
2014/06/20
[ "https://Stackoverflow.com/questions/24319082", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3705661/" ]
Create a header file with contents: ``` int add(int x, int y); ``` Include that file in main.cpp, i.e. #include "headerfile.h" Then the rest is up to the compiler environment. Basically you need to compile each .cpp to object code and then link them. You need to read up about this as it differs between environments. Also read up on header guards and on build tools like Gradle, SCons and Makefiles. It is also good to learn about version control systems, e.g. Mercurial. Guess you are going to have a busy day
In C++ the scope consists of all visible functions/methods and variables. In order for `add` to be visible in this instance, you would have to create a header file that declares your function "add". One way to do this would be to have it in a .h file instead of a .cpp file, then include that .h file in your main.cpp file like this ``` #include "Add.h" ```
10,558,766
I have a question if I may. I have a variable named `$source` and this *should* contain a relative path to a file, for example `./uploads/2012/some-document.pdf` Now, this `$source` variable will contain user input, via $\_GET and $\_POST. I don't want people to enter URLs and I only want to do something if the file exists **only** on the local server. My question is what is the best way to check if a file exists on the local server only? --- This is what I've got so far: 1) `file_exists` may return true depending on the server configuration, so I could use this alongside stripos to check if the first few charatcers of the string is http:// like so: ``` if( file_exists($source) && stripos($source,'http://')!==0 ) { echo 'File exists on local'; } ``` However, the downside would be I'd have to specify all the different URL types such as https://, http:// and ftp:// just to be safe. 2) I use `realpath` to get the absolute path of the file, and this returns false if it cannot be found. This seems to be pretty solid, but not 100% this is the best application for it 3) Use `preg_replace` to remove all URL mentions in the string first, and then simply use `file_exists`. Whilst this would probably be the most secure, it would most likely be the most intensive and I'd prefer not to use this method.
2012/05/11
[ "https://Stackoverflow.com/questions/10558766", "https://Stackoverflow.com", "https://Stackoverflow.com/users/128161/" ]
In addition to the other answers, you can deny paths that use a scheme simply with: ``` if (parse_url($path, PHP_URL_SCHEME)) { // has something:// so ignore } ``` This would also work with specialty wrappers such as `php://`, `zlib://`, etc... You can also force a local check on the input variable by prepending with `file://`: ``` file_exists('file://' . $path); ```
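Putting the two checks together, a minimal sketch (the function name is mine, not a built-in):

```php
function isLocalFile($source) {
    if (parse_url($source, PHP_URL_SCHEME)) {
        return false;               // rejects http://, https://, ftp://, php://, ...
    }
    $real = realpath($source);      // false if the path does not resolve
    return $real !== false && is_file($real);
}
```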
Go with `realpath()` (in conjunction with `file_exists`). It will filter out URLs for you, and you should get a solid reliable result on whether the file exists or not.
1,199,148
Let $E,F \subset \mathbb{R^n}$, and note that $\langle \cdot , \cdot \rangle$ denotes the inner product on $\mathbb{R^n}$. Let $(e\_1,\dots,e\_k)$ and $(f\_1,\dots,f\_l)$ be orthonormal bases of $E$ and $F$ respectively. Consider the matrix $A=(\langle e\_i,f\_j \rangle)$ of size $k \times l$, and let $\lambda\_i$ be an eigenvalue of $A^T A$. Prove that $\lambda\_i \in [0, 1] \; \forall i$ and that the $\lambda\_i$ do not depend on the choice of bases of $E$ and $F$. I tried experimenting with a small-dimension matrix but I'm not able to figure it out. Thoughts?
2015/03/21
[ "https://math.stackexchange.com/questions/1199148", "https://math.stackexchange.com", "https://math.stackexchange.com/users/-1/" ]
If $V$ is any inner product space and $W$ is a subspace, then there is a natural orthogonal projection map $P\_W:V \to W$. These projections have several important properties. Among these we can show that $P\_W$ has operator norm equal to $1$ and is positive-semidefinite. (<http://en.wikipedia.org/wiki/Hilbert_space#Orthogonal_complements_and_projections>) We will consider the maps $P\_E: \mathbb{R}^n \to E$ and $P\_F: \mathbb{R}^n \to F$, and in particular we will consider the restrictions of these maps to appropriate subspaces. Specifically, notice that $A^t$ is the matrix for $P\_F|\_E:E \to F$ with respect to the bases $(e\_1,\dots,e\_k)$ and $(f\_1,\dots,f\_l)$. Also notice that $A$ is the matrix for $P\_E|\_F:F \to E$ with respect to the bases $(f\_1,\dots,f\_l)$ and $(e\_1,\dots,e\_k)$. This can be easily computed using the definition of the matrix for a linear map and the properties of these projections. It follows that $A^tA$ is the matrix for the map $P=P\_F \circ P\_E: F \to F$ with respect to the basis $(f\_1, \dots, f\_l)$. Since $P\_E$ and $P\_F$ have operator norm equal to $1$, it follows that $P$ has operator norm less than or equal to 1. Thus, all eigenvalues have absolute value less than or equal to $1$. But $P$ is also a positive semidefinite operator on $F$ (because for any $v \in F$, we have that $\langle Pv,v \rangle = \langle P\_Ev, P\_Fv \rangle = \langle P\_Ev, v \rangle = ||P\_Ev||^2\geq 0$), and thus the eigenvalues of $P$ must be non-negative. Then it immediately follows that the eigenvalues of $P$ (and thus $A^tA$) must be in $[0,1]$. These eigenvalues only depend on the maps $P\_E$ and $P\_F$. In other words these eigenvalues only depend on the subspaces $E$ and $F$, and not the choices of bases. Also notice that this could have been done in any inner product space. Nothing special about $\mathbb{R}^n$ or the dot product.
Write $M\_{E}$ for the matrix with columns $e\_j$ and $M\_{F}$ for the matrix with columns $f\_j$, and notice that $A = M\_E^T M\_F$. We are looking at $\left\|A \right\| = \sqrt{\lambda\_{max}(A^TA)}$ and trying to show that it is at most $1$. Now $\left\|A\right\| \leq \left\|M\_E \right\| \left\|M\_F\right\|$ and since $M^T\_E M\_E = I\_k$, $M^T\_F M\_F = I\_l$ we have $\left\|M\_E \right\| = 1 , \left\|M\_F \right\| = 1$, which finally yields $\left\|A\right\| \leq 1 $. The eigenvalues are moreover nonnegative because $A^TA$ is positive semidefinite.
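For the basis-independence part (notation as above; a step neither answer spells out): a change of orthonormal bases replaces $M\_E$ by $M\_E U$ and $M\_F$ by $M\_F V$ with $U, V$ orthogonal, so $$A' = (M\_E U)^T (M\_F V) = U^T A V, \qquad A'^{T} A' = V^T (A^T A) V,$$ and $A'^{T}A'$ is similar to $A^TA$, hence has the same eigenvalues.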
8,961,413
I'm using a VM with the following configuration: * Arch Linux (3.0-ARCH kernel) * GHC 7.0.3 * cabal-install 0.10.2 * Cabal library 1.10.1.0 When I try to build zlib using cabal... ``` $ cabal install zlib ``` I get the following output: ``` Resolving dependencies... Downloading zlib-0.5.3.2... Configuring zlib-0.5.3.2... Preprocessing library zlib-0.5.3.2... Stream.hsc:86:21: error: missing binary operator before token "(" Stream.hsc: In function ‘main’: Stream.hsc:86:21: error: missing binary operator before token "(" Stream.hsc:86:21: error: missing binary operator before token "(" compiling dist/build/Codec/Compression/Zlib/Stream_hsc_make.c failed (exit code 1) command was: /usr/bin/gcc -c dist/build/Codec/Compression/Zlib/Stream_hsc_make.c -o dist/build/Codec/Compression/Zlib/Stream_hsc_make.o -fno-stack-protector -fno-stack-protector -D__GLASGOW_HASKELL__=700 -Dlinux_BUILD_OS -Dlinux_HOST_OS -Dx86_64_BUILD_ARCH -Dx86_64_HOST_ARCH -I/usr/lib/ghc-7.0.3/bytestring-0.9.1.10/include -I/usr/lib/ghc-7.0.3/base-4.3.1.0/include -I/usr/lib/ghc-7.0.3/include -I/usr/lib/ghc-7.0.3/include -I/usr/lib/ghc-7.0.3/include/ cabal: Error: some packages failed to install: zlib-0.5.3.2 failed during the building phase. The exception was: ExitFailure 1 ``` Can anyone shed some light on this build error? --- **Edit:** Here's a snippet from Stream.hsc with line numbers: ``` 82 import Foreign 83 ( Word8, Ptr, nullPtr, plusPtr, peekByteOff, pokeByteOff, mallocBy tes 84 , ForeignPtr, FinalizerPtr, newForeignPtr_, addForeignPtrFinalizer 85 , withForeignPtr, touchForeignPtr ) 86 #if MIN_VERSION_base(4,4,0) 87 import Foreign.ForeignPtr.Unsafe ( unsafeForeignPtrToPtr ) 88 import System.IO.Unsafe ( unsafePerformIO ) 89 #else 90 import Foreign ( unsafeForeignPtrToPtr, unsafePerformIO ) 91 #endif ```
2012/01/22
[ "https://Stackoverflow.com/questions/8961413", "https://Stackoverflow.com", "https://Stackoverflow.com/users/319002/" ]
For some reason, the `MIN_VERSION_base` macro doesn't get expanded, thus the preprocessor sees the condition `MIN_VERSION_base(4,4,0)` which it of course cannot handle. I've not yet found out why the macro isn't expanded, but workarounds are 1. install zlib-0.5.3.1 instead 2. unpack the tarball and edit `Codec/Compression/Zlib/Stream.hsc` to remove the offending macro (you're using 7.0.3, so your base version is 4.3.1.0, you can replace the macro with 0) **Edit:** After poking around a bit, I found out that to hide these preprocessor directives, which aren't intended for `hsc2hs` to process, they have to be masked by an extra '#'. Bug report underway.
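For illustration, the masking fix mentioned above would look like this in the `.hsc` source (a sketch; the doubled `#` makes `hsc2hs` emit a literal `#` for GHC's preprocessor instead of trying to interpret the directive itself):

```
##if MIN_VERSION_base(4,4,0)
import Foreign.ForeignPtr.Unsafe ( unsafeForeignPtrToPtr )
import System.IO.Unsafe ( unsafePerformIO )
##else
import Foreign ( unsafeForeignPtrToPtr, unsafePerformIO )
##endif
```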
The most likely reason is that the zlib C library headers are missing on your machine. You might instead try to use the Arch Linux "Haskell Platform" or [haskell-zlib](http://www.archlinux.org/packages/extra/i686/haskell-zlib/) packages, which resolve C dependencies for you.
8,961,413
I'm using a VM with the following configuration: * Arch Linux (3.0-ARCH kernel) * GHC 7.0.3 * cabal-install 0.10.2 * Cabal library 1.10.1.0 When I try to build zlib using cabal... ``` $ cabal install zlib ``` I get the following output: ``` Resolving dependencies... Downloading zlib-0.5.3.2... Configuring zlib-0.5.3.2... Preprocessing library zlib-0.5.3.2... Stream.hsc:86:21: error: missing binary operator before token "(" Stream.hsc: In function ‘main’: Stream.hsc:86:21: error: missing binary operator before token "(" Stream.hsc:86:21: error: missing binary operator before token "(" compiling dist/build/Codec/Compression/Zlib/Stream_hsc_make.c failed (exit code 1) command was: /usr/bin/gcc -c dist/build/Codec/Compression/Zlib/Stream_hsc_make.c -o dist/build/Codec/Compression/Zlib/Stream_hsc_make.o -fno-stack-protector -fno-stack-protector -D__GLASGOW_HASKELL__=700 -Dlinux_BUILD_OS -Dlinux_HOST_OS -Dx86_64_BUILD_ARCH -Dx86_64_HOST_ARCH -I/usr/lib/ghc-7.0.3/bytestring-0.9.1.10/include -I/usr/lib/ghc-7.0.3/base-4.3.1.0/include -I/usr/lib/ghc-7.0.3/include -I/usr/lib/ghc-7.0.3/include -I/usr/lib/ghc-7.0.3/include/ cabal: Error: some packages failed to install: zlib-0.5.3.2 failed during the building phase. The exception was: ExitFailure 1 ``` Can anyone shed some light on this build error? --- **Edit:** Here's a snippet from Stream.hsc with line numbers: ``` 82 import Foreign 83 ( Word8, Ptr, nullPtr, plusPtr, peekByteOff, pokeByteOff, mallocBy tes 84 , ForeignPtr, FinalizerPtr, newForeignPtr_, addForeignPtrFinalizer 85 , withForeignPtr, touchForeignPtr ) 86 #if MIN_VERSION_base(4,4,0) 87 import Foreign.ForeignPtr.Unsafe ( unsafeForeignPtrToPtr ) 88 import System.IO.Unsafe ( unsafePerformIO ) 89 #else 90 import Foreign ( unsafeForeignPtrToPtr, unsafePerformIO ) 91 #endif ```
2012/01/22
[ "https://Stackoverflow.com/questions/8961413", "https://Stackoverflow.com", "https://Stackoverflow.com/users/319002/" ]
On Ubuntu, I fixed (or really, avoided) a similar error with ```sh sudo apt-get install libghc-zlib-dev libghc-zlib-bindings-dev ``` (I don't know if both are needed.)
The most likely reason is that the zlib C library headers are missing on your machine. You might instead try to use the Arch Linux "Haskell Platform" or [haskell-zlib](http://www.archlinux.org/packages/extra/i686/haskell-zlib/) packages, which resolve C dependencies for you.
8,961,413
I'm using a VM with the following configuration: * Arch Linux (3.0-ARCH kernel) * GHC 7.0.3 * cabal-install 0.10.2 * Cabal library 1.10.1.0 When I try to build zlib using cabal... ``` $ cabal install zlib ``` I get the following output: ``` Resolving dependencies... Downloading zlib-0.5.3.2... Configuring zlib-0.5.3.2... Preprocessing library zlib-0.5.3.2... Stream.hsc:86:21: error: missing binary operator before token "(" Stream.hsc: In function ‘main’: Stream.hsc:86:21: error: missing binary operator before token "(" Stream.hsc:86:21: error: missing binary operator before token "(" compiling dist/build/Codec/Compression/Zlib/Stream_hsc_make.c failed (exit code 1) command was: /usr/bin/gcc -c dist/build/Codec/Compression/Zlib/Stream_hsc_make.c -o dist/build/Codec/Compression/Zlib/Stream_hsc_make.o -fno-stack-protector -fno-stack-protector -D__GLASGOW_HASKELL__=700 -Dlinux_BUILD_OS -Dlinux_HOST_OS -Dx86_64_BUILD_ARCH -Dx86_64_HOST_ARCH -I/usr/lib/ghc-7.0.3/bytestring-0.9.1.10/include -I/usr/lib/ghc-7.0.3/base-4.3.1.0/include -I/usr/lib/ghc-7.0.3/include -I/usr/lib/ghc-7.0.3/include -I/usr/lib/ghc-7.0.3/include/ cabal: Error: some packages failed to install: zlib-0.5.3.2 failed during the building phase. The exception was: ExitFailure 1 ``` Can anyone shed some light on this build error? --- **Edit:** Here's a snippet from Stream.hsc with line numbers: ``` 82 import Foreign 83 ( Word8, Ptr, nullPtr, plusPtr, peekByteOff, pokeByteOff, mallocBy tes 84 , ForeignPtr, FinalizerPtr, newForeignPtr_, addForeignPtrFinalizer 85 , withForeignPtr, touchForeignPtr ) 86 #if MIN_VERSION_base(4,4,0) 87 import Foreign.ForeignPtr.Unsafe ( unsafeForeignPtrToPtr ) 88 import System.IO.Unsafe ( unsafePerformIO ) 89 #else 90 import Foreign ( unsafeForeignPtrToPtr, unsafePerformIO ) 91 #endif ```
2012/01/22
[ "https://Stackoverflow.com/questions/8961413", "https://Stackoverflow.com", "https://Stackoverflow.com/users/319002/" ]
I'm still encountering this with the haskell package zlib-0.5.4.2 on GHC 7.8.4. I think the issue is a non-standard location of the library. I solved it by hand-installing zlib 1.2.8 and then doing: ``` cabal install zlib --extra-lib-dirs=/usr/local/lib --extra-include-dirs=/usr/local/include ```
The most likely reason is that the zlib C library headers are missing on your machine. You might instead try to use the Arch Linux "Haskell Platform" or [haskell-zlib](http://www.archlinux.org/packages/extra/i686/haskell-zlib/) packages, which resolve C dependencies for you.
8,961,413
I'm using a VM with the following configuration: * Arch Linux (3.0-ARCH kernel) * GHC 7.0.3 * cabal-install 0.10.2 * Cabal library 1.10.1.0 When I try to build zlib using cabal... ``` $ cabal install zlib ``` I get the following output: ``` Resolving dependencies... Downloading zlib-0.5.3.2... Configuring zlib-0.5.3.2... Preprocessing library zlib-0.5.3.2... Stream.hsc:86:21: error: missing binary operator before token "(" Stream.hsc: In function ‘main’: Stream.hsc:86:21: error: missing binary operator before token "(" Stream.hsc:86:21: error: missing binary operator before token "(" compiling dist/build/Codec/Compression/Zlib/Stream_hsc_make.c failed (exit code 1) command was: /usr/bin/gcc -c dist/build/Codec/Compression/Zlib/Stream_hsc_make.c -o dist/build/Codec/Compression/Zlib/Stream_hsc_make.o -fno-stack-protector -fno-stack-protector -D__GLASGOW_HASKELL__=700 -Dlinux_BUILD_OS -Dlinux_HOST_OS -Dx86_64_BUILD_ARCH -Dx86_64_HOST_ARCH -I/usr/lib/ghc-7.0.3/bytestring-0.9.1.10/include -I/usr/lib/ghc-7.0.3/base-4.3.1.0/include -I/usr/lib/ghc-7.0.3/include -I/usr/lib/ghc-7.0.3/include -I/usr/lib/ghc-7.0.3/include/ cabal: Error: some packages failed to install: zlib-0.5.3.2 failed during the building phase. The exception was: ExitFailure 1 ``` Can anyone shed some light on this build error? --- **Edit:** Here's a snippet from Stream.hsc with line numbers: ``` 82 import Foreign 83 ( Word8, Ptr, nullPtr, plusPtr, peekByteOff, pokeByteOff, mallocBy tes 84 , ForeignPtr, FinalizerPtr, newForeignPtr_, addForeignPtrFinalizer 85 , withForeignPtr, touchForeignPtr ) 86 #if MIN_VERSION_base(4,4,0) 87 import Foreign.ForeignPtr.Unsafe ( unsafeForeignPtrToPtr ) 88 import System.IO.Unsafe ( unsafePerformIO ) 89 #else 90 import Foreign ( unsafeForeignPtrToPtr, unsafePerformIO ) 91 #endif ```
2012/01/22
[ "https://Stackoverflow.com/questions/8961413", "https://Stackoverflow.com", "https://Stackoverflow.com/users/319002/" ]
For some reason, the `MIN_VERSION_base` macro doesn't get expanded, thus the preprocessor sees the condition `MIN_VERSION_base(4,4,0)` which it of course cannot handle. I've not yet found out why the macro isn't expanded, but workarounds are 1. install zlib-0.5.3.1 instead 2. unpack the tarball and edit `Codec/Compression/Zlib/Stream.hsc` to remove the offending macro (you're using 7.0.3, so your base version is 4.3.1.0, you can replace the macro with 0) **Edit:** After poking around a bit, I found out that to hide these preprocessor directives, which aren't intended for `hsc2hs` to process, they have to be masked by an extra '#'. Bug report underway.
I don't understand this error, but it happened to me also earlier today while trying to install Agda 2.3 using GHC 7.4. Saizan from #agda suggested that I try ```sh cabal unpack zlib cd zlib-0.5.3.2/ runghc Setup configure --user; runghc Setup build; runghc Setup install ``` This proved effective. But I'm still in the dark about what's actually the problem. Still, as it worked, I thought I'd share.
8,961,413
I'm using a VM with the following configuration: * Arch Linux (3.0-ARCH kernel) * GHC 7.0.3 * cabal-install 0.10.2 * Cabal library 1.10.1.0 When I try to build zlib using cabal... ``` $ cabal install zlib ``` I get the following output: ``` Resolving dependencies... Downloading zlib-0.5.3.2... Configuring zlib-0.5.3.2... Preprocessing library zlib-0.5.3.2... Stream.hsc:86:21: error: missing binary operator before token "(" Stream.hsc: In function ‘main’: Stream.hsc:86:21: error: missing binary operator before token "(" Stream.hsc:86:21: error: missing binary operator before token "(" compiling dist/build/Codec/Compression/Zlib/Stream_hsc_make.c failed (exit code 1) command was: /usr/bin/gcc -c dist/build/Codec/Compression/Zlib/Stream_hsc_make.c -o dist/build/Codec/Compression/Zlib/Stream_hsc_make.o -fno-stack-protector -fno-stack-protector -D__GLASGOW_HASKELL__=700 -Dlinux_BUILD_OS -Dlinux_HOST_OS -Dx86_64_BUILD_ARCH -Dx86_64_HOST_ARCH -I/usr/lib/ghc-7.0.3/bytestring-0.9.1.10/include -I/usr/lib/ghc-7.0.3/base-4.3.1.0/include -I/usr/lib/ghc-7.0.3/include -I/usr/lib/ghc-7.0.3/include -I/usr/lib/ghc-7.0.3/include/ cabal: Error: some packages failed to install: zlib-0.5.3.2 failed during the building phase. The exception was: ExitFailure 1 ``` Can anyone shed some light on this build error? --- **Edit:** Here's a snippet from Stream.hsc with line numbers: ``` 82 import Foreign 83 ( Word8, Ptr, nullPtr, plusPtr, peekByteOff, pokeByteOff, mallocBy tes 84 , ForeignPtr, FinalizerPtr, newForeignPtr_, addForeignPtrFinalizer 85 , withForeignPtr, touchForeignPtr ) 86 #if MIN_VERSION_base(4,4,0) 87 import Foreign.ForeignPtr.Unsafe ( unsafeForeignPtrToPtr ) 88 import System.IO.Unsafe ( unsafePerformIO ) 89 #else 90 import Foreign ( unsafeForeignPtrToPtr, unsafePerformIO ) 91 #endif ```
2012/01/22
[ "https://Stackoverflow.com/questions/8961413", "https://Stackoverflow.com", "https://Stackoverflow.com/users/319002/" ]
On Ubuntu, I fixed (or really, avoided) a similar error with ```sh sudo apt-get install libghc-zlib-dev libghc-zlib-bindings-dev ``` (I don't know if both are needed.)
I don't understand this error, but it happened to me also earlier today while trying to install Agda 2.3 using GHC 7.4. Saizan from #agda suggested that I try ```sh cabal unpack zlib cd zlib-0.5.3.2/ runghc Setup configure --user; runghc Setup build; runghc Setup install ``` This proved effective. But I'm still in the dark about what's actually the problem. Still, as it worked, I thought I'd share.
8,961,413
I'm using a VM with the following configuration: * Arch Linux (3.0-ARCH kernel) * GHC 7.0.3 * cabal-install 0.10.2 * Cabal library 1.10.1.0 When I try to build zlib using cabal... ``` $ cabal install zlib ``` I get the following output: ``` Resolving dependencies... Downloading zlib-0.5.3.2... Configuring zlib-0.5.3.2... Preprocessing library zlib-0.5.3.2... Stream.hsc:86:21: error: missing binary operator before token "(" Stream.hsc: In function ‘main’: Stream.hsc:86:21: error: missing binary operator before token "(" Stream.hsc:86:21: error: missing binary operator before token "(" compiling dist/build/Codec/Compression/Zlib/Stream_hsc_make.c failed (exit code 1) command was: /usr/bin/gcc -c dist/build/Codec/Compression/Zlib/Stream_hsc_make.c -o dist/build/Codec/Compression/Zlib/Stream_hsc_make.o -fno-stack-protector -fno-stack-protector -D__GLASGOW_HASKELL__=700 -Dlinux_BUILD_OS -Dlinux_HOST_OS -Dx86_64_BUILD_ARCH -Dx86_64_HOST_ARCH -I/usr/lib/ghc-7.0.3/bytestring-0.9.1.10/include -I/usr/lib/ghc-7.0.3/base-4.3.1.0/include -I/usr/lib/ghc-7.0.3/include -I/usr/lib/ghc-7.0.3/include -I/usr/lib/ghc-7.0.3/include/ cabal: Error: some packages failed to install: zlib-0.5.3.2 failed during the building phase. The exception was: ExitFailure 1 ``` Can anyone shed some light on this build error? --- **Edit:** Here's a snippet from Stream.hsc with line numbers: ``` 82 import Foreign 83 ( Word8, Ptr, nullPtr, plusPtr, peekByteOff, pokeByteOff, mallocBy tes 84 , ForeignPtr, FinalizerPtr, newForeignPtr_, addForeignPtrFinalizer 85 , withForeignPtr, touchForeignPtr ) 86 #if MIN_VERSION_base(4,4,0) 87 import Foreign.ForeignPtr.Unsafe ( unsafeForeignPtrToPtr ) 88 import System.IO.Unsafe ( unsafePerformIO ) 89 #else 90 import Foreign ( unsafeForeignPtrToPtr, unsafePerformIO ) 91 #endif ```
2012/01/22
[ "https://Stackoverflow.com/questions/8961413", "https://Stackoverflow.com", "https://Stackoverflow.com/users/319002/" ]
I'm still encountering this with the haskell package zlib-0.5.4.2 on GHC 7.8.4. I think the issue is a non-standard location of the library. I solved it by hand-installing zlib 1.2.8 and then doing: ``` cabal install zlib --extra-lib-dirs=/usr/local/lib --extra-include-dirs=/usr/local/include ```
I don't understand this error, but it happened to me also earlier today while trying to install Agda 2.3 using GHC 7.4. Saizan from #agda suggested that I try ```sh cabal unpack zlib cd zlib-0.5.3.2/ runghc Setup configure --user; runghc Setup build; runghc Setup install ``` This proved effective. But I'm still in the dark about what's actually the problem. Still, as it worked, I thought I'd share.
8,961,413
I'm using a VM with the following configuration: * Arch Linux (3.0-ARCH kernel) * GHC 7.0.3 * cabal-install 0.10.2 * Cabal library 1.10.1.0 When I try to build zlib using cabal... ``` $ cabal install zlib ``` I get the following output: ``` Resolving dependencies... Downloading zlib-0.5.3.2... Configuring zlib-0.5.3.2... Preprocessing library zlib-0.5.3.2... Stream.hsc:86:21: error: missing binary operator before token "(" Stream.hsc: In function ‘main’: Stream.hsc:86:21: error: missing binary operator before token "(" Stream.hsc:86:21: error: missing binary operator before token "(" compiling dist/build/Codec/Compression/Zlib/Stream_hsc_make.c failed (exit code 1) command was: /usr/bin/gcc -c dist/build/Codec/Compression/Zlib/Stream_hsc_make.c -o dist/build/Codec/Compression/Zlib/Stream_hsc_make.o -fno-stack-protector -fno-stack-protector -D__GLASGOW_HASKELL__=700 -Dlinux_BUILD_OS -Dlinux_HOST_OS -Dx86_64_BUILD_ARCH -Dx86_64_HOST_ARCH -I/usr/lib/ghc-7.0.3/bytestring-0.9.1.10/include -I/usr/lib/ghc-7.0.3/base-4.3.1.0/include -I/usr/lib/ghc-7.0.3/include -I/usr/lib/ghc-7.0.3/include -I/usr/lib/ghc-7.0.3/include/ cabal: Error: some packages failed to install: zlib-0.5.3.2 failed during the building phase. The exception was: ExitFailure 1 ``` Can anyone shed some light on this build error? --- **Edit:** Here's a snippet from Stream.hsc with line numbers: ``` 82 import Foreign 83 ( Word8, Ptr, nullPtr, plusPtr, peekByteOff, pokeByteOff, mallocBy tes 84 , ForeignPtr, FinalizerPtr, newForeignPtr_, addForeignPtrFinalizer 85 , withForeignPtr, touchForeignPtr ) 86 #if MIN_VERSION_base(4,4,0) 87 import Foreign.ForeignPtr.Unsafe ( unsafeForeignPtrToPtr ) 88 import System.IO.Unsafe ( unsafePerformIO ) 89 #else 90 import Foreign ( unsafeForeignPtrToPtr, unsafePerformIO ) 91 #endif ```
2012/01/22
[ "https://Stackoverflow.com/questions/8961413", "https://Stackoverflow.com", "https://Stackoverflow.com/users/319002/" ]
For some reason, the `MIN_VERSION_base` macro doesn't get expanded, thus the preprocessor sees the condition `MIN_VERSION_base(4,4,0)` which it of course cannot handle. I've not yet found out why the macro isn't expanded, but workarounds are 1. install zlib-0.5.3.1 instead 2. unpack the tarball and edit `Codec/Compression/Zlib/Stream.hsc` to remove the offending macro (you're using 7.0.3, so your base version is 4.3.1.0, you can replace the macro with 0) **Edit:** After poking around a bit, I found out that to hide these preprocessor directives, which aren't intended for `hsc2hs` to process, they have to be masked by an extra '#'. Bug report underway.
On Ubuntu, I fixed (or really, avoided) a similar error with ```sh sudo apt-get install libghc-zlib-dev libghc-zlib-bindings-dev ``` (I don't know if both are needed.)
8,961,413
I'm using a VM with the following configuration: * Arch Linux (3.0-ARCH kernel) * GHC 7.0.3 * cabal-install 0.10.2 * Cabal library 1.10.1.0 When I try to build zlib using cabal... ``` $ cabal install zlib ``` I get the following output: ``` Resolving dependencies... Downloading zlib-0.5.3.2... Configuring zlib-0.5.3.2... Preprocessing library zlib-0.5.3.2... Stream.hsc:86:21: error: missing binary operator before token "(" Stream.hsc: In function ‘main’: Stream.hsc:86:21: error: missing binary operator before token "(" Stream.hsc:86:21: error: missing binary operator before token "(" compiling dist/build/Codec/Compression/Zlib/Stream_hsc_make.c failed (exit code 1) command was: /usr/bin/gcc -c dist/build/Codec/Compression/Zlib/Stream_hsc_make.c -o dist/build/Codec/Compression/Zlib/Stream_hsc_make.o -fno-stack-protector -fno-stack-protector -D__GLASGOW_HASKELL__=700 -Dlinux_BUILD_OS -Dlinux_HOST_OS -Dx86_64_BUILD_ARCH -Dx86_64_HOST_ARCH -I/usr/lib/ghc-7.0.3/bytestring-0.9.1.10/include -I/usr/lib/ghc-7.0.3/base-4.3.1.0/include -I/usr/lib/ghc-7.0.3/include -I/usr/lib/ghc-7.0.3/include -I/usr/lib/ghc-7.0.3/include/ cabal: Error: some packages failed to install: zlib-0.5.3.2 failed during the building phase. The exception was: ExitFailure 1 ``` Can anyone shed some light on this build error? --- **Edit:** Here's a snippet from Stream.hsc with line numbers: ``` 82 import Foreign 83 ( Word8, Ptr, nullPtr, plusPtr, peekByteOff, pokeByteOff, mallocBy tes 84 , ForeignPtr, FinalizerPtr, newForeignPtr_, addForeignPtrFinalizer 85 , withForeignPtr, touchForeignPtr ) 86 #if MIN_VERSION_base(4,4,0) 87 import Foreign.ForeignPtr.Unsafe ( unsafeForeignPtrToPtr ) 88 import System.IO.Unsafe ( unsafePerformIO ) 89 #else 90 import Foreign ( unsafeForeignPtrToPtr, unsafePerformIO ) 91 #endif ```
2012/01/22
[ "https://Stackoverflow.com/questions/8961413", "https://Stackoverflow.com", "https://Stackoverflow.com/users/319002/" ]
For some reason, the `MIN_VERSION_base` macro doesn't get expanded, thus the preprocessor sees the condition `MIN_VERSION_base(4,4,0)` which it of course cannot handle. I've not yet found out why the macro isn't expanded, but workarounds are 1. install zlib-0.5.3.1 instead 2. unpack the tarball and edit `Codec/Compression/Zlib/Stream.hsc` to remove the offending macro (you're using 7.0.3, so your base version is 4.3.1.0, you can replace the macro with 0) **Edit:** After poking around a bit, I found out that to hide these preprocessor directives, which aren't intended for `hsc2hs` to process, they have to be masked by an extra '#'. Bug report underway.
I'm still encountering this with the haskell package zlib-0.5.4.2 on GHC 7.8.4. I think the issue is a non-standard location of the library. I solved it by hand-installing zlib 1.2.8 and then doing: ``` cabal install zlib --extra-lib-dirs=/usr/local/lib --extra-include-dirs=/usr/local/include ```
8,961,413
I'm using a VM with the following configuration: * Arch Linux (3.0-ARCH kernel) * GHC 7.0.3 * cabal-install 0.10.2 * Cabal library 1.10.1.0 When I try to build zlib using cabal... ``` $ cabal install zlib ``` I get the following output: ``` Resolving dependencies... Downloading zlib-0.5.3.2... Configuring zlib-0.5.3.2... Preprocessing library zlib-0.5.3.2... Stream.hsc:86:21: error: missing binary operator before token "(" Stream.hsc: In function ‘main’: Stream.hsc:86:21: error: missing binary operator before token "(" Stream.hsc:86:21: error: missing binary operator before token "(" compiling dist/build/Codec/Compression/Zlib/Stream_hsc_make.c failed (exit code 1) command was: /usr/bin/gcc -c dist/build/Codec/Compression/Zlib/Stream_hsc_make.c -o dist/build/Codec/Compression/Zlib/Stream_hsc_make.o -fno-stack-protector -fno-stack-protector -D__GLASGOW_HASKELL__=700 -Dlinux_BUILD_OS -Dlinux_HOST_OS -Dx86_64_BUILD_ARCH -Dx86_64_HOST_ARCH -I/usr/lib/ghc-7.0.3/bytestring-0.9.1.10/include -I/usr/lib/ghc-7.0.3/base-4.3.1.0/include -I/usr/lib/ghc-7.0.3/include -I/usr/lib/ghc-7.0.3/include -I/usr/lib/ghc-7.0.3/include/ cabal: Error: some packages failed to install: zlib-0.5.3.2 failed during the building phase. The exception was: ExitFailure 1 ``` Can anyone shed some light on this build error? --- **Edit:** Here's a snippet from Stream.hsc with line numbers: ``` 82 import Foreign 83 ( Word8, Ptr, nullPtr, plusPtr, peekByteOff, pokeByteOff, mallocBytes 84 , ForeignPtr, FinalizerPtr, newForeignPtr_, addForeignPtrFinalizer 85 , withForeignPtr, touchForeignPtr ) 86 #if MIN_VERSION_base(4,4,0) 87 import Foreign.ForeignPtr.Unsafe ( unsafeForeignPtrToPtr ) 88 import System.IO.Unsafe ( unsafePerformIO ) 89 #else 90 import Foreign ( unsafeForeignPtrToPtr, unsafePerformIO ) 91 #endif ```
2012/01/22
[ "https://Stackoverflow.com/questions/8961413", "https://Stackoverflow.com", "https://Stackoverflow.com/users/319002/" ]
On Ubuntu, I fixed (or really, avoided) a similar error with ```sh sudo apt-get install libghc-zlib-dev libghc-zlib-bindings-dev ``` (I don't know if both are needed.)
I'm still encountering this with the Haskell package zlib-0.5.4.2 on GHC 7.8.4. I think the issue is a non-standard location of the library. I solved it by hand-installing zlib 1.2.8 and then doing: ``` cabal install zlib --extra-lib-dirs=/usr/local/lib --extra-include-dirs=/usr/local/include ```
110,190
On page 19 of the textbook [Introduction to Statistical Learning](http://www-bcf.usc.edu/%7Egareth/ISL/) (by James, Witten, Hastie and Tibshirani; it is freely downloadable on the web, and very good), the following is stated: > > Consider a given estimate $$\hat{Y} = \hat{f}(x)$$ Assume for a moment that both $$\hat{f}, X$$ are fixed. Then, it is easy to show that: > > > $$\mathrm{E}(Y - \hat{Y})^2 = \mathrm{E}[f(X) + \epsilon - \hat{f}(X)]^2$$ > $$ = [f(X) - \hat{f}(X)]^2 + \mathrm{Var}(\epsilon)$$ > > > It is further explained that the first term represents the reducible error, and the second term represents the irreducible error. I do not fully understand how the authors arrive at this result. I worked through the calculations as follows: $$\mathrm{E}(Y - \hat{Y})^2 = \mathrm{E}[f(X) + \epsilon - \hat{f}(X)]^2$$ This simplifies to $[f(X) - \hat{f}(X) + \mathrm{E}[\epsilon]]^2 = [f(X) - \hat{f}(X)]^2$ assuming that $\mathrm{E}[\epsilon] = 0$. Where is the $\mathrm{Var}(\epsilon)$ indicated in the text coming from? Any suggestions would be greatly appreciated.
2014/07/31
[ "https://stats.stackexchange.com/questions/110190", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/49545/" ]
Simply expand the square ... $$[f(X)- \hat{f}(X) + \epsilon ]^2=[f(X)- \hat{f}(X)]^2 +2 [f(X)- \hat{f}(X)]\epsilon+ \epsilon^2$$ ... and use linearity of expectations: $$\mathrm{E}[f(X)- \hat{f}(X) + \epsilon ]^2=E[f(X)- \hat{f}(X)]^2 +2 E[(f(X)- \hat{f}(X))\epsilon]+ E[\epsilon^2]$$ Can you do it from there? (What things remain to be shown?) Hint in response to comments: Show $E(\epsilon^2)=\text{Var}(\epsilon)$
\begin{align} E[(Y-\hat{Y})^2] &= E[(f(X)+\epsilon-\hat{f}(X))^2] \\ &= E[(f(X)-\hat{f}(X))^2 + \epsilon^2 + 2\epsilon(f(X)-\hat{f}(X))] \\ &= E[(f(X)-\hat{f}(X))^2] + E[\epsilon^2] + 2(f(X)-\hat{f}(X))\,E[\epsilon] \qquad (1) \end{align} The last term is zero because the expected value of the irreducible error is zero. Now let's see where the variance comes from. In general: \begin{equation} Var(X) = E[(X-\bar{X})^2] = E[X^2 - 2X\bar{X} + \bar{X}^2] = E[X^2] - E[2X\bar{X}] + E[\bar{X}^2] \end{equation} The mean of $X$ is a constant, and so is the square of the mean of $X$. Therefore the equation becomes \begin{equation} Var(X) = E[X^2] - 2\bar{X}\,E[X] + \bar{X}^2 = E[X^2] - 2\bar{X}^2 + \bar{X}^2 = E[X^2] - \bar{X}^2 \end{equation} Hence $Var(\epsilon) = E[\epsilon^2] - \bar{\epsilon}^2$. But the mean of $\epsilon$ is zero, so \begin{equation} Var(\epsilon) = E[\epsilon^2] \qquad (2) \end{equation} Now take equation (1), whose last term is zero, together with (2): \begin{equation} E[(Y-\hat{Y})^2] = E[(f(X)-\hat{f}(X))^2] + E[\epsilon^2] = E[(f(X)-\hat{f}(X))^2] + Var(\epsilon) \end{equation} Finally, since $\hat{f}$ and $X$ are held fixed, $E[(f(X)-\hat{f}(X))^2] = (f(X)-\hat{f}(X))^2$, which is exactly the book's expression.
37,526,041
I was confronted not so long ago with an algorithmic problem. I needed to find out whether a value stored in an array was at its "place". An example will make this easier to understand. Let's take an array A = {-10, -3, 3, 5, 7}. The algorithm would return 3, because the number 3 is at A[2] (3rd place). On the contrary, if we take an array B = {5, 7, 9, 10}, the algorithm will return 0 or false or whatever. **The array is always sorted!** I wasn't able to find a solution with a good complexity. (Looking at each value individually is not good!) Maybe it is possible to solve this problem with an approach similar to merge sort, by cutting the array in half and checking those halves? Can somebody help me on this one? *A Java algorithm would be best, but pseudocode would also help me a lot!*
2016/05/30
[ "https://Stackoverflow.com/questions/37526041", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5040228/" ]
Here is an algorithm (based on binary search) to find all matching indices; it has a best-case complexity of O(log n) and a worst-case complexity of O(n): 1. Check the element at position m = array.length / 2. 2. If array[m] is strictly smaller than m+1, you can forget about the left half of the array (from index 0 to index m-1) and apply the procedure recursively to the right half. 3. If array[m] == m+1, add one to the counter and apply recursively to both halves. 4. If array[m] > m+1, forget about the right half of the array and apply recursively to the left half. Using threads can speed things up here. I assume that there are no repetitions in the array and that the values are integers (which is what makes discarding a whole half valid).
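A minimal Java sketch of the recursive procedure just described (my own illustration, not the answerer's code; it assumes the question's 1-based notion of "place", i.e. a match means a[i] == i + 1, over sorted distinct integers):

```java
public class FixedPointCounter {

    // Counts the indices i where a[i] == i + 1, for a sorted array of distinct integers.
    public static int countMatches(int[] a) {
        return count(a, 0, a.length - 1);
    }

    private static int count(int[] a, int lo, int hi) {
        if (lo > hi) {
            return 0;                                   // empty range
        }
        int m = lo + (hi - lo) / 2;                     // middle of the current range
        if (a[m] < m + 1) {
            return count(a, m + 1, hi);                 // everything to the left is too small
        } else if (a[m] > m + 1) {
            return count(a, lo, m - 1);                 // everything to the right is too large
        } else {
            // a[m] is at its place; matches may still exist on both sides
            return 1 + count(a, lo, m - 1) + count(a, m + 1, hi);
        }
    }

    public static void main(String[] args) {
        System.out.println(countMatches(new int[]{-10, -3, 3, 5, 7})); // prints 1 (the value 3)
        System.out.println(countMatches(new int[]{5, 7, 9, 10}));      // prints 0
    }
}
```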
If you want to find the first number in the array that is at its own place, you just have to iterate over the array: ``` static int find_in_place(int[] a) { for (int i=0; i<a.length; i++) { if (a[i] == i+1) { return a[i]; } } return 0; } ``` It has a complexity of O(n) and an average cost of n/2
37,526,041
I was confronted not so long ago with an algorithmic problem. I needed to find out whether a value stored in an array was at its "place". An example will make this easier to understand. Let's take an array A = {-10, -3, 3, 5, 7}. The algorithm would return 3, because the number 3 is at A[2] (3rd place). On the contrary, if we take an array B = {5, 7, 9, 10}, the algorithm will return 0 or false or whatever. **The array is always sorted!** I wasn't able to find a solution with a good complexity. (Looking at each value individually is not good!) Maybe it is possible to solve this problem with an approach similar to merge sort, by cutting the array in half and checking those halves? Can somebody help me on this one? *A Java algorithm would be best, but pseudocode would also help me a lot!*
2016/05/30
[ "https://Stackoverflow.com/questions/37526041", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5040228/" ]
Since there can be no duplicates (and the values are integers), the function `f(x) = A[x] - x` is monotonically non-decreasing, so you can apply binary search to solve the problem in `O(log n)` worst-case complexity. You want to find a point where `A[x] - x - 1` takes the value zero, i.e. where the value sits at its 1-based place. This code should work: ``` boolean binarySearch(int[] data, int size) { int low = 0; int high = size - 1; while(high >= low) { int middle = (low + high) / 2; if(data[middle] - 1 == middle) { return true; } if(data[middle] - 1 < middle) { low = middle + 1; } if(data[middle] - 1 > middle) { high = middle - 1; } } return false; } ``` Watch out for the fact that arrays in Java are 0-indexed - that is the reason why I subtract 1 from the array value.
If you want to find the first number in the array that is at its own place, you just have to iterate over the array: ``` static int find_in_place(int[] a) { for (int i=0; i<a.length; i++) { if (a[i] == i+1) { return a[i]; } } return 0; } ``` It has a complexity of O(n) and an average cost of n/2
37,526,041
I was confronted not so long ago with an algorithmic problem. I needed to find out whether a value stored in an array was at its "place". An example will make this easier to understand. Let's take an array A = {-10, -3, 3, 5, 7}. The algorithm would return 3, because the number 3 is at A[2] (3rd place). On the contrary, if we take an array B = {5, 7, 9, 10}, the algorithm will return 0 or false or whatever. **The array is always sorted!** I wasn't able to find a solution with a good complexity. (Looking at each value individually is not good!) Maybe it is possible to solve this problem with an approach similar to merge sort, by cutting the array in half and checking those halves? Can somebody help me on this one? *A Java algorithm would be best, but pseudocode would also help me a lot!*
2016/05/30
[ "https://Stackoverflow.com/questions/37526041", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5040228/" ]
Here is an algorithm (based on binary search) to find all matching indices; it has a best-case complexity of O(log n) and a worst-case complexity of O(n): 1. Check the element at position m = array.length / 2. 2. If array[m] is strictly smaller than m+1, you can forget about the left half of the array (from index 0 to index m-1) and apply the procedure recursively to the right half. 3. If array[m] == m+1, add one to the counter and apply recursively to both halves. 4. If array[m] > m+1, forget about the right half of the array and apply recursively to the left half. Using threads can speed things up here. I assume that there are no repetitions in the array and that the values are integers (which is what makes discarding a whole half valid).
You can skip iterating entirely when no such element can exist, by adding a special condition (this shortcut assumes distinct integers, as in the examples): ``` if(a[0] > 1 || a[a.length-1] < a.length){ //no a[i] can ever equal i+1, so don't iterate through the array and return false return false; } else { //make a loop here } ```
37,526,041
I was confronted not so long ago with an algorithmic problem. I needed to find out whether a value stored in an array was at its "place". An example will make this easier to understand. Let's take an array A = {-10, -3, 3, 5, 7}. The algorithm would return 3, because the number 3 is at A[2] (3rd place). On the contrary, if we take an array B = {5, 7, 9, 10}, the algorithm will return 0 or false or whatever. **The array is always sorted!** I wasn't able to find a solution with a good complexity. (Looking at each value individually is not good!) Maybe it is possible to solve this problem with an approach similar to merge sort, by cutting the array in half and checking those halves? Can somebody help me on this one? *A Java algorithm would be best, but pseudocode would also help me a lot!*
2016/05/30
[ "https://Stackoverflow.com/questions/37526041", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5040228/" ]
Here is an algorithm (based on binary search) to find all matching indices; it has a best-case complexity of O(log n) and a worst-case complexity of O(n): 1. Check the element at position m = array.length / 2. 2. If array[m] is strictly smaller than m+1, you can forget about the left half of the array (from index 0 to index m-1) and apply the procedure recursively to the right half. 3. If array[m] == m+1, add one to the counter and apply recursively to both halves. 4. If array[m] > m+1, forget about the right half of the array and apply recursively to the left half. Using threads can speed things up here. I assume that there are no repetitions in the array and that the values are integers (which is what makes discarding a whole half valid).
Simply use a binary search for 0 on the quantity `a[i] - (i + 1)`, i.e. at each step compare the value in the array minus its (1-based) place. **O(log n)**
37,526,041
I was confronted not so long ago with an algorithmic problem. I needed to find out whether a value stored in an array was at its "place". An example will make this easier to understand. Let's take an array A = {-10, -3, 3, 5, 7}. The algorithm would return 3, because the number 3 is at A[2] (3rd place). On the contrary, if we take an array B = {5, 7, 9, 10}, the algorithm will return 0 or false or whatever. **The array is always sorted!** I wasn't able to find a solution with a good complexity. (Looking at each value individually is not good!) Maybe it is possible to solve this problem with an approach similar to merge sort, by cutting the array in half and checking those halves? Can somebody help me on this one? *A Java algorithm would be best, but pseudocode would also help me a lot!*
2016/05/30
[ "https://Stackoverflow.com/questions/37526041", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5040228/" ]
Here is an algorithm (based on binary search) to find all matching indices; it has a best-case complexity of O(log n) and a worst-case complexity of O(n): 1. Check the element at position m = array.length / 2. 2. If array[m] is strictly smaller than m+1, you can forget about the left half of the array (from index 0 to index m-1) and apply the procedure recursively to the right half. 3. If array[m] == m+1, add one to the counter and apply recursively to both halves. 4. If array[m] > m+1, forget about the right half of the array and apply recursively to the left half. Using threads can speed things up here. I assume that there are no repetitions in the array and that the values are integers (which is what makes discarding a whole half valid).
Using binary search (or a similar algorithm) you can do better than O(n). Since the array is sorted (and, for this argument, holds distinct integers), we can make the following assumptions: * if the value at index `x` is smaller than its place (`a[x] <= x`, i.e. `a[x] < x + 1`), all earlier values must also be smaller than their place (because no duplicates are allowed) * if `a[x] > x + 1`, all following values must be greater than their place (again, no duplicates allowed). Using that, you can take a binary approach: pick the center value, check it against its index, and discard the left/right part according to the conditions above. Of course you stop when `a[x] = x + 1`.
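As a concrete sketch of these two discard rules (again my own illustration, assuming sorted distinct integers), the same idea can be written iteratively, stopping at the first match and returning the matched value as the question asks, or 0 when there is none:

```java
// Hypothetical helper: returns the first value found at its own place, or 0 if none exists.
static int findInPlace(int[] a) {
    int lo = 0, hi = a.length - 1;
    while (lo <= hi) {
        int m = lo + (hi - lo) / 2;
        if (a[m] == m + 1) {
            return a[m];          // stop as soon as a[x] = x + 1
        } else if (a[m] < m + 1) {
            lo = m + 1;           // the left part cannot contain a match
        } else {
            hi = m - 1;           // the right part cannot contain a match
        }
    }
    return 0;
}
```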
37,526,041
I was confronted not so long ago with an algorithmic problem. I needed to find out whether a value stored in an array was at its "place". An example will make this easier to understand. Let's take an array A = {-10, -3, 3, 5, 7}. The algorithm would return 3, because the number 3 is at A[2] (3rd place). On the contrary, if we take an array B = {5, 7, 9, 10}, the algorithm will return 0 or false or whatever. **The array is always sorted!** I wasn't able to find a solution with a good complexity. (Looking at each value individually is not good!) Maybe it is possible to solve this problem with an approach similar to merge sort, by cutting the array in half and checking those halves? Can somebody help me on this one? *A Java algorithm would be best, but pseudocode would also help me a lot!*
2016/05/30
[ "https://Stackoverflow.com/questions/37526041", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5040228/" ]
Since there can be no duplicates (and the values are integers), the function `f(x) = A[x] - x` is monotonically non-decreasing, so you can apply binary search to solve the problem in `O(log n)` worst-case complexity. You want to find a point where `A[x] - x - 1` takes the value zero, i.e. where the value sits at its 1-based place. This code should work: ``` boolean binarySearch(int[] data, int size) { int low = 0; int high = size - 1; while(high >= low) { int middle = (low + high) / 2; if(data[middle] - 1 == middle) { return true; } if(data[middle] - 1 < middle) { low = middle + 1; } if(data[middle] - 1 > middle) { high = middle - 1; } } return false; } ``` Watch out for the fact that arrays in Java are 0-indexed - that is the reason why I subtract 1 from the array value.
You can skip iterating entirely when no such element can exist, by adding a special condition (this shortcut assumes distinct integers, as in the examples): ``` if(a[0] > 1 || a[a.length-1] < a.length){ //no a[i] can ever equal i+1, so don't iterate through the array and return false return false; } else { //make a loop here } ```
37,526,041
I was confronted not so long ago with an algorithmic problem. I needed to find out whether a value stored in an array was at its "place". An example will make this easier to understand. Let's take an array A = {-10, -3, 3, 5, 7}. The algorithm would return 3, because the number 3 is at A[2] (3rd place). On the contrary, if we take an array B = {5, 7, 9, 10}, the algorithm will return 0 or false or whatever. **The array is always sorted!** I wasn't able to find a solution with a good complexity. (Looking at each value individually is not good!) Maybe it is possible to solve this problem with an approach similar to merge sort, by cutting the array in half and checking those halves? Can somebody help me on this one? *A Java algorithm would be best, but pseudocode would also help me a lot!*
2016/05/30
[ "https://Stackoverflow.com/questions/37526041", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5040228/" ]
Since there can be no duplicates (and the values are integers), the function `f(x) = A[x] - x` is monotonically non-decreasing, so you can apply binary search to solve the problem in `O(log n)` worst-case complexity. You want to find a point where `A[x] - x - 1` takes the value zero, i.e. where the value sits at its 1-based place. This code should work: ``` boolean binarySearch(int[] data, int size) { int low = 0; int high = size - 1; while(high >= low) { int middle = (low + high) / 2; if(data[middle] - 1 == middle) { return true; } if(data[middle] - 1 < middle) { low = middle + 1; } if(data[middle] - 1 > middle) { high = middle - 1; } } return false; } ``` Watch out for the fact that arrays in Java are 0-indexed - that is the reason why I subtract 1 from the array value.
Simply use a binary search for 0 on the quantity `a[i] - (i + 1)`, i.e. at each step compare the value in the array minus its (1-based) place. **O(log n)**
37,526,041
I was confronted not so long ago with an algorithmic problem. I needed to find out whether a value stored in an array was at its "place". An example will make this easier to understand. Let's take an array A = {-10, -3, 3, 5, 7}. The algorithm would return 3, because the number 3 is at A[2] (3rd place). On the contrary, if we take an array B = {5, 7, 9, 10}, the algorithm will return 0 or false or whatever. **The array is always sorted!** I wasn't able to find a solution with a good complexity. (Looking at each value individually is not good!) Maybe it is possible to solve this problem with an approach similar to merge sort, by cutting the array in half and checking those halves? Can somebody help me on this one? *A Java algorithm would be best, but pseudocode would also help me a lot!*
2016/05/30
[ "https://Stackoverflow.com/questions/37526041", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5040228/" ]
Since there can be no duplicates (and the values are integers), the function `f(x) = A[x] - x` is monotonically non-decreasing, so you can apply binary search to solve the problem in `O(log n)` worst-case complexity. You want to find a point where `A[x] - x - 1` takes the value zero, i.e. where the value sits at its 1-based place. This code should work: ``` boolean binarySearch(int[] data, int size) { int low = 0; int high = size - 1; while(high >= low) { int middle = (low + high) / 2; if(data[middle] - 1 == middle) { return true; } if(data[middle] - 1 < middle) { low = middle + 1; } if(data[middle] - 1 > middle) { high = middle - 1; } } return false; } ``` Watch out for the fact that arrays in Java are 0-indexed - that is the reason why I subtract 1 from the array value.
Using binary search (or a similar algorithm) you can do better than O(n). Since the array is sorted (and, for this argument, holds distinct integers), we can make the following assumptions: * if the value at index `x` is smaller than its place (`a[x] <= x`, i.e. `a[x] < x + 1`), all earlier values must also be smaller than their place (because no duplicates are allowed) * if `a[x] > x + 1`, all following values must be greater than their place (again, no duplicates allowed). Using that, you can take a binary approach: pick the center value, check it against its index, and discard the left/right part according to the conditions above. Of course you stop when `a[x] = x + 1`.
1,202,551
I'm working with some very basic jQuery code and would like to condense what I've done into one function with passed parameters. I have a few of these: ``` $(".about").hover(function() { $(this).attr("src","_img/nav/about_over.gif"); }, function() { $(this).attr("src","_img/nav/about_off.gif"); }); $(".artists").hover(function() { $(this).attr("src","_img/nav/artists_over.gif"); }, function() { $(this).attr("src","_img/nav/artists_on.gif"); }); $(".help").hover(function() { $(this).attr("src","_img/nav/help_over.gif"); }, function() { $(this).attr("src","_img/nav/help_off.gif"); }); ``` But I would obviously like to pass the title of the image ("about", "artists", "help") so that I could cut down on repeated code. Any help much appreciated. Thanks Ronnie
2009/07/29
[ "https://Stackoverflow.com/questions/1202551", "https://Stackoverflow.com", "https://Stackoverflow.com/users/141290/" ]
``` function hover(img) { $("."+img).hover(function() { $(this).attr("src","_img/nav/"+img+"_over.gif"); }, function() { $(this).attr("src","_img/nav/"+img+"_off.gif"); }); } ```
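For completeness (this wiring is my addition, not part of the answer), you would then register each image class once on page load:

```js
$(function() {
  // call the hover(img) helper defined above for each nav image
  hover("about");
  hover("artists");
  hover("help");
});
```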
I'm not sure what's going on with the second function in the hover function, but you can do something like this (note the commas in the selector: `".about .artists .help"` would be a descendant selector, while `".about, .artists, .help"` matches all three classes): ``` $(".about, .artists, .help").hover(function(){ $(this).attr('src','_img/nav/' + $(this).attr('class') + '_over.gif') }); ``` You can apply the same principle to your on/off gifs too.
1,202,551
I'm working with some very basic jQuery code and would like to condense what I've done into one function with passed parameters. I have a few of these: ``` $(".about").hover(function() { $(this).attr("src","_img/nav/about_over.gif"); }, function() { $(this).attr("src","_img/nav/about_off.gif"); }); $(".artists").hover(function() { $(this).attr("src","_img/nav/artists_over.gif"); }, function() { $(this).attr("src","_img/nav/artists_on.gif"); }); $(".help").hover(function() { $(this).attr("src","_img/nav/help_over.gif"); }, function() { $(this).attr("src","_img/nav/help_off.gif"); }); ``` But I would obviously like to pass the title of the image ("about", "artists", "help") so that I could cut down on repeated code. Any help much appreciated. Thanks Ronnie
2009/07/29
[ "https://Stackoverflow.com/questions/1202551", "https://Stackoverflow.com", "https://Stackoverflow.com/users/141290/" ]
``` function hover(img) { $("."+img).hover(function() { $(this).attr("src","_img/nav/"+img+"_over.gif"); }, function() { $(this).attr("src","_img/nav/"+img+"_off.gif"); }); } ```
You could do something like this: ``` function ElementHover(class_name, src_over, src_off) { $("." + class_name).hover(function() { $(this).attr("src", src_over); }, function() { $(this).attr("src", src_off); }); } ```
1,202,551
I'm working with some very basic jQuery code and would like to condense what I've done into one function with passed parameters. I have a few of these: ``` $(".about").hover(function() { $(this).attr("src","_img/nav/about_over.gif"); }, function() { $(this).attr("src","_img/nav/about_off.gif"); }); $(".artists").hover(function() { $(this).attr("src","_img/nav/artists_over.gif"); }, function() { $(this).attr("src","_img/nav/artists_on.gif"); }); $(".help").hover(function() { $(this).attr("src","_img/nav/help_over.gif"); }, function() { $(this).attr("src","_img/nav/help_off.gif"); }); ``` But I would obviously like to pass the title of the image ("about", "artists", "help") so that I could cut down on repeated code. Any help much appreciated. Thanks Ronnie
2009/07/29
[ "https://Stackoverflow.com/questions/1202551", "https://Stackoverflow.com", "https://Stackoverflow.com/users/141290/" ]
``` function hover(img) { $("."+img).hover(function() { $(this).attr("src","_img/nav/"+img+"_over.gif"); }, function() { $(this).attr("src","_img/nav/"+img+"_off.gif"); }); } ```
``` function HoverPic(name){ $("."+name).hover(function() { $(this).attr("src","_img/nav/"+name+"_over.gif"); }, function() { $(this).attr("src","_img/nav/"+name+"_off.gif"); }); } ```
1,202,551
I'm working with some very basic jQuery code and would like to condense what I've done into one function with passed parameters. I have a few of these: ``` $(".about").hover(function() { $(this).attr("src","_img/nav/about_over.gif"); }, function() { $(this).attr("src","_img/nav/about_off.gif"); }); $(".artists").hover(function() { $(this).attr("src","_img/nav/artists_over.gif"); }, function() { $(this).attr("src","_img/nav/artists_on.gif"); }); $(".help").hover(function() { $(this).attr("src","_img/nav/help_over.gif"); }, function() { $(this).attr("src","_img/nav/help_off.gif"); }); ``` But I would obviously like to pass the title of the image ("about", "artists", "help") so that I could cut down on repeated code. Any help much appreciated. Thanks Ronnie
2009/07/29
[ "https://Stackoverflow.com/questions/1202551", "https://Stackoverflow.com", "https://Stackoverflow.com/users/141290/" ]
You could do something like this: ``` function ElementHover(class_name, src_over, src_off) { $("." + class_name).hover(function() { $(this).attr("src", src_over); }, function() { $(this).attr("src", src_off); }); } ```
I'm not sure what's going on with the second function in the hover function, but you can do something like this (note the commas in the selector: `".about .artists .help"` would be a descendant selector, while `".about, .artists, .help"` matches all three classes): ``` $(".about, .artists, .help").hover(function(){ $(this).attr('src','_img/nav/' + $(this).attr('class') + '_over.gif') }); ``` You can apply the same principle to your on/off gifs too.
1,202,551
I'm working with some very basic jQuery code and would like to condense what I've done into one function with passed parameters. I have a few of these: ``` $(".about").hover(function() { $(this).attr("src","_img/nav/about_over.gif"); }, function() { $(this).attr("src","_img/nav/about_off.gif"); }); $(".artists").hover(function() { $(this).attr("src","_img/nav/artists_over.gif"); }, function() { $(this).attr("src","_img/nav/artists_on.gif"); }); $(".help").hover(function() { $(this).attr("src","_img/nav/help_over.gif"); }, function() { $(this).attr("src","_img/nav/help_off.gif"); }); ``` But I would obviously like to pass the title of the image ("about", "artists", "help") so that I could cut down on repeated code. Any help much appreciated. Thanks Ronnie
2009/07/29
[ "https://Stackoverflow.com/questions/1202551", "https://Stackoverflow.com", "https://Stackoverflow.com/users/141290/" ]
``` function HoverPic(name){ $("."+name).hover(function() { $(this).attr("src","_img/nav/"+name+"_over.gif"); }, function() { $(this).attr("src","_img/nav/"+name+"_off.gif"); }); } ```
I'm not sure what's going on with the second function in the hover function, but you can do something like this (note the commas in the selector: `".about .artists .help"` would be a descendant selector, while `".about, .artists, .help"` matches all three classes): ``` $(".about, .artists, .help").hover(function(){ $(this).attr('src','_img/nav/' + $(this).attr('class') + '_over.gif') }); ``` You can apply the same principle to your on/off gifs too.