# The Art Of Scripting HTTP Requests Using Curl

## Background

This document assumes that you are familiar with HTML and general networking.

The increasing number of applications moving to the web has made "HTTP Scripting" more frequently requested and wanted. To be able to automatically extract information from the web, to fake users, to post or upload data to web servers are all important tasks today.

Curl is a command line tool for doing all sorts of URL manipulations and transfers, but this particular document will focus on how to use it when doing HTTP requests for fun and profit. I will assume that you know how to invoke `curl --help` or `curl --manual` to get basic information about it.

Curl is not written to do everything for you. It makes the requests, it gets the data, it sends data and it retrieves the information. You probably need to glue everything together using some kind of script language or repeated manual invokes.

## The HTTP Protocol

HTTP is the protocol used to fetch data from web servers. It is a simple protocol that is built upon TCP/IP. The protocol also allows information to get sent to the server from the client using a few different methods, as will be shown here.

HTTP is plain ASCII text lines being sent by the client to a server to request a particular action, and then the server replies with a few text lines before the actual requested content is sent to the client.

The client, curl, sends a HTTP request. The request contains a method (like GET, POST, HEAD etc), a number of request headers and sometimes a request body. The HTTP server responds with a status line (indicating if things went well), response headers and most often also a response body. The "body" part is the plain data you requested, like the actual HTML or the image etc.
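The request/response exchange described above can be sketched in a few lines of Python, using only the standard library. The tiny local server here is a stand-in for a real web server; the point is that both the request and the response are plain text lines over TCP.

```python
# A small demonstration that HTTP really is plain text over TCP:
# start a tiny local server, then speak the protocol by hand.
# The local server is a stand-in for a real web server.
import socket
import threading
from http.server import HTTPServer, BaseHTTPRequestHandler

class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello"
        self.send_response(200)                        # the status line
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()                             # a blank line ends the headers
        self.wfile.write(body)                         # the response body
    def log_message(self, *args):                      # silence request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Hello)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The request: a method line, a few headers, then an empty line.
with socket.create_connection(server.server_address) as sock:
    sock.sendall(b"GET / HTTP/1.1\r\n"
                 b"Host: 127.0.0.1\r\n"
                 b"Connection: close\r\n\r\n")
    reply = b""
    while chunk := sock.recv(4096):
        reply += chunk
server.shutdown()

print(reply.decode().splitlines()[0])   # the status line; headers and body follow
```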
## See the Protocol

Using curl's option [`--verbose`](https://curl.se/docs/manpage.html#-v) (`-v` as a short option) will display what kind of commands curl sends to the server, as well as a few other informational texts.

`--verbose` is the single most useful option when it comes to debugging or even understanding the curl<->server interaction.

Sometimes even `--verbose` is not enough. Then [`--trace`](https://curl.se/docs/manpage.html#-trace) and [`--trace-ascii`](https://curl.se/docs/manpage.html#--trace-ascii) offer even more details as they show **everything** curl sends and receives. Use it like this:

    curl --trace-ascii debugdump.txt http://www.example.com/

## See the Timing

Many times you may wonder what exactly is taking all the time, or you just want to know the amount of milliseconds between two points in a transfer. For those, and other similar situations, the [`--trace-time`](https://curl.se/docs/manpage.html#--trace-time) option is what you need. It will prepend the time to each trace output line:

    curl --trace-ascii d.txt --trace-time http://example.com/

## See the Response

By default curl sends the response to stdout. You need to redirect it somewhere to avoid that, most often that is done with `-o` or `-O`.

# URL

## Spec

The Uniform Resource Locator format is how you specify the address of a particular resource on the Internet. You know these, you have seen URLs like https://curl.se or https://yourbank.com a million times. RFC 3986 is the canonical spec. And yeah, the formal name is not URL, it is URI.

## Host

The host name is usually resolved using DNS or your /etc/hosts file to an IP address and that is what curl will communicate with. Alternatively you specify the IP address directly in the URL instead of a name.
For development and other trying-out situations, you can point to a different IP address for a host name than what would otherwise be used, by using curl's [`--resolve`](https://curl.se/docs/manpage.html#--resolve) option:

    curl --resolve www.example.org:80:127.0.0.1 http://www.example.org/

## Port number

Each protocol curl supports operates on a default port number, be it over TCP or in some cases UDP. Normally you do not have to take that into consideration, but at times you run test servers on other ports or similar. Then you can specify the port number in the URL with a colon and a number immediately following the host name. Like when doing HTTP to port 1234:

    curl http://www.example.org:1234/

The port number you specify in the URL is the number that the server uses to offer its services. Sometimes you may use a proxy, and then you may need to specify that proxy's port number separately from what curl needs to connect to the server. Like when using a HTTP proxy on port 4321:

    curl --proxy http://proxy.example.org:4321 http://remote.example.org/

## User name and password

Some services are set up to require HTTP authentication and then you need to provide a name and password which is then transferred to the remote site in various ways depending on the exact authentication protocol used. You can opt to either insert the user and password in the URL or you can provide them separately:

    curl http://user:password@example.org/

or

    curl -u user:password http://example.org/

You need to pay attention that this kind of HTTP authentication is not what is usually done and requested by user-oriented websites these days. They tend to use forms and cookies instead.

## Path part

The path part is just sent off to the server to request that it sends back the associated response. The path is what is to the right side of the slash that follows the host name and possibly port number.

# Fetch a page

## GET

The simplest and most common request/operation made using HTTP is to GET a URL.
The URL could itself refer to a web page, an image or a file. The client issues a GET request to the server and receives the document it asked for. If you issue the command line

    curl https://curl.se

you get a web page returned in your terminal window. The entire HTML document that that URL holds.

All HTTP replies contain a set of response headers that are normally hidden, use curl's [`--include`](https://curl.se/docs/manpage.html#-i) (`-i`) option to display them as well as the rest of the document.

## HEAD

You can ask the remote server for ONLY the headers by using the [`--head`](https://curl.se/docs/manpage.html#-I) (`-I`) option which will make curl issue a HEAD request. In some special cases servers deny the HEAD method while others still work, which is a particular kind of annoyance.

The HEAD method is defined and made so that the server returns the headers exactly the way it would do for a GET, but without a body. It means that you may see a `Content-Length:` in the response headers, but there must not be an actual body in the HEAD response.

## Multiple URLs in a single command line

A single curl command line may involve one or many URLs. The most common case is probably to just use one, but you can specify any amount of URLs. Yes any. No limits. You will then get requests repeated over and over for all the given URLs.

Example, send two GETs:

    curl http://url1.example.com http://url2.example.com

If you use [`--data`](https://curl.se/docs/manpage.html#-d) to POST to the URL, using multiple URLs means that you send that same POST to all the given URLs.

Example, send two POSTs:

    curl --data name=curl http://url1.example.com http://url2.example.com

## Multiple HTTP methods in a single command line

Sometimes you need to operate on several URLs in a single command line and do different HTTP methods on each. For this, you will enjoy the [`--next`](https://curl.se/docs/manpage.html#-:) option. It is basically a separator that separates a bunch of options from the next.
All the URLs before `--next` will get the same method and will get all the POST data merged into one. When curl reaches the `--next` on the command line, it will sort of reset the method and the POST data and allow a new set.

Perhaps this is best shown with a few examples. To send first a HEAD and then a GET:

    curl -I http://example.com --next http://example.com

To first send a POST and then a GET:

    curl -d score=10 http://example.com/post.cgi --next http://example.com/results.html

# HTML forms

## Forms explained

Forms are the general way a website can present a HTML page with fields for the user to enter data in, and then press some kind of 'OK' or 'Submit' button to get that data sent to the server. The server then typically uses the posted data to decide how to act. Like using the entered words to search in a database, or to add the info in a bug tracking system, display the entered address on a map or using the info as a login prompt verifying that the user is allowed to see what it is about to see.

Of course there has to be some kind of program on the server end to receive the data you send. You cannot just invent something out of the air.

## GET

A GET-form uses the method GET, as specified in HTML like:

```html
<form method="GET" action="junk.cgi">
  <input type=text name="birthyear">
  <input type=submit name=press value="OK">
</form>
```

In your favorite browser, this form will appear with a text box to fill in and a press-button labeled "OK". If you fill in '1905' and press the OK button, your browser will then create a new URL to get for you. The URL will get `junk.cgi?birthyear=1905&press=OK` appended to the path part of the previous URL.

If the original form was seen on the page `www.example.com/when/birth.html`, the second page you will get will become `www.example.com/when/junk.cgi?birthyear=1905&press=OK`.

Most search engines work this way.
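The URL construction described above is easy to reproduce from a script. A minimal Python sketch using only the standard library, with field names and values taken from the example form:

```python
# Sketch: building the query string a browser would create for the GET
# form above (field names and values taken from the example).
from urllib.parse import urlencode, quote

fields = {"birthyear": "1905", "press": "OK"}
query = urlencode(fields)          # percent-encodes and joins with '&'
url = "http://www.example.com/when/junk.cgi?" + query
print(url)

# quote_via=quote makes spaces come out as %20, the form you need when
# hand-building --data strings for curl:
encoded = urlencode({"press": " OK "}, quote_via=quote)
print(encoded)
```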
To make curl do the GET form post for you, just enter the expected created URL:

    curl "http://www.example.com/when/junk.cgi?birthyear=1905&press=OK"

## POST

The GET method makes all input field names get displayed in the URL field of your browser. That is generally a good thing when you want to be able to bookmark that page with your given data, but it is an obvious disadvantage if you entered secret information in one of the fields or if there are a large amount of fields creating a long and unreadable URL.

The HTTP protocol then offers the POST method. This way the client sends the data separated from the URL and thus you will not see any of it in the URL address field.

The form would look similar to the previous one:

```html
<form method="POST" action="junk.cgi">
  <input type=text name="birthyear">
  <input type=submit name=press value=" OK ">
</form>
```

And to use curl to post this form with the same data filled in as before, we could do it like:

    curl --data "birthyear=1905&press=%20OK%20" http://www.example.com/when.cgi

This kind of POST will use the Content-Type `application/x-www-form-urlencoded` and is the most widely used POST kind.

The data you send to the server MUST already be properly encoded, curl will not do that for you. For example, if you want the data to contain a space, you need to replace that space with `%20`, etc. Failing to comply with this will most likely cause your data to be received wrongly and messed up.

Recent curl versions can in fact url-encode POST data for you, like this:

    curl --data-urlencode "name=I am Daniel" http://www.example.com

If you repeat `--data` several times on the command line, curl will concatenate all the given data pieces - and put a `&` symbol between each data segment.

## File Upload POST

Back in late 1995 they defined an additional way to post data over HTTP. It is documented in RFC 1867, which is why this method sometimes is referred to as RFC1867-posting. This method is mainly designed to better support file uploads.
A form that allows a user to upload a file could be written like this in HTML:

```html
<form method="POST" enctype='multipart/form-data' action="upload.cgi">
  <input type=file name=upload>
  <input type=submit name=press value="OK">
</form>
```

This clearly shows that the Content-Type about to be sent is `multipart/form-data`.

To post to a form like this with curl, you enter a command line like:

    curl --form upload=@localfilename --form press=OK [URL]

## Hidden Fields

A common way for HTML based applications to pass state information between pages is to add hidden fields to the forms. Hidden fields are already filled in, they are not displayed to the user and they get passed along just as all the other fields.

A similar example form with one visible field, one hidden field and one submit button could look like:

```html
<form method="POST" action="foobar.cgi">
  <input type=text name="birthyear">
  <input type=hidden name="person" value="daniel">
  <input type=submit name="press" value="OK">
</form>
```

To POST this with curl, you will not have to think about if the fields are hidden or not. To curl they are all the same:

    curl --data "birthyear=1905&press=OK&person=daniel" [URL]

## Figure Out What A POST Looks Like

When you are about to fill in a form and send it to a server by using curl instead of a browser, you are of course interested in sending a POST exactly the way your browser does.

An easy way to get to see this, is to save the HTML page with the form on your local disk, modify the 'method' to a GET, and press the submit button (you could also change the action URL if you want to). You will then clearly see the data get appended to the URL, separated with a `?` letter as GET forms are supposed to.

# HTTP upload

## PUT

Perhaps the best way to upload data to a HTTP server is to use PUT. Then again, this of course requires that someone put a program or script on the server end that knows how to receive a HTTP PUT stream.
Put a file to a HTTP server with curl:

    curl --upload-file uploadfile http://www.example.com/receive.cgi

# HTTP Authentication

## Basic Authentication

HTTP Authentication is the ability to tell the server your username and password so that it can verify that you are allowed to do the request you are doing. The Basic authentication used in HTTP (which is the type curl uses by default) is **plain text** based, which means it sends username and password only slightly obfuscated, but still fully readable by anyone that sniffs on the network between you and the remote server.

To tell curl to use a user and password for authentication:

    curl --user name:password http://www.example.com

## Other Authentication

The site might require a different authentication method (check the headers returned by the server), and then [`--ntlm`](https://curl.se/docs/manpage.html#--ntlm), [`--digest`](https://curl.se/docs/manpage.html#--digest), [`--negotiate`](https://curl.se/docs/manpage.html#--negotiate) or even [`--anyauth`](https://curl.se/docs/manpage.html#--anyauth) might be options that suit you.

## Proxy Authentication

Sometimes your HTTP access is only available through the use of a HTTP proxy. This seems to be especially common at various companies. A HTTP proxy may require its own user and password to allow the client to get through to the Internet. To specify those with curl, run something like:

    curl --proxy-user proxyuser:proxypassword curl.se

If your proxy requires the authentication to be done using the NTLM method, use [`--proxy-ntlm`](https://curl.se/docs/manpage.html#--proxy-ntlm), if it requires Digest use [`--proxy-digest`](https://curl.se/docs/manpage.html#--proxy-digest).

If you use any one of these user+password options but leave out the password part, curl will prompt for the password interactively.

## Hiding credentials

Do note that when a program is run, its parameters might be possible to see when listing the running processes of the system.
Thus, other users may be able to watch your passwords if you pass them as plain command line options. There are ways to circumvent this.

It is worth noting that while this is how HTTP Authentication works, many websites will not use this concept when they provide logins etc. See the Web Login chapter further below for more details on that.

# More HTTP Headers

## Referer

A HTTP request may include a 'referer' field (yes it is misspelled), which can be used to tell from which URL the client got to this particular resource. Some programs/scripts check the referer field of requests to verify that this was not arriving from an external site or an unknown page. While this is a stupid way to check something so easily forged, many scripts still do it. Using curl, you can put anything you want in the referer field and thus more easily be able to fool the server into serving your request.

Use curl to set the referer field with:

    curl --referer http://www.example.com http://www.example.com

## User Agent

Similar to the referer field, all HTTP requests may set the User-Agent field. It names what user agent (client) that is being used. Many applications use this information to decide how to display pages. Silly web programmers try to make different pages for users of different browsers to make them look the best possible for their particular browsers. They usually also do different kinds of javascript, vbscript etc.

At times, you will see that getting a page with curl will not return the same page that you see when getting the page with your browser. Then you know it is time to set the User Agent field to fool the server into thinking you are one of those browsers.
To make curl look like Internet Explorer 5 on a Windows 2000 box:

    curl --user-agent "Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0)" [URL]

Or why not look like you are using Netscape 4.73 on an old Linux box:

    curl --user-agent "Mozilla/4.73 [en] (X11; U; Linux 2.2.15 i686)" [URL]

# Redirects

## Location header

When a resource is requested from a server, the reply from the server may include a hint about where the browser should go next to find this page, or a new page keeping newly generated output. The header that tells the browser to redirect is `Location:`.

Curl does not follow `Location:` headers by default, but will simply display such pages in the same manner it displays all HTTP replies. It does however feature an option that will make it attempt to follow the `Location:` pointers.

To tell curl to follow a Location:

    curl --location http://www.example.com

If you use curl to POST to a site that immediately redirects you to another page, you can safely use [`--location`](https://curl.se/docs/manpage.html#-L) (`-L`) and `--data`/`--form` together. Curl will only use POST in the first request, and then revert to GET in the following operations.

## Other redirects

Browsers typically support at least two other ways of redirects that curl does not: first the HTML may contain a meta refresh tag that asks the browser to load a specific URL after a set number of seconds, or it may use javascript to do it.

# Cookies

## Cookie Basics

The way the web browsers do "client side state control" is by using cookies. Cookies are just names with associated contents. The cookies are sent to the client by the server. The server tells the client for what path and host name it wants the cookie sent back, and it also sends an expiration date and a few more properties.

When a client communicates with a server with a name and path as previously specified in a received cookie, the client sends back the cookies and their contents to the server, unless of course they are expired.
Many applications and servers use this method to connect a series of requests into a single logical session. To be able to use curl in such occasions, we must be able to record and send back cookies the way the web application expects them. The same way browsers deal with them.

## Cookie options

The simplest way to send a few cookies to the server when getting a page with curl is to add them on the command line like:

    curl --cookie "name=Daniel" http://www.example.com

Cookies are sent as common HTTP headers. This is practical as it allows curl to record cookies simply by recording headers. Record cookies with curl by using the [`--dump-header`](https://curl.se/docs/manpage.html#-D) (`-D`) option like:

    curl --dump-header headers_and_cookies http://www.example.com

(Take note that the [`--cookie-jar`](https://curl.se/docs/manpage.html#-c) option described below is a better way to store cookies.)

Curl has a full blown cookie parsing engine built-in that comes in use if you want to reconnect to a server and use cookies that were stored from a previous connection (or hand-crafted manually to fool the server into believing you had a previous connection). To use previously stored cookies, you run curl like:

    curl --cookie stored_cookies_in_file http://www.example.com

Curl's "cookie engine" gets enabled when you use the [`--cookie`](https://curl.se/docs/manpage.html#-b) option. If you only want curl to understand received cookies, use `--cookie` with a file that does not exist. Example, if you want to let curl understand cookies from a page and follow a location (and thus possibly send back cookies it received), you can invoke it like:

    curl --cookie nada --location http://www.example.com

Curl has the ability to read and write cookie files that use the same file format that Netscape and Mozilla once used. It is a convenient way to share cookies between scripts or invokes.
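That Netscape-format cookie file can also be read from a script. A minimal Python sketch using the standard library's `http.cookiejar`, which understands the same format `--cookie-jar` writes; the file contents below are a hand-made example, not from a real transfer:

```python
# Sketch: reading a Netscape-format cookie file (the format --cookie-jar
# writes) from a script, using only Python's standard library. The file
# contents below are a hand-made example, not from a real transfer.
import http.cookiejar
import os
import tempfile

netscape_file = (
    "# Netscape HTTP Cookie File\n"
    ".example.com\tTRUE\t/\tFALSE\t2147483647\tname\tDaniel\n"
)

path = os.path.join(tempfile.mkdtemp(), "cookies.txt")
with open(path, "w") as f:
    f.write(netscape_file)

jar = http.cookiejar.MozillaCookieJar(path)
jar.load()                                    # parses the Netscape format
cookies = {c.name: c.value for c in jar}
print(cookies)
```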
The `--cookie` (`-b`) switch automatically detects if a given file is such a cookie file and parses it, and by using the `--cookie-jar` (`-c`) option you will make curl write a new cookie file at the end of an operation:

    curl --cookie cookies.txt --cookie-jar newcookies.txt \
      http://www.example.com

# HTTPS

## HTTPS is HTTP secure

There are a few ways to do secure HTTP transfers. By far the most common protocol for doing this is what is generally known as HTTPS, HTTP over SSL. SSL encrypts all the data that is sent and received over the network and thus makes it harder for attackers to spy on sensitive information.

SSL (or TLS as the latest version of the standard is called) offers a truckload of advanced features to allow all those encryptions and key infrastructure mechanisms encrypted HTTP requires.

Curl supports encrypted fetches when built to use a TLS library and it can be built to use one out of a fairly large set of libraries - `curl -V` will show which one your curl was built to use (if any!). To get a page from a HTTPS server, simply run curl like:

    curl https://secure.example.com

## Certificates

In the HTTPS world, you use certificates to validate that you are the one you claim to be, as an addition to normal passwords. Curl supports client-side certificates. All certificates are locked with a pass phrase, which you need to enter before the certificate can be used by curl. The pass phrase can be specified on the command line or if not, entered interactively when curl queries for it. Use a certificate with curl on a HTTPS server like:

    curl --cert mycert.pem https://secure.example.com

Curl also tries to verify that the server is who it claims to be, by verifying the server's certificate against a locally stored CA cert bundle. Failing the verification will cause curl to deny the connection. You must then use [`--insecure`](https://curl.se/docs/manpage.html#-k) (`-k`) in case you want to tell curl to ignore that the server cannot be verified.
More about server certificate verification and CA cert bundles can be read in the [SSLCERTS document](https://curl.se/docs/sslcerts.html).

At times you may end up with your own CA cert store and then you can tell curl to use that to verify the server's certificate:

    curl --cacert ca-bundle.pem https://example.com/

# Custom Request Elements

## Modify method and headers

Doing fancy stuff, you may need to add or change elements of a single curl request. For example, you can change the POST request to a PROPFIND and send the data as `Content-Type: text/xml` (instead of the default Content-Type) like this:

    curl --data "<xml>" --header "Content-Type: text/xml" \
      --request PROPFIND example.com

You can delete a default header by providing one without content. Like you can ruin the request by chopping off the Host: header:

    curl --header "Host:" http://www.example.com

You can add headers the same way. Your server may want a `Destination:` header, and you can add it:

    curl --header "Destination: http://nowhere" http://example.com

## More on changed methods

It should be noted that curl selects which methods to use on its own depending on what action to ask for. `-d` will do POST, `-I` will do HEAD and so on. If you use the [`--request`](https://curl.se/docs/manpage.html#-X) / `-X` option you can change the method keyword curl selects, but you will not modify curl's behavior. This means that if you for example use `-d "data"` to do a POST, you can modify the method to a `PROPFIND` with `-X` and curl will still think it sends a POST. You can change the normal GET to a POST method by simply adding `-X POST` in a command line like:

    curl -X POST http://example.org/

...but curl will still think and act as if it sent a GET so it will not send any request body etc.
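The point that the method keyword is just a string on the request line, separate from the rest of the request, can be illustrated with a short Python sketch. The local server here is a hypothetical stand-in that merely records what it receives; `http.client` plays the part of `curl --request PROPFIND`:

```python
# Sketch: the method keyword is just a string on the request line, which
# is what -X replaces. A hypothetical local server records the method it
# receives; http.client plays the part of curl --request PROPFIND.
import http.client
import threading
from http.server import HTTPServer, BaseHTTPRequestHandler

seen = {}

class Recorder(BaseHTTPRequestHandler):
    def do_PROPFIND(self):
        seen["method"] = self.command           # the method from the request line
        length = int(self.headers.get("Content-Length", 0))
        seen["body"] = self.rfile.read(length)  # consume the request body
        self.send_response(207)                 # WebDAV Multi-Status
        self.send_header("Content-Length", "0")
        self.end_headers()
    def log_message(self, *args):               # silence request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Recorder)
threading.Thread(target=server.serve_forever, daemon=True).start()

host, port = server.server_address
conn = http.client.HTTPConnection(host, port)
conn.request("PROPFIND", "/", body=b"<xml>",
             headers={"Content-Type": "text/xml"})
resp = conn.getresponse()
print(seen["method"], resp.status)
server.shutdown()
```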
# Web Login

## Some login tricks

While not strictly just HTTP related, it still causes a lot of people problems so here's the executive run-down of how the vast majority of all login forms work and how to login to them using curl.

It can also be noted that to do this properly in an automated fashion, you will most certainly need to script things and do multiple curl invokes etc.

First, servers mostly use cookies to track the logged-in status of the client, so you will need to capture the cookies you receive in the responses. Then, many sites also set a special cookie on the login page (to make sure you got there through their login page) so you should make a habit of first getting the login-form page to capture the cookies set there.

Some web-based login systems feature various amounts of javascript, and sometimes they use such code to set or modify cookie contents. Possibly they do that to prevent programmed logins, like this manual describes how to... Anyway, if reading the code is not enough to let you repeat the behavior manually, capturing the HTTP requests done by your browser and analyzing the sent cookies is usually a working method to work out how to shortcut the javascript need.

In the actual `<form>` tag for the login, lots of sites fill in random/session or otherwise secretly generated hidden tags and you may need to first capture the HTML code for the login form and extract all the hidden fields to be able to do a proper login POST. Remember that the contents need to be URL encoded when sent in a normal POST.

# Debug

## Some debug tricks

Many times when you run curl on a site, you will notice that the site does not seem to respond the same way to your curl requests as it does to your browser's.
Then you need to start making your curl requests more similar to your browser's requests:

- Use the `--trace-ascii` option to store fully detailed logs of the requests for easier analyzing and better understanding
- Make sure you check for and use cookies when needed (both reading with `--cookie` and writing with `--cookie-jar`)
- Set user-agent (with [`-A`](https://curl.se/docs/manpage.html#-A)) to one like a recent popular browser does
- Set referer (with [`-e`](https://curl.se/docs/manpage.html#-e)) like it is set by the browser
- If you use POST, make sure you send all the fields and in the same order as the browser does it.

## Check what the browsers do

A good helper to make sure you do this right, is the web browsers' developer tools that let you view all headers you send and receive (even when using HTTPS).

A more raw approach is to capture the HTTP traffic on the network with tools such as Wireshark or tcpdump and check what headers were sent and received by the browser. (HTTPS forces you to use `SSLKEYLOGFILE` to do that.)
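As a companion to the Web Login chapter above, here is a small Python sketch that extracts the hidden fields from a saved login page so they can be replayed in a curl POST. The HTML, field names and values below are made up for illustration; a real site's form would differ:

```python
# Sketch for the Web Login chapter: pull the hidden <input> fields out of
# a saved login page so they can be replayed in a curl POST. The HTML,
# field names and values below are made up for illustration.
from html.parser import HTMLParser
from urllib.parse import urlencode

login_page = """
<form method="POST" action="login.cgi">
  <input type="hidden" name="session" value="abc123">
  <input type="hidden" name="csrf" value="t0k3n">
  <input type="text" name="user">
  <input type="password" name="password">
</form>
"""

class HiddenFields(HTMLParser):
    def __init__(self):
        super().__init__()
        self.fields = {}
    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "input" and a.get("type") == "hidden":
            self.fields[a["name"]] = a.get("value", "")

parser = HiddenFields()
parser.feed(login_page)
parser.fields.update(user="me", password="secret")   # fill in the visible fields
data = urlencode(parser.fields)                      # URL encoded, as a POST needs
print("curl --data '%s' [URL]" % data)
```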
SSL Certificate Verification
============================

SSL is TLS
----------

SSL is the old name. It is called TLS these days.

Native SSL
----------

If libcurl was built with Schannel or Secure Transport support (the native SSL libraries included in Windows and Mac OS X), then this does not apply to you. Scroll down for details on how the OS-native engines handle SSL certificates. If you are not sure, then run "curl -V" and read the results. If the version string says `Schannel` in it, then it was built with Schannel support.

It is about trust
-----------------

This system is about trust. In your local CA certificate store you have certs from *trusted* Certificate Authorities that you then can use to verify that the server certificates you see are valid. They are signed by one of the CAs you trust.

Which CAs do you trust? You can decide to trust the same set of companies your operating system trusts, or the set one of the known browsers trust. That is basically trust via someone else you trust. You should just be aware that modern operating systems and browsers are set up to trust *hundreds* of companies, and in recent years several such CAs have been found untrustworthy.

Certificate Verification
------------------------

libcurl performs peer SSL certificate verification by default. This is done by using a CA certificate store that the SSL library can use to make sure the peer's server certificate is valid.

If you communicate with HTTPS, FTPS or other TLS-using servers using certificates that are signed by CAs present in the store, you can be sure that the remote server really is the one it claims to be.

If the remote server uses a self-signed certificate, if you do not install a CA cert store, if the server uses a certificate signed by a CA that is not included in the store you use or if the remote host is an impostor impersonating your favorite site, and you want to transfer files from this server, do one of the following:

1. Tell libcurl to *not* verify the peer.
   With libcurl you disable this with `curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, FALSE);`

   With the curl command line tool, you disable this with `-k`/`--insecure`.

2. Get a CA certificate that can verify the remote server and use the proper option to point out this CA cert for verification when connecting.

   For libcurl hackers: `curl_easy_setopt(curl, CURLOPT_CAINFO, cacert);`

   With the curl command line tool: `--cacert [file]`

3. Add the CA cert for your server to the existing default CA certificate store. The default CA certificate store can be changed at compile time with the following configure options:

   - `--with-ca-bundle=FILE`: use the specified file as the CA certificate store. CA certificates need to be concatenated in PEM format into this file.
   - `--with-ca-path=PATH`: use the specified path as the CA certificate store. CA certificates need to be stored as individual PEM files in this directory. You may need to run c_rehash after adding files there.

   If neither of the two options is specified, configure will try to auto-detect a setting. It is also possible to explicitly not hardcode any default store but rely on the built-in default the crypto library may provide instead. You can achieve that by passing both `--without-ca-bundle` and `--without-ca-path` to the configure script.

   If you use Internet Explorer, this is one way to extract the CA cert for a particular server:

   - View the certificate by double-clicking the padlock
   - Find out where the CA certificate is kept (Certificate> Authority Information Access>URL)
   - Get a copy of the crt file using curl
   - Convert it from crt to PEM using the openssl tool:

         openssl x509 -inform DER -in yourdownloaded.crt \
           -out outcert.pem -text

   - Add the 'outcert.pem' to the CA certificate store or use it stand-alone as described below.
   If you use the 'openssl' tool, this is one way to extract the CA cert for a particular server:

   - `openssl s_client -showcerts -servername server -connect server:443 > cacert.pem`
   - Type "quit", followed by the "ENTER" key
   - The certificate will have "BEGIN CERTIFICATE" and "END CERTIFICATE" markers.
   - If you want to see the data in the certificate, you can do: `openssl x509 -inform PEM -in certfile -text -out certdata` where certfile is the cert you extracted from the output above. Look in certdata.
   - If you want to trust the certificate, you can add it to your CA certificate store or use it stand-alone as described. Just remember that the security is no better than the way you obtained the certificate.

4. If you are using the curl command line tool, you can specify your own CA cert file by setting the environment variable `CURL_CA_BUNDLE` to the path of your choice.

   If you are using the curl command line tool on Windows, curl will search for a CA cert file named "curl-ca-bundle.crt" in these directories and in this order:

   1. application's directory
   2. current working directory
   3. Windows System directory (e.g. C:\windows\system32)
   4. Windows Directory (e.g. C:\windows)
   5. all directories along %PATH%

5. Get a better/different/newer CA cert bundle! One option is to extract the one a recent Firefox browser uses by running 'make ca-bundle' in the curl build tree root, or possibly download a version that was generated this way for you: [CA Extract](https://curl.se/docs/caextract.html)

Neglecting to use one of the above methods when dealing with a server using a certificate that is not signed by one of the certificates in the installed CA certificate store, will cause SSL to report an error ("certificate verify failed") during the handshake and SSL will then refuse further communication with that server.
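The crt-to-PEM conversion mentioned in step 3 can also be done from a script; Python's `ssl` module has a helper that does exactly this wrapping. A sketch, where the input bytes are a placeholder rather than a real certificate (the helper simply base64-wraps whatever DER bytes it is given):

```python
# Sketch: the crt-to-PEM wrapping from step 3 done in a script. Python's
# ssl module base64-wraps the DER bytes with the CERTIFICATE markers.
# The bytes below are a placeholder, not a real certificate.
import ssl

der_bytes = b"\x30\x82\x01\x0aplaceholder-der-data"
pem = ssl.DER_cert_to_PEM_cert(der_bytes)
print(pem)
```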
Certificate Verification with NSS
---------------------------------

If libcurl was built with NSS support, then depending on the OS
distribution, it is probably required to take some additional steps to use
the system-wide CA cert db. RedHat ships with an additional module,
libnsspem.so, which enables NSS to read the OpenSSL PEM CA bundle. On
openSUSE you can install p11-kit-nss-trust which makes NSS use the system
wide CA certificate store. NSS also has a new
[database format](https://wiki.mozilla.org/NSS_Shared_DB).

Starting with version 7.19.7, libcurl automatically adds the 'sql:' prefix
to the certdb directory (either the hardcoded default /etc/pki/nssdb or the
directory configured with the SSL_DIR environment variable). To check which
certdb format your distribution provides, examine the default certdb
location /etc/pki/nssdb; the new certdb format can be identified by the
filenames cert9.db, key4.db and pkcs11.txt, while older versions use
cert8.db, key3.db and secmod.db.

Certificate Verification with Schannel and Secure Transport
-----------------------------------------------------------

If libcurl was built with Schannel (Microsoft's native TLS engine) or Secure
Transport (Apple's native TLS engine) support, then libcurl will still
perform peer certificate verification, but instead of using a CA cert
bundle, it will use the certificates that are built into the OS. These are
the same certificates that appear in the Internet Options control panel
(under Windows) or the Keychain Access application (under OS X). Any custom
security rules for certificates will be honored.

Schannel will run CRL checks on certificates unless peer verification is
disabled. Secure Transport on iOS will run OCSP checks on certificates
unless peer verification is disabled. Secure Transport on OS X will run
either OCSP or CRL checks on certificates if those features are enabled,
and this behavior can be adjusted in the preferences of Keychain Access.
HTTPS proxy
-----------

Since version 7.52.0, curl can do HTTPS to the proxy separately from the
connection to the server. This TLS connection is handled separately from the
server connection, so instead of `--insecure` and `--cacert` to control the
certificate verification, you use `--proxy-insecure` and `--proxy-cacert`.
With these options, you make sure that the TLS connection and the trust of
the proxy can be kept totally separate from the TLS connection to the
server.
# Code defines to disable features and protocols

## CURL_DISABLE_ALTSVC

Disable support for Alt-Svc: HTTP headers.

## CURL_DISABLE_COOKIES

Disable support for HTTP cookies.

## CURL_DISABLE_CRYPTO_AUTH

Disable support for authentication methods using crypto.

## CURL_DISABLE_DICT

Disable the DICT protocol.

## CURL_DISABLE_DOH

Disable DNS-over-HTTPS.

## CURL_DISABLE_FILE

Disable the FILE protocol.

## CURL_DISABLE_FTP

Disable the FTP (and FTPS) protocol.

## CURL_DISABLE_GETOPTIONS

Disable the `curl_easy_options` API calls that let users get information
about existing options to `curl_easy_setopt`.

## CURL_DISABLE_GOPHER

Disable the GOPHER protocol.

## CURL_DISABLE_HSTS

Disable the HTTP Strict Transport Security support.

## CURL_DISABLE_HTTP

Disable the HTTP(S) protocols. Note that this then also disables HTTP proxy
support.

## CURL_DISABLE_HTTP_AUTH

Disable support for all HTTP authentication methods.

## CURL_DISABLE_IMAP

Disable the IMAP(S) protocols.

## CURL_DISABLE_LDAP

Disable the LDAP(S) protocols.

## CURL_DISABLE_LDAPS

Disable the LDAPS protocol.

## CURL_DISABLE_LIBCURL_OPTION

Disable the --libcurl option from the curl tool.

## CURL_DISABLE_MIME

Disable MIME support.

## CURL_DISABLE_MQTT

Disable MQTT support.

## CURL_DISABLE_NETRC

Disable the netrc parser.

## CURL_DISABLE_NTLM

Disable support for NTLM.

## CURL_DISABLE_OPENSSL_AUTO_LOAD_CONFIG

Disable the auto load config support in the OpenSSL backend.

## CURL_DISABLE_PARSEDATE

Disable date parsing.

## CURL_DISABLE_POP3

Disable the POP3 protocol.

## CURL_DISABLE_PROGRESS_METER

Disable the built-in progress meter.

## CURL_DISABLE_PROXY

Disable support for proxies.

## CURL_DISABLE_RTSP

Disable the RTSP protocol.

## CURL_DISABLE_SHUFFLE_DNS

Disable the shuffle DNS feature.

## CURL_DISABLE_SMB

Disable the SMB(S) protocols.

## CURL_DISABLE_SMTP

Disable the SMTP(S) protocols.

## CURL_DISABLE_SOCKETPAIR

Disable the use of socketpair internally to allow waking up and canceling
curl_multi_poll().

## CURL_DISABLE_TELNET

Disable the TELNET protocol.

## CURL_DISABLE_TFTP

Disable the TFTP protocol.

## CURL_DISABLE_VERBOSE_STRINGS

Disable verbose strings and error messages.
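These symbols are compile-time defines, so they take effect when building curl, not at runtime. As a hedged sketch (the exact invocation depends on your build setup; a configure-based build is assumed here), one way to set them is via `CPPFLAGS`:

```shell
# Illustrative build-time sketch: produce a curl/libcurl build without
# FTP and TELNET support by defining the corresponding symbols.
./configure CPPFLAGS="-DCURL_DISABLE_FTP -DCURL_DISABLE_TELNET"
make
```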
# Features -- what curl can do

## curl tool

 - config file support
 - multiple URLs in a single command line
 - range "globbing" support: [0-13], {one,two,three}
 - multiple file upload on a single command line
 - custom maximum transfer rate
 - redirectable stderr
 - parallel transfers

## libcurl

 - full URL syntax with no length limit
 - custom maximum download time
 - custom least download speed acceptable
 - custom output result after completion
 - guesses protocol from host name unless specified
 - uses .netrc
 - progress bar with time statistics while downloading
 - "standard" proxy environment variables support
 - compiles on win32 (reported builds on 70+ operating systems)
 - selectable network interface for outgoing traffic
 - IPv6 support on unix and Windows
 - happy eyeballs dual-stack connects
 - persistent connections
 - SOCKS 4 + 5 support, with or without local name resolving
 - supports user name and password in proxy environment variables
 - operations through HTTP proxy "tunnel" (using CONNECT)
 - replaceable memory functions (malloc, free, realloc, etc)
 - asynchronous name resolving (6)
 - both a push and a pull style interface
 - international domain names (11)

## HTTP

 - HTTP/0.9 responses are optionally accepted
 - HTTP/1.0
 - HTTP/1.1
 - HTTP/2, including multiplexing and server push (5)
 - GET
 - PUT
 - HEAD
 - POST
 - multipart formpost (RFC1867-style)
 - authentication: Basic, Digest, NTLM (9) and Negotiate (SPNEGO) (3) to
   server and proxy
 - resume (both GET and PUT)
 - follow redirects
 - maximum amount of redirects to follow
 - custom HTTP request
 - cookie get/send fully parsed
 - reads/writes the netscape cookie file format
 - custom headers (replace/remove internally generated headers)
 - custom user-agent string
 - custom referrer string
 - range
 - proxy authentication
 - time conditions
 - via HTTP proxy, HTTPS proxy or SOCKS proxy
 - retrieve file modification date
 - Content-Encoding support for deflate and gzip
 - "Transfer-Encoding: chunked" support in uploads
 - automatic data compression (12)

## HTTPS (1)

 - (all the HTTP features)
 - HTTP/3 experimental support
 - using client certificates
 - verify server certificate
 - via HTTP proxy, HTTPS proxy or SOCKS proxy
 - select desired encryption
 - select usage of a specific SSL version

## FTP

 - download
 - authentication
 - Kerberos 5 (13)
 - active/passive using PORT, EPRT, PASV or EPSV
 - single file size information (compare to HTTP HEAD)
 - 'type=' URL support
 - dir listing
 - dir listing names-only
 - upload
 - upload append
 - upload via http-proxy as HTTP PUT
 - download resume
 - upload resume
 - custom ftp commands (before and/or after the transfer)
 - simple "range" support
 - via HTTP proxy, HTTPS proxy or SOCKS proxy
 - all operations can be tunneled through proxy
 - customizable to retrieve file modification date
 - no dir depth limit

## FTPS (1)

 - implicit `ftps://` support that uses SSL on both connections
 - explicit "AUTH TLS" and "AUTH SSL" usage to "upgrade" a plain `ftp://`
   connection to use SSL for both or one of the connections

## SCP (8)

 - both password and public key auth

## SFTP (7)

 - both password and public key auth
 - with custom commands sent before/after the transfer

## TFTP

 - download
 - upload

## TELNET

 - connection negotiation
 - custom telnet options
 - stdin/stdout I/O

## LDAP (2)

 - full LDAP URL support

## DICT

 - extended DICT URL support

## FILE

 - URL support
 - upload
 - resume

## SMB

 - SMBv1 over TCP and SSL
 - download
 - upload
 - authentication with NTLMv1

## SMTP

 - authentication: Plain, Login, CRAM-MD5, Digest-MD5, NTLM (9), Kerberos 5
   (4) and External.
 - send e-mails
 - mail from support
 - mail size support
 - mail auth support for trusted server-to-server relaying
 - multiple recipients
 - via http-proxy

## SMTPS (1)

 - implicit `smtps://` support
 - explicit "STARTTLS" usage to "upgrade" plain `smtp://` connections to
   use SSL
 - via http-proxy

## POP3

 - authentication: Clear Text, APOP and SASL
 - SASL based authentication: Plain, Login, CRAM-MD5, Digest-MD5, NTLM (9),
   Kerberos 5 (4) and External.
 - list e-mails
 - retrieve e-mails
 - enhanced command support for: CAPA, DELE, TOP, STAT, UIDL and NOOP via
   custom requests
 - via http-proxy

## POP3S (1)

 - implicit `pop3s://` support
 - explicit "STLS" usage to "upgrade" plain `pop3://` connections to use SSL
 - via http-proxy

## IMAP

 - authentication: Clear Text and SASL
 - SASL based authentication: Plain, Login, CRAM-MD5, Digest-MD5, NTLM (9),
   Kerberos 5 (4) and External.
 - list the folders of a mailbox
 - select a mailbox with support for verifying the UIDVALIDITY
 - fetch e-mails with support for specifying the UID and SECTION
 - upload e-mails via the append command
 - enhanced command support for: EXAMINE, CREATE, DELETE, RENAME, STATUS,
   STORE, COPY and UID via custom requests
 - via http-proxy

## IMAPS (1)

 - implicit `imaps://` support
 - explicit "STARTTLS" usage to "upgrade" plain `imap://` connections to
   use SSL
 - via http-proxy

## MQTT

 - Subscribe to and publish topics using url scheme `mqtt://broker/topic`

## Footnotes

 1. requires a TLS library
 2. requires OpenLDAP or WinLDAP
 3. requires a GSS-API implementation (such as Heimdal or MIT Kerberos) or
    SSPI (native Windows)
 4. requires a GSS-API implementation, however, only Windows SSPI is
    currently supported
 5. requires nghttp2
 6. requires c-ares
 7. requires libssh2, libssh or wolfSSH
 8. requires libssh2 or libssh
 9. requires OpenSSL, GnuTLS, mbedTLS, NSS, yassl, Secure Transport or SSPI
    (native Windows)
 10. -
 11. requires libidn2 or Windows
 12. requires libz, brotli and/or zstd
 13. requires a GSS-API implementation (such as Heimdal or MIT Kerberos)
# curl tutorial

## Simple Usage

Get the main page from a web-server:

    curl https://www.example.com/

Get the README file from the user's home directory at funet's ftp-server:

    curl ftp://ftp.funet.fi/README

Get a web page from a server using port 8000:

    curl http://www.weirdserver.com:8000/

Get a directory listing of an FTP site:

    curl ftp://ftp.funet.fi

Get the definition of curl from a dictionary:

    curl dict://dict.org/m:curl

Fetch two documents at once:

    curl ftp://ftp.funet.fi/ http://www.weirdserver.com:8000/

Get a file off an FTPS server:

    curl ftps://files.are.secure.com/secrets.txt

or use the more appropriate FTPS way to get the same file:

    curl --ftp-ssl ftp://files.are.secure.com/secrets.txt

Get a file from an SSH server using SFTP:

    curl -u username sftp://example.com/etc/issue

Get a file from an SSH server using SCP using a private key (not
password-protected) to authenticate:

    curl -u username: --key ~/.ssh/id_rsa scp://example.com/~/file.txt

Get a file from an SSH server using SCP using a private key
(password-protected) to authenticate:

    curl -u username: --key ~/.ssh/id_rsa --pass private_key_password
         scp://example.com/~/file.txt

Get the main page from an IPv6 web server:

    curl "http://[2001:1890:1112:1::20]/"

Get a file from an SMB server:

    curl -u "domain\username:passwd" smb://server.example.com/share/file.txt

## Download to a File

Get a web page and store in a local file with a specific name:

    curl -o thatpage.html http://www.example.com/

Get a web page and store in a local file, make the local file get the name
of the remote document (if no file name part is specified in the URL, this
will fail):

    curl -O http://www.example.com/index.html

Fetch two files and store them with their remote names:

    curl -O www.haxx.se/index.html -O curl.se/download.html

## Using Passwords

### FTP

To ftp files using name+passwd, include them in the URL like:

    curl ftp://name:[email protected]:port/full/path/to/file

or specify them with the -u flag like

    curl -u name:passwd ftp://machine.domain:port/full/path/to/file

### FTPS

It is just like for FTP, but you may also want to specify and use
SSL-specific options for certificates etc.

Note that using `FTPS://` as prefix is the "implicit" way as described in
the standards while the recommended "explicit" way is done by using
`FTP://` and the `--ftp-ssl` option.

### SFTP / SCP

This is similar to FTP, but you can use the `--key` option to specify a
private key to use instead of a password. Note that the private key may
itself be protected by a password that is unrelated to the login password
of the remote system; this password is specified using the `--pass` option.
Typically, curl will automatically extract the public key from the private
key file, but in cases where curl does not have the proper library support,
a matching public key file must be specified using the `--pubkey` option.

### HTTP

Curl also supports user and password in HTTP URLs, thus you can pick a file
like:

    curl http://name:[email protected]/full/path/to/file

or specify user and password separately like in

    curl -u name:passwd http://machine.domain/full/path/to/file

HTTP offers many different methods of authentication and curl supports
several: Basic, Digest, NTLM and Negotiate (SPNEGO). Without telling which
method to use, curl defaults to Basic. You can also ask curl to pick the
most secure ones out of the ones that the server accepts for the given URL,
by using `--anyauth`.

**Note**! According to the URL specification, HTTP URLs can not contain a
user and password, so that style will not work when using curl via a proxy,
even though curl allows it at other times. When using a proxy, you _must_
use the `-u` style for user and password.

### HTTPS

Probably most commonly used with private certificates, as explained below.

## Proxy

curl supports both HTTP and SOCKS proxy servers, with optional
authentication. It does not have special support for FTP proxy servers
since there are no standards for those, but it can still be made to work
with many of them. You can also use both HTTP and SOCKS proxies to transfer
files to and from FTP servers.

Get an ftp file using an HTTP proxy named my-proxy that uses port 888:

    curl -x my-proxy:888 ftp://ftp.leachsite.com/README

Get a file from an HTTP server that requires user and password, using the
same proxy as above:

    curl -u user:passwd -x my-proxy:888 http://www.get.this/

Some proxies require special authentication. Specify by using -U as above:

    curl -U user:passwd -x my-proxy:888 http://www.get.this/

A comma-separated list of hosts and domains which do not use the proxy can
be specified as:

    curl --noproxy localhost,get.this -x my-proxy:888 http://www.get.this/

If the proxy is specified with `--proxy1.0` instead of `--proxy` or `-x`,
then curl will use HTTP/1.0 instead of HTTP/1.1 for any `CONNECT` attempts.

curl also supports SOCKS4 and SOCKS5 proxies with `--socks4` and `--socks5`.

See also the environment variables Curl supports that offer further proxy
control.

Most FTP proxy servers are set up to appear as a normal FTP server from the
client's perspective, with special commands to select the remote FTP
server. curl supports the `-u`, `-Q` and `--ftp-account` options that can
be used to set up transfers through many FTP proxies. For example, a file
can be uploaded to a remote FTP server using a Blue Coat FTP proxy with the
options:

    curl -u "[email protected] Proxy-Username:Remote-Pass"
         --ftp-account Proxy-Password --upload-file local-file
         ftp://my-ftp.proxy.server:21/remote/upload/path/

See the manual for your FTP proxy to determine the form it expects to set
up transfers, and curl's `-v` option to see exactly what curl is sending.

## Ranges

HTTP 1.1 introduced byte-ranges. Using this, a client can request to get
only one or more subparts of a specified document. Curl supports this with
the `-r` flag.

Get the first 100 bytes of a document:

    curl -r 0-99 http://www.get.this/

Get the last 500 bytes of a document:

    curl -r -500 http://www.get.this/

Curl also supports simple ranges for FTP files as well. Then you can only
specify start and stop position.

Get the first 100 bytes of a document using FTP:

    curl -r 0-99 ftp://www.get.this/README

## Uploading

### FTP / FTPS / SFTP / SCP

Upload all data on stdin to a specified server:

    curl -T - ftp://ftp.upload.com/myfile

Upload data from a specified file, login with user and password:

    curl -T uploadfile -u user:passwd ftp://ftp.upload.com/myfile

Upload a local file to the remote site, and use the local file name at the
remote site too:

    curl -T uploadfile -u user:passwd ftp://ftp.upload.com/

Upload a local file to get appended to the remote file:

    curl -T localfile -a ftp://ftp.upload.com/remotefile

Curl also supports ftp upload through a proxy, but only if the proxy is
configured to allow that kind of tunneling. If it does, you can run curl in
a fashion similar to:

    curl --proxytunnel -x proxy:port -T localfile ftp.upload.com

### SMB / SMBS

    curl -T file.txt -u "domain\username:passwd"
         smb://server.example.com/share/

### HTTP

Upload all data on stdin to a specified HTTP site:

    curl -T - http://www.upload.com/myfile

Note that the HTTP server must have been configured to accept PUT before
this can be done successfully.

For other ways to do HTTP data upload, see the POST section below.

## Verbose / Debug

If curl fails where it is not supposed to, if the servers do not let you
in, if you cannot understand the responses: use the `-v` flag to get
verbose fetching. Curl will output lots of info and what it sends and
receives in order to let the user see all client-server interaction (but it
will not show you the actual data).

    curl -v ftp://ftp.upload.com/

To get even more details and information on what curl does, try using the
`--trace` or `--trace-ascii` options with a given file name to log to, like
this:

    curl --trace trace.txt www.haxx.se

## Detailed Information

Different protocols provide different ways of getting detailed information
about specific files/documents. To get curl to show detailed information
about a single file, you should use the `-I`/`--head` option. It displays
all available info on a single file for HTTP and FTP. The HTTP information
is a lot more extensive.

For HTTP, you can get the header information (the same as `-I` would show)
shown before the data by using `-i`/`--include`. Curl understands the
`-D`/`--dump-header` option when getting files from both FTP and HTTP, and
it will then store the headers in the specified file.

Store the HTTP headers in a separate file (headers.txt in the example):

    curl --dump-header headers.txt curl.se

Note that headers stored in a separate file can be useful at a later time
if you want curl to use cookies sent by the server. More about that in the
cookies section.

## POST (HTTP)

It's easy to post data using curl. This is done using the `-d <data>`
option. The post data must be urlencoded.

Post a simple "name" and "phone" guestbook.

    curl -d "name=Rafael%20Sagula&phone=3320780" http://www.where.com/guest.cgi

How to post a form with curl, lesson #1:

Dig out all the `<input>` tags in the form that you want to fill in.

If there's a "normal" post, you use `-d` to post. `-d` takes a full "post
string", which is in the format

    <variable1>=<data1>&<variable2>=<data2>&...

The 'variable' names are the names set with `"name="` in the `<input>`
tags, and the data is the contents you want to fill in for the inputs. The
data *must* be properly URL encoded. That means you replace space with +
and that you replace weird letters with %XX where XX is the hexadecimal
representation of the letter's ASCII code.
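That encoding rule can be sketched as a small shell function (the helper name `urlencode` is mine, not part of curl; this sketch always emits `%20` for a space, though `+` works too in form posts, and modern curl can do the encoding for you with `--data-urlencode`):

```shell
# Sketch of the URL-encoding rule described above: unreserved characters
# pass through, anything else becomes %XX where XX is the hex value of
# the character's ASCII code.
urlencode() {
  s=$1; out=
  while [ -n "$s" ]; do
    rest=${s#?}; c=${s%"$rest"}; s=$rest     # peel off the first character
    case $c in
      [A-Za-z0-9._~-]) out=$out$c ;;         # unreserved: keep as-is
      *) out=$out$(printf '%%%02X' "'$c") ;; # everything else: %XX
    esac
  done
  printf '%s\n' "$out"
}

# "Rafael Sagula" becomes "Rafael%20Sagula", matching the guestbook example:
urlencode 'Rafael Sagula'
```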
Example: (page located at `http://www.formpost.com/getthis/`)

```html
<form action="post.cgi" method="post">
  <input name=user size=10>
  <input name=pass type=password size=10>
  <input name=id type=hidden value="blablabla">
  <input name=ding value="submit">
</form>
```

We want to enter user 'foobar' with password '12345'.

To post to this, you enter a curl command line like:

    curl -d "user=foobar&pass=12345&id=blablabla&ding=submit"
         http://www.formpost.com/getthis/post.cgi

While `-d` uses the application/x-www-form-urlencoded mime-type, generally
understood by CGI's and similar, curl also supports the more capable
multipart/form-data type. This latter type supports things like file
upload.

`-F` accepts parameters like `-F "name=contents"`. If you want the contents
to be read from a file, use `@filename` as contents. When specifying a
file, you can also specify the file content type by appending
`;type=<mime type>` to the file name. You can also post the contents of
several files in one field. For example, the field name 'coolfiles' is used
to send three files, with different content types using the following
syntax:

    curl -F "[email protected];type=image/gif,fil2.txt,fil3.html"
         http://www.post.com/postit.cgi

If the content-type is not specified, curl will try to guess from the file
extension (it only knows a few), or use the previously specified type (from
an earlier file if several files are specified in a list) or else it will
use the default type 'application/octet-stream'.

Emulate a fill-in form with `-F`. Let's say you fill in three fields in a
form. One field is a file name to post, one field is your name and one
field is a file description. We want to post the file we have written named
"cooltext.txt". To let curl do the posting of this data instead of your
favourite browser, you have to read the HTML source of the form page and
find the names of the input fields. In our example, the input field names
are 'file', 'yourname' and 'filedescription'.

    curl -F "[email protected]" -F "yourname=Daniel"
         -F "filedescription=Cool text file with cool text inside"
         http://www.post.com/postit.cgi

To send two files in one post you can do it in two ways:

Send multiple files in a single "field" with a single field name:

    curl -F "[email protected],cat.gif" $URL

Send two fields with two field names:

    curl -F "[email protected]" -F "[email protected]" $URL

To send a field value literally without interpreting a leading `@` or `<`,
or an embedded `;type=`, use `--form-string` instead of `-F`. This is
recommended when the value is obtained from a user or some other
unpredictable source. Under these circumstances, using `-F` instead of
`--form-string` could allow a user to trick curl into uploading a file.

## Referrer

An HTTP request has the option to include information about which address
referred it to the actual page. Curl allows you to specify the referrer to
be used on the command line. It is especially useful to fool or trick
stupid servers or CGI scripts that rely on that information being available
or contain certain data.

    curl -e www.coolsite.com http://www.showme.com/

## User Agent

An HTTP request has the option to include information about the browser
that generated the request. Curl allows it to be specified on the command
line. It is especially useful to fool or trick stupid servers or CGI
scripts that only accept certain browsers.

Example:

    curl -A 'Mozilla/3.0 (Win95; I)' http://www.nationsbank.com/

Other common strings:

 - `Mozilla/3.0 (Win95; I)` - Netscape Version 3 for Windows 95
 - `Mozilla/3.04 (Win95; U)` - Netscape Version 3 for Windows 95
 - `Mozilla/2.02 (OS/2; U)` - Netscape Version 2 for OS/2
 - `Mozilla/4.04 [en] (X11; U; AIX 4.2; Nav)` - Netscape for AIX
 - `Mozilla/4.05 [en] (X11; U; Linux 2.0.32 i586)` - Netscape for Linux

Note that Internet Explorer tries hard to be compatible in every way:

 - `Mozilla/4.0 (compatible; MSIE 4.01; Windows 95)` - MSIE for W95

Mozilla is not the only possible User-Agent name:

 - `Konqueror/1.0` - KDE File Manager desktop client
 - `Lynx/2.7.1 libwww-FM/2.14` - Lynx command line browser

## Cookies

Cookies are generally used by web servers to keep state information at the
client's side. The server sets cookies by sending a response line in the
headers that looks like `Set-Cookie: <data>` where the data part then
typically contains a set of `NAME=VALUE` pairs (separated by semicolons `;`
like `NAME1=VALUE1; NAME2=VALUE2;`). The server can also specify for what
path the "cookie" should be used for (by specifying `path=value`), when the
cookie should expire (`expire=DATE`), for what domain to use it
(`domain=NAME`) and if it should be used on secure connections only
(`secure`).

If you have received a page from a server that contains a header like:

```http
Set-Cookie: sessionid=boo123; path="/foo";
```

it means the server wants that first pair passed on when we get anything in
a path beginning with "/foo".

Example, get a page that wants my name passed in a cookie:

    curl -b "name=Daniel" www.sillypage.com

Curl also has the ability to use previously received cookies in following
sessions. If you get cookies from a server and store them in a file in a
manner similar to:

    curl --dump-header headers www.example.com

... you can then in a second connect to that (or another) site, use the
cookies from the 'headers' file like:

    curl -b headers www.example.com

While saving headers to a file is a working way to store cookies, it is
however error-prone and not the preferred way to do this. Instead, make
curl save the incoming cookies using the well-known netscape cookie format
like this:

    curl -c cookies.txt www.example.com

Note that by specifying `-b` you enable the "cookie awareness" and with
`-L` you can make curl follow a location: (which often is used in
combination with cookies). So that if a site sends cookies and a location,
you can use a non-existing file to trigger the cookie awareness like:

    curl -L -b empty.txt www.example.com

The file to read cookies from must be formatted using plain HTTP headers OR
as netscape's cookie file. Curl will determine what kind it is based on the
file contents. In the above command, curl will parse the header and store
the cookies received from www.example.com. curl will send to the server the
stored cookies which match the request as it follows the location. The file
"empty.txt" may be a nonexistent file.

To read and write cookies from a netscape cookie file, you can set both
`-b` and `-c` to use the same file:

    curl -b cookies.txt -c cookies.txt www.example.com

## Progress Meter

The progress meter exists to show a user that something actually is
happening. The different fields in the output have the following meaning:

    % Total    % Received % Xferd  Average Speed          Time             Curr.
                                   Dload  Upload Total    Current  Left    Speed
    0  151M    0 38608    0     0   9406      0  4:41:43  0:00:04 4:41:39  9287

From left-to-right:

 - % - percentage completed of the whole transfer
 - Total - total size of the whole expected transfer
 - % - percentage completed of the download
 - Received - currently downloaded amount of bytes
 - % - percentage completed of the upload
 - Xferd - currently uploaded amount of bytes
 - Average Speed Dload - the average transfer speed of the download
 - Average Speed Upload - the average transfer speed of the upload
 - Time Total - expected time to complete the operation
 - Time Current - time passed since the invoke
 - Time Left - expected time left to completion
 - Curr.Speed - the average transfer speed the last 5 seconds (the first 5
   seconds of a transfer is based on less time of course.)

The `-#` option will display a totally different progress bar that does not
need much explanation!

## Speed Limit

Curl allows the user to set the transfer speed conditions that must be met
to let the transfer keep going. By using the switch `-y` and `-Y` you can
make curl abort transfers if the transfer speed is below the specified
lowest limit for a specified time.

To have curl abort the download if the speed is slower than 3000 bytes per
second for 1 minute, run:

    curl -Y 3000 -y 60 www.far-away-site.com

This can be used in combination with the overall time limit, so that the
above operation must be completed in whole within 30 minutes:

    curl -m 1800 -Y 3000 -y 60 www.far-away-site.com

Forcing curl not to transfer data faster than a given rate is also
possible, which might be useful if you are using a limited bandwidth
connection and you do not want your transfer to use all of it (sometimes
referred to as "bandwidth throttle").
Make curl transfer data no faster than 10 kilobytes per second:

    curl --limit-rate 10K www.far-away-site.com

or

    curl --limit-rate 10240 www.far-away-site.com

Or prevent curl from uploading data faster than 1 megabyte per second:

    curl -T upload --limit-rate 1M ftp://uploadshereplease.com

When using the `--limit-rate` option, the transfer rate is regulated on a
per-second basis, which will cause the total transfer speed to become lower
than the given number. Sometimes of course substantially lower, if your
transfer stalls during periods.

## Config File

Curl automatically tries to read the `.curlrc` file (or `_curlrc` file on
Microsoft Windows systems) from the user's home dir on startup.

The config file could be made up with normal command line switches, but you
can also specify the long options without the dashes to make it more
readable. You can separate the options and the parameter with spaces, or
with `=` or `:`. Comments can be used within the file. If the first letter
on a line is a `#`-symbol the rest of the line is treated as a comment.

If you want the parameter to contain spaces, you must enclose the entire
parameter within double quotes (`"`). Within those quotes, you specify a
quote as `\"`.

NOTE: You must specify options and their arguments on the same line.

Example, set default time out and proxy in a config file:

    # We want a 30 minute timeout:
    -m 1800
    # ... and we use a proxy for all accesses:
    proxy = proxy.our.domain.com:8080

Whitespaces ARE significant at the end of lines, but all whitespace leading
up to the first characters of each line are ignored.

Prevent curl from reading the default file by using -q as the first command
line parameter, like:

    curl -q www.thatsite.com

Force curl to get and display a local help page in case it is invoked
without URL by making a config file similar to:

    # default url to get
    url = "http://help.with.curl.com/curlhelp.html"

You can specify another config file to be read by using the `-K`/`--config`
flag.
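Config files like the ones above are plain text, so a script can generate one and hand it to curl with `-K`. A sketch (the file name `myconfig` and the option values are illustrative):

```shell
# Write the timeout + proxy example from this section into a config file.
cat > myconfig <<'EOF'
# We want a 30 minute timeout:
-m 1800
# ... and we use a proxy for all accesses:
proxy = proxy.our.domain.com:8080
EOF

# curl -K myconfig http://www.example.com/ would then pick these up.
# Quick sanity check that the file looks as intended:
grep -v '^#' myconfig
```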
If you set config file name to `-` it will read the config from stdin, which can be handy if you want to hide options from being visible in process tables etc: echo "user = user:passwd" | curl -K - http://that.secret.site.com ## Extra Headers When using curl in your own programs, you may end up needing to pass on your own custom headers when getting a web page. You can do this by using the `-H` flag. Example, send the header `X-you-and-me: yes` to the server when getting a page: curl -H "X-you-and-me: yes" www.love.com This can also be useful in case you want curl to send a different text in a header than it normally does. The `-H` header you specify then replaces the header curl would normally send. If you replace an internal header with an empty one, you prevent that header from being sent. To prevent the `Host:` header from being used: curl -H "Host:" www.server.com ## FTP and Path Names Do note that when getting files with a `ftp://` URL, the given path is relative the directory you enter. To get the file `README` from your home directory at your ftp site, do: curl ftp://user:[email protected]/README But if you want the README file from the root directory of that same site, you need to specify the absolute file name: curl ftp://user:[email protected]//README (I.e with an extra slash in front of the file name.) ## SFTP and SCP and Path Names With sftp: and scp: URLs, the path name given is the absolute name on the server. To access a file relative to the remote user's home directory, prefix the file with `/~/` , such as: curl -u $USER sftp://home.example.com/~/.bashrc ## FTP and Firewalls The FTP protocol requires one of the involved parties to open a second connection as soon as data is about to get transferred. There are two ways to do this. The default way for curl is to issue the PASV command which causes the server to open another port and await another connection performed by the client. 
This is good if the client is behind a firewall that does not allow incoming connections.

    curl ftp.download.com

If the server, for example, is behind a firewall that does not allow connections on ports other than 21 (or if it just does not support the `PASV` command), the other way to do it is to use the `PORT` command and instruct the server to connect to the client on the given IP number and port (as parameters to the PORT command).

The `-P` flag to curl supports a few different options. Your machine may have several IP addresses and/or network interfaces and curl allows you to select which of them to use. The default address can also be used:

    curl -P - ftp.download.com

Download with `PORT` but use the IP address of our `le0` interface (this does not work on Windows):

    curl -P le0 ftp.download.com

Download with `PORT` but use 192.168.0.10 as our IP address to use:

    curl -P 192.168.0.10 ftp.download.com

## Network Interface

Get a web page from a server using a specified port for the interface:

    curl --interface eth0:1 http://www.example.com/

or

    curl --interface 192.168.1.10 http://www.example.com/

## HTTPS

Secure HTTP requires a TLS library to be installed and used when curl is built. If that is done, curl is capable of retrieving and posting documents using the HTTPS protocol.

Example:

    curl https://www.secure-site.com

curl is also capable of using client certificates to get/post files from sites that require valid certificates. The only drawback is that the certificate needs to be in PEM format. PEM is a standard and open format to store certificates with, but it is not used by the most commonly used browsers. If you want curl to use the certificates you use with your (favourite) browser, you may need to download/compile a converter that can convert your browser's formatted certificates to PEM formatted ones.
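One common converter is OpenSSL's `pkcs12` tool. A hedged sketch, assuming your browser exported a PKCS#12 bundle (the file names and password below are invented, and the first two commands merely fabricate a stand-in for the browser export so the sketch runs offline):

```shell
# Fabricate a throwaway key + certificate pair and bundle them as PKCS#12,
# standing in for a real browser export (not part of the actual recipe):
openssl req -x509 -newkey rsa:2048 -keyout key.pem -out crt.pem \
  -days 1 -nodes -subj "/CN=demo" 2>/dev/null
openssl pkcs12 -export -in crt.pem -inkey key.pem \
  -out browser.p12 -passout pass:secret

# The conversion itself: PKCS#12 in, PEM (certificate plus key) out:
openssl pkcs12 -in browser.p12 -out mycert.pem -nodes -passin pass:secret

# mycert.pem can now be handed to curl with -E:
# curl -E mycert.pem:secret https://secure.site.com/
```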
Example on how to automatically retrieve a document using a certificate with a personal password:

    curl -E /path/to/cert.pem:password https://secure.site.com/

If you neglect to specify the password on the command line, you will be prompted for the correct password before any data can be received.

Many older HTTPS servers have problems with specific SSL or TLS versions, which newer versions of OpenSSL etc use, so it is sometimes useful to specify what SSL version curl should use. Use -3, -2 or -1 to specify the exact SSL version to use (for SSLv3, SSLv2 or TLSv1 respectively):

    curl -2 https://secure.site.com/

Otherwise, curl will attempt to use a sensible TLS default version.

## Resuming File Transfers

To continue a file transfer where it was previously aborted, curl supports resume on HTTP(S) downloads as well as FTP uploads and downloads.

Continue downloading a document:

    curl -C - -o file ftp://ftp.server.com/path/file

Continue uploading a document:

    curl -C - -T file ftp://ftp.server.com/path/file

Continue downloading a document from a web server:

    curl -C - -o file http://www.server.com/

## Time Conditions

HTTP allows a client to specify a time condition for the document it requests. It is `If-Modified-Since` or `If-Unmodified-Since`. curl allows you to specify them with the `-z`/`--time-cond` flag.

For example, you can easily make a download that only gets performed if the remote file is newer than a local copy. It would be made like:

    curl -z local.html http://remote.server.com/remote.html

Or you can download a file only if the local file is newer than the remote one. Do this by prepending the date string with a `-`, as in:

    curl -z -local.html http://remote.server.com/remote.html

You can specify a "free text" date as condition. Tell curl to only download the file if it was updated since January 12, 2012:

    curl -z "Jan 12 2012" http://remote.server.com/remote.html

Curl will then accept a wide range of date formats.
You can always make the date check the other way around by prepending it with a dash (`-`).

## DICT

For fun try

    curl dict://dict.org/m:curl
    curl dict://dict.org/d:heisenbug:jargon
    curl dict://dict.org/d:daniel:gcide

Aliases for 'm' are 'match' and 'find', and aliases for 'd' are 'define' and 'lookup'. For example,

    curl dict://dict.org/find:curl

Commands that break the URL description of the RFC (but not the DICT protocol) are

    curl dict://dict.org/show:db
    curl dict://dict.org/show:strat

Authentication support is still missing.

## LDAP

If you have installed the OpenLDAP library, curl can take advantage of it and offer `ldap://` support. On Windows, curl will use WinLDAP from the Platform SDK by default.

The default protocol version used by curl is LDAPv3. LDAPv2 is used as a fallback mechanism in case LDAPv3 fails to connect.

LDAP is a complex thing and writing an LDAP query is not an easy task. I do advise you to dig up the syntax description for that elsewhere. One such place might be: [RFC 2255, The LDAP URL Format](https://curl.se/rfc/rfc2255.txt)

To show you an example, this is how I can get all people from my local LDAP server that have a certain sub-domain in their email address:

    curl -B "ldap://ldap.frontec.se/o=frontec??sub?mail=*sth.frontec.se"

If I want the same info in HTML format, I can get it by not using the `-B` (enforce ASCII) flag.

You can also use authentication when accessing an LDAP catalog:

    curl -u user:passwd "ldap://ldap.frontec.se/o=frontec??sub?mail=*"
    curl "ldap://user:passwd@ldap.frontec.se/o=frontec??sub?mail=*"

By default, if a user and password are provided, OpenLDAP/WinLDAP will use basic authentication. On Windows you can control this behavior by providing one of the `--basic`, `--ntlm` or `--digest` options on the curl command line:

    curl --ntlm "ldap://user:passwd@ldap.frontec.se/o=frontec??sub?mail=*"

On Windows, if no user/password is specified, an auto-negotiation mechanism will be used with the current logon credentials (SSPI/SPNEGO).
## Environment Variables

Curl reads and understands the following environment variables:

    http_proxy, HTTPS_PROXY, FTP_PROXY

They should be set for protocol-specific proxies. A general proxy should be set with

    ALL_PROXY

A comma-separated list of host names that should not go through any proxy is set in (only an asterisk, `*` matches all hosts)

    NO_PROXY

If the host name matches one of these strings, or the host is within the domain of one of these strings, transactions with that node will not be proxied. When a domain is used, it needs to start with a period. A user can specify that both www.example.com and foo.example.com should not use a proxy by setting `NO_PROXY` to `.example.com`. By including the full name you can exclude specific host names, so to make `www.example.com` not use a proxy but still have `foo.example.com` do it, set `NO_PROXY` to `www.example.com`.

The usage of the `-x`/`--proxy` flag overrides the environment variables.

## Netrc

Unix introduced the `.netrc` concept a long time ago. It is a way for a user to specify name and password for commonly visited FTP sites in a file so that you do not have to type them in each time you visit those sites. You realize this is a big security risk if someone else gets hold of your passwords, so therefore most unix programs will not read this file unless it is only readable by yourself (curl does not care though).

Curl supports `.netrc` files if told to (using the `-n`/`--netrc` and `--netrc-optional` options). This is not restricted to just FTP, so curl can use it for all protocols where authentication is used.

A simple `.netrc` file could look something like:

    machine curl.se login iamdaniel password mysecret

## Custom Output

To better allow script programmers to get to know about the progress of curl, the `-w`/`--write-out` option was introduced. Using this, you can specify what information from the previous transfer you want to extract.
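A `--write-out` sketch you can run without network access, using a local `file://` URL (the file name is invented):

```shell
# Create a five-byte local file and ask curl how much it downloaded:
printf 'hello' > local.txt
curl -s -o /dev/null -w '%{size_download} bytes\n' "file://$PWD/local.txt"
```

This should print `5 bytes`, since `%{size_download}` is filled in from the completed transfer.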
To display the amount of bytes downloaded together with some text and an ending newline:

    curl -w 'We downloaded %{size_download} bytes\n' www.download.com

## Kerberos FTP Transfer

Curl supports kerberos4 and kerberos5/GSSAPI for FTP transfers. You need the kerberos package installed and used at curl build time for it to be available.

First, get the krb-ticket the normal way, like with the kinit/kauth tool. Then use curl in a way similar to:

    curl --krb private ftp://krb4site.com -u username:fakepwd

There is no use for a password on the `-u` switch, but a blank one will make curl ask for one and you already entered the real password to kinit/kauth.

## TELNET

The curl telnet support is basic and easy to use. Curl passes all data passed to it on stdin to the remote server. Connect to a remote telnet server using a command line similar to:

    curl telnet://remote.server.com

And enter the data to pass to the server on stdin. The result will be sent to stdout or to the file you specify with `-o`.

You might want the `-N`/`--no-buffer` option to switch off the buffered output for slow connections or similar.

Pass options to the telnet protocol negotiation by using the `-t` option. To tell the server we use a vt100 terminal, try something like:

    curl -tTTYPE=vt100 telnet://remote.server.com

Other interesting options for `-t` include:

- `XDISPLOC=<X display>` Sets the X display location.
- `NEW_ENV=<var,val>` Sets an environment variable.

NOTE: The telnet protocol does not specify any way to login with a specified user and password, so curl cannot do that automatically. To do that, you need to track when the login prompt is received and send the username and password accordingly.

## Persistent Connections

Specifying multiple files on a single command line will make curl transfer all of them, one after the other in the specified order.
libcurl will attempt to use persistent connections for the transfers so that the second transfer to the same host can use the same connection that was already initiated and was left open in the previous transfer. This greatly decreases connection time for all but the first transfer and it makes far better use of the network.

Note that curl cannot use persistent connections for transfers that are used in subsequent curl invocations. Try to stuff as many URLs as possible on the same command line if they are using the same host, as that will make the transfers faster. If you use an HTTP proxy for file transfers, practically all transfers will be persistent.

## Multiple Transfers With A Single Command Line

As is mentioned above, you can download multiple files with one command line by simply adding more URLs. If you want those to get saved to a local file instead of just printed to stdout, you need to add one save option for each URL you specify. Note that this also goes for the `-O` option (but not `--remote-name-all`).

For example: get two files and use `-O` for the first and a custom file name for the second:

    curl -O http://url.com/file.txt ftp://ftp.com/moo.exe -o moo.jpg

You can also upload multiple files in a similar fashion:

    curl -T local1 ftp://ftp.com/moo.exe -T local2 ftp://ftp.com/moo2.txt

## IPv6

curl will connect to a server with IPv6 when a host lookup returns an IPv6 address and fall back to IPv4 if the connection fails. The `--ipv4` and `--ipv6` options can specify which address to use when both are available. IPv6 addresses can also be specified directly in URLs using the syntax:

    http://[2001:1890:1112:1::20]/overview.html

When this style is used, the `-g` option must be given to stop curl from interpreting the square brackets as special globbing characters.
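You can see the globbing that `-g` switches off without any network access, using local `file://` URLs (the file names are invented):

```shell
# Two local files that a [1-2] range glob will match:
printf one > f1.txt
printf two > f2.txt

# Without -g, curl expands the [1-2] range itself and fetches both files:
curl -s "file://$PWD/f[1-2].txt"

# With -g, the brackets are taken literally (as needed for IPv6 URLs),
# so this asks for a file literally named "f[1-2].txt" and fails:
curl -gs "file://$PWD/f[1-2].txt" || echo "no such file"
```

The first invocation prints `onetwo`, the concatenated bodies of both matched files.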
Link local and site local addresses including a scope identifier, such as `fe80::1234%1`, may also be used, but the scope portion must be numeric or match an existing network interface on Linux and the percent character must be URL escaped. The previous example in an SFTP URL might look like:

    sftp://[fe80::1234%251]/

IPv6 addresses provided other than in URLs (e.g. to the `--proxy`, `--interface` or `--ftp-port` options) should not be URL encoded.

## Mailing Lists

For your convenience, we have several open mailing lists to discuss curl, its development and things relevant to this. Get all info at https://curl.se/mail/.

Please direct curl questions, feature requests and trouble reports to one of these mailing lists instead of mailing any individual.

Available lists include:

### curl-users

Users of the command line tool. How to use it, what does not work, new features, related tools, questions, news, installations, compilations, running, porting etc.

### curl-library

Developers using or developing libcurl. Bugs, extensions, improvements.

### curl-announce

Low-traffic. Only receives announcements of new public versions. At worst, that makes something like one or two mails per month, but usually only one mail every second month.

### curl-and-php

Using the curl functions in PHP. Everything curl with a PHP angle. Or PHP with a curl angle.

### curl-and-python

Python hackers using curl with or without the python binding pycurl.
repos/gpt4all.zig/src/zig-libcurl/curl/docs/HTTP-COOKIES.md
# HTTP Cookies

## Cookie overview

Cookies are `name=contents` pairs that an HTTP server tells the client to hold and then the client sends back those to the server on subsequent requests to the same domains and paths for which the cookies were set.

Cookies are either "session cookies", which typically are forgotten when the session is over (often translated to equal when the browser quits), or cookies that are not session cookies and have expiration dates, after which the client will throw them away.

Cookies are set to the client with the Set-Cookie: header and are sent to servers with the Cookie: header.

For a long time, the only spec explaining how to use cookies was the original [Netscape spec from 1994](https://curl.se/rfc/cookie_spec.html).

In 2011, [RFC6265](https://www.ietf.org/rfc/rfc6265.txt) was finally published and details how cookies work within HTTP. In 2016, an update which added support for prefixes was [proposed](https://tools.ietf.org/html/draft-ietf-httpbis-cookie-prefixes-00), and in 2017, another update was [drafted](https://tools.ietf.org/html/draft-ietf-httpbis-cookie-alone-01) to deprecate modification of 'secure' cookies from non-secure origins. Both of these drafts have been incorporated into a proposal to [replace](https://tools.ietf.org/html/draft-ietf-httpbis-rfc6265bis-02) RFC6265. Cookie prefixes and secure cookie modification protection have been implemented by curl.

## Cookies saved to disk

Netscape once created a file format for storing cookies on disk so that they would survive browser restarts. curl adopted that file format to allow sharing the cookies with browsers, only to see browsers move away from that format. Modern browsers no longer use it, while curl still does.

The netscape cookie file format stores one cookie per physical line in the file with a bunch of associated meta data, each field separated with TAB. That file is called the cookiejar in curl terminology.
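As a fabricated sample, a single stored cookie occupies one line of seven TAB-separated fields (the separators must be real TAB characters; the values here are invented and the field meanings are detailed later in this document):

```text
# Netscape HTTP Cookie File
.example.com	TRUE	/	FALSE	1893456000	person	daniel
```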
When libcurl saves a cookiejar, it creates a file header of its own in which there is a URL mention that will link to the web version of this document.

## Cookie file format

The cookie file format is text based and stores one cookie per line. Lines that start with `#` are treated as comments. Each line that specifies a single cookie consists of seven text fields separated with TAB characters. A valid line must end with a newline character.

### Fields in the file

Field number, what type and example data and the meaning of it:

0. string `example.com` - the domain name
1. boolean `FALSE` - include subdomains
2. string `/foobar/` - path
3. boolean `TRUE` - send/receive over HTTPS only
4. number `1462299217` - expires at - seconds since Jan 1st 1970, or 0
5. string `person` - name of the cookie
6. string `daniel` - value of the cookie

## Cookies with curl the command line tool

curl has a full cookie "engine" built in. If you just activate it, you can have curl receive and send cookies exactly as mandated in the specs.

Command line options:

`-b, --cookie`

tell curl a file to read cookies from and start the cookie engine, or if it is not a file it will pass on the given string. -b name=var works and so does -b cookiefile.

`-j, --junk-session-cookies`

when used in combination with -b, it will skip all "session cookies" on load so as to appear to start a new cookie session.

`-c, --cookie-jar`

tell curl to start the cookie engine and write cookies to the given file after the request(s)

## Cookies with libcurl

libcurl offers several ways to enable and interface the cookie engine. These options are the ones provided by the native API. libcurl bindings may offer access to them using other means.

`CURLOPT_COOKIE`

Is used when you want to specify the exact contents of a cookie header to send to the server.

`CURLOPT_COOKIEFILE`

Tell libcurl to activate the cookie engine, and to read the initial set of cookies from the given file. Read-only.
`CURLOPT_COOKIEJAR`

Tell libcurl to activate the cookie engine, and when the easy handle is closed save all known cookies to the given cookiejar file. Write-only.

`CURLOPT_COOKIELIST`

Provide detailed information about a single cookie to add to the internal storage of cookies. Pass in the cookie as an HTTP header with all the details set, or pass in a line from a netscape cookie file. This option can also be used to flush the cookies etc.

`CURLINFO_COOKIELIST`

Extract cookie information from the internal cookie storage as a linked list.

## Cookies with javascript

These days a lot of the web is built up by javascript. The web browser loads complete programs that render the page you see. These javascript programs can also set and access cookies.

Since curl and libcurl are plain HTTP clients without any knowledge of or capability to handle javascript, such cookies will not be detected or used.

Often, if you want to mimic what a browser does on such websites, you can record web browser HTTP traffic when using such a site and then repeat the cookie operations using curl or libcurl.
repos/gpt4all.zig/src/zig-libcurl/curl/docs/MQTT.md
# MQTT in curl

## Usage

A plain "GET" subscribes to the topic and prints all published messages. Doing a "POST" publishes the post data to the topic and exits.

Example subscribe:

    curl mqtt://host/home/bedroom/temp

Example publish:

    curl -d 75 mqtt://host/home/bedroom/dimmer

## What does curl deliver as a response to a subscribe

It outputs a two-byte topic length (MSB | LSB), then the topic, followed by the payload.

## Caveats

Remaining limitations:

- Only QoS level 0 is implemented for publish
- No way to set the retain flag for publish
- No TLS (mqtts) support
- Naive EAGAIN handling will not handle split messages
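That framing is simple to decode. A sketch in shell, run over a synthetic message rather than a live broker, since the output is just a length-prefixed topic followed by the payload:

```shell
# Synthesize what curl would emit for topic "a/b" with payload "75":
printf '\000\003%s' 'a/b75' > msg.bin

# The first two bytes are the topic length, most significant byte first:
tlen=$(od -An -tu1 -N2 msg.bin | awk '{print $1*256 + $2}')

# The topic follows, and everything after it is the payload:
topic=$(dd if=msg.bin bs=1 skip=2 count="$tlen" 2>/dev/null)
payload=$(dd if=msg.bin bs=1 skip=$((2 + tlen)) 2>/dev/null)
echo "topic=$topic payload=$payload"
```

This prints `topic=a/b payload=75`.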
repos/gpt4all.zig/src/zig-libcurl/curl/docs/HYPER.md
# Hyper

Hyper is a separate HTTP library written in Rust. curl can be told to use this library as a backend to deal with HTTP.

## Experimental!

Hyper support in curl is considered **EXPERIMENTAL** until further notice. It needs to be explicitly enabled at build-time.

Further development and tweaking of the Hyper backend support in curl will happen in the master branch using pull-requests, just like ordinary changes.

## Hyper version

The C API for Hyper is brand new and is still under development.

## build curl with hyper

Build hyper and enable the C API:

    % git clone https://github.com/hyperium/hyper
    % cd hyper
    % RUSTFLAGS="--cfg hyper_unstable_ffi" cargo build --features client,http1,http2,ffi

Build curl to use hyper's C API:

    % git clone https://github.com/curl/curl
    % cd curl
    % ./buildconf
    % ./configure --with-hyper=<hyper dir>
    % make

## using Hyper internally

Hyper is a low level HTTP transport library. curl itself provides all HTTP headers and Hyper provides all received headers back to curl.

Therefore, most of the "header logic" in curl, as in responding to and acting on specific input and output headers, is done the same way in curl code.

The API in Hyper delivers received HTTP headers as (cleaned up) name=value pairs, making it impossible for curl to know the exact byte representation over the wire with Hyper.

## Limitations

The hyper backend does not support

- `CURLOPT_IGNORE_CONTENT_LENGTH`
- `--raw` and disabling `CURLOPT_HTTP_TRANSFER_DECODING`
- RTSP
- hyper is much stricter about what HTTP header contents it allows in requests
- HTTP/0.9

## Remaining issues

This backend is still not feature complete with the native backend. Areas that still need attention and verification include:

- multiplexed HTTP/2
- h2 Upgrade:
- pausing transfers
- receiving HTTP/1 trailers
- sending HTTP/1 trailers
repos/gpt4all.zig/src/zig-libcurl/curl/docs/ECH.md
# TLS: ECH support in curl and libcurl

## Summary

**ECH** means **Encrypted Client Hello**, a TLS 1.3 extension which is currently the subject of an [IETF Draft][tlsesni]. (ECH was formerly known as ESNI).

This file is intended to show the latest current state of ECH support in **curl** and **libcurl**.

At the end of August 2019, an [experimental fork of curl][niallorcurl], built using an [experimental fork of OpenSSL][sftcdopenssl], which in turn provided an implementation of ECH, was demonstrated interoperating with a server belonging to the [DEfO Project][defoproj].

Further sections here describe

- resources needed for building and demonstrating **curl** support for ECH,
- progress to date,
- TODO items, and
- additional details of specific stages of the progress.

## Resources needed

To build and demonstrate ECH support in **curl** and/or **libcurl**, you will need

- a TLS library, supported by **libcurl**, which implements ECH;
- an edition of **curl** and/or **libcurl** which supports the ECH implementation of the chosen TLS library;
- an environment for building and running **curl**, and at least building **OpenSSL**;
- a server, supporting ECH, against which to run a demonstration and perhaps a specific target URL;
- some instructions.

The following set of resources is currently known to be available.
| Set  | Component    | Location                      | Remarks                                    |
|:-----|:-------------|:------------------------------|:-------------------------------------------|
| DEfO | TLS library  | [sftcd/openssl][sftcdopenssl] | Tag *esni-2019-08-30* avoids bleeding edge |
|      | curl fork    | [niallor/curl][niallorcurl]   | Tag *esni-2019-08-30* likewise             |
|      | instructions | [ESNI-README][niallorreadme]  |                                            |

## Progress

### PR 4011 (Jun 2019) expected in curl release 7.67.0 (Oct 2019)

- Details [below](#pr-4011);
- New configuration option: `--enable-ech`;
- Build-time check for availability of resources needed for ECH support;
- Pre-processor symbol `USE_ECH` for conditional compilation of ECH support code, subject to configuration option and availability of needed resources.

## TODO

- (next PR) Add libcurl options to set ECH parameters.
- (next PR) Add curl tool command line options to set ECH parameters.
- (WIP) Extend DoH functions so that published ECH parameters can be retrieved from DNS instead of being required as options.
- (WIP) Work with OpenSSL community to finalize ECH API.
- Track OpenSSL ECH API in libcurl
- Identify and implement any changes needed for CMake.
- Optimize build-time checking of available resources.
- Encourage ECH support work on other TLS/SSL backends.

## Additional detail

### PR 4011

**TLS: Provide ECH support framework for curl and libcurl**

The proposed change provides a framework to facilitate work to implement ECH support in curl and libcurl. It is not intended either to provide ECH functionality or to favour any particular TLS-providing backend. Specifically, the change reserves a feature bit for ECH support (symbol `CURL_VERSION_ECH`), implements setting and reporting of this bit, includes dummy book-keeping for the symbol, adds a build-time configuration option (`--enable-ech`), provides an extensible check for resources available to provide ECH support, and defines a compiler pre-processor symbol (`USE_ECH`) accordingly.
Proposed-by: @niallor (Niall O'Reilly)\
Encouraged-by: @sftcd (Stephen Farrell)\
See-also: [this message](https://curl.se/mail/lib-2019-05/0108.html)

Limitations:

- Book-keeping (symbols-in-versions) needs a real release number, not 'DUMMY'.
- Framework is incomplete, as it covers autoconf, but not CMake.
- Check for available resources, although extensible, refers only to specific work in progress ([described here](https://github.com/sftcd/openssl/tree/master/esnistuff)) to implement ECH for OpenSSL, as this is the immediate motivation for the proposed change.

## References

Cloudflare blog: [Encrypting SNI: Fixing One of the Core Internet Bugs][corebug]

Cloudflare blog: [Encrypt it or lose it: how encrypted SNI works][esniworks]

IETF Draft: [Encrypted Server Name Indication for TLS 1.3][tlsesni]

---

[tlsesni]: https://datatracker.ietf.org/doc/draft-ietf-tls-esni/
[esniworks]: https://blog.cloudflare.com/encrypted-sni/
[corebug]: https://blog.cloudflare.com/esni/
[defoproj]: https://defo.ie/
[sftcdopenssl]: https://github.com/sftcd/openssl/
[niallorcurl]: https://github.com/niallor/curl/
[niallorreadme]: https://github.com/niallor/curl/blob/master/ESNI-README.md
repos/gpt4all.zig/src/zig-libcurl/curl/docs/CMakeLists.txt
#***************************************************************************
#                                  _   _ ____  _
#  Project                     ___| | | |  _ \| |
#                             / __| | | | |_) | |
#                            | (__| |_| |  _ <| |___
#                             \___|\___/|_| \_\_____|
#
# Copyright (C) 1998 - 2020, Daniel Stenberg, <daniel@haxx.se>, et al.
#
# This software is licensed as described in the file COPYING, which
# you should have received as part of this distribution. The terms
# are also available at https://curl.se/docs/copyright.html.
#
# You may opt to use, copy, modify, merge, publish, distribute and/or sell
# copies of the Software, and permit persons to whom the Software is
# furnished to do so, under the terms of the COPYING file.
#
# This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY
# KIND, either express or implied.
#
###########################################################################
#add_subdirectory(examples)
add_subdirectory(libcurl)
add_subdirectory(cmdline-opts)
repos/gpt4all.zig/src/zig-libcurl/curl/docs/CODE_OF_CONDUCT.md
Contributor Code of Conduct
===========================

As contributors and maintainers of this project, we pledge to respect all people who contribute through reporting issues, posting feature requests, updating documentation, submitting pull requests or patches, and other activities.

We are committed to making participation in this project a harassment-free experience for everyone, regardless of level of experience, gender, gender identity and expression, sexual orientation, disability, personal appearance, body size, race, ethnicity, age, or religion.

Examples of unacceptable behavior by participants include the use of sexual language or imagery, derogatory comments or personal attacks, trolling, public or private harassment, insults, or other unprofessional conduct.

Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct. Project maintainers who do not follow the Code of Conduct may be removed from the project team.

This code of conduct applies both within project spaces and in public spaces when an individual is representing the project or its community.

Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by opening an issue or contacting one or more of the project maintainers.

This Code of Conduct is adapted from the [Contributor Covenant](https://contributor-covenant.org/), version 1.1.0, available at [https://contributor-covenant.org/version/1/1/0/](https://contributor-covenant.org/version/1/1/0/)
repos/gpt4all.zig/src/zig-libcurl/curl/docs/EXPERIMENTAL.md
# Experimental

Some features and functionality in curl and libcurl are considered **EXPERIMENTAL**.

Experimental support in curl means:

1. Experimental features are provided to allow users to try them out and provide feedback on functionality and API etc before they ship and get "carved in stone".
2. You must enable the feature when invoking configure as otherwise curl will not be built with the feature present.
3. We strongly advise against using this feature in production.
4. **We reserve the right to change behavior** of the feature without sticking to our API/ABI rules as we do for regular features, as long as it is marked experimental.
5. Experimental features are clearly marked so in documentation. Beware.

## Experimental features right now

- The Hyper HTTP backend
- HTTP/3 support and options
- CURLSSLOPT_NATIVE_CA (no configure option, feature built in when supported)
repos/gpt4all.zig/src/zig-libcurl/curl/docs/VERSIONS.md
Version Numbers and Releases
============================

Curl is not only curl. Curl is also libcurl. They are actually individually versioned, but they usually follow each other closely.

The version numbering is always built up using the same system:

    X.Y.Z

- X is the main version number
- Y is the release number
- Z is the patch number

## Bumping numbers

One of these numbers will get bumped in each new release. The numbers to the right of a bumped number will be reset to zero.

The main version number will get bumped when *really* big, world colliding changes are made. The release number is bumped when changes are performed or things/features are added. The patch number is bumped when the changes are mere bugfixes.

It means that after release 1.2.3, we can release 2.0.0 if something really big has been made, 1.3.0 if not that big changes were made or 1.2.4 if only bugs were fixed.

Bumping, as in increasing the number by 1, unconditionally only affects one of the numbers (except the ones to the right of it, which may be set to zero). 1 becomes 2, 3 becomes 4, 9 becomes 10, 88 becomes 89 and 99 becomes 100. So, after 1.2.9 comes 1.2.10. After 3.99.3, 3.100.0 might come.

All original curl source release archives are named according to the libcurl version (not according to the curl client version that, as said before, might differ).

As a service to any application that might want to support new libcurl features while still being able to build with older versions, all releases have the libcurl version stored in the curl/curlver.h file using a static numbering scheme that can be used for comparison. The version number is defined as:

```c
#define LIBCURL_VERSION_NUM 0xXXYYZZ
```

Where XX, YY and ZZ are the main version, release and patch numbers in hexadecimal. All three number fields are always represented using two digits (eight bits each). 1.2 would appear as "0x010200" while version 9.11.7 appears as "0x090b07".
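The packing is easy to reproduce with shell arithmetic. A sketch that mirrors the 9.11.7 example above:

```shell
# Pack major.minor.patch into the LIBCURL_VERSION_NUM layout,
# two hex digits per field:
major=9 minor=11 patch=7
printf '0x%02x%02x%02x\n' "$major" "$minor" "$patch"
```

This prints `0x090b07`, matching the value given in the text.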
This 6-digit hexadecimal number is always a greater number in a more recent release. It makes comparisons with greater than and less than work. This number is also available as three separate defines: `LIBCURL_VERSION_MAJOR`, `LIBCURL_VERSION_MINOR` and `LIBCURL_VERSION_PATCH`.
repos/gpt4all.zig/src/zig-libcurl/curl/docs/CHECKSRC.md
# checksrc

This is the tool we use within the curl project to scan C source code and check that it adheres to our [Source Code Style guide](CODE_STYLE.md).

## Usage

    checksrc.pl [options] [file1] [file2] ...

## Command line options

`-W[file]` skips that file and excludes it from being checked. Helpful when, for example, one of the files is generated.

`-D[dir]` directory name to prepend to file names when accessing them.

`-h` shows the help output, which also lists all recognized warnings

## What does checksrc warn for?

checksrc does not check and verify the code against the entire style guide, but the script is instead an effort to detect the most common mistakes and syntax mistakes that contributors make before they get accustomed to our code style. Heck, many of us regulars make the mistakes too and this script helps us keep the code in shape.

    checksrc.pl -h

Lists how to use the script and it lists all existing warnings it has and problems it detects. At the time of this writing, the existing checksrc warnings are:

- `ASSIGNWITHINCONDITION`: Assignment within a conditional expression. The code style mandates the assignment to be done outside of it.
- `ASTERISKNOSPACE`: A pointer was declared like `char* name` instead of the more appropriate `char *name` style. The asterisk should sit next to the name.
- `ASTERISKSPACE`: A pointer was declared like `char * name` instead of the more appropriate `char *name` style. The asterisk should sit right next to the name without a space in between.
- `BADCOMMAND`: There's a bad !checksrc! instruction in the code. See the **Ignore certain warnings** section below for details.
- `BANNEDFUNC`: A banned function was used. The functions sprintf, vsprintf, strcat, strncat, gets are **never** allowed in curl source code.
- `BRACEELSE`: '} else' on the same line. The else is supposed to be on the following line.
- `BRACEPOS`: wrong position for an open brace (`{`).
- `BRACEWHILE`: more than one space between end brace and while keyword

- `COMMANOSPACE`: a comma without following space

- `COPYRIGHT`: the file is missing a copyright statement

- `CPPCOMMENTS`: `//` comment detected, which is not C89 compliant

- `DOBRACE`: only use one space after do before open brace

- `EMPTYLINEBRACE`: found empty line before open brace

- `EQUALSNOSPACE`: no space after `=` sign

- `EQUALSNULL`: comparison with `== NULL` used in if/while. We use `!var`.

- `EXCLAMATIONSPACE`: space found after exclamation mark

- `FOPENMODE`: `fopen()` needs a macro for the mode string, use it

- `INDENTATION`: detected a wrong start column for code. Note that this warning only checks some specific places and will certainly miss many bad indentations.

- `LONGLINE`: A line is longer than 79 columns.

- `MULTISPACE`: Multiple spaces were found where only one should be used.

- `NOSPACEEQUALS`: An equals sign was found without preceding space. We prefer `a = 2` and *not* `a=2`.

- `NOTEQUALSZERO`: check found using `!= 0`. We use plain `if(var)`.

- `ONELINECONDITION`: do not put the conditional block on the same line as `if()`

- `OPENCOMMENT`: File ended with a comment (`/*`) still "open".

- `PARENBRACE`: `){` was used without sufficient space in between.

- `RETURNNOSPACE`: `return` was used without space between the keyword and the following value.

- `SEMINOSPACE`: There was no space (or newline) following a semicolon.

- `SIZEOFNOPAREN`: Found use of sizeof without parentheses. We prefer the `sizeof(int)` style.

- `SNPRINTF`: Found use of `snprintf()`. Since we use an internal replacement with a different return code etc, we prefer `msnprintf()`.

- `SPACEAFTERPAREN`: there was a space after open parenthesis, `( text`.

- `SPACEBEFORECLOSE`: there was a space before a close parenthesis, `text )`.

- `SPACEBEFORECOMMA`: there was a space before a comma, `one , two`.
- `SPACEBEFOREPAREN`: there was a space before an open parenthesis, `if (`, where one was not expected

- `SPACESEMICOLON`: there was a space before a semicolon, ` ;`.

- `TABS`: TAB characters are not allowed

- `TRAILINGSPACE`: trailing whitespace on the line

- `TYPEDEFSTRUCT`: we frown upon (most) typedefed structs

- `UNUSEDIGNORE`: a checksrc inlined warning ignore was asked for but not used; that is an ignore that should be removed or changed to get used.

### Extended warnings

Some warnings are quite computationally expensive to perform, so they are turned off by default. To enable these warnings, place a `.checksrc` file in the directory where they should be activated, with commands to enable the warnings you are interested in. The format of the file is to enable one warning per line like so:

    enable <EXTENDEDWARNING>

Currently there is one extended warning which can be enabled:

- `COPYRIGHTYEAR`: the current changeset has not updated the copyright year in the source file

## Ignore certain warnings

Due to the nature of the source code and the flaws of the checksrc tool, there is sometimes a need to ignore specific warnings. checksrc allows a few different ways to do this.

### Inline ignore

You can control what to ignore within a specific source file by providing instructions to checksrc in the source code itself. You need a magic marker that is `!checksrc!` followed by the instruction. The instruction can ask to ignore a specific warning N number of times, or ignore all of them until you mark the end of the ignored section. Inline ignores are only done for that single specific source code file.

Example:

    /* !checksrc! disable LONGLINE all */

This will ignore the warning for overly long lines until it is re-enabled with:

    /* !checksrc! enable LONGLINE */

If the enabling is not performed before the end of the file, the warning is enabled again automatically for the next file.
You can also opt to ignore just N violations, so that if you have a single long line that you just cannot shorten and that is agreed to be fine anyway:

    /* !checksrc! disable LONGLINE 1 */

... and the warning for long lines will be enabled again automatically after it has ignored that single warning. The number `1` can of course be changed to any other integer number. It can be used to make sure only the exact intended instances are ignored and nothing extra.

### Directory wide ignore patterns

This is a method we have transitioned away from. Use inline ignores as far as possible.

Make a `checksrc.skip` file in the directory of the source code with the false positive, and include the full offending line in this file.
repos/gpt4all.zig/src/zig-libcurl/curl/docs/HELP-US.md
# How to get started helping out in the curl project

We are always in need of more help. If you are new to the project and are looking for ways to contribute and help out, this document aims to give a few good starting points.

A good idea is to start by subscribing to the [curl-library mailing list](https://lists.haxx.se/listinfo/curl-library) to keep track of the current discussion topics.

## Scratch your own itch

One of the best ways is to start working on any problems or issues you have found yourself or perhaps got annoyed at in the past. It can be a spelling error in an error text or a weirdly phrased section in a man page. Hunt it down and report the bug. Or make your first pull request with a fix for that.

## Smaller tasks

Some projects mark small issues as "beginner friendly", "bite-sized" or similar. We do not do that in curl since such issues never linger around long enough. Simple issues get handled fast.

If you are looking for a smaller or simpler task in the project to help out with as an entry-point into the project, perhaps because you are a newcomer or even maybe not a terribly experienced developer, here's our advice:

- Read through this document to get a grasp on a general approach to use
- Consider adding a test case for something not currently tested (correctly)
- Consider updating or adding documentation
- One way to get your feet wet gently in the project is to participate in an existing issue/PR and help out by reproducing the issue, reviewing the code in the PR, etc.

## Help wanted

In the issue tracker we occasionally mark bugs with [help wanted](https://github.com/curl/curl/labels/help%20wanted), as a sign that the bug is acknowledged to exist and that there's nobody known to work on this issue for the moment. Those are bugs that are fine to "grab" and provide a pull request for. The complexity level of these will of course vary, so pick one that piques your interest.
## Work on known bugs

Some bugs are known and have not yet received attention and work enough to get fixed. We collect such known existing flaws in the [KNOWN_BUGS](https://curl.se/docs/knownbugs.html) page. Many of them link to the original bug report with some additional details, but some may also have aged a bit and may require some verification that the bug still exists in the same way and that what was said about it in the past is still valid.

## Fix autobuild problems

On the [autobuilds page](https://curl.se/dev/builds.html) we show a collection of test results from the automatic curl build and tests that are performed by volunteers. Fixing compiler warnings and errors shown there is something we value greatly. Also, if you own or run systems or architectures that are not already tested in the autobuilds, we appreciate more volunteers running builds automatically to help us keep curl portable.

## TODO items

Ideas for features and functions that we have considered worthwhile to implement and provide are kept in the [TODO](https://curl.se/docs/todo.html) file. Some of the ideas are rough. Some are well thought out. Some probably are not really suitable anymore.

Before you invest a lot of time on a TODO item, do bring it up for discussion on the mailing list, both for discussion on applicability and for ideas and brainstorming on specific ways to do the implementation etc.

## You decide

You can also come up with a completely new thing you think we should do. Or not do. Or fix. Or add to the project. You then either bring it to the mailing list first to see if people will shoot down the idea at once, or you bring a first draft of the idea as a pull request and take the discussion there around the specific implementation. Either way is fine.

## CONTRIBUTE

We offer [guidelines](https://curl.se/dev/contribute.html) that are suitable to be familiar with before you decide to contribute to curl.
If you are used to open source development, you will probably not find many surprises in there.
repos/gpt4all.zig/src/zig-libcurl/curl/docs/SSL-PROBLEMS.md
# SSL problems

First, let's establish that we often refer to TLS and SSL interchangeably as SSL here. The current protocol is called TLS; it was called SSL a long time ago.

There are several known reasons why a connection that involves SSL might fail. This document attempts to detail the most common ones and how to mitigate them.

## CA certs

CA certs are used to digitally verify the server's certificate. You need a "ca bundle" for this. See lots of more details on this in the SSLCERTS document.

## CA bundle missing intermediate certificates

When using said CA bundle to verify a server cert, you will experience problems if your CA store does not contain the certificates for the intermediates and the server does not provide them.

The TLS protocol mandates that the intermediate certificates are sent in the handshake, but as browsers have ways to survive or work around such omissions, missing intermediates in TLS handshakes still happen that browser users will not notice.

Browsers work around this problem in two ways: they cache intermediate certificates from previous transfers, and some implement the TLS "AIA" extension that lets the client explicitly download such certificates on demand.

## Protocol version

Some broken servers fail to properly support the protocol negotiation that SSL servers are supposed to handle. This may cause the connection to fail completely. Sometimes you may need to explicitly select an SSL version to use when connecting to make the connection succeed.

An additional complication can be that modern SSL libraries sometimes are built with support for older SSL and TLS versions disabled.

All versions of SSL and the TLS versions before 1.2 are considered insecure and should be avoided. Use TLS 1.2 or later.

## Ciphers

Clients give servers a list of ciphers to select from.
If the list does not include any ciphers the server wants/can use, the connection handshake fails.

curl has recently disabled the use of a whole bunch of seriously insecure ciphers from its default set (slightly depending on SSL backend in use). You may have to explicitly provide an alternative list of ciphers for curl to use to allow the server to use a WEAK cipher for you.

Note that these weak ciphers are identified as flawed. For example, this includes symmetric ciphers with less than 128 bit keys and RC4.

Schannel in Windows XP is not able to connect to servers that no longer support the legacy handshakes and algorithms used by those versions, so we advise against building curl to use Schannel on really old Windows versions.

References: https://tools.ietf.org/html/draft-popov-tls-prohibiting-rc4-01

## Allow BEAST

BEAST is the name of a TLS 1.0 attack that surfaced in 2011. When adding means to mitigate this attack, it turned out that some broken servers out there in the wild did not work properly with the BEAST mitigation in place. To make such broken servers work, the --ssl-allow-beast option was introduced. Exactly as it sounds, it re-introduces the BEAST vulnerability, but on the other hand it allows curl to connect to those kinds of strange servers.

## Disabling certificate revocation checks

Some SSL backends may do certificate revocation checks (CRL, OCSP, etc) depending on the OS or build configuration. The --ssl-no-revoke option was introduced in 7.44.0 to disable revocation checking, but currently it is only supported for Schannel (the native Windows SSL library), with an exception in the case of Windows' Untrusted Publishers block list which it seems cannot be bypassed. This option may have broader support to accommodate other SSL backends in the future.

References: https://curl.se/docs/ssl-compared.html
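As a quick reference, the mitigations discussed above map to command-line options like the following sketch (the host and the cipher name are placeholders; cipher names depend on the SSL backend your curl was built with):

```shell
# Force a minimum TLS version when negotiation with a broken server fails:
curl --tlsv1.2 https://example.com/

# Offer an explicit cipher list (OpenSSL-style name shown as an example):
curl --ciphers ECDHE-RSA-AES128-GCM-SHA256 https://example.com/

# Work around servers broken by the BEAST mitigation (re-enables the flaw):
curl --ssl-allow-beast https://example.com/

# Skip certificate revocation checks (Schannel on Windows):
curl --ssl-no-revoke https://example.com/
```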
repos/gpt4all.zig/src/zig-libcurl/curl/docs/HTTP3.md
# HTTP3 (and QUIC)

## Resources

[HTTP/3 Explained](https://http3-explained.haxx.se/en/) - the online free book describing the protocols involved.

[QUIC implementation](https://github.com/curl/curl/wiki/QUIC-implementation) - the wiki page describing the plan for how to support QUIC and HTTP/3 in curl and libcurl.

[quicwg.org](https://quicwg.org/) - home of the official protocol drafts

## QUIC libraries

QUIC libraries we are experimenting with:

[ngtcp2](https://github.com/ngtcp2/ngtcp2)

[quiche](https://github.com/cloudflare/quiche)

## Experimental!

HTTP/3 and QUIC support in curl is considered **EXPERIMENTAL** until further notice. It needs to be enabled at build-time.

Further development and tweaking of the HTTP/3 support in curl will happen in the master branch using pull-requests, just like ordinary changes.

# ngtcp2 version

## Build with OpenSSL

Build (patched) OpenSSL

    % git clone --depth 1 -b openssl-3.0.0+quic https://github.com/quictls/openssl
    % cd openssl
    % ./config enable-tls1_3 --prefix=<somewhere1>
    % make
    % make install

Build nghttp3

    % cd ..
    % git clone https://github.com/ngtcp2/nghttp3
    % cd nghttp3
    % autoreconf -fi
    % ./configure --prefix=<somewhere2> --enable-lib-only
    % make
    % make install

Build ngtcp2

    % cd ..
    % git clone https://github.com/ngtcp2/ngtcp2
    % cd ngtcp2
    % autoreconf -fi
    % ./configure PKG_CONFIG_PATH=<somewhere1>/lib/pkgconfig:<somewhere2>/lib/pkgconfig LDFLAGS="-Wl,-rpath,<somewhere1>/lib" --prefix=<somewhere3> --enable-lib-only
    % make
    % make install

Build curl

    % cd ..
    % git clone https://github.com/curl/curl
    % cd curl
    % autoreconf -fi
    % LDFLAGS="-Wl,-rpath,<somewhere1>/lib" ./configure --with-openssl=<somewhere1> --with-nghttp3=<somewhere2> --with-ngtcp2=<somewhere3>
    % make
    % make install

For OpenSSL 3.0.0 or later builds on Linux for the x86_64 architecture, substitute all occurrences of "/lib" with "/lib64".

## Build with GnuTLS

Build GnuTLS

    % git clone --depth 1 https://gitlab.com/gnutls/gnutls.git
    % cd gnutls
    % ./bootstrap
    % ./configure --prefix=<somewhere1>
    % make
    % make install

Build nghttp3

    % cd ..
    % git clone https://github.com/ngtcp2/nghttp3
    % cd nghttp3
    % autoreconf -fi
    % ./configure --prefix=<somewhere2> --enable-lib-only
    % make
    % make install

Build ngtcp2

    % cd ..
    % git clone https://github.com/ngtcp2/ngtcp2
    % cd ngtcp2
    % autoreconf -fi
    % ./configure PKG_CONFIG_PATH=<somewhere1>/lib/pkgconfig:<somewhere2>/lib/pkgconfig LDFLAGS="-Wl,-rpath,<somewhere1>/lib" --prefix=<somewhere3> --enable-lib-only --with-gnutls
    % make
    % make install

Build curl

    % cd ..
    % git clone https://github.com/curl/curl
    % cd curl
    % autoreconf -fi
    % ./configure --without-openssl --with-gnutls=<somewhere1> --with-nghttp3=<somewhere2> --with-ngtcp2=<somewhere3>
    % make
    % make install

# quiche version

## build

Build quiche and BoringSSL:

    % git clone --recursive https://github.com/cloudflare/quiche
    % cd quiche
    % cargo build --release --features ffi,pkg-config-meta,qlog
    % mkdir deps/boringssl/src/lib
    % ln -vnf $(find target/release -name libcrypto.a -o -name libssl.a) deps/boringssl/src/lib/

Build curl:

    % cd ..
    % git clone https://github.com/curl/curl
    % cd curl
    % autoreconf -fi
    % ./configure LDFLAGS="-Wl,-rpath,$PWD/../quiche/target/release" --with-openssl=$PWD/../quiche/deps/boringssl/src --with-quiche=$PWD/../quiche/target/release
    % make
    % make install

If `make install` results in a `Permission denied` error, you will need to prepend it with `sudo`.
## Run

Use HTTP/3 directly:

    curl --http3 https://nghttp2.org:4433/

Upgrade via Alt-Svc:

    curl --alt-svc altsvc.cache https://quic.aiortc.org/

See this [list of public HTTP/3 servers](https://bagder.github.io/HTTP3-test/)
repos/gpt4all.zig/src/zig-libcurl/curl/docs/README.md
![curl logo](https://curl.se/logo/curl-logo.svg)

# Documentation

You will find a mix of various documentation in this directory and subdirectories, using several different formats. Some of them are not ideal for reading directly in your browser.

If you would rather see the rendered version of the documentation, check out the curl website's [documentation section](https://curl.se/docs/) for general curl stuff or the [libcurl section](https://curl.se/libcurl/) for libcurl related documentation.
repos/gpt4all.zig/src/zig-libcurl/curl/docs/RELEASE-PROCEDURE.md
curl release procedure - how to do a release
============================================

in the source code repo
-----------------------

- run `./scripts/copyright.pl` and correct possible omissions

- edit `RELEASE-NOTES` to be accurate

- update `docs/THANKS`

- make sure all relevant changes are committed on the master branch

- tag the git repo in this style: `git tag -a curl-7_34_0`. -a annotates the tag and we use underscores instead of dots in the version number. Make sure the tag is GPG signed (using -s).

- run `./maketgz 7.34.0` to build the release tarballs. It is important that you run this on a machine with the correct set of autotools etc installed as this is what then will be shipped and used by most users on \*nix like systems.

- push the git commits and the new tag

- gpg sign the 4 tarballs as maketgz suggests

- upload the 8 resulting files to the primary download directory

in the curl-www repo
--------------------

- edit `Makefile` (version number and date)

- edit `_newslog.html` (announce the new release)

- edit `_changes.html` (insert changes+bugfixes from RELEASE-NOTES)

- commit all local changes

- tag the repo with the same name as used for the source repo

- make sure all relevant changes are committed and pushed on the master branch (the website then updates its contents automatically)

on GitHub
---------

- edit the newly made release tag so that it is listed as the latest release

inform
------

- send an email to curl-users, curl-announce and curl-library. Insert the RELEASE-NOTES into the mail.

celebrate
---------

- suitable beverage intake is encouraged for the festivities

curl release scheduling
=======================

Release Cycle
-------------

We do releases every 8 weeks on Wednesdays. If critical problems arise, we can insert releases outside of the schedule or we can move the release date - but this is rare.

Each 8 week release cycle is split in two 4-week periods.
- During the first 4 weeks after a release, we allow new features and changes to curl and libcurl. If we accept any such changes, we bump the minor number used for the next release.

- During the second 4-week period we do not merge any features or changes; we then only focus on fixing bugs and polishing things to make a solid coming release.

- After a regular procedure-following release (made on Wednesdays), the feature window remains closed until the following Monday in case of special actions or patch releases etc.

If a future release date happens to end up on a "bad date", like in the middle of common public holidays or when the lead release manager is away traveling, the release date can be moved forwards or backwards a full week. This is then advertised well in advance.

Coming dates
------------

Based on the description above, here are some planned release dates (at the time of this writing):

- September 15, 2021 (7.79.0)
- November 10, 2021
- January 5, 2022
- March 2, 2022
- April 27, 2022
- June 22, 2022
- August 17, 2022
- October 12, 2022
- December 7, 2022
- February 1, 2023
- March 20, 2023 (8.0.0)

The above (and more) curl-related dates are published in [iCalendar format](https://calendar.google.com/calendar/ical/c9u5d64odop9js55oltfarjk6g%40group.calendar.google.com/public/basic.ics) as well.
repos/gpt4all.zig/src/zig-libcurl/curl/docs/GOVERNANCE.md
# Decision making in the curl project

A rough guide to how we make decisions and who does what.

## BDFL

This project was started by and has to some extent been pushed forward over the years with Daniel Stenberg as the driving force. It matches a standard BDFL (Benevolent Dictator For Life) style project.

This setup has been used due to convenience and the fact that it has worked fine this far. It is not because someone thinks of it as a superior project leadership model. It will also only continue working as long as Daniel manages to listen in to what the project and the general user population wants and expects from us.

## Legal entity

There is no legal entity. The curl project is just a bunch of people scattered around the globe with the common goal to produce source code that creates great products. We are not part of any umbrella organization and we are not located in any specific country. We are totally independent.

The copyrights in the project are owned by the individuals and organizations that wrote those parts of the code.

## Decisions

The curl project is not a democracy, but everyone is entitled to state their opinion and may argue for their sake within the community.

All and any changes that have been done or will be done are eligible to bring up for discussion, to object to or to praise. Ideally, we find consensus for the appropriate way forward in any given situation or challenge.

If there is no obvious consensus, a maintainer who is knowledgeable in the specific area will take an "executive" decision that they think is right for the project.

## Donations

Donating plain money to curl is best done to curl's [Open Collective fund](https://opencollective.com/curl). Open Collective is a US based non-profit organization that holds on to funds for us. This fund is then used for paying the curl security bug bounties, to reimburse project related expenses etc.
Donations to the project can also come in the form of server hosting, providing services and paying for people to work on curl related code etc. Usually, such donations are services paid for directly by the sponsors.

We grade sponsors in a few different levels and if they meet the criteria, they can be mentioned on the Sponsors page on the curl website.

## Commercial Support

The curl project does not do or offer commercial support. It only hosts mailing lists, runs bug trackers etc to facilitate communication and work.

However, Daniel works for wolfSSL and we offer commercial curl support there.

# Key roles

## User

Someone who uses or has used curl or libcurl.

## Contributor

Someone who has helped the curl project, who has contributed to bring it forward. Contributing could be to provide advice, debug a problem, file a bug report, run test infrastructure or write code etc.

## Commit author

Sometimes also called 'committer'. Someone who has authored a commit in the curl source code repository. Committers are recorded as `Author` in git.

## Maintainers

A maintainer in the curl project is an individual who has been given permissions to push commits to one of the git repositories.

Maintainers are free to push commits to the repositories at their own will. Maintainers are however expected to listen to feedback from users, and any change that is non-trivial in size or nature *should* be brought to the project as a Pull-Request (PR) to allow others to comment/object before merge.

## Former maintainers

A maintainer who stops being active in the project will at some point get their push permissions removed. We do this for security reasons but also to make sure that we always have the list of maintainers as "the team that push stuff to curl". Getting push permissions removed is not a punishment. Everyone who ever worked on maintaining curl is considered a hero, for all time hereafter.

## Security team members

We have a security team.
That is the team of people who are subscribed to the curl-security mailing list; the receivers of security reports from users and developers. This list of people will vary over time but should be skilled developers familiar with the curl project.

The security team works best when it consists of a small set of active persons. We invite new members when the team seems to need it, and we also expect to retire security team members as they "drift off" from the project or just find themselves unable to perform their duties there.

## Server admins

We run a web server, a mailing list and more on the curl project's primary server. That physical machine is owned and run by Haxx. Daniel is the primary admin of all things curl related server stuff, but Björn Stenberg and Linus Feltzing serve as backup admins for when Daniel is gone or unable.

The primary server is paid for by Haxx. The machine is physically located in a server bunker in Stockholm Sweden, operated by the company Portlane.

The website contents are served to the web via Fastly and Daniel is the primary curl contact with Fastly.

## BDFL

That is Daniel.

# Maintainers

A curl maintainer is a project volunteer who has the authority and rights to merge changes into a git repository in the curl project.

Anyone can aspire to become a curl maintainer.

### Duties

There are no mandatory duties. We hope and wish that maintainers consider reviewing patches and help merging them, especially when the changes are within the area of personal expertise and experience.

### Requirements

- only merge code that meets our quality and style guide requirements
- *never* merge code without doing a PR first, unless the change is "trivial"
- if in doubt, ask for input/feedback from others

### Recommendations

- we require two-factor authentication enabled on your GitHub account to reduce the risk of malicious source code tampering
- consider enabling signed git commits for additional verification of changes

### Merge advice

When you are merging patches/PRs...

- make sure the commit messages follow our template
- squash patch sets into a few logical commits even if the PR did not, if necessary
- avoid the "merge" button on GitHub, do it "manually" instead to get full control and a full audit trail (GitHub otherwise leaves you out as "Committer:")
- remember to credit the reporter and the helpers!

## Who are maintainers?

The [list of maintainers](https://github.com/orgs/curl/people). Be aware that the level of presence and activity in the project varies greatly between different individuals and over time.

### Become a maintainer?

If you think you can help making the project better by shouldering some maintaining responsibilities, then please get in touch.

You will be expected to be familiar with the curl project and its ways of working. You need to have gotten a few quality patches merged as a proof of this.

### Stop being a maintainer

If you (appear to) not be active in the project anymore, you may be removed as a maintainer. Thank you for your service!
repos/gpt4all.zig/src/zig-libcurl/curl/docs/HISTORY.md
How curl Became Like This
=========================

Towards the end of 1996, Daniel Stenberg was spending time writing an IRC bot for an Amiga related channel on EFnet. He then came up with the idea to make currency-exchange calculations available to Internet Relay Chat (IRC) users. All the necessary data were published on the Web; he just needed to automate their retrieval.

1996
----

On November 11, 1996 the Brazilian developer Rafael Sagula wrote and released HttpGet version 0.1.

Daniel extended this existing command-line open-source tool. After a few minor adjustments, it did just what he needed. The first release with Daniel's additions was 0.2, released on December 17, 1996. Daniel quickly became the new maintainer of the project.

1997
----

HttpGet 0.3 was released in January 1997 and now it accepted HTTP URLs on the command line.

HttpGet 1.0 was released on April 8th 1997 with brand new HTTP proxy support.

We soon found and fixed support for getting currencies over GOPHER. Once FTP download support was added, the name of the project was changed and urlget 2.0 was released in August 1997. The http-only days were already past.

Version 2.2 was released on August 14 1997 and introduced support to build for and run on Windows and Solaris.

November 24 1997: Version 3.1 added FTP upload support.

Version 3.5 added support for HTTP POST.

1998
----

February 4: urlget 3.10

February 9: urlget 3.11

March 14: urlget 3.12 added proxy authentication.

The project slowly grew bigger. With upload capabilities, the name was once again misleading and a second name change was made. On March 20, 1998 curl 4 was released. (The version numbering from the previous names was kept.)

(Unrelated to this project, a company called Curl Corporation registered a US trademark on the name "CURL" on May 18 1998. That company had then already registered the curl.com domain back in November of the previous year. All this was revealed to us much later.)
SSL support was added, powered by the SSLeay library.

August: first announcement of curl on freshmeat.net.

October: with the curl 4.9 release and the introduction of cookie support, curl was no longer released under the GPL license. Now we are at 4000 lines of code; we switched over to the MPL license to restrict the effects of "copyleft".

November: configure script and reported successful compiles on several major operating systems. The never-quite-understood -F option was added and curl could now simulate quite a lot of a browser. TELNET support was added.

Curl 5 was released in December 1998 and introduced the first ever curl man page. People started making Linux RPM packages out of it.

1999
----

January: DICT support added.

OpenSSL took over and SSLeay was abandoned.

May: first Debian package.

August: LDAP:// and FILE:// support added. The curl website gets 1300 visits weekly. Moved site to curl.haxx.nu.

September: Released curl 6.0. 15000 lines of code.

December 28: added the project on Sourceforge and started using its services for managing the project.

2000
----

Spring: major internal overhaul to provide a suitable library interface. The first non-beta release was named 7.1 and arrived in August. This offered the easy interface and turned out to be the beginning of actually getting other software and programs to be based on and powered by libcurl. Almost 20000 lines of code.

June: the curl site moves to "curl.haxx.se"

August: the curl website gets 4000 visits weekly.

The PHP guys adopted libcurl already the same month, when the first ever third party libcurl binding showed up. CURL has been a supported module in PHP since the release of PHP 4.0.2. This would soon get followers. More than 16 different bindings exist at the time of this writing.

September: kerberos4 support was added.

November: started the work on a test suite for curl. It was later re-written from scratch again. The libcurl major SONAME number was set to 1.
2001
----

January: Daniel released curl 7.5.2 under a new license again: MIT (or MPL). The MIT license is extremely liberal and can be combined with GPL in other projects. This would finally put an end to the "complaints" from people involved in GPLed projects that previously were prohibited from using libcurl while it was released under MPL only. (Due to the fact that MPL is deemed "GPL incompatible".)

March 22: curl supports HTTP 1.1 starting with the release of 7.7. This also introduced libcurl's ability to do persistent connections. 24000 lines of code. The libcurl major SONAME number was bumped to 2 due to this overhaul. The first experimental ftps:// support was added.

August: the curl website gets 8000 visits weekly. Curl Corporation contacted Daniel to discuss "the name issue". After Daniel's reply, they have never since got back in touch again.

September: libcurl 7.9 introduces cookie jar and curl_formadd(). During the forthcoming 7.9.x releases, we introduced the multi interface slowly and without many whistles.

September 25: curl (7.7.2) is bundled in Mac OS X (10.1) for the first time. It was already becoming more and more of a standard utility of Linux distributions and a regular in the BSD ports collections.

2002
----

June: the curl website gets 13000 visits weekly. curl and libcurl is 35000 lines of code. Reported successful compiles on more than 40 combinations of CPUs and operating systems.

To estimate the number of users of the curl tool or libcurl library is next to impossible. Around 5000 downloaded packages each week from the main site gives a hint, but the packages are mirrored extensively, bundled with numerous OS distributions and otherwise retrieved as part of other software.

October 1: with the release of curl 7.10 it is released under the MIT license only. Starting with 7.10, curl verifies SSL server certificates by default.

2003
----

January: Started working on the distributed curl tests. The autobuilds.
February: the curl site averages at 20000 visits weekly. At any given
moment, there's an average of 3 people browsing the website.

Multiple new authentication schemes are supported: Digest (May), NTLM
(June) and Negotiate (June).

November: curl 7.10.8 is released. 45000 lines of code. ~55000 unique
visitors to the website. Five official web mirrors.

December: full-fledged SSL for FTP is supported.

2004
----

January: curl 7.11.0 introduced large file support.

June: curl 7.12.0 introduced IDN support. 10 official web mirrors.

This release bumped the major SONAME to 3 due to the removal of the
curl_formparse() function.

August: Curl and libcurl 7.12.1

    Public curl release number:               82
    Releases counted from the very beginning: 109
    Available command line options:           96
    Available curl_easy_setopt() options:     120
    Number of public functions in libcurl:    36
    Amount of public website mirrors:         12
    Number of known libcurl bindings:         26

2005
----

April: GnuTLS can now optionally be used for the secure layer when curl is
built.

April: Added the multi_socket() API

September: TFTP support was added.

More than 100,000 unique visitors of the curl website. 25 mirrors.

December: security vulnerability: libcurl URL Buffer Overflow

2006
----

January: We dropped support for Gopher. We found bugs in the implementation
that turned out to have been introduced years ago, so with the conclusion
that nobody had found out in all this time we removed it instead of fixing
it.

March: security vulnerability: libcurl TFTP Packet Buffer Overflow

September: The major SONAME number for libcurl was bumped to 4 due to the
removal of ftp third party transfer support.
November: Added SCP and SFTP support

2007
----

February: Added support for the Mozilla NSS library to do the SSL/TLS stuff

July: security vulnerability: libcurl GnuTLS insufficient cert verification

2008
----

November:

    Command line options:         128
    curl_easy_setopt() options:   158
    Public functions in libcurl:  58
    Known libcurl bindings:       37
    Contributors:                 683

145,000 unique visitors. >100 GB downloaded.

2009
----

March: security vulnerability: libcurl Arbitrary File Access

April: added CMake support

August: security vulnerability: libcurl embedded zero in cert name

December: Added support for IMAP, POP3 and SMTP

2010
----

January: Added support for RTSP

February: security vulnerability: libcurl data callback excessive length

March: The project switched over to use git (hosted by GitHub) instead of
CVS for source code control

May: Added support for RTMP

Added support for PolarSSL to do the SSL/TLS stuff

August:

    Public curl releases:         117
    Command line options:         138
    curl_easy_setopt() options:   180
    Public functions in libcurl:  58
    Known libcurl bindings:       39
    Contributors:                 808

Gopher support added (re-added actually, see January 2006)

2011
----

February: added support for the axTLS backend

April: added the cyassl backend (later renamed to WolfSSL)

2012
----

July: Added support for Schannel (native Windows TLS backend) and Darwin
SSL (Native Mac OS X and iOS TLS backend). Supports metalink

October: SSH-agent support.

2013
----

February: Cleaned up internals to always use the "multi" non-blocking
approach internally and only expose the blocking API with a wrapper.

September: First small steps on supporting HTTP/2 with nghttp2.

October: Removed krb4 support.

December: Happy eyeballs.
2014
----

March: first real release supporting HTTP/2

September: Website had 245,000 unique visitors and served 236GB data

SMB and SMBS support

2015
----

June: support for multiplexing with HTTP/2

August: support for HTTP/2 server push

December: Public Suffix List

2016
----

January: the curl tool defaults to HTTP/2 for HTTPS URLs

December: curl 7.52.0 introduced support for HTTPS-proxy!

First TLS 1.3 support

2017
----

July: OSS-Fuzz started fuzzing libcurl

September: Added Multi-SSL support

The website serves 3100 GB/month

    Public curl releases:         169
    Command line options:         211
    curl_easy_setopt() options:   249
    Public functions in libcurl:  74
    Contributors:                 1609

October: SSLKEYLOGFILE support, new MIME API

October: Daniel received the Polhem Prize for his work on curl

November: brotli

2018
----

January: new SSH backend powered by libssh

March: starting with the 1803 release of Windows 10, curl is shipped
bundled with Microsoft's operating system.

July: curl shows headers using bold type face

October: added DNS-over-HTTPS (DoH) and the URL API

MesaLink is a new supported TLS backend

libcurl now does HTTP/2 (and multiplexing) by default on HTTPS URLs

curl and libcurl are installed in an estimated 5 *billion* instances
world-wide.

October 31: Curl and libcurl 7.62.0

    Public curl releases:         177
    Command line options:         219
    curl_easy_setopt() options:   261
    Public functions in libcurl:  80
    Contributors:                 1808

December: removed axTLS support

2019
----

March: added experimental alt-svc support

August: the first HTTP/3 requests with curl.

September: 7.66.0 is released and the tool offers parallel downloads

2020
----

curl and libcurl are installed in an estimated 10 *billion* instances
world-wide.

January: added BearSSL support

March: removed support for PolarSSL, added wolfSSH support

April: experimental MQTT support

August: zstd support

November: the website moves to curl.se. The website serves 10TB data
monthly.
2021
----

February 3: curl 7.75.0 ships with support for Hyper as an HTTP backend

March 31: curl 7.76.0 ships with support for rustls
curl internals
==============

 - [Intro](#intro)
 - [git](#git)
 - [Portability](#Portability)
 - [Windows vs Unix](#winvsunix)
 - [Library](#Library)
 - [`Curl_connect`](#Curl_connect)
 - [`multi_do`](#multi_do)
 - [`Curl_readwrite`](#Curl_readwrite)
 - [`multi_done`](#multi_done)
 - [`Curl_disconnect`](#Curl_disconnect)
 - [HTTP(S)](#http)
 - [FTP](#ftp)
 - [Kerberos](#kerberos)
 - [TELNET](#telnet)
 - [FILE](#file)
 - [SMB](#smb)
 - [LDAP](#ldap)
 - [E-mail](#email)
 - [General](#general)
 - [Persistent Connections](#persistent)
 - [multi interface/non-blocking](#multi)
 - [SSL libraries](#ssl)
 - [Library Symbols](#symbols)
 - [Return Codes and Informationals](#returncodes)
 - [API/ABI](#abi)
 - [Client](#client)
 - [Memory Debugging](#memorydebug)
 - [Test Suite](#test)
 - [Asynchronous name resolves](#asyncdns)
 - [c-ares](#cares)
 - [`curl_off_t`](#curl_off_t)
 - [curlx](#curlx)
 - [Content Encoding](#contentencoding)
 - [`hostip.c` explained](#hostip)
 - [Track Down Memory Leaks](#memoryleak)
 - [`multi_socket`](#multi_socket)
 - [Structs in libcurl](#structs)
 - [Curl_easy](#Curl_easy)
 - [connectdata](#connectdata)
 - [Curl_multi](#Curl_multi)
 - [Curl_handler](#Curl_handler)
 - [conncache](#conncache)
 - [Curl_share](#Curl_share)
 - [CookieInfo](#CookieInfo)

<a name="intro"></a>
Intro
=====

This project is split in two. The library and the client. The client part
uses the library, but the library is designed to allow other applications
to use it.

The largest amount of code and complexity is in the library part.

<a name="git"></a>
git
===

All changes to the sources are committed to the git repository as soon as
they are somewhat verified to work. Changes shall be committed as
independently as possible so that individual changes can be easily spotted
and tracked afterwards.

Tagging shall be used extensively, and by the time we release new archives
we should tag the sources with a name similar to the released version
number.
<a name="Portability"></a>
Portability
===========

We write curl and libcurl to compile with C89 compilers. On 32-bit and up
machines. Most of libcurl assumes more or less POSIX compliance but that is
not a requirement.

We write libcurl to build and work with lots of third party tools, and we
want it to remain functional and buildable with these and later versions
(older versions may still work but are not what we work hard to maintain):

Dependencies
------------

 - OpenSSL      0.9.7
 - GnuTLS       3.1.10
 - zlib         1.1.4
 - libssh2      1.0
 - c-ares       1.16.0
 - libidn2      2.0.0
 - wolfSSL      2.0.0
 - openldap     2.0
 - MIT Kerberos 1.2.4
 - GSKit        V5R3M0
 - NSS          3.14.x
 - Heimdal      ?
 - nghttp2      1.12.0
 - WinSock      2.2 (on Windows 95+ and Windows CE .NET 4.1+)

Operating Systems
-----------------

On systems where configure runs, we aim at working on them all - if they
have a suitable C compiler. On systems that do not run configure, we strive
to keep curl running correctly on:

 - Windows 98
 - AS/400 V5R3M0
 - Symbian 9.1
 - Windows CE ?
 - TPF ?

Build tools
-----------

When writing code (mostly for generating stuff included in release
tarballs) we use a few "build tools" and we make sure that we remain
functional with these versions:

 - GNU Libtool  1.4.2
 - GNU Autoconf 2.57
 - GNU Automake 1.7
 - GNU M4       1.4
 - perl         5.004
 - roffit       0.5
 - groff        ? (any version that supports `groff -Tps -man [in] [out]`)
 - ps2pdf (gs)  ?

<a name="winvsunix"></a>
Windows vs Unix
===============

There are a few differences in how to program curl the Unix way compared to
the Windows way. Perhaps the four most notable details are:

1. Different function names for socket operations.

   In curl, this is solved with defines and macros, so that the source
   looks the same in all places except for the header file that defines
   them. The macros in use are `sclose()`, `sread()` and `swrite()`.

2. Windows requires a couple of init calls for the socket stuff.
   That is taken care of by the `curl_global_init()` call, but if other
   libs also do it etc there might be reasons for applications to alter
   that behavior. We require WinSock version 2.2 and load this version
   during global init.

3. The file descriptors for network communication and file operations are
   not as easily interchangeable as in Unix.

   We avoid this by not trying any funny tricks on file descriptors.

4. When writing data to stdout, Windows makes end-of-lines the DOS way,
   thus destroying binary data, although you do want that conversion if it
   is text coming through... (sigh)

   We set stdout to binary under Windows.

Inside the source code, we make an effort to avoid `#ifdef [Your OS]`. All
conditionals that deal with features *should* instead be in the format
`#ifdef HAVE_THAT_WEIRD_FUNCTION`. Since Windows cannot run configure
scripts, we maintain a `curl_config-win32.h` file in the lib directory that
is supposed to look exactly as a `curl_config.h` file would have looked on
a Windows machine!

Generally speaking: always remember that this will be compiled on dozens of
operating systems. Do not walk on the edge!

<a name="Library"></a>
Library
=======

(See [Structs in libcurl](#structs) for the separate section describing all
major internal structs and their purposes.)

There are plenty of entry points to the library, namely each publicly
defined function that libcurl offers to applications. All of those
functions are rather small and easy to follow. All the ones prefixed with
`curl_easy` are put in the `lib/easy.c` file.

`curl_global_init()` and `curl_global_cleanup()` should be called by the
application to initialize and clean up global stuff in the library. As of
today, it can handle the global SSL initialization if SSL is enabled and it
can initialize the socket layer on Windows machines. libcurl itself has no
"global" scope.

All printf()-style functions use the supplied clones in `lib/mprintf.c`.
This makes sure we stay absolutely platform independent.
[`curl_easy_init()`][2] allocates an internal struct and makes some
initializations. The returned handle does not reveal internals. This is the
`Curl_easy` struct which works as an "anchor" struct for all `curl_easy`
functions. All connections performed will get connect-specific data
allocated that should be used for things related to particular
connections/requests.

[`curl_easy_setopt()`][1] takes three arguments, where the option stuff
must be passed in pairs: the parameter-ID and the parameter-value. The list
of options is documented in the man page. This function mainly sets things
in the `Curl_easy` struct.

`curl_easy_perform()` is just a wrapper function that makes use of the
multi API. It basically calls `curl_multi_init()`,
`curl_multi_add_handle()`, `curl_multi_wait()`, and `curl_multi_perform()`
until the transfer is done and then returns.

Some of the most important key functions in `url.c` are called from
`multi.c` when certain key steps are to be made in the transfer operation.

<a name="Curl_connect"></a>
Curl_connect()
--------------

Analyzes the URL, it separates the different components and connects to the
remote host. This may involve using a proxy and/or using SSL. The
`Curl_resolv()` function in `lib/hostip.c` is used for looking up host
names (it does then use the proper underlying method, which may vary
between platforms and builds).

When `Curl_connect` is done, we are connected to the remote site. Then it
is time to tell the server to get a document/file. `Curl_do()` arranges
this.

This function makes sure there's an allocated and initiated `connectdata`
struct that is used for this particular connection only (although there may
be several requests performed on the same connect). A bunch of things are
initialized/inherited from the `Curl_easy` struct.

<a name="multi_do"></a>
multi_do()
----------

`multi_do()` makes sure the proper protocol-specific function is called.
The functions are named after the protocols they handle.
The protocol-specific functions of course deal with protocol-specific
negotiations and setup. When they are ready to start the actual file
transfer they call the `Curl_setup_transfer()` function (in
`lib/transfer.c`) to setup the transfer and return.

If this DO function fails and the connection is being re-used, libcurl will
then close this connection, setup a new connection and re-issue the DO
request on that. This is because there is no way to be perfectly sure that
we have discovered a dead connection before the DO function and thus we
might wrongly be re-using a connection that was closed by the remote peer.

<a name="Curl_readwrite"></a>
Curl_readwrite()
----------------

Called during the transfer of the actual protocol payload.

During transfer, the progress functions in `lib/progress.c` are called at
frequent intervals (or at the user's choice, a specified callback might get
called). The speedcheck functions in `lib/speedcheck.c` are also used to
verify that the transfer is as fast as required.

<a name="multi_done"></a>
multi_done()
------------

Called after a transfer is done. This function takes care of everything
that has to be done after a transfer. This function attempts to leave
matters in a state so that `multi_do()` should be possible to call again on
the same connection (in a persistent connection case). It might also soon
be closed with `Curl_disconnect()`.

<a name="Curl_disconnect"></a>
Curl_disconnect()
-----------------

When doing normal connections and transfers, no one ever tries to close any
connections so this is not normally called when `curl_easy_perform()` is
used. This function is only used when we are certain that no more transfers
are going to be made on the connection. It can be also closed by force, or
it can be called to make sure that libcurl does not keep too many
connections alive at the same time.

This function cleans up all resources that are associated with a single
connection.
<a name="http"></a>
HTTP(S)
=======

HTTP offers a lot and is the protocol in curl that uses the most lines of
code. There is a special file `lib/formdata.c` that offers all the
multipart post functions.

base64-functions for user+password stuff (and more) are in `lib/base64.c`
and all functions for parsing and sending cookies are found in
`lib/cookie.c`.

HTTPS uses in almost every case the same procedure as HTTP, with only two
exceptions: the connect procedure is different and the function used to
read or write from the socket is different, although the latter fact is
hidden in the source by the use of `Curl_read()` for reading and
`Curl_write()` for writing data to the remote server.

`http_chunks.c` contains functions that understand HTTP 1.1 chunked
transfer encoding.

An interesting detail with the HTTP(S) request is the `Curl_add_buffer()`
series of functions we use. They append data to one single buffer, and when
the building is finished the entire request is sent off in one single
write. This is done this way to overcome problems with flawed firewalls and
lame servers.

<a name="ftp"></a>
FTP
===

The `Curl_if2ip()` function can be used for getting the IP number of a
specified network interface, and it resides in `lib/if2ip.c`.

`Curl_ftpsendf()` is used for sending FTP commands to the remote server. It
was made a separate function to prevent us programmers from forgetting that
the commands must be CRLF terminated. They must also be sent in one single
`write()` to make firewalls and similar happy.

<a name="kerberos"></a>
Kerberos
========

Kerberos support is mainly in `lib/krb5.c` but also `curl_sasl_sspi.c` and
`curl_sasl_gssapi.c` for the email protocols and `socks_gssapi.c` and
`socks_sspi.c` for SOCKS5 proxy specifics.

<a name="telnet"></a>
TELNET
======

Telnet is implemented in `lib/telnet.c`.

<a name="file"></a>
FILE
====

The `file://` protocol is dealt with in `lib/file.c`.

<a name="smb"></a>
SMB
===

The `smb://` protocol is dealt with in `lib/smb.c`.
<a name="ldap"></a>
LDAP
====

Everything LDAP is in `lib/ldap.c` and `lib/openldap.c`.

<a name="email"></a>
E-mail
======

The e-mail related source code is in `lib/imap.c`, `lib/pop3.c` and
`lib/smtp.c`.

<a name="general"></a>
General
=======

URL encoding and decoding, called escaping and unescaping in the source
code, is found in `lib/escape.c`.

While transferring data in `Transfer()` a few functions might get used.
`curl_getdate()` in `lib/parsedate.c` is for HTTP date comparisons (and
more).

`lib/getenv.c` offers `curl_getenv()` which is for reading environment
variables in a neat platform independent way. That is used in the client,
but also in `lib/url.c` when checking the proxy environment variables. Note
that contrary to the normal Unix `getenv()`, this returns an allocated
buffer that must be `free()`ed after use.

`lib/netrc.c` holds the `.netrc` parser.

`lib/timeval.c` features replacement functions for systems that do not have
`gettimeofday()` and a few support functions for timeval conversions.

A function named `curl_version()` that returns the full curl version string
is found in `lib/version.c`.

<a name="persistent"></a>
Persistent Connections
======================

The persistent connection support in libcurl requires some considerations
on how to do things inside of the library.

 - The `Curl_easy` struct returned in the [`curl_easy_init()`][2] call must
   never hold connection-oriented data. It is meant to hold the root data
   as well as all the options etc that the library-user may choose.

 - The `Curl_easy` struct holds the "connection cache" (an array of
   pointers to `connectdata` structs).

 - This enables the 'curl handle' to be reused on subsequent transfers.

 - When libcurl is told to perform a transfer, it first checks for an
   already existing connection in the cache that we can use. Otherwise it
   creates a new one and adds that to the cache. If the cache is full
   already when a new connection is added, it will first close the oldest
   unused one.
 - When the transfer operation is complete, the connection is left open.
   Particular options may tell libcurl not to, and protocols may signal
   closure on connections and then they will not be kept open, of course.

 - When `curl_easy_cleanup()` is called, we close all still opened
   connections, unless of course the multi interface "owns" the
   connections.

The curl handle must be re-used in order for the persistent connections to
work.

<a name="multi"></a>
multi interface/non-blocking
============================

The multi interface is a non-blocking interface to the library. To make
that interface work as well as possible, no low-level functions within
libcurl must be written to work in a blocking manner. (There are still a
few spots violating this rule.)

One of the primary reasons we introduced c-ares support was to allow the
name resolve phase to be perfectly non-blocking as well.

The FTP and the SFTP/SCP protocols are examples of how we adapt and adjust
the code to allow non-blocking operations even on multi-stage
command-response protocols. They are built around state machines that
return when they would otherwise block waiting for data. The DICT, LDAP and
TELNET protocols are crappy examples and they are subject for rewrite in
the future to better fit the libcurl protocol family.

<a name="ssl"></a>
SSL libraries
=============

Originally libcurl supported SSLeay for SSL/TLS transports, but that was
then extended to its successor OpenSSL but has since also been extended to
several other SSL/TLS libraries and we expect and hope to further extend
the support in future libcurl versions.

To deal with this internally in the best way possible, we have a generic
SSL function API as provided by the `vtls/vtls.[ch]` system, and they are
the only SSL functions we must use from within libcurl. vtls is then
crafted to use the appropriate lower-level function calls to whatever SSL
library that is in use. For example `vtls/openssl.[ch]` for the OpenSSL
library.
<a name="symbols"></a>
Library Symbols
===============

All symbols used internally in libcurl must use a `Curl_` prefix if they
are used in more than a single file. Single-file symbols must be made
static. Public ("exported") symbols must use a `curl_` prefix. (There are
exceptions, but they are to be changed to follow this pattern in future
versions.) Public API functions are marked with `CURL_EXTERN` in the public
header files so that all others can be hidden on platforms where this is
possible.

<a name="returncodes"></a>
Return Codes and Informationals
===============================

I have made things simple. Almost every function in libcurl returns a
CURLcode, that must be `CURLE_OK` if everything is OK or otherwise a
suitable error code as the `curl/curl.h` include file defines. The place
that detects an error must use the `Curl_failf()` function to set the
human-readable error description.

In aiding the user to understand what's happening and to debug curl usage,
we must supply a fair number of informational messages by using the
`Curl_infof()` function. Those messages are only displayed when the user
explicitly asks for them. They are best used when revealing information
that is not otherwise obvious.

<a name="abi"></a>
API/ABI
=======

We make an effort to not export or show internals or how internals work, as
that makes it easier to keep a solid API/ABI over time. See
docs/libcurl/ABI for our promise to users.

<a name="client"></a>
Client
======

`main()` resides in `src/tool_main.c`.

`src/tool_hugehelp.c` is automatically generated by the `mkhelp.pl` perl
script to display the complete "manual" and the `src/tool_urlglob.c` file
holds the functions used for the URL-"globbing" support. Globbing in the
sense that the `{}` and `[]` expansion stuff is there.
The client mostly sets up its `config` struct properly, then it calls the
`curl_easy_*()` functions of the library and when it gets back control
after the `curl_easy_perform()` it cleans up the library, checks status and
exits.

When the operation is done, the `ourWriteOut()` function in
`src/writeout.c` may be called to report about the operation. That function
is mostly using the `curl_easy_getinfo()` function to extract useful
information from the curl session.

It may loop and do all this several times if many URLs were specified on
the command line or config file.

<a name="memorydebug"></a>
Memory Debugging
================

The file `lib/memdebug.c` contains debug-versions of a few functions.
Functions such as `malloc()`, `free()`, `fopen()`, `fclose()`, etc. that
somehow deal with resources that might give us problems if we "leak" them.
The functions in the memdebug system do nothing fancy, they do their normal
function and then log information about what they just did. The logged data
can then be analyzed after a complete session. `memanalyze.pl` is the perl
script present in `tests/` that analyzes a log file generated by the memory
tracking system. It detects if resources are allocated but never freed and
other kinds of errors related to resource management.

Internally, the preprocessor symbol `DEBUGBUILD` restricts code to debug
enabled builds, and the symbol `CURLDEBUG` is used to differentiate code
which is _only_ used for memory tracking/debugging.

Use `-DCURLDEBUG` when compiling to enable memory debugging, this is also
switched on by running configure with `--enable-curldebug`. Use
`-DDEBUGBUILD` when compiling to enable a debug build or run configure with
`--enable-debug`.

`curl --version` will list 'Debug' feature for debug enabled builds, and
will list 'TrackMemory' feature for curl debug memory tracking capable
builds. These features are independent and can be controlled when running
the configure script.
When `--enable-debug` is given both features will be enabled, unless some
restriction prevents memory tracking from being used.

<a name="test"></a>
Test Suite
==========

The test suite is placed in its own subdirectory directly off the root in
the curl archive tree, and it contains a bunch of scripts and a lot of test
case data.

The main test script is `runtests.pl` that will invoke test servers like
`httpserver.pl` and `ftpserver.pl` before all the test cases are performed.
The test suite currently only runs on Unix-like platforms.

You will find a description of the test suite in the `tests/README` file,
and the test case data files in the `tests/FILEFORMAT` file.

The test suite automatically detects if curl was built with the memory
debugging enabled, and if it was, it will detect memory leaks, too.

<a name="asyncdns"></a>
Asynchronous name resolves
==========================

libcurl can be built to do name resolves asynchronously, using either the
normal resolver in a threaded manner or by using c-ares.

<a name="cares"></a>
[c-ares][3]
-----------

### Build libcurl to use c-ares

1. ./configure --enable-ares=/path/to/ares/install
2. make

### c-ares on win32

First I compiled c-ares. I changed the default C runtime library to be the
single-threaded rather than the multi-threaded (this seems to be required
to prevent linking errors later on). Then I simply build the areslib
project (the other projects adig/ahost seem to fail under MSVC).

Next was libcurl. I opened `lib/config-win32.h` and I added a:
`#define USE_ARES 1`

Next thing I did was I added the path for the ares includes to the include
path, and the libares.lib to the libraries.

Lastly, I also changed libcurl to be single-threaded rather than
multi-threaded, again this was to prevent some duplicate symbol errors. I'm
not sure why I needed to change everything to single-threaded, but when I
did not I got redefinition errors for several CRT functions (`malloc()`,
`stricmp()`, etc.)
<a name="curl_off_t"></a>
`curl_off_t`
============

`curl_off_t` is a data type provided by the external libcurl include
headers. It is the type meant to be used for the [`curl_easy_setopt()`][1]
options that end with LARGE. The type is 64-bit large on most modern
platforms.

<a name="curlx"></a>
curlx
=====

The libcurl source code offers a few functions by source only. They are not
part of the official libcurl API, but the source files might be useful for
others so apps can optionally compile/build with these sources to gain
additional functions.

We provide them through a single header file for easy access for apps:
`curlx.h`

`curlx_strtoofft()`
-------------------

A macro that converts a string containing a number to a `curl_off_t`
number. This might use the `curlx_strtoll()` function which is provided as
source code in strtoofft.c. Note that the function is only provided if no
`strtoll()` (or equivalent) function exists on your platform. If
`curl_off_t` is only a 32-bit number on your platform, this macro uses
`strtol()`.

Future
------

Several functions will be removed from the public `curl_` name space in a
future libcurl release. They will then only become available as `curlx_`
functions instead. To make the transition easier, we already today provide
these functions with the `curlx_` prefix to allow sources to be built
properly with the new function names. The concerned functions are:

 - `curlx_getenv`
 - `curlx_strequal`
 - `curlx_strnequal`
 - `curlx_mvsnprintf`
 - `curlx_msnprintf`
 - `curlx_maprintf`
 - `curlx_mvaprintf`
 - `curlx_msprintf`
 - `curlx_mprintf`
 - `curlx_mfprintf`
 - `curlx_mvsprintf`
 - `curlx_mvprintf`
 - `curlx_mvfprintf`

<a name="contentencoding"></a>
Content Encoding
================

## About content encodings

[HTTP/1.1][4] specifies that a client may request that a server encode its
response. This is usually used to compress a response using one (or more)
encodings from a set of commonly available compression techniques.
These schemes include `deflate` (the zlib algorithm), `gzip`, `br` (brotli)
and `compress`. A client requests that the server perform an encoding by
including an `Accept-Encoding` header in the request document. The value of
the header should be one of the recognized tokens `deflate`, ... (there's a
way to register new schemes/tokens, see sec 3.5 of the spec). A server MAY
honor the client's encoding request. When a response is encoded, the server
includes a `Content-Encoding` header in the response. The value of the
`Content-Encoding` header indicates which encodings were used to encode the
data, in the order in which they were applied.

It's also possible for a client to attach priorities to different schemes
so that the server knows which it prefers. See sec 14.3 of RFC 2616 for
more information on the `Accept-Encoding` header. See sec
[3.1.2.2 of RFC 7231][15] for more information on the `Content-Encoding`
header.

## Supported content encodings

The `deflate`, `gzip` and `br` content encodings are supported by libcurl.
Both regular and chunked transfers work fine. The zlib library is required
for the `deflate` and `gzip` encodings, while the brotli decoding library
is for the `br` encoding.

## The libcurl interface

To cause libcurl to request a content encoding use:

[`curl_easy_setopt`][1](curl, [`CURLOPT_ACCEPT_ENCODING`][5], string)

where string is the intended value of the `Accept-Encoding` header.

Currently, libcurl does support multiple encodings but only understands how
to process responses that use the `deflate`, `gzip` and/or `br` content
encodings, so the only values for [`CURLOPT_ACCEPT_ENCODING`][5] that will
work (besides `identity`, which does nothing) are `deflate`, `gzip` and
`br`. If a response is encoded using the `compress` method, libcurl will
return an error indicating that the response could not be decoded. If
`<string>` is NULL no `Accept-Encoding` header is generated.
If `<string>` is a zero-length string, then an `Accept-Encoding` header
containing all supported encodings will be generated.

[`CURLOPT_ACCEPT_ENCODING`][5] must be set to any non-NULL value for
content to be automatically decoded. If it is not set and the server still
sends encoded content (despite not having been asked), the data is returned
in its raw form and the `Content-Encoding` type is not checked.

## The curl interface

Use the [`--compressed`][6] option with curl to cause it to ask servers to
compress responses using any format supported by curl.

<a name="hostip"></a>
`hostip.c` explained
====================

The main compile-time defines to keep in mind when reading the `host*.c`
source file are these:

## `CURLRES_IPV6`

this host has `getaddrinfo()` and family, and thus we use that. The host
may not be able to resolve IPv6, but we do not really have to take that
into account. Hosts that are not IPv6-enabled have `CURLRES_IPV4` defined.

## `CURLRES_ARES`

is defined if libcurl is built to use c-ares for asynchronous name
resolves. This can be Windows or \*nix.

## `CURLRES_THREADED`

is defined if libcurl is built to use threading for asynchronous name
resolves. The name resolve will be done in a new thread, and the supported
asynch API will be the same as for ares-builds. This is the default under
(native) Windows.

If any of the two previous are defined, `CURLRES_ASYNCH` is defined too. If
libcurl is not built to use an asynchronous resolver, `CURLRES_SYNCH` is
defined.
## `host*.c` sources

The `host*.c` source files are split up like this:

- `hostip.c` - method-independent resolver functions and utility functions
- `hostasyn.c` - functions for asynchronous name resolves
- `hostsyn.c` - functions for synchronous name resolves
- `asyn-ares.c` - functions for asynchronous name resolves using c-ares
- `asyn-thread.c` - functions for asynchronous name resolves using threads
- `hostip4.c` - IPv4 specific functions
- `hostip6.c` - IPv6 specific functions

`hostip.h` is the single united header file for all this. It defines the `CURLRES_*` defines based on the `config*.h` and `curl_setup.h` defines.

<a name="memoryleak"></a>

Track Down Memory Leaks
=======================

## Single-threaded

Please note that this memory leak system is not adjusted to work in more than one thread. If you want or need to use it in a multi-threaded app, please adjust accordingly.

## Build

Rebuild libcurl with `-DCURLDEBUG` (usually, rerunning configure with `--enable-debug` fixes this). Run `make clean` first, then `make` so that all files are actually rebuilt properly. It will also make sense to build libcurl with the debug option (usually `-g` to the compiler) so that debugging it will be easier if you actually do find a leak in the library. This will create a library that has memory debugging enabled.

## Modify Your Application

Add a line in your application code:

```c
curl_dbg_memdebug("dump");
```

This will make the malloc debug system output a full trace of all resource-using functions to the given file name. Make sure you rebuild your program and that you link with the same libcurl you built for this purpose as described above.

## Run Your Application

Run your program as usual. Watch the specified memory trace file grow. Make your program exit and use the proper libcurl cleanup functions etc. so that all non-leaks are returned/freed properly.
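To make the idea behind the malloc-debug layer concrete, here is a minimal sketch of what such a layer does: wrap the allocator, log every call, and keep a live-allocation counter so a post-run analysis can flag anything that was never freed. This is not libcurl's actual memdebug code; `dbg_malloc`/`dbg_free` and the counter are illustrative names.

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical sketch of a malloc-debug layer, not libcurl's code. */
static long live_allocs = 0;

static void *dbg_malloc(size_t size, const char *file, int line)
{
  void *p = malloc(size);
  if(p) {
    live_allocs++; /* one more allocation outstanding */
    fprintf(stderr, "MEM %s:%d malloc(%zu) = %p\n", file, line, size, p);
  }
  return p;
}

static void dbg_free(void *p, const char *file, int line)
{
  if(p) {
    live_allocs--; /* one allocation returned */
    fprintf(stderr, "MEM %s:%d free(%p)\n", file, line, p);
  }
  free(p);
}
```

A leak then shows up as a nonzero `live_allocs` at exit, which is essentially what the trace-file analysis described next detects from the logged lines.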
## Analyze the Flow

Use the `tests/memanalyze.pl` perl script to analyze the dump file:

    tests/memanalyze.pl dump

This outputs a report on what resources were allocated but never freed, etc. This report is fine for posting to the list! If this does not produce any output, no leak was detected in libcurl; the leak is then most likely in your code.

<a name="multi_socket"></a>

`multi_socket`
==============

Implementation of the `curl_multi_socket` API

The main ideas of this API are simply:

1. The application can use whatever event system it likes as it gets info from libcurl about what file descriptors libcurl waits for what action on. (The previous API returns `fd_sets` which is `select()`-centric).

2. When the application discovers action on a single socket, it calls libcurl and informs it that there was action on this particular socket, and libcurl can then act on that socket/transfer only and not care about any other transfers. (The previous API always had to scan through all the existing transfers.)

The idea is that [`curl_multi_socket_action()`][7] calls a given callback with information about what socket to wait for what action on, and the callback only gets called if the status of that socket has changed. We also added a timer callback that makes libcurl call the application when the timeout value changes, and you set that with [`curl_multi_setopt()`][9] and the [`CURLMOPT_TIMERFUNCTION`][10] option. To get this to work, there is internally a struct added to each easy handle in which we store an "expire time" (if any). The structs are then "splay sorted" so that we can add and remove times from the linked list and yet somewhat swiftly figure out both how long there is until the next nearest timer expires and which timer (handle) we should take care of now. Of course, the upside of all this is that we get a [`curl_multi_timeout()`][8] that should also work with old-style applications that use [`curl_multi_perform()`][11].
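The expire-time bookkeeping above boils down to one question the multi handle must answer quickly: "how long until the nearest timer?". libcurl keeps the nodes splay-sorted for this; the sketch below, with made-up struct and function names, illustrates the same question answered by a simple scan over a small set of handles.

```c
/* Illustrative sketch, not libcurl's splay tree: each handle records an
   absolute expiry time (0 = no timer set) and we compute the time left
   until the nearest one. */
struct easy_sketch {
  long expire_ms; /* absolute expiry time in ms, 0 = no timer */
};

static long nearest_expiry(const struct easy_sketch *h, int n, long now_ms)
{
  long best = -1; /* -1 = no timer pending at all */
  int i;
  for(i = 0; i < n; i++) {
    if(h[i].expire_ms) {
      long left = h[i].expire_ms - now_ms;
      if(left < 0)
        left = 0; /* already expired: handle it now */
      if(best < 0 || left < best)
        best = left;
    }
  }
  return best;
}
```

The splay tree replaces this linear scan with logarithmic add/remove and constant-time access to the smallest expiry, which matters once many transfers are in flight.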
We created an internal "socket to easy handles" hash table that given a socket (file descriptor) returns the easy handle that waits for action on that socket. This hash is made using the already existing hash code (previously only used for the DNS cache). To make libcurl able to report plain sockets in the socket callback, we had to re-organize the internals of the [`curl_multi_fdset()`][12] etc so that the conversion from sockets to `fd_sets` for that function is only done in the last step before the data is returned. I also had to extend c-ares to get a function that can return plain sockets, as that library too returned only `fd_sets` and that is no longer good enough. The changes done to c-ares are available in c-ares 1.3.1 and later. <a name="structs"></a> Structs in libcurl ================== This section should cover 7.32.0 pretty accurately, but will make sense even for older and later versions as things do not change drastically that often. <a name="Curl_easy"></a> ## Curl_easy The `Curl_easy` struct is the one returned to the outside in the external API as a `CURL *`. This is usually known as an easy handle in API documentations and examples. Information and state that is related to the actual connection is in the `connectdata` struct. When a transfer is about to be made, libcurl will either create a new connection or re-use an existing one. The particular connectdata that is used by this handle is pointed out by `Curl_easy->easy_conn`. Data and information that regard this particular single transfer is put in the `SingleRequest` sub-struct. When the `Curl_easy` struct is added to a multi handle, as it must be in order to do any transfer, the `->multi` member will point to the `Curl_multi` struct it belongs to. The `->prev` and `->next` members will then be used by the multi code to keep a linked list of `Curl_easy` structs that are added to that same multi handle. 
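The "socket to easy handles" hash mentioned above exists so that when the event loop reports activity on a file descriptor, libcurl can jump straight to the transfer waiting on it. The toy below, with invented names, sketches that lookup with a fixed-size open-addressing table (libcurl uses its general hash code instead).

```c
#include <stddef.h>

/* Illustrative fd-to-handle map, not libcurl's hash implementation. */
#define SLOTS 16

struct sockmap {
  int fd[SLOTS];        /* -1 marks an empty slot */
  void *handle[SLOTS];
};

static void sockmap_init(struct sockmap *m)
{
  int i;
  for(i = 0; i < SLOTS; i++)
    m->fd[i] = -1;
}

static void sockmap_add(struct sockmap *m, int fd, void *handle)
{
  int i = fd % SLOTS;
  /* linear probing; this toy assumes the table never fills up */
  while(m->fd[i] != -1 && m->fd[i] != fd)
    i = (i + 1) % SLOTS;
  m->fd[i] = fd;
  m->handle[i] = handle;
}

static void *sockmap_find(struct sockmap *m, int fd)
{
  int i = fd % SLOTS;
  while(m->fd[i] != -1) {
    if(m->fd[i] == fd)
      return m->handle[i];
    i = (i + 1) % SLOTS;
  }
  return NULL; /* nobody waits on this descriptor */
}
```

This direct fd-to-transfer mapping is what lets `curl_multi_socket_action()` skip the full-transfer scan that the older `fd_set`-based API required.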
libcurl always uses multi so `->multi` *will* point to a `Curl_multi` when a transfer is in progress. `->mstate` is the multi state of this particular `Curl_easy`. When `multi_runsingle()` is called, it will act on this handle according to which state it is in. The mstate is also what tells which sockets to return for a specific `Curl_easy` when [`curl_multi_fdset()`][12] is called etc. The libcurl source code generally uses the name `data` for the variable that points to the `Curl_easy`. When doing multiplexed HTTP/2 transfers, each `Curl_easy` is associated with an individual stream, sharing the same connectdata struct. Multiplexing makes it even more important to keep things associated with the right thing!

<a name="connectdata"></a>

## connectdata

A general idea in libcurl is to keep connections around in a connection "cache" after they have been used, in case they will be used again, and then re-use an existing one instead of creating a new one, as this gives a significant performance boost. Each `connectdata` identifies a single physical connection to a server. If the connection cannot be kept alive, the connection will be closed after use and then this struct can be removed from the cache and freed. Thus, the same `Curl_easy` can be used multiple times and each time select another `connectdata` struct to use for the connection. Keep this in mind, as it is then important to consider whether options or choices are based on the connection or the `Curl_easy`. Functions in libcurl will assume that `connectdata->data` points to the `Curl_easy` that uses this connection (for the moment). As a special complexity, some protocols supported by libcurl require a special disconnect procedure that is more than just shutting down the socket. It can involve sending one or more commands to the server before doing so. Since connections are kept in the connection cache after use, the original `Curl_easy` may no longer be around when the time comes to shut down a particular connection.
For this purpose, libcurl holds a special dummy `closure_handle` `Curl_easy` in the `Curl_multi` struct to use when needed. FTP uses two TCP connections for a typical transfer, but it keeps both in this single struct and thus can be considered a single connection for most internal concerns. The libcurl source code generally uses the name `conn` for the variable that points to the connectdata.

<a name="Curl_multi"></a>

## Curl_multi

Internally, the easy interface is implemented as a wrapper around multi interface functions. This makes everything multi interface. `Curl_multi` is the multi handle struct exposed as `CURLM *` in external APIs. This struct holds a list of `Curl_easy` structs that have been added to this handle with [`curl_multi_add_handle()`][13]. The start of the list is `->easyp` and `->num_easy` is a counter of added `Curl_easy`s.

`->msglist` is a linked list of messages to send back when [`curl_multi_info_read()`][14] is called. Basically a node is added to that list when an individual `Curl_easy`'s transfer has completed.

`->hostcache` points to the name cache. It is a hash table for looking up names to IP addresses. The nodes have a limited lifetime in there and this cache is meant to reduce the time for when the same name is wanted again within a short period of time.

`->timetree` points to a tree of `Curl_easy`s, sorted by the remaining time until it should be checked - normally some sort of timeout. Each `Curl_easy` has one node in the tree.

`->sockhash` is a hash table to allow fast lookups from a socket descriptor to the `Curl_easy` that uses that descriptor. This is necessary for the `multi_socket` API.

`->conn_cache` points to the connection cache. It keeps track of all connections that are kept after use. The cache has a maximum size.

`->closure_handle` is described in the `connectdata` section.

The libcurl source code generally uses the name `multi` for the variable that points to the `Curl_multi` struct.
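The `->easyp` list and `->num_easy` counter described above can be sketched as a plain doubly-linked list insert. The struct layouts and the `multi_add` name below are illustrative only, not libcurl's actual definitions.

```c
#include <stddef.h>

/* Illustrative sketch of linking an easy handle into a multi handle's
   list; not libcurl's real struct layout. */
struct easy {
  struct easy *next;
  struct easy *prev;
};

struct multi {
  struct easy *easyp; /* head of the list of added easy handles */
  int num_easy;       /* how many have been added */
};

static void multi_add(struct multi *m, struct easy *e)
{
  /* push onto the front of the list */
  e->prev = NULL;
  e->next = m->easyp;
  if(m->easyp)
    m->easyp->prev = e;
  m->easyp = e;
  m->num_easy++;
}
```

Removal is the mirror operation, and because the list is doubly linked a handle can be unlinked in constant time when `curl_multi_remove_handle()` is called.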
<a name="Curl_handler"></a>

## Curl_handler

Each unique protocol that is supported by libcurl needs to provide at least one `Curl_handler` struct. It defines what the protocol is called and what functions the main code should call to deal with protocol specific issues. In general, there's a source file named `[protocol].c` in which there's a `struct Curl_handler Curl_handler_[protocol]` declared. In `url.c` there's then a single main array with pointers to all individual `Curl_handler` structs, which is scanned through when a URL is given to libcurl to work with. The concrete function pointer prototypes can be found in `lib/urldata.h`.

`->scheme` is the URL scheme name, usually spelled out in uppercase. That is "HTTP" or "FTP" etc. SSL versions of the protocol need their own `Curl_handler` setup, so HTTPS is set up separately from HTTP.

`->setup_connection` is called to allow the protocol code to allocate protocol specific data that then gets associated with that `Curl_easy` for the rest of this transfer. It gets freed again at the end of the transfer. It will be called before the `connectdata` for the transfer has been selected/created. Most protocols will allocate their private `struct [PROTOCOL]` here and assign `Curl_easy->req.p.[protocol]` to it.

`->connect_it` allows a protocol to do some specific actions after the TCP connect is done, that can still be considered part of the connection phase. Some protocols will alter the `connectdata->recv[]` and `connectdata->send[]` function pointers in this function.

`->connecting` is similarly a function that keeps getting called as long as the protocol considers itself still in the connecting phase.

`->do_it` is the function called to issue the transfer request. What we call the DO action internally. If the DO is not enough and things need to be kept getting done for the entire DO sequence to complete, `->doing` is then usually also provided.
Each protocol that needs to do multiple commands or similar for do/doing needs to implement its own state machine (see SCP, SFTP, FTP). Some protocols (only FTP, and only due to historical reasons) have a separate piece of the DO state called `DO_MORE`.

`->doing` keeps getting called while issuing the transfer request command(s).

`->done` gets called when the transfer is complete and DONE. That is after the main data has been transferred.

`->do_more` gets called during the `DO_MORE` state. The FTP protocol uses this state when setting up the second connection.

`->proto_getsock`, `->doing_getsock`, `->domore_getsock` and `->perform_getsock` are functions that return socket information: which socket(s) to wait for which I/O action(s) during the particular multi state.

`->disconnect` is called immediately before the TCP connection is shut down.

`->readwrite` gets called during transfer to allow the protocol to do extra reads/writes.

`->attach` attaches a transfer to the connection.

`->defport` is the default TCP or UDP port this protocol uses.

`->protocol` is one or more bits in the `CURLPROTO_*` set. The SSL versions have their "base" protocol set and then the SSL variation. Like "HTTP|HTTPS".

`->flags` is a bitmask with additional information about the protocol that will make it get treated differently by the generic engine:

- `PROTOPT_SSL` - will make it connect and negotiate SSL
- `PROTOPT_DUAL` - this protocol uses two connections
- `PROTOPT_CLOSEACTION` - this protocol has actions to do before closing the connection. This flag is no longer used by code, yet still set for a bunch of protocol handlers.
- `PROTOPT_DIRLOCK` - "direction lock". The SSH protocols set this bit to limit which "direction" of socket actions that the main engine will concern itself with.
- `PROTOPT_NONETWORK` - a protocol that does not use network (read `file:`) - `PROTOPT_NEEDSPWD` - this protocol needs a password and will use a default one unless one is provided - `PROTOPT_NOURLQUERY` - this protocol cannot handle a query part on the URL (?foo=bar) <a name="conncache"></a> ## conncache Is a hash table with connections for later re-use. Each `Curl_easy` has a pointer to its connection cache. Each multi handle sets up a connection cache that all added `Curl_easy`s share by default. <a name="Curl_share"></a> ## Curl_share The libcurl share API allocates a `Curl_share` struct, exposed to the external API as `CURLSH *`. The idea is that the struct can have a set of its own versions of caches and pools and then by providing this struct in the `CURLOPT_SHARE` option, those specific `Curl_easy`s will use the caches/pools that this share handle holds. Then individual `Curl_easy` structs can be made to share specific things that they otherwise would not, such as cookies. The `Curl_share` struct can currently hold cookies, DNS cache and the SSL session cache. <a name="CookieInfo"></a> ## CookieInfo This is the main cookie struct. It holds all known cookies and related information. Each `Curl_easy` has its own private `CookieInfo` even when they are added to a multi handle. They can be made to share cookies by using the share API. 
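The handler-array scan in `url.c` and the `PROTOPT_*` bitmask tests described in this section can be sketched as follows. Everything here is illustrative: the trimmed-down struct, the flag values, and `find_handler` are made up for the example and are not libcurl's definitions.

```c
#include <string.h>
#include <stddef.h>

/* Illustrative handler table, not libcurl's actual Curl_handler. */
#define PROTOPT_SSL  (1u << 0)
#define PROTOPT_DUAL (1u << 1)

struct handler_sketch {
  const char *scheme;  /* uppercase scheme name */
  int defport;         /* default port for this protocol */
  unsigned int flags;  /* PROTOPT_* bits */
};

static const struct handler_sketch handlers[] = {
  { "HTTP",  80,  0 },
  { "HTTPS", 443, PROTOPT_SSL },
  { "FTP",   21,  PROTOPT_DUAL },
};

/* scan the array for a matching scheme, like url.c does */
static const struct handler_sketch *find_handler(const char *scheme)
{
  size_t i;
  for(i = 0; i < sizeof(handlers) / sizeof(handlers[0]); i++)
    if(strcmp(handlers[i].scheme, scheme) == 0)
      return &handlers[i];
  return NULL; /* unsupported scheme */
}
```

Once a handler is selected, the generic engine only needs bit tests like `handler->flags & PROTOPT_SSL` to decide whether to negotiate TLS, keep two connections, and so on.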
[1]: https://curl.se/libcurl/c/curl_easy_setopt.html [2]: https://curl.se/libcurl/c/curl_easy_init.html [3]: https://c-ares.org/ [4]: https://tools.ietf.org/html/rfc7230 "RFC 7230" [5]: https://curl.se/libcurl/c/CURLOPT_ACCEPT_ENCODING.html [6]: https://curl.se/docs/manpage.html#--compressed [7]: https://curl.se/libcurl/c/curl_multi_socket_action.html [8]: https://curl.se/libcurl/c/curl_multi_timeout.html [9]: https://curl.se/libcurl/c/curl_multi_setopt.html [10]: https://curl.se/libcurl/c/CURLMOPT_TIMERFUNCTION.html [11]: https://curl.se/libcurl/c/curl_multi_perform.html [12]: https://curl.se/libcurl/c/curl_multi_fdset.html [13]: https://curl.se/libcurl/c/curl_multi_add_handle.html [14]: https://curl.se/libcurl/c/curl_multi_info_read.html [15]: https://tools.ietf.org/html/rfc7231#section-3.1.2.2
repos/gpt4all.zig/src/zig-libcurl/curl/docs/INSTALL.cmake
                                  _   _ ____  _
                              ___| | | |  _ \| |
                             / __| | | | |_) | |
                            | (__| |_| |  _ <| |___
                             \___|\___/|_| \_\_____|

                        How To Compile with CMake

Building with CMake
==========================

This document describes how to compile, build and install curl and libcurl from source code using the CMake build tool. To build with CMake, you will of course have to first install CMake. The minimum required version of CMake is specified in the file CMakeLists.txt found in the top of the curl source tree. Once the correct version of CMake is installed you can follow the instructions below for the platform you are building on.

CMake builds can be configured either from the command line, or from one of CMake's GUIs.

Current flaws in the curl CMake build
=====================================

Missing features in the cmake build:

- Builds libcurl without large file support
- Does not support all SSL libraries (only OpenSSL, Schannel, Secure Transport, mbedTLS, NSS, wolfSSL)
- Does not allow different resolver backends (no c-ares build support)
- No RTMP support built
- Does not allow building curl and libcurl with debug enabled
- Does not allow a custom CA bundle path
- Does not allow you to disable specific protocols from the build
- Does not find or use krb4 or GSS
- Rebuilds test files too eagerly, but still cannot run the tests
- Does not detect the correct strerror_r flavor when cross-compiling (issue #1123)

Command Line CMake
==================

A CMake build of curl is similar to the autotools build of curl. It consists of the following steps after you have unpacked the source.

1. Create an out of source build tree parallel to the curl source tree and change into that directory

    $ mkdir curl-build
    $ cd curl-build

2. Run CMake from the build tree, giving it the path to the top of the curl source tree. CMake will pick a compiler for you. If you want to specify the compiler, you can set the CC environment variable prior to running CMake.

    $ cmake ../curl
    $ make

3.
Install to default location:

    $ make install

(The test suite does not work with the cmake build)

ccmake
=========

CMake comes with a curses-based interface called ccmake. To run ccmake on curl, use the instructions for the command line cmake, but substitute ccmake ../curl for cmake ../curl. This will bring up a curses interface with instructions on the bottom of the screen. You can press the "c" key to configure the project, and the "g" key to generate the project. After the project is generated, you can run make.

cmake-gui
=========

CMake also comes with a Qt based GUI called cmake-gui. To configure with cmake-gui, you run cmake-gui and follow these steps:

1. Fill in the "Where is the source code" combo box with the path to the curl source tree.
2. Fill in the "Where to build the binaries" combo box with the path to the directory for your build tree, ideally this should not be the same as the source tree, but a parallel directory called curl-build or something similar.
3. Once the source and binary directories are specified, press the "Configure" button.
4. Select the native build tool that you want to use.
5. At this point you can change any of the options presented in the GUI. Once you have selected all the options you want, click the "Generate" button.
6. Run the native build tool that you used CMake to generate.
repos/gpt4all.zig/src/zig-libcurl/curl/docs/CONTRIBUTE.md
# Contributing to the curl project This document is intended to offer guidelines on how to best contribute to the curl project. This concerns new features as well as corrections to existing flaws or bugs. ## Learning curl ### Join the Community Skip over to [https://curl.se/mail/](https://curl.se/mail/) and join the appropriate mailing list(s). Read up on details before you post questions. Read this file before you start sending patches! We prefer questions sent to and discussions being held on the mailing list(s), not sent to individuals. Before posting to one of the curl mailing lists, please read up on the [mailing list etiquette](https://curl.se/mail/etiquette.html). We also hang out on IRC in #curl on libera.chat If you are at all interested in the code side of things, consider clicking 'watch' on the [curl repo on GitHub](https://github.com/curl/curl) to be notified of pull requests and new issues posted there. ### License and copyright When contributing with code, you agree to put your changes and new code under the same license curl and libcurl is already using unless stated and agreed otherwise. If you add a larger piece of code, you can opt to make that file or set of files to use a different license as long as they do not enforce any changes to the rest of the package and they make sense. Such "separate parts" can not be GPL licensed (as we do not want copyleft to affect users of libcurl) but they must use "GPL compatible" licenses (as we want to allow users to use libcurl properly in GPL licensed environments). When changing existing source code, you do not alter the copyright of the original file(s). The copyright will still be owned by the original creator(s) or those who have been assigned copyright by the original author(s). By submitting a patch to the curl project, you are assumed to have the right to the code and to be allowed by your employer or whatever to hand over that patch/code to us. 
We will credit you for your changes as far as possible, to give credit but also to keep a trace back to who made what changes. Please always provide us with your full real name when contributing! ### What To Read Source code, the man pages, the [INTERNALS document](https://curl.se/dev/internals.html), [TODO](https://curl.se/docs/todo.html), [KNOWN_BUGS](https://curl.se/docs/knownbugs.html) and the [most recent changes](https://curl.se/dev/sourceactivity.html) in git. Just lurking on the [curl-library mailing list](https://curl.se/mail/list.cgi?list=curl-library) will give you a lot of insights on what's going on right now. Asking there is a good idea too. ## Write a good patch ### Follow code style When writing C code, follow the [CODE_STYLE](https://curl.se/dev/code-style.html) already established in the project. Consistent style makes code easier to read and mistakes less likely to happen. Run `make checksrc` before you submit anything, to make sure you follow the basic style. That script does not verify everything, but if it complains you know you have work to do. ### Non-clobbering All Over When you write new functionality or fix bugs, it is important that you do not fiddle all over the source files and functions. Remember that it is likely that other people have done changes in the same source files as you have and possibly even in the same functions. If you bring completely new functionality, try writing it in a new source file. If you fix bugs, try to fix one bug at a time and send them as separate patches. ### Write Separate Changes It is annoying when you get a huge patch from someone that is said to fix 511 odd problems, but discussions and opinions do not agree with 510 of them - or 509 of them were already fixed in a different way. Then the person merging this change needs to extract the single interesting patch from somewhere within the huge pile of source, and that creates a lot of extra work. 
Preferably, each fix that corrects a problem should be in its own patch/commit with its own description/commit message stating exactly what they correct so that all changes can be selectively applied by the maintainer or other interested parties. Also, separate changes enable bisecting much better for tracking problems and regression in the future. ### Patch Against Recent Sources Please try to get the latest available sources to make your patches against. It makes the lives of the developers so much easier. The best is if you get the most up-to-date sources from the git repository, but the latest release archive is quite OK as well! ### Documentation Writing docs is dead boring and one of the big problems with many open source projects. But someone's gotta do it! It makes things a lot easier if you submit a small description of your fix or your new features with every contribution so that it can be swiftly added to the package documentation. The documentation is always made in man pages (nroff formatted) or plain ASCII files. All HTML files on the website and in the release archives are generated from the nroff/ASCII versions. ### Test Cases Since the introduction of the test suite, we can quickly verify that the main features are working as they are supposed to. To maintain this situation and improve it, all new features and functions that are added need to be tested in the test suite. Every feature that is added should get at least one valid test case that verifies that it works as documented. If every submitter also posts a few test cases, it will not end up as a heavy burden on a single person! If you do not have test cases or perhaps you have done something that is hard to write tests for, do explain exactly how you have otherwise tested and verified your changes. 
## Sharing Your Changes ### How to get your changes into the main sources Ideally you file a [pull request on GitHub](https://github.com/curl/curl/pulls), but you can also send your plain patch to [the curl-library mailing list](https://curl.se/mail/list.cgi?list=curl-library). Either way, your change will be reviewed and discussed there and you will be expected to correct flaws pointed out and update accordingly, or the change risks stalling and eventually just getting deleted without action. As a submitter of a change, you are the owner of that change until it has been merged. Respond on the list or on github about the change and answer questions and/or fix nits/flaws. This is important. We will take lack of replies as a sign that you are not anxious to get your patch accepted and we tend to simply drop such changes. ### About pull requests With github it is easy to send a [pull request](https://github.com/curl/curl/pulls) to the curl project to have changes merged. We strongly prefer pull requests to mailed patches, as it makes it a proper git commit that is easy to merge and they are easy to track and not that easy to lose in the flood of many emails, like they sometimes do on the mailing lists. Every pull request submitted will automatically be tested in several different ways. Every pull request is verified for each of the following: - ... it still builds, warning-free, on Linux and macOS, with both clang and gcc - ... it still builds fine on Windows with several MSVC versions - ... it still builds with cmake on Linux, with gcc and clang - ... it follows rudimentary code style rules - ... the test suite still runs 100% fine - ... the release tarball (the "dist") still works - ... it builds fine in-tree as well as out-of-tree - ... code coverage does not shrink drastically If the pull-request fails one of these tests, it will show up as a red X and you are expected to fix the problem. 
If you do not understand what the issue is or have other problems fixing the complaint, just ask and other project members will likely be able to help out.

Consider the following table while looking at pull request failures:

| CI platform as shown in PR          | State  | What to look at next       |
| ----------------------------------- | ------ | -------------------------- |
| CI / codeql                         | stable | quality check results      |
| CI / fuzzing                        | stable | fuzzing results            |
| CI / macos ...                      | stable | all errors and failures    |
| Code scanning results / CodeQL      | stable | quality check results      |
| FreeBSD FreeBSD: ...                | stable | all errors and failures    |
| LGTM analysis: Python               | stable | new findings               |
| LGTM analysis: C/C++                | stable | new findings               |
| buildbot/curl_winssl_ ...           | stable | all errors and failures    |
| continuous-integration/appveyor/pr  | stable | all errors and failures    |
| curl.curl (linux ...)               | stable | all errors and failures    |
| curl.curl (windows ...)             | flaky  | repetitive errors/failures |
| deepcode-ci-bot                     | stable | new findings               |
| musedev                             | stable | new findings               |

Sometimes the tests fail due to a dependency service temporarily being offline or otherwise unavailable, e.g. package downloads. In this case you can just try to update your pull requests to rerun the tests later as described below.

You can update your pull requests by pushing new commits or force-pushing changes to existing commits. Force-pushing an amended commit without any actual content changes also allows you to retrigger the tests for that commit.

When you adjust your pull requests after review, consider squashing the commits so that we can review the full updated version more easily.

### Making quality patches

Make the patch against as recent source versions as possible. If you have followed the tips in this document and your patch still has not been incorporated or responded to after some weeks, consider resubmitting it to the list or better yet: change it to a pull request.
### Write good commit messages

A short guide to how to write commit messages in the curl project.

    ---- start ----
    [area]: [short line describing the main effect]
           -- empty line --
    [full description, no wider than 72 columns that describe as much as
    possible as to why this change is made, and possibly what things it
    fixes and everything else that is related]
           -- empty line --
    [Closes/Fixes #1234 - if this closes or fixes a github issue]
    [Bug: URL to source of the report or more related discussion]
    [Reported-by: John Doe - credit the reporter]
    [whatever-else-by: credit all helpers, finders, doers]
    ---- stop ----

The first line is a succinct description of the change:

- use the imperative, present tense: "change" not "changed" nor "changes"
- do not capitalize the first letter
- no dot (.) at the end

The `[area]` in the first line can be `http2`, `cookies`, `openssl` or similar. There's no fixed list to select from, but using the same "area" as other related changes could make sense.

Do not forget to use `commit --author=""` if you commit someone else's work, and make sure that you have your own user and email set up correctly in git before you commit.

### Write Access to git Repository

If you are a frequent contributor, you may be given push access to the git repository and then you will be able to push your changes straight into the git repo instead of sending changes as pull requests or by mail as patches. Just ask if this is what you would want. You will be required to have posted several high quality patches first, before you can be granted push access.

### How To Make a Patch with git

You need to first checkout the repository:

    git clone https://github.com/curl/curl.git

You then proceed and edit all the files you like and you commit them to your local repository:

    git commit [file]

As usual, group your commits so that you commit all changes at once that constitute a logical change.
Once you have done all your commits and you are happy with what you see, you can make patches out of your changes that are suitable for mailing: git format-patch remotes/origin/master This creates files in your local directory named NNNN-[name].patch for each commit. Now send those patches off to the curl-library list. You can of course opt to do that with the 'git send-email' command. ### How To Make a Patch without git Keep a copy of the unmodified curl sources. Make your changes in a separate source tree. When you think you have something that you want to offer the curl community, use GNU diff to generate patches. If you have modified a single file, try something like: diff -u unmodified-file.c my-changed-one.c > my-fixes.diff If you have modified several files, possibly in different directories, you can use diff recursively: diff -ur curl-original-dir curl-modified-sources-dir > my-fixes.diff The GNU diff and GNU patch tools exist for virtually all platforms, including all kinds of Unixes and Windows: For unix-like operating systems: - [https://savannah.gnu.org/projects/patch/](https://savannah.gnu.org/projects/patch/) - [https://www.gnu.org/software/diffutils/](https://www.gnu.org/software/diffutils/) For Windows: - [https://gnuwin32.sourceforge.io/packages/patch.htm](https://gnuwin32.sourceforge.io/packages/patch.htm) - [https://gnuwin32.sourceforge.io/packages/diffutils.htm](https://gnuwin32.sourceforge.io/packages/diffutils.htm) ### Useful resources - [Webinar on getting code into cURL](https://www.youtube.com/watch?v=QmZ3W1d6LQI)
# how to install curl and libcurl ## Installing Binary Packages Lots of people download binary distributions of curl and libcurl. This document does not describe how to install curl or libcurl using such a binary package. This document describes how to compile, build and install curl and libcurl from source code. ## Building using vcpkg You can download and install curl and libcurl using the [vcpkg](https://github.com/Microsoft/vcpkg/) dependency manager: git clone https://github.com/Microsoft/vcpkg.git cd vcpkg ./bootstrap-vcpkg.sh ./vcpkg integrate install vcpkg install curl[tool] The curl port in vcpkg is kept up to date by Microsoft team members and community contributors. If the version is out of date, please [create an issue or pull request](https://github.com/Microsoft/vcpkg) on the vcpkg repository. ## Building from git If you get your code off a git repository instead of a release tarball, see the `GIT-INFO` file in the root directory for specific instructions on how to proceed. # Unix A normal Unix installation is made in three or four steps (after you have unpacked the source archive): ./configure --with-openssl [--with-gnutls --with-wolfssl] make make test (optional) make install (Adjust the configure line accordingly to use the TLS library you want.) You probably need to be root when doing the last command. Get a full listing of all available configure options by invoking it like: ./configure --help If you want to install curl in a different file hierarchy than `/usr/local`, specify that when running configure: ./configure --prefix=/path/to/curl/tree If you have write permission in that directory, you can do 'make install' without being root. An example of this would be to make a local install in your own home directory: ./configure --prefix=$HOME make make install The configure script always tries to find a working SSL library unless explicitly told not to. 
If you have OpenSSL installed in the default search path for your compiler/linker, you do not need to do anything special. If you have OpenSSL installed in `/usr/local/ssl`, you can run configure like: ./configure --with-openssl If you have OpenSSL installed somewhere else (for example, `/opt/OpenSSL`) and you have pkg-config installed, set the pkg-config path first, like this: env PKG_CONFIG_PATH=/opt/OpenSSL/lib/pkgconfig ./configure --with-openssl Without pkg-config installed, use this: ./configure --with-openssl=/opt/OpenSSL If you insist on forcing a build without SSL support, even though you may have OpenSSL installed in your system, you can run configure like this: ./configure --without-ssl If you have OpenSSL installed, but with the libraries in one place and the header files somewhere else, you have to set the `LDFLAGS` and `CPPFLAGS` environment variables prior to running configure. Something like this should work: CPPFLAGS="-I/path/to/ssl/include" LDFLAGS="-L/path/to/ssl/lib" ./configure If you have shared SSL libs installed in a directory where your run-time linker does not find them (which usually causes configure failures), you can provide this option to gcc to set a hard-coded path to the run-time linker: LDFLAGS=-Wl,-R/usr/local/ssl/lib ./configure --with-openssl ## More Options To force a static library compile, disable the shared library creation by running configure like: ./configure --disable-shared To tell the configure script to skip searching for thread-safe functions, add an option like: ./configure --disable-thread If you are a curl developer and use gcc, you might want to enable more debug options with the `--enable-debug` option. curl can be built to use a whole range of libraries to provide various useful services, and configure will try to auto-detect a decent default. But if you want to alter it, you can select how to deal with each individual library. ## Select TLS backend These options are provided to select TLS backend to use. 
- AmiSSL: `--with-amissl`
- BearSSL: `--with-bearssl`
- GnuTLS: `--with-gnutls`
- mbedTLS: `--with-mbedtls`
- MesaLink: `--with-mesalink`
- NSS: `--with-nss`
- OpenSSL: `--with-openssl` (also for BoringSSL and libressl)
- rustls: `--with-rustls`
- schannel: `--with-schannel`
- secure transport: `--with-secure-transport`
- wolfSSL: `--with-wolfssl`

# Windows

## Building Windows DLLs and C run-time (CRT) linkage issues

As a general rule, building a DLL with static CRT linkage is highly discouraged, and intermixing CRTs in the same app is something to avoid at any cost.

Reading and comprehending Microsoft Knowledge Base articles KB94248 and KB140584 is a must for any Windows developer. Especially important is full understanding if you are not going to follow the advice given above.

- [How To Use the C Run-Time](https://support.microsoft.com/help/94248/how-to-use-the-c-run-time)
- [Run-Time Library Compiler Options](https://docs.microsoft.com/cpp/build/reference/md-mt-ld-use-run-time-library)
- [Potential Errors Passing CRT Objects Across DLL Boundaries](https://docs.microsoft.com/cpp/c-runtime-library/potential-errors-passing-crt-objects-across-dll-boundaries)

If your app is misbehaving in some strange way, or it is suffering from memory corruption, before asking for further help, please try first to rebuild every single library your app uses as well as your app using the debug multithreaded dynamic C runtime.

If you get linkage errors, read section 5.7 of the FAQ document.

## MinGW32

Make sure that MinGW32's bin dir is in the search path, for example:

```cmd
set PATH=c:\mingw32\bin;%PATH%
```

then run `mingw32-make mingw32` in the root dir.
There are other make targets available to build libcurl with more features, use: - `mingw32-make mingw32-zlib` to build with Zlib support; - `mingw32-make mingw32-ssl-zlib` to build with SSL and Zlib enabled; - `mingw32-make mingw32-ssh2-ssl-zlib` to build with SSH2, SSL, Zlib; - `mingw32-make mingw32-ssh2-ssl-sspi-zlib` to build with SSH2, SSL, Zlib and SSPI support. If you have any problems linking libraries or finding header files, be sure to verify that the provided `Makefile.m32` files use the proper paths, and adjust as necessary. It is also possible to override these paths with environment variables, for example: ```cmd set ZLIB_PATH=c:\zlib-1.2.8 set OPENSSL_PATH=c:\openssl-1.0.2c set LIBSSH2_PATH=c:\libssh2-1.6.0 ``` It is also possible to build with other LDAP SDKs than MS LDAP; currently it is possible to build with native Win32 OpenLDAP, or with the Novell CLDAP SDK. If you want to use these you need to set these vars: ```cmd set LDAP_SDK=c:\openldap set USE_LDAP_OPENLDAP=1 ``` or for using the Novell SDK: ```cmd set USE_LDAP_NOVELL=1 ``` If you want to enable LDAPS support then set LDAPS=1. ## Cygwin Almost identical to the unix installation. Run the configure script in the curl source tree root with `sh configure`. Make sure you have the `sh` executable in `/bin/` or you will see the configure fail toward the end. Run `make` ## Disabling Specific Protocols in Windows builds The configure utility, unfortunately, is not available for the Windows environment, therefore, you cannot use the various disable-protocol options of the configure utility on this platform. You can use specific defines to disable specific protocols and features. See [CURL-DISABLE.md](CURL-DISABLE.md) for the full list. 
If you want to set any of these defines you have the following options: - Modify `lib/config-win32.h` - Modify `lib/curl_setup.h` - Modify `winbuild/Makefile.vc` - Modify the "Preprocessor Definitions" in the libcurl project Note: The pre-processor settings can be found using the Visual Studio IDE under "Project -> Settings -> C/C++ -> General" in VC6 and "Project -> Properties -> Configuration Properties -> C/C++ -> Preprocessor" in later versions. ## Using BSD-style lwIP instead of Winsock TCP/IP stack in Win32 builds In order to compile libcurl and curl using BSD-style lwIP TCP/IP stack it is necessary to make definition of preprocessor symbol `USE_LWIPSOCK` visible to libcurl and curl compilation processes. To set this definition you have the following alternatives: - Modify `lib/config-win32.h` and `src/config-win32.h` - Modify `winbuild/Makefile.vc` - Modify the "Preprocessor Definitions" in the libcurl project Note: The pre-processor settings can be found using the Visual Studio IDE under "Project -> Settings -> C/C++ -> General" in VC6 and "Project -> Properties -> Configuration Properties -> C/C++ -> Preprocessor" in later versions. Once that libcurl has been built with BSD-style lwIP TCP/IP stack support, in order to use it with your program it is mandatory that your program includes lwIP header file `<lwip/opt.h>` (or another lwIP header that includes this) before including any libcurl header. Your program does not need the `USE_LWIPSOCK` preprocessor definition which is for libcurl internals only. Compilation has been verified with [lwIP 1.4.0](https://download.savannah.gnu.org/releases/lwip/lwip-1.4.0.zip) and [contrib-1.4.0](https://download.savannah.gnu.org/releases/lwip/contrib-1.4.0.zip). This BSD-style lwIP TCP/IP stack support must be considered experimental given that it has been verified that lwIP 1.4.0 still needs some polish, and libcurl might yet need some additional adjustment, caveat emptor. 
## Important static libcurl usage note When building an application that uses the static libcurl library on Windows, you must add `-DCURL_STATICLIB` to your `CFLAGS`. Otherwise the linker will look for dynamic import symbols. ## Legacy Windows and SSL Schannel (from Windows SSPI), is the native SSL library in Windows. However, Schannel in Windows <= XP is unable to connect to servers that no longer support the legacy handshakes and algorithms used by those versions. If you will be using curl in one of those earlier versions of Windows you should choose another SSL backend such as OpenSSL. # Apple Platforms (macOS, iOS, tvOS, watchOS, and their simulator counterparts) On modern Apple operating systems, curl can be built to use Apple's SSL/TLS implementation, Secure Transport, instead of OpenSSL. To build with Secure Transport for SSL/TLS, use the configure option `--with-secure-transport`. (It is not necessary to use the option `--without-openssl`.) When Secure Transport is in use, the curl options `--cacert` and `--capath` and their libcurl equivalents, will be ignored, because Secure Transport uses the certificates stored in the Keychain to evaluate whether or not to trust the server. This, of course, includes the root certificates that ship with the OS. The `--cert` and `--engine` options, and their libcurl equivalents, are currently unimplemented in curl with Secure Transport. 
In general, a curl build for an Apple `ARCH/SDK/DEPLOYMENT_TARGET` combination can be made by providing appropriate values for `ARCH`, `SDK` and `DEPLOYMENT_TARGET` below and running the commands:

```bash
# Set these three according to your needs
export ARCH=x86_64
export SDK=macosx
export DEPLOYMENT_TARGET=10.8

export CFLAGS="-arch $ARCH -isysroot $(xcrun -sdk $SDK --show-sdk-path) -m$SDK-version-min=$DEPLOYMENT_TARGET"
./configure --host=$ARCH-apple-darwin --prefix $(pwd)/artifacts --with-secure-transport
make -j8
make install
```

The above will build curl for the macOS platform with the `x86_64` architecture and `10.8` as the deployment target.

Here is an example for an iOS device:

```bash
export ARCH=arm64
export SDK=iphoneos
export DEPLOYMENT_TARGET=11.0

export CFLAGS="-arch $ARCH -isysroot $(xcrun -sdk $SDK --show-sdk-path) -m$SDK-version-min=$DEPLOYMENT_TARGET"
./configure --host=$ARCH-apple-darwin --prefix $(pwd)/artifacts --with-secure-transport
make -j8
make install
```

Another example, for the watchOS simulator on Macs with Apple Silicon:

```bash
export ARCH=arm64
export SDK=watchsimulator
export DEPLOYMENT_TARGET=5.0

export CFLAGS="-arch $ARCH -isysroot $(xcrun -sdk $SDK --show-sdk-path) -m$SDK-version-min=$DEPLOYMENT_TARGET"
./configure --host=$ARCH-apple-darwin --prefix $(pwd)/artifacts --with-secure-transport
make -j8
make install
```

In all of the above, the built libraries and executables can be found in the `artifacts` folder.

# Android

When building curl for Android it's recommended to use a Linux environment since using curl's `configure` script is the easiest way to build curl for Android. Before you can build curl for Android, you need to install the Android NDK first. This can be done using the SDK Manager that is part of Android Studio. Once you have installed the Android NDK, you need to figure out where it has been installed and then set up some environment variables before launching `configure`.
On macOS, those variables could look like this to compile for `aarch64` and API level 29: ```bash export NDK=~/Library/Android/sdk/ndk/20.1.5948944 export HOST_TAG=darwin-x86_64 export TOOLCHAIN=$NDK/toolchains/llvm/prebuilt/$HOST_TAG export AR=$TOOLCHAIN/bin/aarch64-linux-android-ar export AS=$TOOLCHAIN/bin/aarch64-linux-android-as export CC=$TOOLCHAIN/bin/aarch64-linux-android29-clang export CXX=$TOOLCHAIN/bin/aarch64-linux-android29-clang++ export LD=$TOOLCHAIN/bin/aarch64-linux-android-ld export RANLIB=$TOOLCHAIN/bin/aarch64-linux-android-ranlib export STRIP=$TOOLCHAIN/bin/aarch64-linux-android-strip ``` When building on Linux or targeting other API levels or architectures, you need to adjust those variables accordingly. After that you can build curl like this: ./configure --host aarch64-linux-android --with-pic --disable-shared Note that this will not give you SSL/TLS support. If you need SSL/TLS, you have to build curl against a SSL/TLS layer, e.g. OpenSSL, because it's impossible for curl to access Android's native SSL/TLS layer. To build curl for Android using OpenSSL, follow the OpenSSL build instructions and then install `libssl.a` and `libcrypto.a` to `$TOOLCHAIN/sysroot/usr/lib` and copy `include/openssl` to `$TOOLCHAIN/sysroot/usr/include`. Now you can build curl for Android using OpenSSL like this: ./configure --host aarch64-linux-android --with-pic --disable-shared --with-openssl="$TOOLCHAIN/sysroot/usr" Note, however, that you must target at least Android M (API level 23) or `configure` will not be able to detect OpenSSL since `stderr` (and the like) were not defined before Android M. # IBM i For IBM i (formerly OS/400), you can use curl in two different ways: - Natively, running in the **ILE**. The obvious use is being able to call curl from ILE C or RPG applications. - You will need to build this from source. See `packages/OS400/README` for the ILE specific build instructions. - In the **PASE** environment, which runs AIX programs. 
curl will be built as it would be on AIX.
  - IBM provides builds of curl in their Yum repository for PASE software.
  - To build from source, follow the Unix instructions.

There are some additional limitations and quirks with curl on this platform; they affect both environments.

## Multithreading notes

By default, jobs in IBM i will not start with threading enabled. (Exceptions include interactive PASE sessions started by `QP2TERM` or SSH.) If you use curl in an environment without threading when options like async DNS are enabled, you will see messages like:

```
getaddrinfo() thread failed to start
```

Do not panic! curl and your program are not broken. You can fix this by either:

- Setting the environment variable `QIBM_MULTI_THREADED` to `Y` before starting your program. This can be done at whatever scope you feel is appropriate.
- Alternatively, starting the job with the `ALWMLTTHD` parameter set to `*YES`.

# Cross compile

Download and unpack the curl package. `cd` to the new directory (e.g. `cd curl-7.12.3`). Set environment variables to point to the cross-compile toolchain and call configure with any options you need. Be sure to specify the `--host` and `--build` parameters at configuration time. The following script is an example of cross-compiling for the IBM 405GP PowerPC processor using the toolchain from MonteVista for Hardhat Linux.

```bash
#! /bin/sh

export PATH=$PATH:/opt/hardhat/devkit/ppc/405/bin
export CPPFLAGS="-I/opt/hardhat/devkit/ppc/405/target/usr/include"
export AR=ppc_405-ar
export AS=ppc_405-as
export LD=ppc_405-ld
export RANLIB=ppc_405-ranlib
export CC=ppc_405-gcc
export NM=ppc_405-nm

./configure --target=powerpc-hardhat-linux \
    --host=powerpc-hardhat-linux \
    --build=i586-pc-linux-gnu \
    --prefix=/opt/hardhat/devkit/ppc/405/target/usr/local \
    --exec-prefix=/usr/local
```

You may also need to provide a parameter like `--with-random=/dev/urandom` to configure as it cannot detect the presence of a random number generating device for a target system.
The `--prefix` parameter specifies where curl will be installed. If `configure` completes successfully, do `make` and `make install` as usual. In some cases, you may be able to simplify the above commands to as little as: ./configure --host=ARCH-OS # REDUCING SIZE There are a number of configure options that can be used to reduce the size of libcurl for embedded applications where binary size is an important factor. First, be sure to set the `CFLAGS` variable when configuring with any relevant compiler optimization flags to reduce the size of the binary. For gcc, this would mean at minimum the -Os option, and potentially the `-march=X`, `-mdynamic-no-pic` and `-flto` options as well, e.g. ./configure CFLAGS='-Os' LDFLAGS='-Wl,-Bsymbolic'... Note that newer compilers often produce smaller code than older versions due to improved optimization. Be sure to specify as many `--disable-` and `--without-` flags on the configure command-line as you can to disable all the libcurl features that you know your application is not going to need. 
Besides specifying the `--disable-PROTOCOL` flags for all the types of URLs your application will not use, here are some other flags that can reduce the size of the library: - `--disable-ares` (disables support for the C-ARES DNS library) - `--disable-cookies` (disables support for HTTP cookies) - `--disable-crypto-auth` (disables HTTP cryptographic authentication) - `--disable-ipv6` (disables support for IPv6) - `--disable-manual` (disables support for the built-in documentation) - `--disable-proxy` (disables support for HTTP and SOCKS proxies) - `--disable-unix-sockets` (disables support for UNIX sockets) - `--disable-verbose` (eliminates debugging strings and error code strings) - `--disable-versioned-symbols` (disables support for versioned symbols) - `--enable-symbol-hiding` (eliminates unneeded symbols in the shared library) - `--without-libidn` (disables support for the libidn DNS library) - `--without-librtmp` (disables support for RTMP) - `--without-openssl` (disables support for SSL/TLS) - `--without-zlib` (disables support for on-the-fly decompression) The GNU compiler and linker have a number of options that can reduce the size of the libcurl dynamic libraries on some platforms even further. Specify them by providing appropriate `CFLAGS` and `LDFLAGS` variables on the configure command-line, e.g. CFLAGS="-Os -ffunction-sections -fdata-sections -fno-unwind-tables -fno-asynchronous-unwind-tables -flto" LDFLAGS="-Wl,-s -Wl,-Bsymbolic -Wl,--gc-sections" Be sure also to strip debugging symbols from your binaries after compiling using 'strip' (or the appropriate variant if cross-compiling). If space is really tight, you may be able to remove some unneeded sections of the shared library using the -R option to objcopy (e.g. the .comment section). 
Using these techniques it is possible to create a basic HTTP-only shared libcurl library for i386 Linux platforms that is only 113 KiB in size, and an FTP-only library that is 113 KiB in size (as of libcurl version 7.50.3, using gcc 5.4.0). You may find that statically linking libcurl to your application will result in a lower total size than dynamically linking. Note that the curl test harness can detect the use of some, but not all, of the `--disable` statements suggested above. Use will cause tests relying on those features to fail. The test harness can be manually forced to skip the relevant tests by specifying certain key words on the `runtests.pl` command line. Following is a list of appropriate key words: - `--disable-cookies` !cookies - `--disable-manual` !--manual - `--disable-proxy` !HTTP\ proxy !proxytunnel !SOCKS4 !SOCKS5 # PORTS This is a probably incomplete list of known CPU architectures and operating systems that curl has been compiled for. If you know a system curl compiles and runs on, that is not listed, please let us know! 
## 85 Operating Systems

AIX, AmigaOS, Android, Aros, BeOS, Blackberry 10, Blackberry Tablet OS, Cell OS, ChromeOS, Cisco IOS, Cygwin, Dragonfly BSD, eCOS, FreeBSD, FreeDOS, FreeRTOS, Fuchsia, Garmin OS, Genode, Haiku, HardenedBSD, HP-UX, Hurd, Illumos, Integrity, iOS, ipadOS, IRIX, LineageOS, Linux, Lua RTOS, Mac OS 9, macOS, Mbed, Micrium, MINIX, MorphOS, MPE/iX, MS-DOS, NCR MP-RAS, NetBSD, Netware, Nintendo Switch, NonStop OS, NuttX, OpenBSD, OpenStep, Orbis OS, OS/2, OS/400, OS21, Plan 9, PlayStation Portable, QNX, Qubes OS, ReactOS, Redox, RISC OS, Sailfish OS, SCO Unix, Serenity, SINIX-Z, Solaris, SunOS, Syllable OS, Symbian, Tizen, TPF, Tru64, tvOS, ucLinux, Ultrix, UNICOS, UnixWare, VMS, vxWorks, WebOS, Wii system software, Windows, Windows CE, Xbox System, z/OS, z/TPF, z/VM, z/VSE

## 22 CPU Architectures

Alpha, ARC, ARM, AVR32, Cell, HP-PA, Itanium, m68k, MicroBlaze, MIPS, Nios, OpenRISC, POWER, PowerPC, RISC-V, s390, SH4, SPARC, VAX, x86, x86-64, Xtensa
# bufref

This is an internal module for handling buffer references. A referenced buffer is associated with its destructor function that is implicitly called when the reference is invalidated. Once referenced, a buffer cannot be reallocated.

A data length is stored within the reference for binary data handling purposes; it is not used by the bufref API.

The `struct bufref` is used to hold data referencing a buffer. The members of that structure **MUST NOT** be accessed or modified without using the dedicated bufref API.

## init

```c
void Curl_bufref_init(struct bufref *br);
```

Initialises a `bufref` structure. This function **MUST** be called before any other operation is performed on the structure.

Upon completion, the referenced buffer is `NULL` and length is zero.

This function may also be called to bypass referenced buffer destruction while invalidating the current reference.

## free

```c
void Curl_bufref_free(struct bufref *br);
```

Destroys the previously referenced buffer using its destructor and reinitialises the structure for a possible subsequent reuse.

## set

```c
void Curl_bufref_set(struct bufref *br, const void *buffer, size_t length, void (*destructor)(void *));
```

Releases the previously referenced buffer, then assigns the new `buffer` to the structure, associated with its `destructor` function. The latter can be specified as `NULL`: this will be the case when the referenced buffer is static.

If `buffer` is `NULL`, `length` must be zero.

## memdup

```c
CURLcode Curl_bufref_memdup(struct bufref *br, const void *data, size_t length);
```

Releases the previously referenced buffer, then duplicates the `length`-byte `data` into a buffer allocated via `malloc()` and references the latter, associated with destructor `curl_free()`. An additional trailing byte is allocated and set to zero as a possible string zero-terminator; it is not counted in the stored length.

Returns `CURLE_OK` if successful, else `CURLE_OUT_OF_MEMORY`.
## ptr

```c
const unsigned char *Curl_bufref_ptr(const struct bufref *br);
```

Returns a `const unsigned char *` to the referenced buffer.

## len

```c
size_t Curl_bufref_len(const struct bufref *br);
```

Returns the stored length of the referenced buffer.
# Adding a new protocol? Every once in a while someone comes up with the idea of adding support for yet another protocol to curl. After all, curl already supports 25 something protocols and it is the Internet transfer machine for the world. In the curl project we love protocols and we love supporting many protocols and do it well. So how do you proceed to add a new protocol and what are the requirements? ## No fixed set of requirements This document is an attempt to describe things to consider. There is no checklist of the twenty-seven things you need to cross off. We view the entire effort as a whole and then judge if it seems to be the right thing - for now. The more things that look right, fit our patterns and are done in ways that align with our thinking, the better are the chances that we will agree that supporting this protocol is a grand idea. ## Mutual benefit is preferred curl is not here for your protocol. Your protocol is not here for curl. The best cooperation and end result occur when all involved parties mutually see and agree that supporting this protocol in curl would be good for everyone. Heck, for the world! Consider "selling us" the idea that we need an implementation merged in curl, to be fairly important. *Why* do we want curl to support this new protocol? ## Protocol requirements ### Client-side The protocol implementation is for a client's side of a "communication session". ### Transfer oriented The protocol itself should be focused on *transfers*. Be it uploads or downloads or both. It should at least be possible to view the transfers as such, like we can view reading emails over POP3 as a download and sending emails over SMTP as an upload. If you cannot even shoehorn the protocol into a transfer focused view, then you are up for a tough argument. ### URL There should be a documented URL format. If there is an RFC for it there is no question about it but the syntax does not have to be a published RFC. 
It could be enough if it is already in use by other implementations. If you make up the syntax just in order to be able to propose it to curl, then you are in a bad place. URLs are designed and defined for interoperability. There should at least be a good chance that other clients and servers can be implemented supporting the same URL syntax and work the same or similar way. URLs work on registered 'schemes'. There is a register of [all officially recognized schemes](https://www.iana.org/assignments/uri-schemes/uri-schemes.xhtml). If your protocol is not in there, is it really a protocol we want? ### Wide and public use The protocol shall already be used or have an expectation of getting used widely. Experimental protocols are better off worked on in experiments first, to prove themselves before they are adopted by curl. ## Code Of course the code needs to be written, provided, licensed agreeably and it should follow our code guidelines and review comments have to be dealt with. If the implementation needs third party code, that third party code should not have noticeably lesser standards than the curl project itself. ## Tests As much of the protocol implementation as possible needs to be verified by curl test cases. We must have the implementation get tested by CI jobs, torture tests and more. We have experienced many times in the past how new implementations were brought to curl and immediately once the code had been merged, the originator vanished from the face of the earth. That is fine, but we need to take the necessary precautions so when it happens we are still fine. Our test infrastructure is powerful enough to test just about every possible protocol - but it might require a bit of an effort to make it happen. ## Documentation We cannot assume that users are particularly familiar with specific details and peculiarities of the protocol. It needs documentation. 
Maybe it even needs some internal documentation so that the developers who will try to debug something five years from now can figure out functionality a little easier! The protocol specification itself should be freely available without requiring any NDA or similar. ## Do not compare We are constantly raising the bar and we are constantly improving the project. A lot of things we did in the past would not be acceptable if done today. Therefore, you might be tempted to use shortcuts or "hacks" you can spot other - existing - protocol implementations have used, but there is nothing to gain from that. The bar has been raised. Former "cheats" will not be tolerated anymore.
curl security process ===================== This document describes how security vulnerabilities should be handled in the curl project. Publishing Information ---------------------- All known and public curl or libcurl related vulnerabilities are listed on [the curl website security page](https://curl.se/docs/security.html). Security vulnerabilities **should not** be entered in the project's public bug tracker. Vulnerability Handling ---------------------- The typical process for handling a new security vulnerability is as follows. No information should be made public about a vulnerability until it is formally announced at the end of this process. That means, for example that a bug tracker entry must NOT be created to track the issue since that will make the issue public and it should not be discussed on any of the project's public mailing lists. Also messages associated with any commits should not make any reference to the security nature of the commit if done prior to the public announcement. - The person discovering the issue, the reporter, reports the vulnerability on [https://hackerone.com/curl](https://hackerone.com/curl). Issues filed there reach a handful of selected and trusted people. - Messages that do not relate to the reporting or managing of an undisclosed security vulnerability in curl or libcurl are ignored and no further action is required. - A person in the security team responds to the original report to acknowledge that a human has seen the report. - The security team investigates the report and either rejects it or accepts it. - If the report is rejected, the team writes to the reporter to explain why. - If the report is accepted, the team writes to the reporter to let him/her know it is accepted and that they are working on a fix. - The security team discusses the problem, works out a fix, considers the impact of the problem and suggests a release schedule. This discussion should involve the reporter as much as possible. 
- The release of the information should be "as soon as possible" and is most often synchronized with an upcoming release that contains the fix. If the reporter, or anyone else involved, thinks the next planned release is too far away, then a separate earlier release should be considered. - Write a security advisory draft about the problem that explains what the problem is, its impact, which versions it affects, solutions or workarounds, when the release is out and make sure to credit all contributors properly. Figure out the CWE (Common Weakness Enumeration) number for the flaw. - Request a CVE number from [HackerOne](https://docs.hackerone.com/programs/cve-requests.html) - Update the "security advisory" with the CVE number. - The security team commits the fix in a private branch. The commit message should ideally contain the CVE number. - The security team also decides on and delivers a monetary reward to the reporter as per the bug-bounty polices. - No more than 10 days before release, inform [distros@openwall](https://oss-security.openwall.org/wiki/mailing-lists/distros) to prepare them about the upcoming public security vulnerability announcement - attach the advisory draft for information with CVE and current patch. 'distros' does not accept an embargo longer than 14 days and they do not care for Windows-specific flaws. - No more than 48 hours before the release, the private branch is merged into the master branch and pushed. Once pushed, the information is accessible to the public and the actual release should follow suit immediately afterwards. The time between the push and the release is used for final tests and reviews. - The project team creates a release that includes the fix. - The project team announces the release and the vulnerability to the world in the same manner we always announce releases. It gets sent to the curl-announce, curl-library and curl-users mailing lists. 
- The security web page on the website should get the new vulnerability mentioned.

security (at curl dot se)
------------------------------

This is a private mailing list for discussions on and about curl security issues.

Who is on this list? There are a couple of criteria you must meet, and then we might ask you to join the list or you can ask to join it. It really is not a formal process. We basically only require that you have a long-term presence in the curl project and you have shown an understanding for the project and its way of working. You must have been around for a good while and you should have no plans on vanishing in the near future.

We do not make the list of participants public, mostly because it tends to vary somewhat over time and a list somewhere would only risk getting outdated.

Publishing Security Advisories
------------------------------

1. Write up the security advisory, using markdown syntax. Use the same subtitles as last time to maintain consistency.

2. Name the advisory file after the allocated CVE id.

3. Add a line on the top of the array in `curl-www/docs/vuln.pm`.

4. Put the new advisory markdown file in the `curl-www/docs/` directory. Add it to the git repo.

5. Run `make` in your local web checkout and verify that things look fine.

6. On security advisory release day, push the changes on the curl-www repository's remote master branch.

HackerOne
---------

Request the issue to be disclosed. If there are sensitive details present in the report and discussion, those should be redacted from the disclosure. The default policy is to disclose as much as possible as soon as the vulnerability has been published.

Bug Bounty
----------

See [BUG-BOUNTY](https://curl.se/docs/bugbounty.html) for details on the bug bounty program.
# BUGS

## There are still bugs

Curl and libcurl keep being developed. Adding features and changing code means that bugs will sneak in, no matter how hard we try not to. Of course there are lots of bugs left. And lots of misfeatures.

To help us make curl the stable and solid product we want it to be, we need bug reports and bug fixes.

## Where to report

If you cannot fix a bug yourself and submit a fix for it, try to submit as detailed a report as possible to a curl mailing list to allow one of us to have a go at a solution. You can optionally also submit your problem in [curl's bug tracking system](https://github.com/curl/curl/issues). Please read the rest of this document below first before doing that!

If you feel you need to ask around first, find a suitable [mailing list](https://curl.se/mail/) and post your questions there.

## Security bugs

If you find a bug or problem in curl or libcurl that you think has a security impact, for example a bug that can put users in danger or make them vulnerable if the bug becomes public knowledge, then please report that bug using our security development process.

Security related bugs or bugs that are suspected to have a security impact should be reported on the [curl security tracker at HackerOne](https://hackerone.com/curl).

This ensures that the report reaches the curl security team so that they first can deal with the report away from the public to minimize the harm and impact it will have on existing users out there who might be using the vulnerable versions.

The curl project's process for handling security related issues is [documented separately](https://curl.se/dev/secprocess.html).

## What to report

When reporting a bug, you should include all information that will help us understand what's wrong, what you expected to happen and how to repeat the bad behavior.
You therefore need to tell us:

 - your operating system's name and version number

 - what version of curl you are using (`curl -V` is fine)

 - versions of the used libraries that libcurl is built to use

 - what URL you were working with (if possible), at least which protocol

and anything and everything else you think matters. Tell us what you expected to happen, tell us what did happen, tell us how you could make it work another way. Dig around, try out, test. Then include all the tiny bits and pieces in your report. You will benefit from this yourself, as it will enable us to help you quicker and more accurately.

Since curl deals with networks, it often helps us if you include a protocol debug dump with your bug report. The output you get by using the `-v` or `--trace` options.

If curl crashed, causing a core dump (in unix), there is hardly any use to send that huge file to anyone of us. Unless we have the exact same system setup as you, we cannot do much with it. Instead, we ask you to get a stack trace and send that (much smaller) output to us instead!

The address and how to subscribe to the mailing lists are detailed in the `MANUAL.md` file.

## libcurl problems

When you have written your own application with libcurl to perform transfers, it is even more important to be specific and detailed when reporting bugs.

Tell us the libcurl version and your operating system. Tell us the name and version of all relevant sub-components like for example the SSL library you are using and what name resolving your libcurl uses. If you use SFTP or SCP, the libssh2 version is relevant etc.

Showing us a real source code example repeating your problem is the best way to get our attention and it will greatly increase our chances to understand your problem and to work on a fix (if we agree it truly is a problem).

Lots of problems that appear to be libcurl problems are actually just abuses of the libcurl API or other malfunctions in your applications.
It is advised that you run your problematic program using a memory debug tool like valgrind or similar before you post memory-related or "crashing" problems to us.

## Who will fix the problems

If the problems or bugs you describe are considered to be bugs, we want to have the problems fixed.

There are no developers in the curl project that are paid to work on bugs. All developers that take on reported bugs do this on a voluntary basis. We do it out of an ambition to keep curl and libcurl excellent products and out of pride.

But please do not assume that you can just lump over something to us and it will then magically be fixed after some given time. Most often we need feedback and help to understand what you have experienced and how to repeat a problem. Then we may only be able to assist YOU to debug the problem and to track down the proper fix.

We get reports from many people every month and each report can take a considerable amount of time to really get to the bottom of.

## How to get a stack trace

First, you must make sure that you compile all sources with `-g` and that you do not 'strip' the final executable. Try to avoid optimizing the code as well; remove `-O`, `-O2` etc from the compiler options.

Run the program until it cores.

Run your debugger on the core file, like `<debugger> curl core`. `<debugger>` should be replaced with the name of your debugger, in most cases that will be `gdb`, but `dbx` and others also occur.

When the debugger has finished loading the core file and presents you a prompt, enter `where` (without quotes) and press return.

The list that is presented is the stack trace. If everything worked, it is supposed to contain the chain of functions that were called when curl crashed. Include the stack trace with your detailed bug report. It will help a lot.

## Bugs in libcurl bindings

There will of course pop up bugs in libcurl bindings.
You should then primarily approach the team that works on that particular binding and see what you can do to help them fix the problem.

If you suspect that the problem exists in the underlying libcurl, then please convert your program over to plain C and follow the steps outlined above.

## Bugs in old versions

The curl project typically releases new versions every other month, and we fix several hundred bugs per year. For a huge table of releases, number of bug fixes and more, see: https://curl.se/docs/releases.html

The developers in the curl project do not have bandwidth or energy enough to maintain several branches or to spend much time on hunting down problems in old versions when chances are we already fixed them or at least that they have changed nature and appearance in later versions.

When you experience a problem and want to report it, you really SHOULD include the version number of the curl you are using when you experience the issue. If that version number shows us that you are using an out-of-date curl, you should also try out a modern curl version to see if the problem persists or how/if it has changed in appearance.

Even if you cannot immediately upgrade your application/system to run the latest curl version, you can most often at least run a test version or experimental build or similar, to get this confirmed or not.

At times people insist that they cannot upgrade to a modern curl version, but instead they "just want the bug fixed". That is fine, just do not count on us spending many cycles on trying to identify which single commit, if that is even possible, that at some point in the past fixed the problem you are now experiencing.

Security wise, it is almost always a bad idea to lag behind the current curl versions by a lot.
We keep discovering and reporting security problems over time, as you can see in [this table](https://curl.se/docs/vulnerabilities.html).

# Bug fixing procedure

## What happens on first filing

When a new issue is posted in the issue tracker or on the mailing list, the team of developers first need to see the report. Maybe they took the day off, maybe they are off in the woods hunting. Have patience. Allow at least a few days before expecting someone to have responded.

In the issue tracker you can expect that some labels will be set on the issue to help categorize it.

## First response

If your issue/bug report was not perfect at once (and few are), chances are that someone will ask follow-up questions. Which version did you use? Which options did you use? How often does the problem occur? How can we reproduce this problem? Which protocols does it involve? Or perhaps much more specific and deep diving questions. It all depends on your specific issue.

You should then respond to these follow-up questions and provide more info about the problem, so that we can help you figure it out. Or maybe you can help us figure it out. An active back-and-forth communication is important and the key for finding a cure and landing a fix.

## Not reproducible

Problems that we cannot reproduce and cannot understand, even after having gotten all the info we need and having studied the source code over again, are really hard to solve, so then we may require further work from you who actually see or experience the problem.

## Unresponsive

If the problem has not been understood or reproduced, and there's nobody responding to follow-up questions or questions asking for clarifications or for discussing possible ways to move forward with the task, we take that as a strong suggestion that the bug is not important.

Unimportant issues will be closed as inactive sooner or later as they cannot be fixed.
The inactivity period (waiting for responses) should not be shorter than two weeks but may extend months.

## Lack of time/interest

Bugs that are filed and are understood can unfortunately end up in the "nobody cares enough about it to work on it" category. Such bugs are perfectly valid problems that *should* get fixed but apparently are not. We try to mark such bugs as `KNOWN_BUGS material` after a time of inactivity and if no activity is noticed after yet some time those bugs are added to the `KNOWN_BUGS` document and are closed in the issue tracker.

## `KNOWN_BUGS`

This is a list of known bugs. Bugs we know exist and that have been pointed out but that have not yet been fixed. The reasons for why they have not been fixed can involve anything really, but the primary reason is that nobody has considered these problems to be important enough to spend the necessary time and effort to have them fixed.

The `KNOWN_BUGS` items are always up for grabs and we love the ones who bring one of them back to life and offer solutions to them.

The `KNOWN_BUGS` document has a sibling document known as `TODO`.

## `TODO`

Issues that are filed or reported that are not really bugs but more missing features or ideas for future improvements and so on are marked as 'enhancement' or 'feature-request' and will be added to the `TODO` document and the issues are closed. We do not keep TODO items open in the issue tracker.

The `TODO` document is full of ideas and suggestions of what we can add or fix one day. You are always encouraged and free to grab one of those items and take up a discussion with the curl development team on how that could be implemented or provided in the project so that you can work on ticking it off that document.

If an issue is rather a bug and not a missing feature or functionality, it is listed in `KNOWN_BUGS` instead.
## Closing off stalled bugs

The [issue and pull request trackers](https://github.com/curl/curl) only hold "active" entries open (using a non-precise definition of what active actually is, but they are at least not completely dead). Those that are abandoned or in other ways dormant will be closed and sometimes added to `TODO` and `KNOWN_BUGS` instead.

This way, we only have "active" issues open on GitHub. Irrelevant issues and pull requests will not distract developers or casual visitors.
# Ciphers

With curl's options [`CURLOPT_SSL_CIPHER_LIST`](https://curl.se/libcurl/c/CURLOPT_SSL_CIPHER_LIST.html) and [`--ciphers`](https://curl.se/docs/manpage.html#--ciphers) users can control which ciphers to consider when negotiating TLS connections.

TLS 1.3 ciphers are supported since curl 7.61 for OpenSSL 1.1.1+ with options [`CURLOPT_TLS13_CIPHERS`](https://curl.se/libcurl/c/CURLOPT_TLS13_CIPHERS.html) and [`--tls13-ciphers`](https://curl.se/docs/manpage.html#--tls13-ciphers). If you are using a different SSL backend you can try setting TLS 1.3 cipher suites by using the respective regular cipher option.

The names of the known ciphers differ depending on which TLS backend that libcurl was built to use. This is an attempt to list known cipher names.

## OpenSSL

(based on [OpenSSL docs](https://www.openssl.org/docs/man1.1.0/apps/ciphers.html))

When specifying multiple cipher names, separate them with colon (`:`).

### SSL3 cipher suites

`NULL-MD5` `NULL-SHA` `RC4-MD5` `RC4-SHA` `IDEA-CBC-SHA` `DES-CBC3-SHA` `DH-DSS-DES-CBC3-SHA` `DH-RSA-DES-CBC3-SHA` `DHE-DSS-DES-CBC3-SHA` `DHE-RSA-DES-CBC3-SHA` `ADH-RC4-MD5` `ADH-DES-CBC3-SHA`

### TLS v1.0 cipher suites

`NULL-MD5` `NULL-SHA` `RC4-MD5` `RC4-SHA` `IDEA-CBC-SHA` `DES-CBC3-SHA` `DHE-DSS-DES-CBC3-SHA` `DHE-RSA-DES-CBC3-SHA` `ADH-RC4-MD5` `ADH-DES-CBC3-SHA`

### AES ciphersuites from RFC3268, extending TLS v1.0

`AES128-SHA` `AES256-SHA` `DH-DSS-AES128-SHA` `DH-DSS-AES256-SHA` `DH-RSA-AES128-SHA` `DH-RSA-AES256-SHA` `DHE-DSS-AES128-SHA` `DHE-DSS-AES256-SHA` `DHE-RSA-AES128-SHA` `DHE-RSA-AES256-SHA` `ADH-AES128-SHA` `ADH-AES256-SHA`

### SEED ciphersuites from RFC4162, extending TLS v1.0

`SEED-SHA` `DH-DSS-SEED-SHA` `DH-RSA-SEED-SHA` `DHE-DSS-SEED-SHA` `DHE-RSA-SEED-SHA` `ADH-SEED-SHA`

### GOST ciphersuites, extending TLS v1.0

`GOST94-GOST89-GOST89` `GOST2001-GOST89-GOST89` `GOST94-NULL-GOST94` `GOST2001-NULL-GOST94`

### Elliptic curve cipher suites

`ECDHE-RSA-NULL-SHA` `ECDHE-RSA-RC4-SHA` `ECDHE-RSA-DES-CBC3-SHA`
`ECDHE-RSA-AES128-SHA` `ECDHE-RSA-AES256-SHA` `ECDHE-ECDSA-NULL-SHA` `ECDHE-ECDSA-RC4-SHA` `ECDHE-ECDSA-DES-CBC3-SHA` `ECDHE-ECDSA-AES128-SHA` `ECDHE-ECDSA-AES256-SHA` `AECDH-NULL-SHA` `AECDH-RC4-SHA` `AECDH-DES-CBC3-SHA` `AECDH-AES128-SHA` `AECDH-AES256-SHA`

### TLS v1.2 cipher suites

`NULL-SHA256` `AES128-SHA256` `AES256-SHA256` `AES128-GCM-SHA256` `AES256-GCM-SHA384` `DH-RSA-AES128-SHA256` `DH-RSA-AES256-SHA256` `DH-RSA-AES128-GCM-SHA256` `DH-RSA-AES256-GCM-SHA384` `DH-DSS-AES128-SHA256` `DH-DSS-AES256-SHA256` `DH-DSS-AES128-GCM-SHA256` `DH-DSS-AES256-GCM-SHA384` `DHE-RSA-AES128-SHA256` `DHE-RSA-AES256-SHA256` `DHE-RSA-AES128-GCM-SHA256` `DHE-RSA-AES256-GCM-SHA384` `DHE-DSS-AES128-SHA256` `DHE-DSS-AES256-SHA256` `DHE-DSS-AES128-GCM-SHA256` `DHE-DSS-AES256-GCM-SHA384` `ECDHE-RSA-AES128-SHA256` `ECDHE-RSA-AES256-SHA384` `ECDHE-RSA-AES128-GCM-SHA256` `ECDHE-RSA-AES256-GCM-SHA384` `ECDHE-ECDSA-AES128-SHA256` `ECDHE-ECDSA-AES256-SHA384` `ECDHE-ECDSA-AES128-GCM-SHA256` `ECDHE-ECDSA-AES256-GCM-SHA384` `ADH-AES128-SHA256` `ADH-AES256-SHA256` `ADH-AES128-GCM-SHA256` `ADH-AES256-GCM-SHA384` `AES128-CCM` `AES256-CCM` `DHE-RSA-AES128-CCM` `DHE-RSA-AES256-CCM` `AES128-CCM8` `AES256-CCM8` `DHE-RSA-AES128-CCM8` `DHE-RSA-AES256-CCM8` `ECDHE-ECDSA-AES128-CCM` `ECDHE-ECDSA-AES256-CCM` `ECDHE-ECDSA-AES128-CCM8` `ECDHE-ECDSA-AES256-CCM8`

### Camellia HMAC-Based ciphersuites from RFC6367, extending TLS v1.2

`ECDHE-ECDSA-CAMELLIA128-SHA256` `ECDHE-ECDSA-CAMELLIA256-SHA384` `ECDHE-RSA-CAMELLIA128-SHA256` `ECDHE-RSA-CAMELLIA256-SHA384`

### TLS 1.3 cipher suites

(Note these ciphers are set with `CURLOPT_TLS13_CIPHERS` and `--tls13-ciphers`)

`TLS_AES_256_GCM_SHA384` `TLS_CHACHA20_POLY1305_SHA256` `TLS_AES_128_GCM_SHA256` `TLS_AES_128_CCM_8_SHA256` `TLS_AES_128_CCM_SHA256`

## NSS

### Totally insecure

`rc4` `rc4-md5` `rc4export` `rc2` `rc2export` `des` `desede3`

### SSL3/TLS cipher suites

`rsa_rc4_128_md5` `rsa_rc4_128_sha` `rsa_3des_sha` `rsa_des_sha` `rsa_rc4_40_md5` `rsa_rc2_40_md5`
`rsa_null_md5` `rsa_null_sha` `fips_3des_sha` `fips_des_sha` `fortezza` `fortezza_rc4_128_sha` `fortezza_null`

### TLS 1.0 Exportable 56-bit Cipher Suites

`rsa_des_56_sha` `rsa_rc4_56_sha`

### AES ciphers

`dhe_dss_aes_128_cbc_sha` `dhe_dss_aes_256_cbc_sha` `dhe_rsa_aes_128_cbc_sha` `dhe_rsa_aes_256_cbc_sha` `rsa_aes_128_sha` `rsa_aes_256_sha`

### ECC ciphers

`ecdh_ecdsa_null_sha` `ecdh_ecdsa_rc4_128_sha` `ecdh_ecdsa_3des_sha` `ecdh_ecdsa_aes_128_sha` `ecdh_ecdsa_aes_256_sha` `ecdhe_ecdsa_null_sha` `ecdhe_ecdsa_rc4_128_sha` `ecdhe_ecdsa_3des_sha` `ecdhe_ecdsa_aes_128_sha` `ecdhe_ecdsa_aes_256_sha` `ecdh_rsa_null_sha` `ecdh_rsa_128_sha` `ecdh_rsa_3des_sha` `ecdh_rsa_aes_128_sha` `ecdh_rsa_aes_256_sha` `ecdhe_rsa_null` `ecdhe_rsa_rc4_128_sha` `ecdhe_rsa_3des_sha` `ecdhe_rsa_aes_128_sha` `ecdhe_rsa_aes_256_sha` `ecdh_anon_null_sha` `ecdh_anon_rc4_128sha` `ecdh_anon_3des_sha` `ecdh_anon_aes_128_sha` `ecdh_anon_aes_256_sha`

### HMAC-SHA256 cipher suites

`rsa_null_sha_256` `rsa_aes_128_cbc_sha_256` `rsa_aes_256_cbc_sha_256` `dhe_rsa_aes_128_cbc_sha_256` `dhe_rsa_aes_256_cbc_sha_256` `ecdhe_ecdsa_aes_128_cbc_sha_256` `ecdhe_rsa_aes_128_cbc_sha_256`

### AES GCM cipher suites in RFC 5288 and RFC 5289

`rsa_aes_128_gcm_sha_256` `dhe_rsa_aes_128_gcm_sha_256` `dhe_dss_aes_128_gcm_sha_256` `ecdhe_ecdsa_aes_128_gcm_sha_256` `ecdh_ecdsa_aes_128_gcm_sha_256` `ecdhe_rsa_aes_128_gcm_sha_256` `ecdh_rsa_aes_128_gcm_sha_256`

### cipher suites using SHA384

`rsa_aes_256_gcm_sha_384` `dhe_rsa_aes_256_gcm_sha_384` `dhe_dss_aes_256_gcm_sha_384` `ecdhe_ecdsa_aes_256_sha_384` `ecdhe_rsa_aes_256_sha_384` `ecdhe_ecdsa_aes_256_gcm_sha_384` `ecdhe_rsa_aes_256_gcm_sha_384`

### chacha20-poly1305 cipher suites

`ecdhe_rsa_chacha20_poly1305_sha_256` `ecdhe_ecdsa_chacha20_poly1305_sha_256` `dhe_rsa_chacha20_poly1305_sha_256`

### TLS 1.3 cipher suites

`aes_128_gcm_sha_256` `aes_256_gcm_sha_384` `chacha20_poly1305_sha_256`

## GSKit

Ciphers are internally defined as [numeric
codes](https://www.ibm.com/support/knowledgecenter/ssw_ibm_i_73/apis/gsk_attribute_set_buffer.htm), but libcurl maps them to the following case-insensitive names.

### SSL2 cipher suites (insecure: disabled by default)

`rc2-md5` `rc4-md5` `exp-rc2-md5` `exp-rc4-md5` `des-cbc-md5` `des-cbc3-md5`

### SSL3 cipher suites

`null-md5` `null-sha` `rc4-md5` `rc4-sha` `exp-rc2-cbc-md5` `exp-rc4-md5` `exp-des-cbc-sha` `des-cbc3-sha`

### TLS v1.0 cipher suites

`null-md5` `null-sha` `rc4-md5` `rc4-sha` `exp-rc2-cbc-md5` `exp-rc4-md5` `exp-des-cbc-sha` `des-cbc3-sha` `aes128-sha` `aes256-sha`

### TLS v1.1 cipher suites

`null-md5` `null-sha` `rc4-md5` `rc4-sha` `exp-des-cbc-sha` `des-cbc3-sha` `aes128-sha` `aes256-sha`

### TLS v1.2 cipher suites

`null-md5` `null-sha` `null-sha256` `rc4-md5` `rc4-sha` `des-cbc3-sha` `aes128-sha` `aes256-sha` `aes128-sha256` `aes256-sha256` `aes128-gcm-sha256` `aes256-gcm-sha384`

## WolfSSL

`RC4-SHA`, `RC4-MD5`, `DES-CBC3-SHA`, `AES128-SHA`, `AES256-SHA`, `NULL-SHA`, `NULL-SHA256`, `DHE-RSA-AES128-SHA`, `DHE-RSA-AES256-SHA`, `DHE-PSK-AES256-GCM-SHA384`, `DHE-PSK-AES128-GCM-SHA256`, `PSK-AES256-GCM-SHA384`, `PSK-AES128-GCM-SHA256`, `DHE-PSK-AES256-CBC-SHA384`, `DHE-PSK-AES128-CBC-SHA256`, `PSK-AES256-CBC-SHA384`, `PSK-AES128-CBC-SHA256`, `PSK-AES128-CBC-SHA`, `PSK-AES256-CBC-SHA`, `DHE-PSK-AES128-CCM`, `DHE-PSK-AES256-CCM`, `PSK-AES128-CCM`, `PSK-AES256-CCM`, `PSK-AES128-CCM-8`, `PSK-AES256-CCM-8`, `DHE-PSK-NULL-SHA384`, `DHE-PSK-NULL-SHA256`, `PSK-NULL-SHA384`, `PSK-NULL-SHA256`, `PSK-NULL-SHA`, `HC128-MD5`, `HC128-SHA`, `HC128-B2B256`, `AES128-B2B256`, `AES256-B2B256`, `RABBIT-SHA`, `NTRU-RC4-SHA`, `NTRU-DES-CBC3-SHA`, `NTRU-AES128-SHA`, `NTRU-AES256-SHA`, `AES128-CCM-8`, `AES256-CCM-8`, `ECDHE-ECDSA-AES128-CCM`, `ECDHE-ECDSA-AES128-CCM-8`, `ECDHE-ECDSA-AES256-CCM-8`, `ECDHE-RSA-AES128-SHA`, `ECDHE-RSA-AES256-SHA`, `ECDHE-ECDSA-AES128-SHA`, `ECDHE-ECDSA-AES256-SHA`, `ECDHE-RSA-RC4-SHA`, `ECDHE-RSA-DES-CBC3-SHA`, `ECDHE-ECDSA-RC4-SHA`,
`ECDHE-ECDSA-DES-CBC3-SHA`, `AES128-SHA256`, `AES256-SHA256`, `DHE-RSA-AES128-SHA256`, `DHE-RSA-AES256-SHA256`, `ECDH-RSA-AES128-SHA`, `ECDH-RSA-AES256-SHA`, `ECDH-ECDSA-AES128-SHA`, `ECDH-ECDSA-AES256-SHA`, `ECDH-RSA-RC4-SHA`, `ECDH-RSA-DES-CBC3-SHA`, `ECDH-ECDSA-RC4-SHA`, `ECDH-ECDSA-DES-CBC3-SHA`, `AES128-GCM-SHA256`, `AES256-GCM-SHA384`, `DHE-RSA-AES128-GCM-SHA256`, `DHE-RSA-AES256-GCM-SHA384`, `ECDHE-RSA-AES128-GCM-SHA256`, `ECDHE-RSA-AES256-GCM-SHA384`, `ECDHE-ECDSA-AES128-GCM-SHA256`, `ECDHE-ECDSA-AES256-GCM-SHA384`, `ECDH-RSA-AES128-GCM-SHA256`, `ECDH-RSA-AES256-GCM-SHA384`, `ECDH-ECDSA-AES128-GCM-SHA256`, `ECDH-ECDSA-AES256-GCM-SHA384`, `CAMELLIA128-SHA`, `DHE-RSA-CAMELLIA128-SHA`, `CAMELLIA256-SHA`, `DHE-RSA-CAMELLIA256-SHA`, `CAMELLIA128-SHA256`, `DHE-RSA-CAMELLIA128-SHA256`, `CAMELLIA256-SHA256`, `DHE-RSA-CAMELLIA256-SHA256`, `ECDHE-RSA-AES128-SHA256`, `ECDHE-ECDSA-AES128-SHA256`, `ECDH-RSA-AES128-SHA256`, `ECDH-ECDSA-AES128-SHA256`, `ECDHE-RSA-AES256-SHA384`, `ECDHE-ECDSA-AES256-SHA384`, `ECDH-RSA-AES256-SHA384`, `ECDH-ECDSA-AES256-SHA384`, `ECDHE-RSA-CHACHA20-POLY1305`, `ECDHE-ECDSA-CHACHA20-POLY1305`, `DHE-RSA-CHACHA20-POLY1305`, `ECDHE-RSA-CHACHA20-POLY1305-OLD`, `ECDHE-ECDSA-CHACHA20-POLY1305-OLD`, `DHE-RSA-CHACHA20-POLY1305-OLD`, `ADH-AES128-SHA`, `QSH`, `RENEGOTIATION-INFO`, `IDEA-CBC-SHA`, `ECDHE-ECDSA-NULL-SHA`, `ECDHE-PSK-NULL-SHA256`, `ECDHE-PSK-AES128-CBC-SHA256`, `PSK-CHACHA20-POLY1305`, `ECDHE-PSK-CHACHA20-POLY1305`, `DHE-PSK-CHACHA20-POLY1305`, `EDH-RSA-DES-CBC3-SHA`,

## Schannel

Schannel allows the enabling and disabling of encryption algorithms, but not specific ciphersuites. They are [defined](https://docs.microsoft.com/windows/desktop/SecCrypto/alg-id) by Microsoft.

There is also the case that the selected algorithm is not supported by the protocol or does not match the ciphers offered by the server during the SSL negotiation.
In this case curl will return error `CURLE_SSL_CONNECT_ERROR (35) SEC_E_ALGORITHM_MISMATCH` and the request will fail.

`CALG_MD2`, `CALG_MD4`, `CALG_MD5`, `CALG_SHA`, `CALG_SHA1`, `CALG_MAC`, `CALG_RSA_SIGN`, `CALG_DSS_SIGN`, `CALG_NO_SIGN`, `CALG_RSA_KEYX`, `CALG_DES`, `CALG_3DES_112`, `CALG_3DES`, `CALG_DESX`, `CALG_RC2`, `CALG_RC4`, `CALG_SEAL`, `CALG_DH_SF`, `CALG_DH_EPHEM`, `CALG_AGREEDKEY_ANY`, `CALG_HUGHES_MD5`, `CALG_SKIPJACK`, `CALG_TEK`, `CALG_CYLINK_MEK`, `CALG_SSL3_SHAMD5`, `CALG_SSL3_MASTER`, `CALG_SCHANNEL_MASTER_HASH`, `CALG_SCHANNEL_MAC_KEY`, `CALG_SCHANNEL_ENC_KEY`, `CALG_PCT1_MASTER`, `CALG_SSL2_MASTER`, `CALG_TLS1_MASTER`, `CALG_RC5`, `CALG_HMAC`, `CALG_TLS1PRF`, `CALG_HASH_REPLACE_OWF`, `CALG_AES_128`, `CALG_AES_192`, `CALG_AES_256`, `CALG_AES`, `CALG_SHA_256`, `CALG_SHA_384`, `CALG_SHA_512`, `CALG_ECDH`, `CALG_ECMQV`, `CALG_ECDSA`, `CALG_ECDH_EPHEM`

As of curl 7.77.0, you can also pass `SCH_USE_STRONG_CRYPTO` as a cipher name to [constrain the set of available ciphers as specified in the schannel documentation](https://docs.microsoft.com/en-us/windows/win32/secauthn/tls-cipher-suites-in-windows-server-2022). Note that the supported ciphers in this case follow the OS version, so if you are running an outdated OS you might still be supporting weak ciphers.
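For OpenSSL-based curl builds, the colon-separated strings passed to `--ciphers` use the OpenSSL names listed above. As a rough local sanity check - an illustrative sketch, not curl itself - Python's `ssl` module (which on most platforms also wraps OpenSSL) can tell you whether your local OpenSSL accepts a given cipher string:

```python
import ssl

def openssl_accepts(cipher_list: str) -> bool:
    """Return True if the local OpenSSL build accepts this cipher string."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    try:
        # Same colon-separated format that curl's --ciphers
        # uses with OpenSSL backends.
        ctx.set_ciphers(cipher_list)
        return True
    except ssl.SSLError:
        return False

print(openssl_accepts("ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384"))
print(openssl_accepts("NOT-A-REAL-CIPHER"))
```

Keep in mind that what your local OpenSSL accepts is not necessarily what a particular curl build accepts - the authoritative set depends on the TLS backend that curl binary was linked against.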
# URL syntax and their use in curl

## Specifications

The official "URL syntax" is primarily defined in these two different specifications:

 - [RFC 3986](https://tools.ietf.org/html/rfc3986) (although URL is called "URI" in there)
 - [The WHATWG URL Specification](https://url.spec.whatwg.org/)

RFC 3986 is the earlier one, and curl has always tried to adhere to that one (since it shipped in January 2005).

The WHATWG URL spec was written later, is incompatible with the RFC 3986 and changes over time.

## Variations

URL parsers as implemented in browsers, libraries and tools usually opt to support one of the mentioned specifications. Bugs, differences in interpretations and the moving nature of the WHATWG spec does however make it unlikely that multiple parsers treat URLs the exact same way!

## Security

Due to the inherent differences between URL parser implementations, it is considered a security risk to mix different implementations and assume the same behavior!

For example, if you use one parser to check if a URL uses a good host name or the correct auth field, and then pass on that same URL to a *second* parser, there will always be a risk it treats the same URL differently. There is no right and wrong in URL land, only differences of opinions.

libcurl offers a separate API to its URL parser for this reason, among others.

Applications may at times find it convenient to allow users to specify URLs for various purposes and that string would then end up fed to curl. Getting a URL from an external untrusted party and using it with curl brings several security concerns:

1. If you have an application that runs as or in a server application, getting an unfiltered URL can trick your application to access a local resource instead of a remote resource. Protecting yourself against localhost accesses is hard when accepting user provided URLs.

2. Such custom URLs can access other ports than you planned as port numbers are part of the regular URL format.
The combination of a local host and a custom port number can allow external users to play tricks with your local services.

3. Such a URL might use other schemes than you thought of or planned for.

## "RFC3986 plus"

curl recognizes a URL syntax that we call "RFC 3986 plus". It is grounded on the well established RFC 3986 to make sure previously written command lines and curl using scripts will remain working.

curl's URL parser allows a few deviations from the spec in order to inter-operate better with URLs that appear in the wild.

### spaces

In particular `Location:` headers that indicate to the client where a resource has been redirected to, sometimes contain spaces. This is a violation of RFC 3986 but is fine in the WHATWG spec. curl handles these by re-encoding them to `%20`.

### non-ASCII

Byte values in a provided URL that are outside of the printable ASCII range are percent-encoded by curl.

### multiple slashes

An absolute URL always starts with a "scheme" followed by a colon. For all the schemes curl supports, the colon must be followed by two slashes according to RFC 3986, but not according to the WHATWG spec - which allows anything from one up to an infinite number of slashes.

curl allows one, two or three slashes after the colon to still be considered a valid URL.

### "scheme-less"

curl supports "URLs" that do not start with a scheme. This is not supported by any of the specifications. This is a shortcut to entering URLs that was supported by browsers early on and has been mimicked by curl.

Based on what the host name starts with, curl will "guess" what protocol to use:

 - `ftp.` means FTP
 - `dict.` means DICT
 - `ldap.` means LDAP
 - `imap.` means IMAP
 - `smtp.` means SMTP
 - `pop3.` means POP3
 - all others mean HTTP

### globbing letters

The curl command line tool supports "globbing" of URLs. It means that you can create ranges and lists using `[N-M]` and `{one,two,three}` sequences.
The letters used for this (`[]{}`) are reserved in RFC 3986 and can therefore not legitimately be part of such a URL.

They are however not reserved or special in the WHATWG specification, so globbing can mess up such URLs. Globbing can be turned off for such occasions (using `--globoff`).

# URL syntax details

A URL may consist of the following components - many of them are optional:

    [scheme][divider][userinfo][hostname][port number][path][query][fragment]

Each component is separated from the following component with a divider character or string. For example, this could look like:

    http://user:password@www.example.com:80/index.html?foo=bar#top

## Scheme

The scheme specifies the protocol to use. A curl build can support a few or many different schemes. You can limit what schemes curl should accept.

curl supports the following schemes on URLs specified to transfer. They are matched case insensitively:

`dict`, `file`, `ftp`, `ftps`, `gopher`, `gophers`, `http`, `https`, `imap`, `imaps`, `ldap`, `ldaps`, `mqtt`, `pop3`, `pop3s`, `rtmp`, `rtmpe`, `rtmps`, `rtmpt`, `rtmpte`, `rtmpts`, `rtsp`, `smb`, `smbs`, `smtp`, `smtps`, `telnet`, `tftp`

When the URL is specified to identify a proxy, curl recognizes the following schemes:

`http`, `https`, `socks4`, `socks4a`, `socks5`, `socks5h`, `socks`

## Userinfo

The userinfo field can be used to set user name and password for authentication purposes in this transfer. The use of this field is discouraged since it often means passing around the password in plain text and is thus a security risk.

URLs for IMAP, POP3 and SMTP also support *login options* as part of the userinfo field. They are provided after the password, separated by a semicolon.

## Hostname

The hostname part of the URL contains the address of the server that you want to connect to.
This can be the fully qualified domain name of the server, the local network name of the machine on your network or the IP address of the server or machine represented by either an IPv4 or IPv6 address (within brackets). For example:

    http://www.example.com/

    http://hostname/

    http://192.168.0.1/

    http://[2001:1890:1112:1::20]/

### "localhost"

Starting in curl 7.77.0, curl will use loopback IP addresses for the name `localhost`: `127.0.0.1` and `::1`. It will not try to resolve the name using the resolver functions.

This is done to make sure the host accessed is truly the localhost - the local machine.

### IDNA

If curl was built with International Domain Name (IDN) support, it can also handle host names using non-ASCII characters.

When built with libidn2, curl uses the IDNA 2008 standard. This is equivalent to the WHATWG URL spec, but differs from certain browsers that use IDNA 2003 Transitional Processing. The two standards have a huge overlap but differ slightly, perhaps most famously in how they deal with the German "double s" (`ß`).

When winidn is used, curl uses IDNA 2003 Transitional Processing, like the rest of Windows.

## Port number

If there's a colon after the hostname, that should be followed by the port number to use, in the range 1 - 65535. curl also supports a blank port number field - but only if the URL starts with a scheme.

If the port number is not specified in the URL, curl will use a default port based on the provided scheme:

DICT 2628, FTP 21, FTPS 990, GOPHER 70, GOPHERS 70, HTTP 80, HTTPS 443, IMAP 143, IMAPS 993, LDAP 389, LDAPS 636, MQTT 1883, POP3 110, POP3S 995, RTMP 1935, RTMPS 443, RTMPT 80, RTSP 554, SCP 22, SFTP 22, SMB 445, SMBS 445, SMTP 25, SMTPS 465, TELNET 23, TFTP 69

# Scheme specific behaviors

## FTP

The path part of an FTP request specifies the file to retrieve and from which directory. If the file part is omitted then libcurl downloads the directory listing for the directory specified.
If the directory is omitted then the directory listing for the root / home directory will be returned. FTP servers typically put the user in its "home directory" after login, which then differs between users. To explicitly specify the root directory of an FTP server, start the path with double slash `//` or `/%2f` (2F is the hexadecimal value of the ASCII code for the slash).

## FILE

When a `FILE://` URL is accessed on Windows systems, it can be crafted in a way so that Windows attempts to connect to a (remote) machine when curl wants to read or write such a path.

curl only allows the hostname part of a FILE URL to be one out of these three alternatives: `localhost`, `127.0.0.1` or blank ("", zero characters). Anything else will make curl fail to parse the URL.

### Windows-specific FILE details

curl accepts that the FILE URL's path starts with a "drive letter". That is a single letter `a` to `z` followed by a colon or a pipe character (`|`).

The Windows operating system itself will convert some file accesses to perform network accesses over SMB/CIFS, through several different file path patterns. This way, a `file://` URL passed to curl *might* be converted into a network access inadvertently and unknowingly to curl. This is a Windows feature curl cannot control or disable.

## IMAP

The path part of an IMAP request not only specifies the mailbox to list or select, but can also be used to check the `UIDVALIDITY` of the mailbox, to specify the `UID`, `SECTION` and `PARTIAL` octets of the message to fetch and to specify what messages to search for.
A top level folder list:

    imap://user:[email protected]

A folder list on the user's inbox:

    imap://user:[email protected]/INBOX

Select the user's inbox and fetch message with uid = 1:

    imap://user:[email protected]/INBOX/;UID=1

Select the user's inbox and fetch the first message in the mail box:

    imap://user:[email protected]/INBOX/;MAILINDEX=1

Select the user's inbox, check the `UIDVALIDITY` of the mailbox is 50 and fetch message 2 if it is:

    imap://user:[email protected]/INBOX;UIDVALIDITY=50/;UID=2

Select the user's inbox and fetch the text portion of message 3:

    imap://user:[email protected]/INBOX/;UID=3/;SECTION=TEXT

Select the user's inbox and fetch the first 1024 octets of message 4:

    imap://user:[email protected]/INBOX/;UID=4/;PARTIAL=0.1024

Select the user's inbox and check for NEW messages:

    imap://user:[email protected]/INBOX?NEW

Select the user's inbox and search for messages containing "shadows" in the subject line:

    imap://user:[email protected]/INBOX?SUBJECT%20shadows

Searching via the query part of the URL `?` is a search request for the results to be returned as message sequence numbers (MAILINDEX). It is possible to make a search request for results to be returned as unique ID numbers (UID) by using a custom curl request via `-X`. UID numbers are unique per session (and multiple sessions when UIDVALIDITY is the same).

For example, if you are searching for `"foo bar"` in header+body (TEXT) and you want the matching MAILINDEX numbers returned, then you could search via URL:

    imap://user:[email protected]/INBOX?TEXT%20%22foo%20bar%22

.. but if you wanted matching UID numbers you would have to use a custom request:

    imap://user:[email protected]/INBOX -X "UID SEARCH TEXT \"foo bar\""

For more information about IMAP commands please see RFC 9051. For more information about the individual components of an IMAP URL please see RFC 5092.

* Note old curl versions would FETCH by message sequence number when UID was specified in the URL.
That was a bug fixed in 7.62.0, which added MAILINDEX to FETCH by mail sequence number.

## LDAP

The path part of a LDAP request can be used to specify the: Distinguished Name, Attributes, Scope, Filter and Extension for a LDAP search. Each field is separated by a question mark and when that field is not required an empty string with the question mark separator should be included.

Search for the DN as `My Organisation`:

    ldap://ldap.example.com/o=My%20Organisation

The same search but only returning postalAddress attributes:

    ldap://ldap.example.com/o=My%20Organisation?postalAddress

Search for an empty DN and request information about the `rootDomainNamingContext` attribute for an Active Directory server:

    ldap://ldap.example.com/?rootDomainNamingContext

For more information about the individual components of a LDAP URL please see [RFC 4516](https://tools.ietf.org/html/rfc4516).

## POP3

The path part of a POP3 request specifies the message ID to retrieve. If the ID is not specified then a list of waiting messages is returned instead.

## SCP

The path part of an SCP URL specifies the path and file to retrieve or upload. The file is taken as an absolute path from the root directory on the server.

To specify a path relative to the user's home directory on the server, prepend `~/` to the path portion.

## SFTP

The path part of an SFTP URL specifies the file to retrieve or upload. If the path ends with a slash (`/`) then a directory listing is returned instead of a file. If the path is omitted entirely then the directory listing for the root / home directory will be returned.

## SMB

The path part of a SMB request specifies the file to retrieve and from what share and directory or the share to upload to and as such, may not be omitted. If the user name is embedded in the URL then it must contain the domain name and as such, the backslash must be URL encoded as %5c (5C is the hexadecimal value of the ASCII code for the backslash).
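As a quick check of the percent-encoding values used in these sections, Python's standard `urllib.parse` can encode and decode the characters in question (illustration only — curl does its own URL encoding):

```python
from urllib.parse import quote, unquote

# The slash used to address the FTP root directory encodes as %2F ...
print(quote("/", safe=""))   # %2F
# ... while the backslash separating an SMB domain and user name encodes as %5C.
print(quote("\\", safe=""))  # %5C

# Building an SMB-style userinfo field with an embedded domain name:
userinfo = quote("domain\\user", safe="")
print(userinfo)              # domain%5Cuser
print(unquote(userinfo))     # domain\user
```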
curl supports SMB version 1 (only).

## SMTP

The path part of a SMTP request specifies the host name to present during communication with the mail server. If the path is omitted, then libcurl will attempt to resolve the local computer's host name. However, this may not return the fully qualified domain name that is required by some mail servers and specifying this path allows you to set an alternative name, such as your machine's fully qualified domain name, which you might have obtained from an external function such as gethostname or getaddrinfo.

The default SMTP port is 25. Some servers use port 587 as an alternative.

## RTMP

There's no official URL spec for RTMP so libcurl uses the URL syntax supported by the underlying librtmp library. It has a syntax where it wants a traditional URL, followed by a space and a series of space-separated `name=value` pairs.

While space is not typically a "legal" character in a URL, libcurl accepts them here. When a user wants to pass in a `#` (hash) character it will be treated as a fragment and get cut off by libcurl if provided literally. You will instead have to escape it by providing it as backslash and its ASCII value in hexadecimal: `\23`.
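The backslash-plus-hex escaping described for RTMP can be sketched in Python. `rtmp_escape` is a hypothetical helper name for illustration — librtmp itself does the real parsing:

```python
def rtmp_escape(value: str, special: str = "#") -> str:
    """Escape selected characters as backslash + two-digit ASCII hex,
    the librtmp convention (e.g. '#' becomes '\\23', since '#' is 0x23)."""
    return "".join(
        "\\%02x" % ord(ch) if ch in special else ch
        for ch in value
    )

print(rtmp_escape("my#stream"))  # my\23stream
```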
--- File: docs/BUG-BOUNTY.md ---
# The curl bug bounty

The curl project runs a bug bounty program in association with [HackerOne](https://www.hackerone.com) and the [Internet Bug Bounty](https://internetbugbounty.org).

# How does it work?

Start out by posting your suspected security vulnerability directly to [curl's HackerOne program](https://hackerone.com/curl).

After you have reported a security issue, it has been deemed credible, and a patch and advisory has been made public, you may be eligible for a bounty from this program. See all details at [https://hackerone.com/curl](https://hackerone.com/curl)

This bounty relies on funds from sponsors. If you use curl professionally, consider helping to fund this! See [https://opencollective.com/curl](https://opencollective.com/curl) for details.

# What are the reward amounts?

The curl project offers monetary compensation for reported and published security vulnerabilities. The amount of money that is rewarded depends on how serious the flaw is determined to be.

We offer reward money *up to* a certain amount per severity. The curl security team determines the severity of each reported flaw on a case by case basis and the exact amount rewarded to the reporter is then decided.

Check out the current award amounts at [https://hackerone.com/curl](https://hackerone.com/curl)

# Who is eligible for a reward?

Everyone and anyone who reports a security problem in a released curl version that has not already been reported can ask for a bounty.

Vulnerabilities in features that are off by default and documented as experimental are not eligible for a reward.

The vulnerability has to be fixed and publicly announced (by the curl project) before a bug bounty will be considered. Bounties need to be requested within twelve months from the publication of the vulnerability.

# Product vulnerabilities only

This bug bounty only concerns the curl and libcurl products and thus their respective source codes - when running on existing hardware.
It does not include documentation, websites, or other infrastructure.

The curl security team is the sole arbiter of whether a reported flaw qualifies for a bounty or not.

# How are vulnerabilities graded?

The grading of each reported vulnerability that makes a reward claim will be performed by the curl security team. The grading will be based on the CVSS (Common Vulnerability Scoring System) 3.0.

# How are reward amounts determined?

The curl security team first gives the vulnerability a score, as mentioned above, and based on that level we set an amount depending on the specifics of the individual case. Other sponsors of the program might also get involved and can raise the amounts depending on the particular issue.

# What happens if the bounty fund is drained?

The bounty fund depends on sponsors. If we pay out more bounties than we add, the fund will eventually drain. If that ends up happening, we will simply not be able to pay out as high bounties as we would like and hope that we can convince new sponsors to help us top up the fund again.

# Regarding taxes, etc. on the bounties

In the event that the individual receiving a curl bug bounty needs to pay taxes on the reward money, the responsibility lies with the receiver. The curl project or its security team never actually receive any of this money, hold the money, or pay out the money.
--- File: docs/cmdline-opts/MANPAGE.md ---
# curl man page generator

This is the curl man page generator. It generates a single nroff man page output from the set of source files in this directory. There is one source file for each supported command line option. The output gets `page-header` prepended and `page-footer` appended. The format is described below.

## Option files

Each command line option is described in a file named `<long name>.d`, where the option name is written without any prefixing dashes. For example, the file for the -v, --verbose option is named `verbose.d`.

Each file has a set of meta-data and a body of text.

### Meta-data

    Short: (single letter, without dash)
    Long: (long form name, without dashes)
    Arg: (the argument the option takes)
    Magic: (description of "magic" options)
    Tags: (space separated list)
    Protocols: (space separated list for which protocols this option works)
    Added: (version number in which this was added)
    Mutexed: (space separated list of options this overrides, no dashes)
    Requires: (space separated list of features this requires, no dashes)
    See-also: (space separated list of related options, no dashes)
    Help: (short text for the --help output for this option)
    Example: (example command line, without "curl" and can use `$URL`)
    --- (end of meta-data)

### Body

The body of the description. Only refer to options with their long form option version, like `--verbose`. The output generator will replace such with the correct markup that shows both short and long version.

Text written within `*asterisks*` will get shown using italics. Text within two `**asterisks**` will get shown using bold.

## Header and footer

`page-header` is the file that will be output before the generated options output for the master man page.

`page-footer` is appended after all the individual options.

## Generate

`./gen.pl mainpage`

This command outputs a single huge nroff file, meant to become `curl.1`. The full curl man page.
`./gen.pl listhelp`

Generates a full `curl --help` output for all known command line options.
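The `.d` file layout described above is simple to parse. Here is a minimal Python sketch; `parse_dfile` is a hypothetical name (the real parsing lives in `gen.pl`), and repeated keys such as multiple `Example:` lines are not handled:

```python
def parse_dfile(text: str):
    """Split a gen.pl option file into its meta-data dict and body text.

    Meta-data lines have the form "Key: value" and end at a "---" line;
    everything after that line is the body.
    """
    meta = {}
    lines = iter(text.splitlines())
    for line in lines:
        if line.strip() == "---":
            break
        key, sep, value = line.partition(":")
        if sep:
            meta[key.strip()] = value.strip()
    body = "\n".join(lines)  # the iterator now holds only post-"---" lines
    return meta, body

sample = """Short: v
Long: verbose
Help: Make the operation more talkative
---
Makes curl verbose during the operation.
"""
meta, body = parse_dfile(sample)
print(meta["Long"])  # verbose
```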
--- File: docs/cmdline-opts/gen.pl ---
#!/usr/bin/env perl #*************************************************************************** # _ _ ____ _ # Project ___| | | | _ \| | # / __| | | | |_) | | # | (__| |_| | _ <| |___ # \___|\___/|_| \_\_____| # # Copyright (C) 1998 - 2021, Daniel Stenberg, <[email protected]>, et al. # # This software is licensed as described in the file COPYING, which # you should have received as part of this distribution. The terms # are also available at https://curl.se/docs/copyright.html. # # You may opt to use, copy, modify, merge, publish, distribute and/or sell # copies of the Software, and permit persons to whom the Software is # furnished to do so, under the terms of the COPYING file. # # This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY # KIND, either express or implied. # ########################################################################### =begin comment This script generates the manpage. Example: gen.pl <command> [files] > curl.1 Dev notes: We open *input* files in :crlf translation (a no-op on many platforms) in case we have CRLF line endings in Windows but a perl that defaults to LF. Unfortunately it seems some perls like msysgit can't handle a global input-only :crlf so it has to be specified on each file open for text input. 
=end comment =cut my %optshort; my %optlong; my %helplong; my %arglong; my %redirlong; my %protolong; my %catlong; use POSIX qw(strftime); my $date = strftime "%B %d %Y", localtime; my $year = strftime "%Y", localtime; my $version = "unknown"; open(INC, "<../../include/curl/curlver.h"); while(<INC>) { if($_ =~ /^#define LIBCURL_VERSION \"([0-9.]*)/) { $version = $1; last; } } close(INC); # get the long name version, return the man page string sub manpageify { my ($k)=@_; my $l; if($optlong{$k} ne "") { # both short + long $l = "\\fI-".$optlong{$k}.", --$k\\fP"; } else { # only long $l = "\\fI--$k\\fP"; } return $l; } sub printdesc { my @desc = @_; for my $d (@desc) { if($d =~ /\(Added in ([0-9.]+)\)/i) { my $ver = $1; if(too_old($ver)) { $d =~ s/ *\(Added in $ver\)//gi; } } if($d !~ /^.\\"/) { # **bold** $d =~ s/\*\*([^ ]*)\*\*/\\fB$1\\fP/g; # *italics* $d =~ s/\*([^ ]*)\*/\\fI$1\\fP/g; } # skip lines starting with space (examples) if($d =~ /^[^ ]/) { for my $k (keys %optlong) { my $l = manpageify($k); $d =~ s/--$k([^a-z0-9_-])/$l$1/; } } # quote "bare" minuses in the output $d =~ s/( |\\fI|^)--/$1\\-\\-/g; $d =~ s/([ -]|\\fI|^)-/$1\\-/g; # handle single quotes first on the line $d =~ s/(\s*)\'/$1\\(aq/; print $d; } } sub seealso { my($standalone, $data)=@_; if($standalone) { return sprintf ".SH \"SEE ALSO\"\n$data\n"; } else { return "See also $data. 
"; } } sub overrides { my ($standalone, $data)=@_; if($standalone) { return ".SH \"OVERRIDES\"\n$data\n"; } else { return $data; } } sub protocols { my ($standalone, $data)=@_; if($standalone) { return ".SH \"PROTOCOLS\"\n$data\n"; } else { return "($data) "; } } sub too_old { my ($version)=@_; my $a = 999999; if($version =~ /^(\d+)\.(\d+)\.(\d+)/) { $a = $1 * 1000 + $2 * 10 + $3; } elsif($version =~ /^(\d+)\.(\d+)/) { $a = $1 * 1000 + $2 * 10; } if($a < 7300) { # we consider everything before 7.30.0 to be too old to mention # specific changes for return 1; } return 0; } sub added { my ($standalone, $data)=@_; if(too_old($data)) { # don't mention ancient additions return ""; } if($standalone) { return ".SH \"ADDED\"\nAdded in curl version $data\n"; } else { return "Added in $data. "; } } sub single { my ($f, $standalone)=@_; open(F, "<:crlf", "$f") || return 1; my $short; my $long; my $tags; my $added; my $protocols; my $arg; my $mutexed; my $requires; my $category; my $seealso; my @examples; # there can be more than one my $magic; # cmdline special option my $line; while(<F>) { $line++; if(/^Short: *(.)/i) { $short=$1; } elsif(/^Long: *(.*)/i) { $long=$1; } elsif(/^Added: *(.*)/i) { $added=$1; } elsif(/^Tags: *(.*)/i) { $tags=$1; } elsif(/^Arg: *(.*)/i) { $arg=$1; } elsif(/^Magic: *(.*)/i) { $magic=$1; } elsif(/^Mutexed: *(.*)/i) { $mutexed=$1; } elsif(/^Protocols: *(.*)/i) { $protocols=$1; } elsif(/^See-also: *(.*)/i) { $seealso=$1; } elsif(/^Requires: *(.*)/i) { $requires=$1; } elsif(/^Category: *(.*)/i) { $category=$1; } elsif(/^Example: *(.*)/i) { push @examples, $1; } elsif(/^Help: *(.*)/i) { ; } elsif(/^---/) { if(!$long) { print STDERR "ERROR: no 'Long:' in $f\n"; exit 1; } if(!$category) { print STDERR "ERROR: no 'Category:' in $f\n"; exit 2; } if(!$examples[0]) { print STDERR "$f:$line:1:ERROR: no 'Example:' present\n"; exit 2; } if(!$added) { print STDERR "$f:$line:1:ERROR: no 'Added:' version present\n"; exit 2; } last; } else { chomp; print STDERR 
"WARN: unrecognized line in $f, ignoring:\n:'$_';" } } my @desc; while(<F>) { push @desc, $_; } close(F); my $opt; if(defined($short) && $long) { $opt = "-$short, --$long"; } elsif($short && !$long) { $opt = "-$short"; } elsif($long && !$short) { $opt = "--$long"; } if($arg) { $opt .= " $arg"; } # quote "bare" minuses in opt $opt =~ s/( |^)--/$1\\-\\-/g; $opt =~ s/( |^)-/$1\\-/g; if($standalone) { print ".TH curl 1 \"30 Nov 2016\" \"curl 7.52.0\" \"curl manual\"\n"; print ".SH OPTION\n"; print "curl $opt\n"; } else { print ".IP \"$opt\"\n"; } if($protocols) { print protocols($standalone, $protocols); } if($standalone) { print ".SH DESCRIPTION\n"; } printdesc(@desc); undef @desc; my @foot; if($seealso) { my @m=split(/ /, $seealso); my $mstr; my $and = 0; my $num = scalar(@m); if($num > 2) { # use commas up to this point $and = $num - 1; } my $i = 0; for my $k (@m) { if(!$helplong{$k}) { print STDERR "WARN: $f see-alsos a non-existing option: $k\n"; } my $l = manpageify($k); my $sep = " and"; if($and && ($i < $and)) { $sep = ","; } $mstr .= sprintf "%s$l", $mstr?"$sep ":""; $i++; } push @foot, seealso($standalone, $mstr); } if($requires) { my $l = manpageify($long); push @foot, "$l requires that the underlying libcurl". " was built to support $requires. "; } if($mutexed) { my @m=split(/ /, $mutexed); my $mstr; for my $k (@m) { if(!$helplong{$k}) { print STDERR "WARN: $f mutexes a non-existing option: $k\n"; } my $l = manpageify($k); $mstr .= sprintf "%s$l", $mstr?" and ":""; } push @foot, overrides($standalone, "This option overrides $mstr. 
"); } if($examples[0]) { my $s =""; $s="s" if($examples[1]); print "\nExample$s:\n.nf\n"; foreach my $e (@examples) { $e =~ s!\$URL!https://example.com!g; print " curl $e\n"; } print ".fi\n"; } if($added) { push @foot, added($standalone, $added); } if($foot[0]) { print "\n"; my $f = join("", @foot); $f =~ s/ +\z//; # remove trailing space print "$f\n"; } return 0; } sub getshortlong { my ($f)=@_; open(F, "<:crlf", "$f"); my $short; my $long; my $help; my $arg; my $protocols; my $category; while(<F>) { if(/^Short: (.)/i) { $short=$1; } elsif(/^Long: (.*)/i) { $long=$1; } elsif(/^Help: (.*)/i) { $help=$1; } elsif(/^Arg: (.*)/i) { $arg=$1; } elsif(/^Protocols: (.*)/i) { $protocols=$1; } elsif(/^Category: (.*)/i) { $category=$1; } elsif(/^---/) { last; } } close(F); if($short) { $optshort{$short}=$long; } if($long) { $optlong{$long}=$short; $helplong{$long}=$help; $arglong{$long}=$arg; $protolong{$long}=$protocols; $catlong{$long}=$category; } } sub indexoptions { my (@files) = @_; foreach my $f (@files) { getshortlong($f); } } sub header { my ($f)=@_; open(F, "<:crlf", "$f"); my @d; while(<F>) { s/%DATE/$date/g; s/%VERSION/$version/g; push @d, $_; } close(F); printdesc(@d); } sub listhelp { print <<HEAD /*************************************************************************** * _ _ ____ _ * Project ___| | | | _ \\| | * / __| | | | |_) | | * | (__| |_| | _ <| |___ * \\___|\\___/|_| \\_\\_____| * * Copyright (C) 1998 - $year, Daniel Stenberg, <[email protected]>, et al. * * This software is licensed as described in the file COPYING, which * you should have received as part of this distribution. The terms * are also available at https://curl.se/docs/copyright.html. * * You may opt to use, copy, modify, merge, publish, distribute and/or sell * copies of the Software, and permit persons to whom the Software is * furnished to do so, under the terms of the COPYING file. 
* * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY * KIND, either express or implied. * ***************************************************************************/ #include "tool_setup.h" #include "tool_help.h" /* * DO NOT edit tool_listhelp.c manually. * This source file is generated with the following command: cd \$srcroot/docs/cmdline-opts ./gen.pl listhelp *.d > \$srcroot/src/tool_listhelp.c */ const struct helptxt helptext[] = { HEAD ; foreach my $f (sort keys %helplong) { my $long = $f; my $short = $optlong{$long}; my @categories = split ' ', $catlong{$long}; my $bitmask; my $opt; if(defined($short) && $long) { $opt = "-$short, --$long"; } elsif($long && !$short) { $opt = " --$long"; } for my $i (0 .. $#categories) { $bitmask .= 'CURLHELP_' . uc $categories[$i]; # If not last element, append | if($i < $#categories) { $bitmask .= ' | '; } } my $arg = $arglong{$long}; if($arg) { $opt .= " $arg"; } my $desc = $helplong{$f}; $desc =~ s/\"/\\\"/g; # escape double quotes my $line = sprintf " {\"%s\",\n \"%s\",\n %s},\n", $opt, $desc, $bitmask; if(length($opt) > 78) { print STDERR "WARN: the --$long name is too long\n"; } elsif(length($desc) > 78) { print STDERR "WARN: the --$long description is too long\n"; } print $line; } print <<FOOT { NULL, NULL, CURLHELP_HIDDEN } }; FOOT ; } sub listcats { my %allcats; foreach my $f (sort keys %helplong) { my @categories = split ' ', $catlong{$f}; foreach (@categories) { $allcats{$_} = undef; } } my @categories; foreach my $key (keys %allcats) { push @categories, $key; } @categories = sort @categories; unshift @categories, 'hidden'; for my $i (0..$#categories) { print '#define ' . 'CURLHELP_' . uc($categories[$i]) . ' ' . "1u << " . $i . 
"u\n"; } } sub mainpage { my (@files) = @_; # show the page header header("page-header"); # output docs for all options foreach my $f (sort @files) { if(single($f, 0)) { print STDERR "Can't read $f?\n"; } } header("page-footer"); } sub showonly { my ($f) = @_; if(single($f, 1)) { print STDERR "$f: failed\n"; } } sub showprotocols { my %prots; foreach my $f (keys %optlong) { my @p = split(/ /, $protolong{$f}); for my $p (@p) { $prots{$p}++; } } for(sort keys %prots) { printf "$_ (%d options)\n", $prots{$_}; } } sub getargs { my ($f, @s) = @_; if($f eq "mainpage") { mainpage(@s); return; } elsif($f eq "listhelp") { listhelp(); return; } elsif($f eq "single") { showonly($s[0]); return; } elsif($f eq "protos") { showprotocols(); return; } elsif($f eq "listcats") { listcats(); return; } print "Usage: gen.pl <mainpage/listhelp/single FILE/protos/listcats> [files]\n"; } #------------------------------------------------------------------------ my $cmd = shift @ARGV; my @files = @ARGV; # the rest are the files # learn all existing options indexoptions(@files); getargs($cmd, @files);
--- File: docs/cmdline-opts/CMakeLists.txt ---
#*************************************************************************** # _ _ ____ _ # Project ___| | | | _ \| | # / __| | | | |_) | | # | (__| |_| | _ <| |___ # \___|\___/|_| \_\_____| # # Copyright (C) 1998 - 2020, Daniel Stenberg, <[email protected]>, et al. # # This software is licensed as described in the file COPYING, which # you should have received as part of this distribution. The terms # are also available at https://curl.se/docs/copyright.html. # # You may opt to use, copy, modify, merge, publish, distribute and/or sell # copies of the Software, and permit persons to whom the Software is # furnished to do so, under the terms of the COPYING file. # # This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY # KIND, either express or implied. # ########################################################################### set(MANPAGE "${CURL_BINARY_DIR}/docs/curl.1") # Load DPAGES and OTHERPAGES from shared file transform_makefile_inc("Makefile.inc" "${CMAKE_CURRENT_BINARY_DIR}/Makefile.inc.cmake") include("${CMAKE_CURRENT_BINARY_DIR}/Makefile.inc.cmake") add_custom_command(OUTPUT "${MANPAGE}" COMMAND "${PERL_EXECUTABLE}" "${CMAKE_CURRENT_SOURCE_DIR}/gen.pl" mainpage "${CMAKE_CURRENT_SOURCE_DIR}" > "${MANPAGE}" DEPENDS ${DPAGES} ${OTHERPAGES} VERBATIM ) add_custom_target(generate-curl.1 DEPENDS "${MANPAGE}")
--- File: docs/cmdline-opts/Makefile.inc ---
#*************************************************************************** # _ _ ____ _ # Project ___| | | | _ \| | # / __| | | | |_) | | # | (__| |_| | _ <| |___ # \___|\___/|_| \_\_____| # # Copyright (C) 1998 - 2021, Daniel Stenberg, <[email protected]>, et al. # # This software is licensed as described in the file COPYING, which # you should have received as part of this distribution. The terms # are also available at https://curl.se/docs/copyright.html. # # You may opt to use, copy, modify, merge, publish, distribute and/or sell # copies of the Software, and permit persons to whom the Software is # furnished to do so, under the terms of the COPYING file. # # This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY # KIND, either express or implied. # ########################################################################### # Shared between Makefile.am and CMakeLists.txt DPAGES = \ abstract-unix-socket.d \ alt-svc.d \ anyauth.d \ append.d \ aws-sigv4.d \ basic.d \ cacert.d \ capath.d \ cert-status.d \ cert-type.d \ cert.d \ ciphers.d \ compressed-ssh.d \ compressed.d \ config.d \ connect-timeout.d \ connect-to.d \ continue-at.d \ cookie-jar.d \ cookie.d \ create-dirs.d \ create-file-mode.d \ crlf.d \ crlfile.d \ curves.d \ data-ascii.d \ data-binary.d \ data-raw.d \ data-urlencode.d \ data.d \ delegation.d \ digest.d \ disable-eprt.d \ disable-epsv.d \ disable.d \ disallow-username-in-url.d \ dns-interface.d \ dns-ipv4-addr.d \ dns-ipv6-addr.d \ dns-servers.d \ doh-cert-status.d \ doh-insecure.d \ doh-url.d \ dump-header.d \ egd-file.d \ engine.d \ etag-compare.d \ etag-save.d \ expect100-timeout.d \ fail-early.d \ fail-with-body.d \ fail.d \ false-start.d \ form-string.d \ form.d \ ftp-account.d \ ftp-alternative-to-user.d \ ftp-create-dirs.d \ ftp-method.d \ ftp-pasv.d \ ftp-port.d \ ftp-pret.d \ ftp-skip-pasv-ip.d \ ftp-ssl-ccc-mode.d \ ftp-ssl-ccc.d \ ftp-ssl-control.d \ get.d \ globoff.d \ happy-eyeballs-timeout-ms.d \ 
haproxy-protocol.d \ head.d \ header.d \ help.d \ hostpubmd5.d \ hostpubsha256.d \ hsts.d \ http0.9.d \ http1.0.d \ http1.1.d \ http2-prior-knowledge.d \ http2.d \ http3.d \ ignore-content-length.d \ include.d \ insecure.d \ interface.d \ ipv4.d \ ipv6.d \ junk-session-cookies.d \ keepalive-time.d \ key-type.d \ key.d \ krb.d \ libcurl.d \ limit-rate.d \ list-only.d \ local-port.d \ location-trusted.d \ location.d \ login-options.d \ mail-auth.d \ mail-from.d \ mail-rcpt-allowfails.d \ mail-rcpt.d \ manual.d \ max-filesize.d \ max-redirs.d \ max-time.d \ metalink.d \ negotiate.d \ netrc-file.d \ netrc-optional.d \ netrc.d \ next.d \ no-alpn.d \ no-buffer.d \ no-keepalive.d \ no-npn.d \ no-progress-meter.d \ no-sessionid.d \ noproxy.d \ ntlm-wb.d \ ntlm.d \ oauth2-bearer.d \ output-dir.d \ output.d \ parallel-immediate.d \ parallel-max.d \ parallel.d \ pass.d \ path-as-is.d \ pinnedpubkey.d \ post301.d \ post302.d \ post303.d \ preproxy.d \ progress-bar.d \ proto-default.d \ proto-redir.d \ proto.d \ proxy-anyauth.d \ proxy-basic.d \ proxy-cacert.d \ proxy-capath.d \ proxy-cert-type.d \ proxy-cert.d \ proxy-ciphers.d \ proxy-crlfile.d \ proxy-digest.d \ proxy-header.d \ proxy-insecure.d \ proxy-key-type.d \ proxy-key.d \ proxy-negotiate.d \ proxy-ntlm.d \ proxy-pass.d \ proxy-pinnedpubkey.d \ proxy-service-name.d \ proxy-ssl-allow-beast.d \ proxy-ssl-auto-client-cert.d \ proxy-tls13-ciphers.d \ proxy-tlsauthtype.d \ proxy-tlspassword.d \ proxy-tlsuser.d \ proxy-tlsv1.d \ proxy-user.d \ proxy.d \ proxy1.0.d \ proxytunnel.d \ pubkey.d \ quote.d \ random-file.d \ range.d \ raw.d \ referer.d \ remote-header-name.d \ remote-name-all.d \ remote-name.d \ remote-time.d \ request-target.d \ request.d \ resolve.d \ retry-all-errors.d \ retry-connrefused.d \ retry-delay.d \ retry-max-time.d \ retry.d \ sasl-authzid.d \ sasl-ir.d \ service-name.d \ show-error.d \ silent.d \ socks4.d \ socks4a.d \ socks5-basic.d \ socks5-gssapi-nec.d \ socks5-gssapi-service.d \ socks5-gssapi.d \ 
socks5-hostname.d \ socks5.d \ speed-limit.d \ speed-time.d \ ssl-allow-beast.d \ ssl-auto-client-cert.d \ ssl-no-revoke.d \ ssl-reqd.d \ ssl-revoke-best-effort.d \ ssl.d \ sslv2.d \ sslv3.d \ stderr.d \ styled-output.d \ suppress-connect-headers.d \ tcp-fastopen.d \ tcp-nodelay.d \ telnet-option.d \ tftp-blksize.d \ tftp-no-options.d \ time-cond.d \ tls-max.d \ tls13-ciphers.d \ tlsauthtype.d \ tlspassword.d \ tlsuser.d \ tlsv1.0.d \ tlsv1.1.d \ tlsv1.2.d \ tlsv1.3.d \ tlsv1.d \ tr-encoding.d \ trace-ascii.d \ trace-time.d \ trace.d \ unix-socket.d \ upload-file.d \ url.d \ use-ascii.d \ user-agent.d \ user.d \ verbose.d \ version.d \ write-out.d \ xattr.d OTHERPAGES = page-footer page-header
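The CMake build loads the `DPAGES` list above through a `transform_makefile_inc` helper. The core of that job — reading a make-style variable with backslash line continuations — can be sketched in Python (`parse_make_var` is a hypothetical name for illustration, not the actual CMake macro):

```python
def parse_make_var(text: str, name: str):
    """Extract a make variable's values, honoring backslash line continuations.

    Simplified sketch: matches any line starting with `name`, so it does not
    distinguish e.g. DPAGES from a hypothetical DPAGES_EXTRA.
    """
    joined = text.replace("\\\n", " ")  # fold continued lines together
    for line in joined.splitlines():
        if line.strip().startswith(name):
            _, _, value = line.partition("=")
            return value.split()
    return []

sample = """DPAGES = \\
  verbose.d \\
  version.d

OTHERPAGES = page-footer page-header
"""
print(parse_make_var(sample, "DPAGES"))  # ['verbose.d', 'version.d']
```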
--- File: docs/libcurl/ABI.md ---
ABI - Application Binary Interface
==================================

"ABI" describes the low-level interface between an application program and a library. Calling conventions, function arguments, return values, struct sizes/defines and more.

[Wikipedia has a longer description](https://en.wikipedia.org/wiki/Application_binary_interface)

## Upgrades

A libcurl upgrade does not break the ABI or change established and documented behavior. Your application can remain using libcurl just as before, only with fewer bugs and possibly with added new features.

## Version Numbers

In libcurl land, you cannot tell by the libcurl version number if that libcurl is binary compatible or not with another libcurl version. As a rule, we do not break the ABI so you can *always* upgrade to a later version without any loss or change in functionality.

## Soname Bumps

Whenever there are changes done to the library that will cause an ABI breakage, that may require your application to get attention or possibly be changed to adhere to new things, we will bump the soname. Then the library will get a different output name and thus can in fact be installed in parallel with an older installed lib (on most systems). Thus, old applications built against the previous ABI version will remain working and using the older lib, while newer applications are built against and use the newer one.

During the first seven years of libcurl releases, there have only been four ABI breakages. We are determined to bump the SONAME as rarely as possible. Ideally, we never do it again.

## Downgrades

Going to an older libcurl version from one you are currently using can be a tricky thing. Mostly we add features and options to newer libcurls as that will not break ABI or hamper existing applications. This has the implication that going backwards may get you in a situation where you pick a libcurl that does not support the options your application needs.
Or possibly you even downgrade so far so you cross an ABI break border and thus a different soname, and then your application may need to adapt to the modified ABI.

## History

The previous major library soname number bumps (breaking backwards compatibility) happened the following times:

    0 - libcurl 7.1, August 2000
    1 - libcurl 7.5, December 2000
    2 - libcurl 7.7, March 2001
    3 - libcurl 7.12.0, June 2004
    4 - libcurl 7.16.0, October 2006
--- File: docs/libcurl/CMakeLists.txt ---
#***************************************************************************
#                                  _   _ ____  _
#  Project                     ___| | | |  _ \| |
#                             / __| | | | |_) | |
#                            | (__| |_| |  _ <| |___
#                             \___|\___/|_| \_\_____|
#
# Copyright (C) 1998 - 2020, Daniel Stenberg, <[email protected]>, et al.
#
# This software is licensed as described in the file COPYING, which
# you should have received as part of this distribution. The terms
# are also available at https://curl.se/docs/copyright.html.
#
# You may opt to use, copy, modify, merge, publish, distribute and/or sell
# copies of the Software, and permit persons to whom the Software is
# furnished to do so, under the terms of the COPYING file.
#
# This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY
# KIND, either express or implied.
#
###########################################################################

# Load man_MANS from shared file
transform_makefile_inc("Makefile.inc" "${CMAKE_CURRENT_BINARY_DIR}/Makefile.inc.cmake")
include("${CMAKE_CURRENT_BINARY_DIR}/Makefile.inc.cmake")

function(add_manual_pages _listname)
  foreach(_file IN LISTS ${_listname})
    if(_file STREQUAL "libcurl-symbols.3")
      # Special case, an auto-generated file.
      set(_srcfile "${CMAKE_CURRENT_BINARY_DIR}/${_file}")
    else()
      set(_srcfile "${CMAKE_CURRENT_SOURCE_DIR}/${_file}")
    endif()

    string(REPLACE ".3" ".html" _htmlfile "${CMAKE_CURRENT_BINARY_DIR}/${_file}")
    add_custom_command(OUTPUT "${_htmlfile}"
      COMMAND roffit "--mandir=${CMAKE_CURRENT_SOURCE_DIR}" "${_srcfile}" > "${_htmlfile}"
      DEPENDS "${_srcfile}"
      VERBATIM
    )

    string(REPLACE ".3" ".pdf" _pdffile "${CMAKE_CURRENT_BINARY_DIR}/${_file}")
    string(REPLACE ".3" ".ps" _psfile "${CMAKE_CURRENT_BINARY_DIR}/${_file}")
    # XXX any reason why groff -Tpdf (for gropdf) is not used?
    add_custom_command(OUTPUT "${_pdffile}"
      COMMAND groff -Tps -man "${_srcfile}" > "${_psfile}"
      COMMAND ps2pdf "${_psfile}" "${_pdffile}"
      COMMAND "${CMAKE_COMMAND}" -E remove "${_psfile}"
      DEPENDS "${_srcfile}"
      #BYPRODUCTS "${_psfile}"
      VERBATIM
    )
    # "BYPRODUCTS" for add_custom_command requires CMake 3.2. For now hope that
    # the temporary files are removed (i.e. the command is not interrupted).
  endforeach()
endfunction()

add_custom_command(OUTPUT libcurl-symbols.3
  COMMAND
    "${PERL_EXECUTABLE}"
    "${CMAKE_CURRENT_SOURCE_DIR}/mksymbolsmanpage.pl" <
    "${CMAKE_CURRENT_SOURCE_DIR}/symbols-in-versions" > libcurl-symbols.3
  DEPENDS
    "${CMAKE_CURRENT_SOURCE_DIR}/symbols-in-versions"
    "${CMAKE_CURRENT_SOURCE_DIR}/mksymbolsmanpage.pl"
  VERBATIM
)

add_manual_pages(man_MANS)

string(REPLACE ".3" ".html" HTMLPAGES "${man_MANS}")
string(REPLACE ".3" ".pdf" PDFPAGES "${man_MANS}")

add_custom_target(html DEPENDS ${HTMLPAGES})
add_custom_target(pdf DEPENDS ${PDFPAGES})

add_subdirectory(opts)
repos/gpt4all.zig/src/zig-libcurl/curl/docs/libcurl/Makefile.inc
#*************************************************************************** # _ _ ____ _ # Project ___| | | | _ \| | # / __| | | | |_) | | # | (__| |_| | _ <| |___ # \___|\___/|_| \_\_____| # # Copyright (C) 2008 - 2021, Daniel Stenberg, <[email protected]>, et al. # # This software is licensed as described in the file COPYING, which # you should have received as part of this distribution. The terms # are also available at https://curl.se/docs/copyright.html. # # You may opt to use, copy, modify, merge, publish, distribute and/or sell # copies of the Software, and permit persons to whom the Software is # furnished to do so, under the terms of the COPYING file. # # This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY # KIND, either express or implied. # ########################################################################### # Shared between Makefile.am and CMakeLists.txt man_MANS = \ curl_easy_cleanup.3 \ curl_easy_duphandle.3 \ curl_easy_escape.3 \ curl_easy_getinfo.3 \ curl_easy_init.3 \ curl_easy_option_by_id.3 \ curl_easy_option_by_name.3 \ curl_easy_option_next.3 \ curl_easy_pause.3 \ curl_easy_perform.3 \ curl_easy_recv.3 \ curl_easy_reset.3 \ curl_easy_send.3 \ curl_easy_setopt.3 \ curl_easy_strerror.3 \ curl_easy_unescape.3 \ curl_easy_upkeep.3 \ curl_escape.3 \ curl_formadd.3 \ curl_formfree.3 \ curl_formget.3 \ curl_free.3 \ curl_getdate.3 \ curl_getenv.3 \ curl_global_cleanup.3 \ curl_global_init.3 \ curl_global_init_mem.3 \ curl_global_sslset.3 \ curl_mime_addpart.3 \ curl_mime_data.3 \ curl_mime_data_cb.3 \ curl_mime_encoder.3 \ curl_mime_filedata.3 \ curl_mime_filename.3 \ curl_mime_free.3 \ curl_mime_headers.3 \ curl_mime_init.3 \ curl_mime_name.3 \ curl_mime_subparts.3 \ curl_mime_type.3 \ curl_mprintf.3 \ curl_multi_add_handle.3 \ curl_multi_assign.3 \ curl_multi_cleanup.3 \ curl_multi_fdset.3 \ curl_multi_info_read.3 \ curl_multi_init.3 \ curl_multi_perform.3 \ curl_multi_poll.3 \ curl_multi_remove_handle.3 \ 
curl_multi_setopt.3 \ curl_multi_socket.3 \ curl_multi_socket_action.3 \ curl_multi_socket_all.3 \ curl_multi_strerror.3 \ curl_multi_timeout.3 \ curl_multi_wakeup.3 \ curl_multi_wait.3 \ curl_share_cleanup.3 \ curl_share_init.3 \ curl_share_setopt.3 \ curl_share_strerror.3 \ curl_slist_append.3 \ curl_slist_free_all.3 \ curl_strequal.3 \ curl_strnequal.3 \ curl_unescape.3 \ curl_url.3 \ curl_url_cleanup.3 \ curl_url_dup.3 \ curl_url_get.3 \ curl_url_set.3 \ curl_url_strerror.3 \ curl_version.3 \ curl_version_info.3 \ libcurl-easy.3 \ libcurl-env.3 \ libcurl-errors.3 \ libcurl-multi.3 \ libcurl-security.3 \ libcurl-share.3 \ libcurl-symbols.3 \ libcurl-thread.3 \ libcurl-tutorial.3 \ libcurl-url.3 \ libcurl.3
repos/gpt4all.zig/src/zig-libcurl/curl/docs/libcurl/mksymbolsmanpage.pl
#!/usr/bin/env perl # *************************************************************************** # * _ _ ____ _ # * Project ___| | | | _ \| | # * / __| | | | |_) | | # * | (__| |_| | _ <| |___ # * \___|\___/|_| \_\_____| # * # * Copyright (C) 2015 - 2021, Daniel Stenberg, <[email protected]>, et al. # * # * This software is licensed as described in the file COPYING, which # * you should have received as part of this distribution. The terms # * are also available at https://curl.se/docs/copyright.html. # * # * You may opt to use, copy, modify, merge, publish, distribute and/or sell # * copies of the Software, and permit persons to whom the Software is # * furnished to do so, under the terms of the COPYING file. # * # * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY # * KIND, either express or implied. # * # *************************************************************************** my $version="7.41.0"; use POSIX qw(strftime); my $date = strftime "%b %e, %Y", localtime; my $year = strftime "%Y", localtime; print <<HEADER .\\" ************************************************************************** .\\" * _ _ ____ _ .\\" * Project ___| | | | _ \\| | .\\" * / __| | | | |_) | | .\\" * | (__| |_| | _ <| |___ .\\" * \\___|\\___/|_| \\_\\_____| .\\" * .\\" * Copyright (C) 1998 - $year, Daniel Stenberg, <daniel\@haxx.se>, et al. .\\" * .\\" * This software is licensed as described in the file COPYING, which .\\" * you should have received as part of this distribution. The terms .\\" * are also available at https://curl.se/docs/copyright.html. .\\" * .\\" * You may opt to use, copy, modify, merge, publish, distribute and/or sell .\\" * copies of the Software, and permit persons to whom the Software is .\\" * furnished to do so, under the terms of the COPYING file. .\\" * .\\" * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY .\\" * KIND, either express or implied. 
.\\" * .\\" ************************************************************************** .TH libcurl-symbols 3 "$date" "libcurl $version" "libcurl symbols" .SH NAME libcurl-symbols \\- libcurl symbol version information .SH "libcurl symbols" This man page details version information for public symbols provided in the libcurl header files. This lists the first version in which the symbol was introduced and for some symbols two additional information pieces: The first version in which the symbol is marked "deprecated" - meaning that since that version no new code should be written to use the symbol as it is marked for getting removed in a future. The last version that featured the specific symbol. Using the symbol in source code will make it no longer compile error-free after that specified version. This man page is automatically generated from the symbols-in-versions file. HEADER ; while(<STDIN>) { if($_ =~ /^(CURL[A-Z0-9_.]*) *(.*)/i) { my ($symbol, $rest)=($1,$2); my ($intro, $dep, $rem); if($rest =~ s/^([0-9.]*) *//) { $intro = $1; } if($rest =~ s/^([0-9.]*) *//) { $dep = $1; } if($rest =~ s/^([0-9.]*) *//) { $rem = $1; } print ".IP $symbol\nIntroduced in $intro\n"; if($dep) { print "Deprecated since $dep\n"; } if($rem) { print "Last used in $rem\n"; } } }
repos/gpt4all.zig/src/zig-libcurl/curl/docs/libcurl/symbols.pl
#!/usr/bin/env perl
#***************************************************************************
#                                  _   _ ____  _
#  Project                     ___| | | |  _ \| |
#                             / __| | | | |_) | |
#                            | (__| |_| |  _ <| |___
#                             \___|\___/|_| \_\_____|
#
# Copyright (C) 2011 - 2020, Daniel Stenberg, <[email protected]>, et al.
#
# This software is licensed as described in the file COPYING, which
# you should have received as part of this distribution. The terms
# are also available at https://curl.se/docs/copyright.html.
#
# You may opt to use, copy, modify, merge, publish, distribute and/or sell
# copies of the Software, and permit persons to whom the Software is
# furnished to do so, under the terms of the COPYING file.
#
# This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY
# KIND, either express or implied.
#
###########################################################################
#
# Experience has shown that the symbols-in-versions file is very useful to
# applications that want to build with a wide range of libcurl versions.
# It is however easy to get it wrong and the source gets a bit messy with all
# the fixed numerical comparisons.
#
# The point of this script is to provide an easy-to-use macro for libcurl-
# using applications to do preprocessor checks for specific libcurl defines,
# and yet make the code clearly show what the macro is used for.
#
# Run this script and generate libcurl-symbols.h and then use that header in
# a fashion similar to:
#
# #include "libcurl-symbols.h"
#
# #if LIBCURL_HAS(CURLOPT_MUTE)
#   has mute
# #else
#   no mute
# #endif
#

open F, "<symbols-in-versions";

sub str2num {
    my ($str)=@_;
    if($str =~ /([0-9]*)\.([0-9]*)\.*([0-9]*)/) {
        return sprintf("0x%06x", $1<<16 | $2 << 8 | $3);
    }
}

print <<EOS
#include <curl/curl.h>

#define LIBCURL_HAS(x) \\
  (defined(x ## _FIRST) && (x ## _FIRST <= LIBCURL_VERSION_NUM) && \\
   (!defined(x ## _LAST) || ( x ## _LAST >= LIBCURL_VERSION_NUM)))

EOS
    ;

while(<F>) {
    if(/^(CURL[^ ]*)[ \t]*(.*)/) {
        my ($sym, $vers)=($1, $2);

        my $intr;
        my $rm;
        my $dep;

        # is there removed info?
        if($vers =~ /([\d.]+)[ \t-]+([\d.-]+)[ \t]+([\d.]+)/) {
            ($intr, $dep, $rm)=($1, $2, $3);
        }
        # is it a dep-only line?
        elsif($vers =~ /([\d.]+)[ \t-]+([\d.]+)/) {
            ($intr, $dep)=($1, $2);
        }
        else {
            $intr = $vers;
        }

        my $inum = str2num($intr);

        print <<EOS
#define ${sym}_FIRST $inum /* Added in $intr */
EOS
            ;
        my $irm = str2num($rm);
        if($rm) {
            print <<EOS
#define ${sym}_LAST $irm /* Last featured in $rm */
EOS
                ;
        }
    }
}
repos/gpt4all.zig/src/zig-libcurl/curl/docs/libcurl/opts/CMakeLists.txt
#***************************************************************************
#                                  _   _ ____  _
#  Project                     ___| | | |  _ \| |
#                             / __| | | | |_) | |
#                            | (__| |_| |  _ <| |___
#                             \___|\___/|_| \_\_____|
#
# Copyright (C) 2009 - 2020, Daniel Stenberg, <[email protected]>, et al.
#
# This software is licensed as described in the file COPYING, which
# you should have received as part of this distribution. The terms
# are also available at https://curl.se/docs/copyright.html.
#
# You may opt to use, copy, modify, merge, publish, distribute and/or sell
# copies of the Software, and permit persons to whom the Software is
# furnished to do so, under the terms of the COPYING file.
#
# This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY
# KIND, either express or implied.
#
###########################################################################

# Load man_MANS from shared file
transform_makefile_inc("Makefile.inc" "${CMAKE_CURRENT_BINARY_DIR}/Makefile.inc.cmake")
include("${CMAKE_CURRENT_BINARY_DIR}/Makefile.inc.cmake")

add_manual_pages(man_MANS)

string(REPLACE ".3" ".html" HTMLPAGES "${man_MANS}")
string(REPLACE ".3" ".pdf" PDFPAGES "${man_MANS}")

add_custom_target(opts-html DEPENDS ${HTMLPAGES})
add_custom_target(opts-pdf DEPENDS ${PDFPAGES})

add_dependencies(html opts-html)
add_dependencies(pdf opts-pdf)
repos/gpt4all.zig/src/zig-libcurl/curl/docs/libcurl/opts/Makefile.inc
#*************************************************************************** # _ _ ____ _ # Project ___| | | | _ \| | # / __| | | | |_) | | # | (__| |_| | _ <| |___ # \___|\___/|_| \_\_____| # # Copyright (C) 1998 - 2021, Daniel Stenberg, <[email protected]>, et al. # # This software is licensed as described in the file COPYING, which # you should have received as part of this distribution. The terms # are also available at https://curl.se/docs/copyright.html. # # You may opt to use, copy, modify, merge, publish, distribute and/or sell # copies of the Software, and permit persons to whom the Software is # furnished to do so, under the terms of the COPYING file. # # This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY # KIND, either express or implied. # ########################################################################### # Shared between Makefile.am and CMakeLists.txt man_MANS = \ CURLINFO_ACTIVESOCKET.3 \ CURLINFO_APPCONNECT_TIME.3 \ CURLINFO_APPCONNECT_TIME_T.3 \ CURLINFO_CERTINFO.3 \ CURLINFO_CONDITION_UNMET.3 \ CURLINFO_CONNECT_TIME.3 \ CURLINFO_CONNECT_TIME_T.3 \ CURLINFO_CONTENT_LENGTH_DOWNLOAD.3 \ CURLINFO_CONTENT_LENGTH_DOWNLOAD_T.3 \ CURLINFO_CONTENT_LENGTH_UPLOAD.3 \ CURLINFO_CONTENT_LENGTH_UPLOAD_T.3 \ CURLINFO_CONTENT_TYPE.3 \ CURLINFO_COOKIELIST.3 \ CURLINFO_EFFECTIVE_METHOD.3 \ CURLINFO_EFFECTIVE_URL.3 \ CURLINFO_FILETIME.3 \ CURLINFO_FILETIME_T.3 \ CURLINFO_FTP_ENTRY_PATH.3 \ CURLINFO_HEADER_SIZE.3 \ CURLINFO_HTTPAUTH_AVAIL.3 \ CURLINFO_HTTP_CONNECTCODE.3 \ CURLINFO_HTTP_VERSION.3 \ CURLINFO_LASTSOCKET.3 \ CURLINFO_LOCAL_IP.3 \ CURLINFO_LOCAL_PORT.3 \ CURLINFO_NAMELOOKUP_TIME.3 \ CURLINFO_NAMELOOKUP_TIME_T.3 \ CURLINFO_NUM_CONNECTS.3 \ CURLINFO_OS_ERRNO.3 \ CURLINFO_PRETRANSFER_TIME.3 \ CURLINFO_PRETRANSFER_TIME_T.3 \ CURLINFO_PRIMARY_IP.3 \ CURLINFO_PRIMARY_PORT.3 \ CURLINFO_PRIVATE.3 \ CURLINFO_PROTOCOL.3 \ CURLINFO_PROXY_ERROR.3 \ CURLINFO_PROXY_SSL_VERIFYRESULT.3 \ CURLINFO_PROXYAUTH_AVAIL.3 \ CURLINFO_REDIRECT_COUNT.3 
\ CURLINFO_REDIRECT_TIME.3 \ CURLINFO_REDIRECT_TIME_T.3 \ CURLINFO_REDIRECT_URL.3 \ CURLINFO_REFERER.3 \ CURLINFO_REQUEST_SIZE.3 \ CURLINFO_RESPONSE_CODE.3 \ CURLINFO_RETRY_AFTER.3 \ CURLINFO_RTSP_CLIENT_CSEQ.3 \ CURLINFO_RTSP_CSEQ_RECV.3 \ CURLINFO_RTSP_SERVER_CSEQ.3 \ CURLINFO_RTSP_SESSION_ID.3 \ CURLINFO_SCHEME.3 \ CURLINFO_SIZE_DOWNLOAD.3 \ CURLINFO_SIZE_DOWNLOAD_T.3 \ CURLINFO_SIZE_UPLOAD.3 \ CURLINFO_SIZE_UPLOAD_T.3 \ CURLINFO_SPEED_DOWNLOAD.3 \ CURLINFO_SPEED_DOWNLOAD_T.3 \ CURLINFO_SPEED_UPLOAD.3 \ CURLINFO_SPEED_UPLOAD_T.3 \ CURLINFO_SSL_ENGINES.3 \ CURLINFO_SSL_VERIFYRESULT.3 \ CURLINFO_STARTTRANSFER_TIME.3 \ CURLINFO_STARTTRANSFER_TIME_T.3 \ CURLINFO_TLS_SESSION.3 \ CURLINFO_TLS_SSL_PTR.3 \ CURLINFO_TOTAL_TIME.3 \ CURLINFO_TOTAL_TIME_T.3 \ CURLMOPT_CHUNK_LENGTH_PENALTY_SIZE.3 \ CURLMOPT_CONTENT_LENGTH_PENALTY_SIZE.3 \ CURLMOPT_MAXCONNECTS.3 \ CURLMOPT_MAX_CONCURRENT_STREAMS.3 \ CURLMOPT_MAX_HOST_CONNECTIONS.3 \ CURLMOPT_MAX_PIPELINE_LENGTH.3 \ CURLMOPT_MAX_TOTAL_CONNECTIONS.3 \ CURLMOPT_PIPELINING.3 \ CURLMOPT_PIPELINING_SERVER_BL.3 \ CURLMOPT_PIPELINING_SITE_BL.3 \ CURLMOPT_PUSHDATA.3 \ CURLMOPT_PUSHFUNCTION.3 \ CURLMOPT_SOCKETDATA.3 \ CURLMOPT_SOCKETFUNCTION.3 \ CURLMOPT_TIMERDATA.3 \ CURLMOPT_TIMERFUNCTION.3 \ CURLOPT_ABSTRACT_UNIX_SOCKET.3 \ CURLOPT_ACCEPTTIMEOUT_MS.3 \ CURLOPT_ACCEPT_ENCODING.3 \ CURLOPT_ADDRESS_SCOPE.3 \ CURLOPT_ALTSVC.3 \ CURLOPT_ALTSVC_CTRL.3 \ CURLOPT_APPEND.3 \ CURLOPT_AUTOREFERER.3 \ CURLOPT_BUFFERSIZE.3 \ CURLOPT_CAINFO.3 \ CURLOPT_CAINFO_BLOB.3 \ CURLOPT_CAPATH.3 \ CURLOPT_CERTINFO.3 \ CURLOPT_CHUNK_BGN_FUNCTION.3 \ CURLOPT_CHUNK_DATA.3 \ CURLOPT_CHUNK_END_FUNCTION.3 \ CURLOPT_CLOSESOCKETDATA.3 \ CURLOPT_CLOSESOCKETFUNCTION.3 \ CURLOPT_CONNECTTIMEOUT.3 \ CURLOPT_CONNECTTIMEOUT_MS.3 \ CURLOPT_CONNECT_ONLY.3 \ CURLOPT_CONNECT_TO.3 \ CURLOPT_CONV_FROM_NETWORK_FUNCTION.3 \ CURLOPT_CONV_FROM_UTF8_FUNCTION.3 \ CURLOPT_CONV_TO_NETWORK_FUNCTION.3 \ CURLOPT_COOKIE.3 \ CURLOPT_COOKIEFILE.3 \ CURLOPT_COOKIEJAR.3 \ CURLOPT_COOKIELIST.3 
\ CURLOPT_COOKIESESSION.3 \ CURLOPT_COPYPOSTFIELDS.3 \ CURLOPT_CRLF.3 \ CURLOPT_CRLFILE.3 \ CURLOPT_CURLU.3 \ CURLOPT_CUSTOMREQUEST.3 \ CURLOPT_DEBUGDATA.3 \ CURLOPT_DEBUGFUNCTION.3 \ CURLOPT_DEFAULT_PROTOCOL.3 \ CURLOPT_DIRLISTONLY.3 \ CURLOPT_DISALLOW_USERNAME_IN_URL.3 \ CURLOPT_DNS_CACHE_TIMEOUT.3 \ CURLOPT_DNS_INTERFACE.3 \ CURLOPT_DNS_LOCAL_IP4.3 \ CURLOPT_DNS_LOCAL_IP6.3 \ CURLOPT_DNS_SERVERS.3 \ CURLOPT_DNS_SHUFFLE_ADDRESSES.3 \ CURLOPT_DNS_USE_GLOBAL_CACHE.3 \ CURLOPT_DOH_SSL_VERIFYHOST.3 \ CURLOPT_DOH_SSL_VERIFYPEER.3 \ CURLOPT_DOH_SSL_VERIFYSTATUS.3 \ CURLOPT_DOH_URL.3 \ CURLOPT_EGDSOCKET.3 \ CURLOPT_ERRORBUFFER.3 \ CURLOPT_EXPECT_100_TIMEOUT_MS.3 \ CURLOPT_FAILONERROR.3 \ CURLOPT_FILETIME.3 \ CURLOPT_FNMATCH_DATA.3 \ CURLOPT_FNMATCH_FUNCTION.3 \ CURLOPT_FOLLOWLOCATION.3 \ CURLOPT_FORBID_REUSE.3 \ CURLOPT_FRESH_CONNECT.3 \ CURLOPT_FTPPORT.3 \ CURLOPT_FTPSSLAUTH.3 \ CURLOPT_FTP_ACCOUNT.3 \ CURLOPT_FTP_ALTERNATIVE_TO_USER.3 \ CURLOPT_FTP_CREATE_MISSING_DIRS.3 \ CURLOPT_FTP_FILEMETHOD.3 \ CURLOPT_FTP_RESPONSE_TIMEOUT.3 \ CURLOPT_FTP_SKIP_PASV_IP.3 \ CURLOPT_FTP_SSL_CCC.3 \ CURLOPT_FTP_USE_EPRT.3 \ CURLOPT_FTP_USE_EPSV.3 \ CURLOPT_FTP_USE_PRET.3 \ CURLOPT_GSSAPI_DELEGATION.3 \ CURLOPT_HAPPY_EYEBALLS_TIMEOUT_MS.3 \ CURLOPT_HAPROXYPROTOCOL.3 \ CURLOPT_HEADER.3 \ CURLOPT_HEADERDATA.3 \ CURLOPT_HEADERFUNCTION.3 \ CURLOPT_HEADEROPT.3 \ CURLOPT_HSTS.3 \ CURLOPT_HSTSREADDATA.3 \ CURLOPT_HSTSREADFUNCTION.3 \ CURLOPT_HSTSWRITEDATA.3 \ CURLOPT_HSTSWRITEFUNCTION.3 \ CURLOPT_HSTS_CTRL.3 \ CURLOPT_HTTP09_ALLOWED.3 \ CURLOPT_HTTP200ALIASES.3 \ CURLOPT_HTTPAUTH.3 \ CURLOPT_HTTPGET.3 \ CURLOPT_HTTPHEADER.3 \ CURLOPT_HTTPPOST.3 \ CURLOPT_HTTPPROXYTUNNEL.3 \ CURLOPT_HTTP_CONTENT_DECODING.3 \ CURLOPT_HTTP_TRANSFER_DECODING.3 \ CURLOPT_HTTP_VERSION.3 \ CURLOPT_IGNORE_CONTENT_LENGTH.3 \ CURLOPT_INFILESIZE.3 \ CURLOPT_INFILESIZE_LARGE.3 \ CURLOPT_INTERFACE.3 \ CURLOPT_INTERLEAVEDATA.3 \ CURLOPT_INTERLEAVEFUNCTION.3 \ CURLOPT_IOCTLDATA.3 \ CURLOPT_IOCTLFUNCTION.3 \ 
CURLOPT_IPRESOLVE.3 \ CURLOPT_ISSUERCERT.3 \ CURLOPT_ISSUERCERT_BLOB.3 \ CURLOPT_KEEP_SENDING_ON_ERROR.3 \ CURLOPT_KEYPASSWD.3 \ CURLOPT_KRBLEVEL.3 \ CURLOPT_LOCALPORT.3 \ CURLOPT_LOCALPORTRANGE.3 \ CURLOPT_LOGIN_OPTIONS.3 \ CURLOPT_LOW_SPEED_LIMIT.3 \ CURLOPT_LOW_SPEED_TIME.3 \ CURLOPT_MAIL_AUTH.3 \ CURLOPT_MAIL_FROM.3 \ CURLOPT_MAIL_RCPT.3 \ CURLOPT_MAIL_RCPT_ALLLOWFAILS.3 \ CURLOPT_MAXAGE_CONN.3 \ CURLOPT_MAXCONNECTS.3 \ CURLOPT_MAXFILESIZE.3 \ CURLOPT_MAXFILESIZE_LARGE.3 \ CURLOPT_MAXLIFETIME_CONN.3 \ CURLOPT_MAXREDIRS.3 \ CURLOPT_MAX_RECV_SPEED_LARGE.3 \ CURLOPT_MAX_SEND_SPEED_LARGE.3 \ CURLOPT_MIMEPOST.3 \ CURLOPT_NETRC.3 \ CURLOPT_NETRC_FILE.3 \ CURLOPT_NEW_DIRECTORY_PERMS.3 \ CURLOPT_NEW_FILE_PERMS.3 \ CURLOPT_NOBODY.3 \ CURLOPT_NOPROGRESS.3 \ CURLOPT_NOPROXY.3 \ CURLOPT_NOSIGNAL.3 \ CURLOPT_OPENSOCKETDATA.3 \ CURLOPT_OPENSOCKETFUNCTION.3 \ CURLOPT_PASSWORD.3 \ CURLOPT_PATH_AS_IS.3 \ CURLOPT_PINNEDPUBLICKEY.3 \ CURLOPT_PIPEWAIT.3 \ CURLOPT_PORT.3 \ CURLOPT_POST.3 \ CURLOPT_POSTFIELDS.3 \ CURLOPT_POSTFIELDSIZE.3 \ CURLOPT_POSTFIELDSIZE_LARGE.3 \ CURLOPT_POSTQUOTE.3 \ CURLOPT_POSTREDIR.3 \ CURLOPT_PREQUOTE.3 \ CURLOPT_PREREQDATA.3 \ CURLOPT_PREREQFUNCTION.3 \ CURLOPT_PRE_PROXY.3 \ CURLOPT_PRIVATE.3 \ CURLOPT_PROGRESSDATA.3 \ CURLOPT_PROGRESSFUNCTION.3 \ CURLOPT_PROTOCOLS.3 \ CURLOPT_PROXY.3 \ CURLOPT_PROXYAUTH.3 \ CURLOPT_PROXYHEADER.3 \ CURLOPT_PROXYPASSWORD.3 \ CURLOPT_PROXYPORT.3 \ CURLOPT_PROXYTYPE.3 \ CURLOPT_PROXYUSERNAME.3 \ CURLOPT_PROXYUSERPWD.3 \ CURLOPT_PROXY_CAINFO.3 \ CURLOPT_PROXY_CAINFO_BLOB.3 \ CURLOPT_PROXY_CAPATH.3 \ CURLOPT_PROXY_CRLFILE.3 \ CURLOPT_PROXY_KEYPASSWD.3 \ CURLOPT_PROXY_ISSUERCERT.3 \ CURLOPT_PROXY_ISSUERCERT_BLOB.3 \ CURLOPT_PROXY_PINNEDPUBLICKEY.3 \ CURLOPT_PROXY_SERVICE_NAME.3 \ CURLOPT_PROXY_SSLCERT.3 \ CURLOPT_PROXY_SSLCERT_BLOB.3 \ CURLOPT_PROXY_SSLCERTTYPE.3 \ CURLOPT_PROXY_SSLKEY.3 \ CURLOPT_PROXY_SSLKEY_BLOB.3 \ CURLOPT_PROXY_SSLKEYTYPE.3 \ CURLOPT_PROXY_SSLVERSION.3 \ CURLOPT_PROXY_SSL_CIPHER_LIST.3 \ 
CURLOPT_PROXY_SSL_OPTIONS.3 \ CURLOPT_PROXY_SSL_VERIFYHOST.3 \ CURLOPT_PROXY_SSL_VERIFYPEER.3 \ CURLOPT_PROXY_TLS13_CIPHERS.3 \ CURLOPT_PROXY_TLSAUTH_PASSWORD.3 \ CURLOPT_PROXY_TLSAUTH_TYPE.3 \ CURLOPT_PROXY_TLSAUTH_USERNAME.3 \ CURLOPT_PROXY_TRANSFER_MODE.3 \ CURLOPT_PUT.3 \ CURLOPT_QUOTE.3 \ CURLOPT_RANDOM_FILE.3 \ CURLOPT_RANGE.3 \ CURLOPT_READDATA.3 \ CURLOPT_READFUNCTION.3 \ CURLOPT_REDIR_PROTOCOLS.3 \ CURLOPT_REFERER.3 \ CURLOPT_REQUEST_TARGET.3 \ CURLOPT_RESOLVE.3 \ CURLOPT_RESOLVER_START_DATA.3 \ CURLOPT_RESOLVER_START_FUNCTION.3 \ CURLOPT_RESUME_FROM.3 \ CURLOPT_RESUME_FROM_LARGE.3 \ CURLOPT_RTSP_CLIENT_CSEQ.3 \ CURLOPT_RTSP_REQUEST.3 \ CURLOPT_RTSP_SERVER_CSEQ.3 \ CURLOPT_RTSP_SESSION_ID.3 \ CURLOPT_RTSP_STREAM_URI.3 \ CURLOPT_RTSP_TRANSPORT.3 \ CURLOPT_SASL_AUTHZID.3 \ CURLOPT_SASL_IR.3 \ CURLOPT_SEEKDATA.3 \ CURLOPT_SEEKFUNCTION.3 \ CURLOPT_SERVICE_NAME.3 \ CURLOPT_SHARE.3 \ CURLOPT_SOCKOPTDATA.3 \ CURLOPT_SOCKOPTFUNCTION.3 \ CURLOPT_SOCKS5_AUTH.3 \ CURLOPT_SOCKS5_GSSAPI_NEC.3 \ CURLOPT_SOCKS5_GSSAPI_SERVICE.3 \ CURLOPT_SSH_AUTH_TYPES.3 \ CURLOPT_SSH_COMPRESSION.3 \ CURLOPT_SSH_HOST_PUBLIC_KEY_MD5.3 \ CURLOPT_SSH_HOST_PUBLIC_KEY_SHA256.3 \ CURLOPT_SSH_KEYDATA.3 \ CURLOPT_SSH_KEYFUNCTION.3 \ CURLOPT_SSH_KNOWNHOSTS.3 \ CURLOPT_SSH_PRIVATE_KEYFILE.3 \ CURLOPT_SSH_PUBLIC_KEYFILE.3 \ CURLOPT_SSLCERT.3 \ CURLOPT_SSLCERT_BLOB.3 \ CURLOPT_SSLCERTTYPE.3 \ CURLOPT_SSLENGINE.3 \ CURLOPT_SSLENGINE_DEFAULT.3 \ CURLOPT_SSLKEY.3 \ CURLOPT_SSLKEY_BLOB.3 \ CURLOPT_SSLKEYTYPE.3 \ CURLOPT_SSLVERSION.3 \ CURLOPT_SSL_CIPHER_LIST.3 \ CURLOPT_SSL_CTX_DATA.3 \ CURLOPT_SSL_CTX_FUNCTION.3 \ CURLOPT_SSL_EC_CURVES.3 \ CURLOPT_SSL_ENABLE_ALPN.3 \ CURLOPT_SSL_ENABLE_NPN.3 \ CURLOPT_SSL_FALSESTART.3 \ CURLOPT_SSL_OPTIONS.3 \ CURLOPT_SSL_SESSIONID_CACHE.3 \ CURLOPT_SSL_VERIFYHOST.3 \ CURLOPT_SSL_VERIFYPEER.3 \ CURLOPT_SSL_VERIFYSTATUS.3 \ CURLOPT_STDERR.3 \ CURLOPT_STREAM_DEPENDS.3 \ CURLOPT_STREAM_DEPENDS_E.3 \ CURLOPT_STREAM_WEIGHT.3 \ CURLOPT_SUPPRESS_CONNECT_HEADERS.3 \ 
CURLOPT_TCP_FASTOPEN.3 \ CURLOPT_TCP_KEEPALIVE.3 \ CURLOPT_TCP_KEEPIDLE.3 \ CURLOPT_TCP_KEEPINTVL.3 \ CURLOPT_TCP_NODELAY.3 \ CURLOPT_TELNETOPTIONS.3 \ CURLOPT_TFTP_BLKSIZE.3 \ CURLOPT_TFTP_NO_OPTIONS.3 \ CURLOPT_TIMECONDITION.3 \ CURLOPT_TIMEOUT.3 \ CURLOPT_TIMEOUT_MS.3 \ CURLOPT_TIMEVALUE.3 \ CURLOPT_TIMEVALUE_LARGE.3 \ CURLOPT_TLS13_CIPHERS.3 \ CURLOPT_TLSAUTH_PASSWORD.3 \ CURLOPT_TLSAUTH_TYPE.3 \ CURLOPT_TLSAUTH_USERNAME.3 \ CURLOPT_TRAILERDATA.3 \ CURLOPT_TRAILERFUNCTION.3 \ CURLOPT_TRANSFERTEXT.3 \ CURLOPT_TRANSFER_ENCODING.3 \ CURLOPT_UNIX_SOCKET_PATH.3 \ CURLOPT_UNRESTRICTED_AUTH.3 \ CURLOPT_UPKEEP_INTERVAL_MS.3 \ CURLOPT_UPLOAD.3 \ CURLOPT_UPLOAD_BUFFERSIZE.3 \ CURLOPT_URL.3 \ CURLOPT_USERAGENT.3 \ CURLOPT_USERNAME.3 \ CURLOPT_USERPWD.3 \ CURLOPT_USE_SSL.3 \ CURLOPT_AWS_SIGV4.3 \ CURLOPT_VERBOSE.3 \ CURLOPT_WILDCARDMATCH.3 \ CURLOPT_WRITEDATA.3 \ CURLOPT_WRITEFUNCTION.3 \ CURLOPT_XFERINFODATA.3 \ CURLOPT_XFERINFOFUNCTION.3 \ CURLOPT_XOAUTH2_BEARER.3
repos/gpt4all.zig/src/zig-libcurl/curl/docs/examples/sendrecv.c
/*************************************************************************** * _ _ ____ _ * Project ___| | | | _ \| | * / __| | | | |_) | | * | (__| |_| | _ <| |___ * \___|\___/|_| \_\_____| * * Copyright (C) 1998 - 2021, Daniel Stenberg, <[email protected]>, et al. * * This software is licensed as described in the file COPYING, which * you should have received as part of this distribution. The terms * are also available at https://curl.se/docs/copyright.html. * * You may opt to use, copy, modify, merge, publish, distribute and/or sell * copies of the Software, and permit persons to whom the Software is * furnished to do so, under the terms of the COPYING file. * * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY * KIND, either express or implied. * ***************************************************************************/ /* <DESC> * An example of curl_easy_send() and curl_easy_recv() usage. * </DESC> */ #include <stdio.h> #include <string.h> #include <curl/curl.h> /* Auxiliary function that waits on the socket. */ static int wait_on_socket(curl_socket_t sockfd, int for_recv, long timeout_ms) { struct timeval tv; fd_set infd, outfd, errfd; int res; tv.tv_sec = timeout_ms / 1000; tv.tv_usec = (timeout_ms % 1000) * 1000; FD_ZERO(&infd); FD_ZERO(&outfd); FD_ZERO(&errfd); FD_SET(sockfd, &errfd); /* always check for error */ if(for_recv) { FD_SET(sockfd, &infd); } else { FD_SET(sockfd, &outfd); } /* select() returns the number of signalled sockets or -1 */ res = select((int)sockfd + 1, &infd, &outfd, &errfd, &tv); return res; } int main(void) { CURL *curl; /* Minimalistic http request */ const char *request = "GET / HTTP/1.0\r\nHost: example.com\r\n\r\n"; size_t request_len = strlen(request); /* A general note of caution here: if you are using curl_easy_recv() or curl_easy_send() to implement HTTP or _any_ other protocol libcurl supports "natively", you are doing it wrong and you should stop. 
This example uses HTTP only to show how to use this API, it does not suggest that writing an application doing this is sensible. */ curl = curl_easy_init(); if(curl) { CURLcode res; curl_socket_t sockfd; size_t nsent_total = 0; curl_easy_setopt(curl, CURLOPT_URL, "https://example.com"); /* Do not do the transfer - only connect to host */ curl_easy_setopt(curl, CURLOPT_CONNECT_ONLY, 1L); res = curl_easy_perform(curl); if(res != CURLE_OK) { printf("Error: %s\n", curl_easy_strerror(res)); return 1; } /* Extract the socket from the curl handle - we will need it for waiting. */ res = curl_easy_getinfo(curl, CURLINFO_ACTIVESOCKET, &sockfd); if(res != CURLE_OK) { printf("Error: %s\n", curl_easy_strerror(res)); return 1; } printf("Sending request.\n"); do { /* Warning: This example program may loop indefinitely. * A production-quality program must define a timeout and exit this loop * as soon as the timeout has expired. */ size_t nsent; do { nsent = 0; res = curl_easy_send(curl, request + nsent_total, request_len - nsent_total, &nsent); nsent_total += nsent; if(res == CURLE_AGAIN && !wait_on_socket(sockfd, 0, 60000L)) { printf("Error: timeout.\n"); return 1; } } while(res == CURLE_AGAIN); if(res != CURLE_OK) { printf("Error: %s\n", curl_easy_strerror(res)); return 1; } printf("Sent %" CURL_FORMAT_CURL_OFF_T " bytes.\n", (curl_off_t)nsent); } while(nsent_total < request_len); printf("Reading response.\n"); for(;;) { /* Warning: This example program may loop indefinitely (see above). 
*/ char buf[1024]; size_t nread; do { nread = 0; res = curl_easy_recv(curl, buf, sizeof(buf), &nread); if(res == CURLE_AGAIN && !wait_on_socket(sockfd, 1, 60000L)) { printf("Error: timeout.\n"); return 1; } } while(res == CURLE_AGAIN); if(res != CURLE_OK) { printf("Error: %s\n", curl_easy_strerror(res)); break; } if(nread == 0) { /* end of the response */ break; } printf("Received %" CURL_FORMAT_CURL_OFF_T " bytes.\n", (curl_off_t)nread); } /* always cleanup */ curl_easy_cleanup(curl); } return 0; }
repos/gpt4all.zig/src/zig-libcurl/curl/docs/examples/pop3-list.c
/***************************************************************************
 *                                  _   _ ____  _
 *  Project                     ___| | | |  _ \| |
 *                             / __| | | | |_) | |
 *                            | (__| |_| |  _ <| |___
 *                             \___|\___/|_| \_\_____|
 *
 * Copyright (C) 1998 - 2020, Daniel Stenberg, <[email protected]>, et al.
 *
 * This software is licensed as described in the file COPYING, which
 * you should have received as part of this distribution. The terms
 * are also available at https://curl.se/docs/copyright.html.
 *
 * You may opt to use, copy, modify, merge, publish, distribute and/or sell
 * copies of the Software, and permit persons to whom the Software is
 * furnished to do so, under the terms of the COPYING file.
 *
 * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY
 * KIND, either express or implied.
 *
 ***************************************************************************/

/* <DESC>
 * POP3 example to list the contents of a mailbox
 * </DESC>
 */

#include <stdio.h>
#include <curl/curl.h>

/* This is a simple example using libcurl's POP3 capabilities to list the
 * contents of a mailbox.
 *
 * Note that this example requires libcurl 7.20.0 or above.
 */

int main(void)
{
  CURL *curl;
  CURLcode res = CURLE_OK;

  curl = curl_easy_init();
  if(curl) {
    /* Set username and password */
    curl_easy_setopt(curl, CURLOPT_USERNAME, "user");
    curl_easy_setopt(curl, CURLOPT_PASSWORD, "secret");

    /* This will list every message of the given mailbox */
    curl_easy_setopt(curl, CURLOPT_URL, "pop3://pop.example.com");

    /* Perform the list */
    res = curl_easy_perform(curl);

    /* Check for errors */
    if(res != CURLE_OK)
      fprintf(stderr, "curl_easy_perform() failed: %s\n",
              curl_easy_strerror(res));

    /* Always cleanup */
    curl_easy_cleanup(curl);
  }

  return (int)res;
}
repos/gpt4all.zig/src/zig-libcurl/curl/docs/examples/http2-pushinmemory.c
/*************************************************************************** * _ _ ____ _ * Project ___| | | | _ \| | * / __| | | | |_) | | * | (__| |_| | _ <| |___ * \___|\___/|_| \_\_____| * * Copyright (C) 1998 - 2021, Daniel Stenberg, <[email protected]>, et al. * * This software is licensed as described in the file COPYING, which * you should have received as part of this distribution. The terms * are also available at https://curl.se/docs/copyright.html. * * You may opt to use, copy, modify, merge, publish, distribute and/or sell * copies of the Software, and permit persons to whom the Software is * furnished to do so, under the terms of the COPYING file. * * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY * KIND, either express or implied. * ***************************************************************************/ /* <DESC> * HTTP/2 server push. Receive all data in memory. * </DESC> */ #include <stdio.h> #include <stdlib.h> #include <string.h> /* somewhat unix-specific */ #include <sys/time.h> #include <unistd.h> /* curl stuff */ #include <curl/curl.h> struct Memory { char *memory; size_t size; }; static size_t write_cb(void *contents, size_t size, size_t nmemb, void *userp) { size_t realsize = size * nmemb; struct Memory *mem = (struct Memory *)userp; char *ptr = realloc(mem->memory, mem->size + realsize + 1); if(!ptr) { /* out of memory! 
*/ printf("not enough memory (realloc returned NULL)\n"); return 0; } mem->memory = ptr; memcpy(&(mem->memory[mem->size]), contents, realsize); mem->size += realsize; mem->memory[mem->size] = 0; return realsize; } #define MAX_FILES 10 static struct Memory files[MAX_FILES]; static int pushindex = 1; static void init_memory(struct Memory *chunk) { chunk->memory = malloc(1); /* grown as needed with realloc */ chunk->size = 0; /* no data at this point */ } static void setup(CURL *hnd) { /* set the same URL */ curl_easy_setopt(hnd, CURLOPT_URL, "https://localhost:8443/index.html"); /* HTTP/2 please */ curl_easy_setopt(hnd, CURLOPT_HTTP_VERSION, CURL_HTTP_VERSION_2_0); /* we use a self-signed test server, skip verification during debugging */ curl_easy_setopt(hnd, CURLOPT_SSL_VERIFYPEER, 0L); curl_easy_setopt(hnd, CURLOPT_SSL_VERIFYHOST, 0L); /* write data to a struct */ curl_easy_setopt(hnd, CURLOPT_WRITEFUNCTION, write_cb); init_memory(&files[0]); curl_easy_setopt(hnd, CURLOPT_WRITEDATA, &files[0]); /* wait for pipe connection to confirm */ curl_easy_setopt(hnd, CURLOPT_PIPEWAIT, 1L); } /* called when there's an incoming push */ static int server_push_callback(CURL *parent, CURL *easy, size_t num_headers, struct curl_pushheaders *headers, void *userp) { char *headp; int *transfers = (int *)userp; (void)parent; /* we have no use for this */ (void)num_headers; /* unused */ if(pushindex == MAX_FILES) /* cannot fit anymore */ return CURL_PUSH_DENY; /* write to this buffer */ init_memory(&files[pushindex]); curl_easy_setopt(easy, CURLOPT_WRITEDATA, &files[pushindex]); pushindex++; headp = curl_pushheader_byname(headers, ":path"); if(headp) fprintf(stderr, "* Pushed :path '%s'\n", headp /* skip :path + colon */); (*transfers)++; /* one more */ return CURL_PUSH_OK; } /* * Download a file over HTTP/2, take care of server push. 
*/ int main(void) { CURL *easy; CURLM *multi; int still_running; /* keep number of running handles */ int transfers = 1; /* we start with one */ int i; struct CURLMsg *m; /* init a multi stack */ multi = curl_multi_init(); easy = curl_easy_init(); /* set options */ setup(easy); /* add the easy transfer */ curl_multi_add_handle(multi, easy); curl_multi_setopt(multi, CURLMOPT_PIPELINING, CURLPIPE_MULTIPLEX); curl_multi_setopt(multi, CURLMOPT_PUSHFUNCTION, server_push_callback); curl_multi_setopt(multi, CURLMOPT_PUSHDATA, &transfers); while(transfers) { int rc; CURLMcode mcode = curl_multi_perform(multi, &still_running); if(mcode) break; mcode = curl_multi_wait(multi, NULL, 0, 1000, &rc); if(mcode) break; /* * When doing server push, libcurl itself created and added one or more * easy handles but *we* need to clean them up when they are done. */ do { int msgq = 0; m = curl_multi_info_read(multi, &msgq); if(m && (m->msg == CURLMSG_DONE)) { CURL *e = m->easy_handle; transfers--; curl_multi_remove_handle(multi, e); curl_easy_cleanup(e); } } while(m); } curl_multi_cleanup(multi); /* 'pushindex' is now the number of received transfers */ for(i = 0; i < pushindex; i++) { /* do something fun with the data, and then free it when done */ free(files[i].memory); } return 0; }
repos/gpt4all.zig/src/zig-libcurl/curl/docs/examples/simple.c
/*************************************************************************** * _ _ ____ _ * Project ___| | | | _ \| | * / __| | | | |_) | | * | (__| |_| | _ <| |___ * \___|\___/|_| \_\_____| * * Copyright (C) 1998 - 2020, Daniel Stenberg, <[email protected]>, et al. * * This software is licensed as described in the file COPYING, which * you should have received as part of this distribution. The terms * are also available at https://curl.se/docs/copyright.html. * * You may opt to use, copy, modify, merge, publish, distribute and/or sell * copies of the Software, and permit persons to whom the Software is * furnished to do so, under the terms of the COPYING file. * * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY * KIND, either express or implied. * ***************************************************************************/ /* <DESC> * Very simple HTTP GET * </DESC> */ #include <stdio.h> #include <curl/curl.h> int main(void) { CURL *curl; CURLcode res; curl = curl_easy_init(); if(curl) { curl_easy_setopt(curl, CURLOPT_URL, "https://example.com"); /* example.com is redirected, so we tell libcurl to follow redirection */ curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L); /* Perform the request, res will get the return code */ res = curl_easy_perform(curl); /* Check for errors */ if(res != CURLE_OK) fprintf(stderr, "curl_easy_perform() failed: %s\n", curl_easy_strerror(res)); /* always cleanup */ curl_easy_cleanup(curl); } return 0; }
repos/gpt4all.zig/src/zig-libcurl/curl/docs/examples/getreferrer.c
/*************************************************************************** * _ _ ____ _ * Project ___| | | | _ \| | * / __| | | | |_) | | * | (__| |_| | _ <| |___ * \___|\___/|_| \_\_____| * * Copyright (C) 1998 - 2021, Daniel Stenberg, <[email protected]>, et al. * * This software is licensed as described in the file COPYING, which * you should have received as part of this distribution. The terms * are also available at https://curl.se/docs/copyright.html. * * You may opt to use, copy, modify, merge, publish, distribute and/or sell * copies of the Software, and permit persons to whom the Software is * furnished to do so, under the terms of the COPYING file. * * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY * KIND, either express or implied. * ***************************************************************************/ /* <DESC> * Show how to extract referrer header. * </DESC> */ #include <stdio.h> #include <curl/curl.h> int main(void) { CURL *curl; curl = curl_easy_init(); if(curl) { CURLcode res; curl_easy_setopt(curl, CURLOPT_URL, "https://example.com"); curl_easy_setopt(curl, CURLOPT_REFERER, "https://example.org/referrer"); /* Perform the request, res will get the return code */ res = curl_easy_perform(curl); /* Check for errors */ if(res != CURLE_OK) fprintf(stderr, "curl_easy_perform() failed: %s\n", curl_easy_strerror(res)); else { char *hdr; res = curl_easy_getinfo(curl, CURLINFO_REFERER, &hdr); if((res == CURLE_OK) && hdr) printf("Referrer header: %s\n", hdr); } /* always cleanup */ curl_easy_cleanup(curl); } return 0; }
repos/gpt4all.zig/src/zig-libcurl/curl/docs/examples/ftpuploadfrommem.c
/*************************************************************************** * _ _ ____ _ * Project ___| | | | _ \| | * / __| | | | |_) | | * | (__| |_| | _ <| |___ * \___|\___/|_| \_\_____| * * Copyright (C) 1998 - 2020, Daniel Stenberg, <[email protected]>, et al. * * This software is licensed as described in the file COPYING, which * you should have received as part of this distribution. The terms * are also available at https://curl.se/docs/copyright.html. * * You may opt to use, copy, modify, merge, publish, distribute and/or sell * copies of the Software, and permit persons to whom the Software is * furnished to do so, under the terms of the COPYING file. * * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY * KIND, either express or implied. * ***************************************************************************/ /* <DESC> * FTP upload a file from memory * </DESC> */ #include <stdio.h> #include <string.h> #include <curl/curl.h> static const char data[]= "Lorem ipsum dolor sit amet, consectetur adipiscing elit. " "Nam rhoncus odio id venenatis volutpat. Vestibulum dapibus " "bibendum ullamcorper. Maecenas finibus elit augue, vel " "condimentum odio maximus nec. In hac habitasse platea dictumst. " "Vestibulum vel dolor et turpis rutrum finibus ac at nulla. " "Vivamus nec neque ac elit blandit pretium vitae maximus ipsum. " "Quisque sodales magna vel erat auctor, sed pellentesque nisi " "rhoncus. Donec vehicula maximus pretium. 
Aliquam eu tincidunt " "lorem."; struct WriteThis { const char *readptr; size_t sizeleft; }; static size_t read_callback(char *ptr, size_t size, size_t nmemb, void *userp) { struct WriteThis *upload = (struct WriteThis *)userp; size_t max = size*nmemb; if(max < 1) return 0; if(upload->sizeleft) { size_t copylen = max; if(copylen > upload->sizeleft) copylen = upload->sizeleft; memcpy(ptr, upload->readptr, copylen); upload->readptr += copylen; upload->sizeleft -= copylen; return copylen; } return 0; /* no more data left to deliver */ } int main(void) { CURL *curl; CURLcode res; struct WriteThis upload; upload.readptr = data; upload.sizeleft = strlen(data); /* In windows, this will init the winsock stuff */ res = curl_global_init(CURL_GLOBAL_DEFAULT); /* Check for errors */ if(res != CURLE_OK) { fprintf(stderr, "curl_global_init() failed: %s\n", curl_easy_strerror(res)); return 1; } /* get a curl handle */ curl = curl_easy_init(); if(curl) { /* First set the URL, the target file */ curl_easy_setopt(curl, CURLOPT_URL, "ftp://example.com/path/to/upload/file"); /* User and password for the FTP login */ curl_easy_setopt(curl, CURLOPT_USERPWD, "login:secret"); /* Now specify we want to UPLOAD data */ curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L); /* we want to use our own read function */ curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_callback); /* pointer to pass to our read function */ curl_easy_setopt(curl, CURLOPT_READDATA, &upload); /* get verbose debug output please */ curl_easy_setopt(curl, CURLOPT_VERBOSE, 1L); /* Set the expected upload size. */ curl_easy_setopt(curl, CURLOPT_INFILESIZE_LARGE, (curl_off_t)upload.sizeleft); /* Perform the request, res will get the return code */ res = curl_easy_perform(curl); /* Check for errors */ if(res != CURLE_OK) fprintf(stderr, "curl_easy_perform() failed: %s\n", curl_easy_strerror(res)); /* always cleanup */ curl_easy_cleanup(curl); } curl_global_cleanup(); return 0; }
repos/gpt4all.zig/src/zig-libcurl/curl/docs/examples/htmltitle.cpp
/*************************************************************************** * _ _ ____ _ * Project ___| | | | _ \| | * / __| | | | |_) | | * | (__| |_| | _ <| |___ * \___|\___/|_| \_\_____| * * Copyright (C) 1998 - 2020, Daniel Stenberg, <[email protected]>, et al. * * This software is licensed as described in the file COPYING, which * you should have received as part of this distribution. The terms * are also available at https://curl.se/docs/copyright.html. * * You may opt to use, copy, modify, merge, publish, distribute and/or sell * copies of the Software, and permit persons to whom the Software is * furnished to do so, under the terms of the COPYING file. * * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY * KIND, either express or implied. * ***************************************************************************/ /* <DESC> * Get a web page, extract the title with libxml. * </DESC> Written by Lars Nilsson GNU C++ compile command line suggestion (edit paths accordingly): g++ -Wall -I/opt/curl/include -I/opt/libxml/include/libxml2 htmltitle.cpp \ -o htmltitle -L/opt/curl/lib -L/opt/libxml/lib -lcurl -lxml2 */ #include <stdio.h> #include <string.h> #include <stdlib.h> #include <string> #include <curl/curl.h> #include <libxml/HTMLparser.h> // // Case-insensitive string comparison // #ifdef _MSC_VER #define COMPARE(a, b) (!_stricmp((a), (b))) #else #define COMPARE(a, b) (!strcasecmp((a), (b))) #endif // // libxml callback context structure // struct Context { Context(): addTitle(false) { } bool addTitle; std::string title; }; // // libcurl variables for error strings and returned data static char errorBuffer[CURL_ERROR_SIZE]; static std::string buffer; // // libcurl write callback function // static int writer(char *data, size_t size, size_t nmemb, std::string *writerData) { if(writerData == NULL) return 0; writerData->append(data, size*nmemb); return size * nmemb; } // // libcurl connection initialization // static bool init(CURL 
*&conn, char *url) { CURLcode code; conn = curl_easy_init(); if(conn == NULL) { fprintf(stderr, "Failed to create CURL connection\n"); exit(EXIT_FAILURE); } code = curl_easy_setopt(conn, CURLOPT_ERRORBUFFER, errorBuffer); if(code != CURLE_OK) { fprintf(stderr, "Failed to set error buffer [%d]\n", code); return false; } code = curl_easy_setopt(conn, CURLOPT_URL, url); if(code != CURLE_OK) { fprintf(stderr, "Failed to set URL [%s]\n", errorBuffer); return false; } code = curl_easy_setopt(conn, CURLOPT_FOLLOWLOCATION, 1L); if(code != CURLE_OK) { fprintf(stderr, "Failed to set redirect option [%s]\n", errorBuffer); return false; } code = curl_easy_setopt(conn, CURLOPT_WRITEFUNCTION, writer); if(code != CURLE_OK) { fprintf(stderr, "Failed to set writer [%s]\n", errorBuffer); return false; } code = curl_easy_setopt(conn, CURLOPT_WRITEDATA, &buffer); if(code != CURLE_OK) { fprintf(stderr, "Failed to set write data [%s]\n", errorBuffer); return false; } return true; } // // libxml start element callback function // static void StartElement(void *voidContext, const xmlChar *name, const xmlChar **attributes) { Context *context = static_cast<Context *>(voidContext); if(COMPARE(reinterpret_cast<char *>(name), "TITLE")) { context->title = ""; context->addTitle = true; } (void) attributes; } // // libxml end element callback function // static void EndElement(void *voidContext, const xmlChar *name) { Context *context = static_cast<Context *>(voidContext); if(COMPARE(reinterpret_cast<char *>(name), "TITLE")) context->addTitle = false; } // // Text handling helper function // static void handleCharacters(Context *context, const xmlChar *chars, int length) { if(context->addTitle) context->title.append(reinterpret_cast<char *>(chars), length); } // // libxml PCDATA callback function // static void Characters(void *voidContext, const xmlChar *chars, int length) { Context *context = static_cast<Context *>(voidContext); handleCharacters(context, chars, length); } // // libxml CDATA 
callback function // static void cdata(void *voidContext, const xmlChar *chars, int length) { Context *context = static_cast<Context *>(voidContext); handleCharacters(context, chars, length); } // // libxml SAX callback structure // static htmlSAXHandler saxHandler = { NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, StartElement, EndElement, NULL, Characters, NULL, NULL, NULL, NULL, NULL, NULL, NULL, cdata, NULL }; // // Parse given (assumed to be) HTML text and return the title // static void parseHtml(const std::string &html, std::string &title) { htmlParserCtxtPtr ctxt; Context context; ctxt = htmlCreatePushParserCtxt(&saxHandler, &context, "", 0, "", XML_CHAR_ENCODING_NONE); htmlParseChunk(ctxt, html.c_str(), html.size(), 0); htmlParseChunk(ctxt, "", 0, 1); htmlFreeParserCtxt(ctxt); title = context.title; } int main(int argc, char *argv[]) { CURL *conn = NULL; CURLcode code; std::string title; // Ensure one argument is given if(argc != 2) { fprintf(stderr, "Usage: %s <url>\n", argv[0]); exit(EXIT_FAILURE); } curl_global_init(CURL_GLOBAL_DEFAULT); // Initialize CURL connection if(!init(conn, argv[1])) { fprintf(stderr, "Connection initializion failed\n"); exit(EXIT_FAILURE); } // Retrieve content for the URL code = curl_easy_perform(conn); curl_easy_cleanup(conn); if(code != CURLE_OK) { fprintf(stderr, "Failed to get '%s' [%s]\n", argv[1], errorBuffer); exit(EXIT_FAILURE); } // Parse the (assumed) HTML code parseHtml(buffer, title); // Display the extracted title printf("Title: %s\n", title.c_str()); return EXIT_SUCCESS; }
repos/gpt4all.zig/src/zig-libcurl/curl/docs/examples/multi-legacy.c
/*************************************************************************** * _ _ ____ _ * Project ___| | | | _ \| | * / __| | | | |_) | | * | (__| |_| | _ <| |___ * \___|\___/|_| \_\_____| * * Copyright (C) 1998 - 2021, Daniel Stenberg, <[email protected]>, et al. * * This software is licensed as described in the file COPYING, which * you should have received as part of this distribution. The terms * are also available at https://curl.se/docs/copyright.html. * * You may opt to use, copy, modify, merge, publish, distribute and/or sell * copies of the Software, and permit persons to whom the Software is * furnished to do so, under the terms of the COPYING file. * * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY * KIND, either express or implied. * ***************************************************************************/ /* <DESC> * A basic application source code using the multi interface doing two * transfers in parallel without curl_multi_wait/poll. * </DESC> */ #include <stdio.h> #include <string.h> /* somewhat unix-specific */ #include <sys/time.h> #include <unistd.h> /* curl stuff */ #include <curl/curl.h> /* * Download a HTTP file and upload an FTP file simultaneously. 
*/ #define HANDLECOUNT 2 /* Number of simultaneous transfers */ #define HTTP_HANDLE 0 /* Index for the HTTP transfer */ #define FTP_HANDLE 1 /* Index for the FTP transfer */ int main(void) { CURL *handles[HANDLECOUNT]; CURLM *multi_handle; int still_running = 0; /* keep number of running handles */ int i; CURLMsg *msg; /* for picking up messages with the transfer status */ int msgs_left; /* how many messages are left */ /* Allocate one CURL handle per transfer */ for(i = 0; i<HANDLECOUNT; i++) handles[i] = curl_easy_init(); /* set the options (I left out a few, you will get the point anyway) */ curl_easy_setopt(handles[HTTP_HANDLE], CURLOPT_URL, "https://example.com"); curl_easy_setopt(handles[FTP_HANDLE], CURLOPT_URL, "ftp://example.com"); curl_easy_setopt(handles[FTP_HANDLE], CURLOPT_UPLOAD, 1L); /* init a multi stack */ multi_handle = curl_multi_init(); /* add the individual transfers */ for(i = 0; i<HANDLECOUNT; i++) curl_multi_add_handle(multi_handle, handles[i]); /* we start some action by calling perform right away */ curl_multi_perform(multi_handle, &still_running); while(still_running) { struct timeval timeout; int rc; /* select() return code */ CURLMcode mc; /* curl_multi_fdset() return code */ fd_set fdread; fd_set fdwrite; fd_set fdexcep; int maxfd = -1; long curl_timeo = -1; FD_ZERO(&fdread); FD_ZERO(&fdwrite); FD_ZERO(&fdexcep); /* set a suitable timeout to play around with */ timeout.tv_sec = 1; timeout.tv_usec = 0; curl_multi_timeout(multi_handle, &curl_timeo); if(curl_timeo >= 0) { timeout.tv_sec = curl_timeo / 1000; if(timeout.tv_sec > 1) timeout.tv_sec = 1; else timeout.tv_usec = (curl_timeo % 1000) * 1000; } /* get file descriptors from the transfers */ mc = curl_multi_fdset(multi_handle, &fdread, &fdwrite, &fdexcep, &maxfd); if(mc != CURLM_OK) { fprintf(stderr, "curl_multi_fdset() failed, code %d.\n", mc); break; } /* On success the value of maxfd is guaranteed to be >= -1. 
We call select(maxfd + 1, ...); specially in case of (maxfd == -1) there are no fds ready yet so we call select(0, ...) --or Sleep() on Windows-- to sleep 100ms, which is the minimum suggested value in the curl_multi_fdset() doc. */ if(maxfd == -1) { #ifdef _WIN32 Sleep(100); rc = 0; #else /* Portable sleep for platforms other than Windows. */ struct timeval wait = { 0, 100 * 1000 }; /* 100ms */ rc = select(0, NULL, NULL, NULL, &wait); #endif } else { /* Note that on some platforms 'timeout' may be modified by select(). If you need access to the original value save a copy beforehand. */ rc = select(maxfd + 1, &fdread, &fdwrite, &fdexcep, &timeout); } switch(rc) { case -1: /* select error */ break; case 0: /* timeout */ default: /* action */ curl_multi_perform(multi_handle, &still_running); break; } } /* See how the transfers went */ while((msg = curl_multi_info_read(multi_handle, &msgs_left))) { if(msg->msg == CURLMSG_DONE) { int idx; /* Find out which handle this message is about */ for(idx = 0; idx<HANDLECOUNT; idx++) { int found = (msg->easy_handle == handles[idx]); if(found) break; } switch(idx) { case HTTP_HANDLE: printf("HTTP transfer completed with status %d\n", msg->data.result); break; case FTP_HANDLE: printf("FTP transfer completed with status %d\n", msg->data.result); break; } } } curl_multi_cleanup(multi_handle); /* Free the CURL handles */ for(i = 0; i<HANDLECOUNT; i++) curl_easy_cleanup(handles[i]); return 0; }
repos/gpt4all.zig/src/zig-libcurl/curl/docs/examples/sampleconv.c
/*************************************************************************** * _ _ ____ _ * Project ___| | | | _ \| | * / __| | | | |_) | | * | (__| |_| | _ <| |___ * \___|\___/|_| \_\_____| * * Copyright (C) 1998 - 2020, Daniel Stenberg, <[email protected]>, et al. * * This software is licensed as described in the file COPYING, which * you should have received as part of this distribution. The terms * are also available at https://curl.se/docs/copyright.html. * * You may opt to use, copy, modify, merge, publish, distribute and/or sell * copies of the Software, and permit persons to whom the Software is * furnished to do so, under the terms of the COPYING file. * * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY * KIND, either express or implied. * ***************************************************************************/ /* <DESC> * This is a simple example showing how a program on a non-ASCII platform * would invoke callbacks to do its own codeset conversions instead of * using the built-in iconv functions in libcurl. * </DESC> */ /* The IBM-1047 EBCDIC codeset is used for this example but the code would be similar for other non-ASCII codesets. Three callback functions are created below: my_conv_from_ascii_to_ebcdic, my_conv_from_ebcdic_to_ascii, and my_conv_from_utf8_to_ebcdic The "platform_xxx" calls represent platform-specific conversion routines. 
*/ #include <stdio.h> #include <curl/curl.h> static CURLcode my_conv_from_ascii_to_ebcdic(char *buffer, size_t length) { char *tempptrin, *tempptrout; size_t bytes = length; int rc; tempptrin = tempptrout = buffer; rc = platform_a2e(&tempptrin, &bytes, &tempptrout, &bytes); if(rc == PLATFORM_CONV_OK) { return CURLE_OK; } else { return CURLE_CONV_FAILED; } } static CURLcode my_conv_from_ebcdic_to_ascii(char *buffer, size_t length) { char *tempptrin, *tempptrout; size_t bytes = length; int rc; tempptrin = tempptrout = buffer; rc = platform_e2a(&tempptrin, &bytes, &tempptrout, &bytes); if(rc == PLATFORM_CONV_OK) { return CURLE_OK; } else { return CURLE_CONV_FAILED; } } static CURLcode my_conv_from_utf8_to_ebcdic(char *buffer, size_t length) { char *tempptrin, *tempptrout; size_t bytes = length; int rc; tempptrin = tempptrout = buffer; rc = platform_u2e(&tempptrin, &bytes, &tempptrout, &bytes); if(rc == PLATFORM_CONV_OK) { return CURLE_OK; } else { return CURLE_CONV_FAILED; } } int main(void) { CURL *curl; curl = curl_easy_init(); if(curl) { curl_easy_setopt(curl, CURLOPT_URL, "https://example.com"); /* use platform-specific functions for codeset conversions */ curl_easy_setopt(curl, CURLOPT_CONV_FROM_NETWORK_FUNCTION, my_conv_from_ascii_to_ebcdic); curl_easy_setopt(curl, CURLOPT_CONV_TO_NETWORK_FUNCTION, my_conv_from_ebcdic_to_ascii); curl_easy_setopt(curl, CURLOPT_CONV_FROM_UTF8_FUNCTION, my_conv_from_utf8_to_ebcdic); curl_easy_perform(curl); /* always cleanup */ curl_easy_cleanup(curl); } return 0; }
repos/gpt4all.zig/src/zig-libcurl/curl/docs/examples/crawler.c
/*************************************************************************** * _ _ ____ _ * Project ___| | | | _ \| | * / __| | | | |_) | | * | (__| |_| | _ <| |___ * \___|\___/|_| \_\_____| * * Web crawler based on curl and libxml2. * Copyright (C) 2018 - 2020 Jeroen Ooms <[email protected]> * License: MIT * * To compile: * gcc crawler.c $(pkg-config --cflags --libs libxml-2.0 libcurl) * */ /* <DESC> * Web crawler based on curl and libxml2 to stress-test curl with * hundreds of concurrent connections to various servers. * </DESC> */ /* Parameters */ int max_con = 200; int max_total = 20000; int max_requests = 500; int max_link_per_page = 5; int follow_relative_links = 0; char *start_page = "https://www.reuters.com"; #include <libxml/HTMLparser.h> #include <libxml/xpath.h> #include <libxml/uri.h> #include <curl/curl.h> #include <stdlib.h> #include <string.h> #include <math.h> #include <signal.h> int pending_interrupt = 0; void sighandler(int dummy) { pending_interrupt = 1; } /* resizable buffer */ typedef struct { char *buf; size_t size; } memory; size_t grow_buffer(void *contents, size_t sz, size_t nmemb, void *ctx) { size_t realsize = sz * nmemb; memory *mem = (memory*) ctx; char *ptr = realloc(mem->buf, mem->size + realsize); if(!ptr) { /* out of memory */ printf("not enough memory (realloc returned NULL)\n"); return 0; } mem->buf = ptr; memcpy(&(mem->buf[mem->size]), contents, realsize); mem->size += realsize; return realsize; } CURL *make_handle(char *url) { CURL *handle = curl_easy_init(); /* Important: use HTTP2 over HTTPS */ curl_easy_setopt(handle, CURLOPT_HTTP_VERSION, CURL_HTTP_VERSION_2TLS); curl_easy_setopt(handle, CURLOPT_URL, url); /* buffer body */ memory *mem = malloc(sizeof(memory)); mem->size = 0; mem->buf = malloc(1); curl_easy_setopt(handle, CURLOPT_WRITEFUNCTION, grow_buffer); curl_easy_setopt(handle, CURLOPT_WRITEDATA, mem); curl_easy_setopt(handle, CURLOPT_PRIVATE, mem); /* For completeness */ curl_easy_setopt(handle, 
CURLOPT_ACCEPT_ENCODING, ""); curl_easy_setopt(handle, CURLOPT_TIMEOUT, 5L); curl_easy_setopt(handle, CURLOPT_FOLLOWLOCATION, 1L); curl_easy_setopt(handle, CURLOPT_MAXREDIRS, 10L); curl_easy_setopt(handle, CURLOPT_CONNECTTIMEOUT, 2L); curl_easy_setopt(handle, CURLOPT_COOKIEFILE, ""); curl_easy_setopt(handle, CURLOPT_FILETIME, 1L); curl_easy_setopt(handle, CURLOPT_USERAGENT, "mini crawler"); curl_easy_setopt(handle, CURLOPT_HTTPAUTH, CURLAUTH_ANY); curl_easy_setopt(handle, CURLOPT_UNRESTRICTED_AUTH, 1L); curl_easy_setopt(handle, CURLOPT_PROXYAUTH, CURLAUTH_ANY); curl_easy_setopt(handle, CURLOPT_EXPECT_100_TIMEOUT_MS, 0L); return handle; } /* HREF finder implemented in libxml2 but could be any HTML parser */ size_t follow_links(CURLM *multi_handle, memory *mem, char *url) { int opts = HTML_PARSE_NOBLANKS | HTML_PARSE_NOERROR | \ HTML_PARSE_NOWARNING | HTML_PARSE_NONET; htmlDocPtr doc = htmlReadMemory(mem->buf, mem->size, url, NULL, opts); if(!doc) return 0; xmlChar *xpath = (xmlChar*) "//a/@href"; xmlXPathContextPtr context = xmlXPathNewContext(doc); xmlXPathObjectPtr result = xmlXPathEvalExpression(xpath, context); xmlXPathFreeContext(context); if(!result) return 0; xmlNodeSetPtr nodeset = result->nodesetval; if(xmlXPathNodeSetIsEmpty(nodeset)) { xmlXPathFreeObject(result); return 0; } size_t count = 0; int i; for(i = 0; i < nodeset->nodeNr; i++) { double r = rand(); int x = r * nodeset->nodeNr / RAND_MAX; const xmlNode *node = nodeset->nodeTab[x]->xmlChildrenNode; xmlChar *href = xmlNodeListGetString(doc, node, 1); if(follow_relative_links) { xmlChar *orig = href; href = xmlBuildURI(href, (xmlChar *) url); xmlFree(orig); } char *link = (char *) href; if(!link || strlen(link) < 20) continue; if(!strncmp(link, "http://", 7) || !strncmp(link, "https://", 8)) { curl_multi_add_handle(multi_handle, make_handle(link)); if(count++ == max_link_per_page) break; } xmlFree(link); } xmlXPathFreeObject(result); return count; } int is_html(char *ctype) { return ctype != NULL && 
strlen(ctype) > 10 && strstr(ctype, "text/html"); } int main(void) { signal(SIGINT, sighandler); LIBXML_TEST_VERSION; curl_global_init(CURL_GLOBAL_DEFAULT); CURLM *multi_handle = curl_multi_init(); curl_multi_setopt(multi_handle, CURLMOPT_MAX_TOTAL_CONNECTIONS, max_con); curl_multi_setopt(multi_handle, CURLMOPT_MAX_HOST_CONNECTIONS, 6L); /* enables http/2 if available */ #ifdef CURLPIPE_MULTIPLEX curl_multi_setopt(multi_handle, CURLMOPT_PIPELINING, CURLPIPE_MULTIPLEX); #endif /* sets html start page */ curl_multi_add_handle(multi_handle, make_handle(start_page)); int msgs_left; int pending = 0; int complete = 0; int still_running = 1; while(still_running && !pending_interrupt) { int numfds; curl_multi_wait(multi_handle, NULL, 0, 1000, &numfds); curl_multi_perform(multi_handle, &still_running); /* See how the transfers went */ CURLMsg *m = NULL; while((m = curl_multi_info_read(multi_handle, &msgs_left))) { if(m->msg == CURLMSG_DONE) { CURL *handle = m->easy_handle; char *url; memory *mem; curl_easy_getinfo(handle, CURLINFO_PRIVATE, &mem); curl_easy_getinfo(handle, CURLINFO_EFFECTIVE_URL, &url); if(m->data.result == CURLE_OK) { long res_status; curl_easy_getinfo(handle, CURLINFO_RESPONSE_CODE, &res_status); if(res_status == 200) { char *ctype; curl_easy_getinfo(handle, CURLINFO_CONTENT_TYPE, &ctype); printf("[%d] HTTP 200 (%s): %s\n", complete, ctype, url); if(is_html(ctype) && mem->size > 100) { if(pending < max_requests && (complete + pending) < max_total) { pending += follow_links(multi_handle, mem, url); still_running = 1; } } } else { printf("[%d] HTTP %d: %s\n", complete, (int) res_status, url); } } else { printf("[%d] Connection failure: %s\n", complete, url); } curl_multi_remove_handle(multi_handle, handle); curl_easy_cleanup(handle); free(mem->buf); free(mem); complete++; pending--; } } } curl_multi_cleanup(multi_handle); curl_global_cleanup(); return 0; }
repos/gpt4all.zig/src/zig-libcurl/curl/docs/examples/cookie_interface.c
/*************************************************************************** * _ _ ____ _ * Project ___| | | | _ \| | * / __| | | | |_) | | * | (__| |_| | _ <| |___ * \___|\___/|_| \_\_____| * * Copyright (C) 1998 - 2021, Daniel Stenberg, <[email protected]>, et al. * * This software is licensed as described in the file COPYING, which * you should have received as part of this distribution. The terms * are also available at https://curl.se/docs/copyright.html. * * You may opt to use, copy, modify, merge, publish, distribute and/or sell * copies of the Software, and permit persons to whom the Software is * furnished to do so, under the terms of the COPYING file. * * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY * KIND, either express or implied. * ***************************************************************************/ /* <DESC> * Import and export cookies with COOKIELIST. * </DESC> */ #include <stdio.h> #include <string.h> #include <stdlib.h> #include <errno.h> #include <time.h> #include <curl/curl.h> static void print_cookies(CURL *curl) { CURLcode res; struct curl_slist *cookies; struct curl_slist *nc; int i; printf("Cookies, curl knows:\n"); res = curl_easy_getinfo(curl, CURLINFO_COOKIELIST, &cookies); if(res != CURLE_OK) { fprintf(stderr, "Curl curl_easy_getinfo failed: %s\n", curl_easy_strerror(res)); exit(1); } nc = cookies; i = 1; while(nc) { printf("[%d]: %s\n", i, nc->data); nc = nc->next; i++; } if(i == 1) { printf("(none)\n"); } curl_slist_free_all(cookies); } int main(void) { CURL *curl; CURLcode res; curl_global_init(CURL_GLOBAL_ALL); curl = curl_easy_init(); if(curl) { char nline[512]; curl_easy_setopt(curl, CURLOPT_URL, "https://www.example.com/"); curl_easy_setopt(curl, CURLOPT_VERBOSE, 1L); curl_easy_setopt(curl, CURLOPT_COOKIEFILE, ""); /* start cookie engine */ res = curl_easy_perform(curl); if(res != CURLE_OK) { fprintf(stderr, "Curl perform failed: %s\n", curl_easy_strerror(res)); return 1; } print_cookies(curl); 
printf("Erasing curl's knowledge of cookies!\n"); curl_easy_setopt(curl, CURLOPT_COOKIELIST, "ALL"); print_cookies(curl); printf("-----------------------------------------------\n" "Setting a cookie \"PREF\" via cookie interface:\n"); #ifdef WIN32 #define snprintf _snprintf #endif /* Netscape format cookie */ snprintf(nline, sizeof(nline), "%s\t%s\t%s\t%s\t%.0f\t%s\t%s", ".example.com", "TRUE", "/", "FALSE", difftime(time(NULL) + 31337, (time_t)0), "PREF", "hello example, i like you very much!"); res = curl_easy_setopt(curl, CURLOPT_COOKIELIST, nline); if(res != CURLE_OK) { fprintf(stderr, "Curl curl_easy_setopt failed: %s\n", curl_easy_strerror(res)); return 1; } /* HTTP-header style cookie. If you use the Set-Cookie format and do not specify a domain then the cookie is sent for any domain and will not be modified, likely not what you intended. Starting in 7.43.0 any-domain cookies will not be exported either. For more information refer to the CURLOPT_COOKIELIST documentation. */ snprintf(nline, sizeof(nline), "Set-Cookie: OLD_PREF=3d141414bf4209321; " "expires=Sun, 17-Jan-2038 19:14:07 GMT; path=/; domain=.example.com"); res = curl_easy_setopt(curl, CURLOPT_COOKIELIST, nline); if(res != CURLE_OK) { fprintf(stderr, "Curl curl_easy_setopt failed: %s\n", curl_easy_strerror(res)); return 1; } print_cookies(curl); res = curl_easy_perform(curl); if(res != CURLE_OK) { fprintf(stderr, "Curl perform failed: %s\n", curl_easy_strerror(res)); return 1; } curl_easy_cleanup(curl); } else { fprintf(stderr, "Curl init failed!\n"); return 1; } curl_global_cleanup(); return 0; }
repos/gpt4all.zig/src/zig-libcurl/curl/docs/examples/curlx.c
/* curlx.c Authors: Peter Sylvester, Jean-Paul Merlin This is a little program to demonstrate the usage of - an ssl initialisation callback setting a user key and trustbases coming from a pkcs12 file - using an ssl application callback to find a URI in the certificate presented during ssl session establishment. */ /* <DESC> * demonstrates use of SSL context callback, requires OpenSSL * </DESC> */ /* * Copyright (c) 2003 - 2021 The OpenEvidence Project. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions, the following disclaimer, * and the original OpenSSL and SSLeay Licences below. * * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions, the following disclaimer * and the original OpenSSL and SSLeay Licences below in * the documentation and/or other materials provided with the * distribution. * * 3. All advertising materials mentioning features or use of this * software must display the following acknowledgments: * "This product includes software developed by the Openevidence Project * for use in the OpenEvidence Toolkit. (http://www.openevidence.org/)" * This product includes software developed by the OpenSSL Project * for use in the OpenSSL Toolkit (https://www.openssl.org/)" * This product includes cryptographic software written by Eric Young * ([email protected]). This product includes software written by Tim * Hudson ([email protected])." * * 4. The names "OpenEvidence Toolkit" and "OpenEvidence Project" must not be * used to endorse or promote products derived from this software without * prior written permission. For written permission, please contact * [email protected]. * * 5. 
Products derived from this software may not be called "OpenEvidence" * nor may "OpenEvidence" appear in their names without prior written * permission of the OpenEvidence Project. * * 6. Redistributions of any form whatsoever must retain the following * acknowledgments: * "This product includes software developed by the OpenEvidence Project * for use in the OpenEvidence Toolkit (http://www.openevidence.org/) * This product includes software developed by the OpenSSL Project * for use in the OpenSSL Toolkit (https://www.openssl.org/)" * This product includes cryptographic software written by Eric Young * ([email protected]). This product includes software written by Tim * Hudson ([email protected])." * * THIS SOFTWARE IS PROVIDED BY THE OpenEvidence PROJECT ``AS IS'' AND ANY * EXPRESSED OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE OpenEvidence PROJECT OR * ITS CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED * OF THE POSSIBILITY OF SUCH DAMAGE. * ==================================================================== * * This product includes software developed by the OpenSSL Project * for use in the OpenSSL Toolkit (https://www.openssl.org/) * This product includes cryptographic software written by Eric Young * ([email protected]). This product includes software written by Tim * Hudson ([email protected]). 
* */ #include <stdio.h> #include <stdlib.h> #include <string.h> #include <curl/curl.h> #include <openssl/x509v3.h> #include <openssl/x509_vfy.h> #include <openssl/crypto.h> #include <openssl/lhash.h> #include <openssl/objects.h> #include <openssl/err.h> #include <openssl/evp.h> #include <openssl/x509.h> #include <openssl/pkcs12.h> #include <openssl/bio.h> #include <openssl/ssl.h> static const char *curlx_usage[]={ "usage: curlx args\n", " -p12 arg - tia file ", " -envpass arg - environment variable which content the tia private" " key password", " -out arg - output file (response)- default stdout", " -in arg - input file (request)- default stdin", " -connect arg - URL of the server for the connection ex:" " www.openevidence.org", " -mimetype arg - MIME type for data in ex : application/timestamp-query" " or application/dvcs -default application/timestamp-query", " -acceptmime arg - MIME type acceptable for the response ex : " "application/timestamp-response or application/dvcs -default none", " -accesstype arg - an Object identifier in an AIA/SIA method, e.g." " AD_DVCS or ad_timestamping", NULL }; /* ./curlx -p12 psy.p12 -envpass XX -in request -verbose -accesstype AD_DVCS -mimetype application/dvcs -acceptmime application/dvcs -out response */ /* * We use this ZERO_NULL to avoid picky compiler warnings, * when assigning a NULL pointer to a function pointer var. */ #define ZERO_NULL 0 /* This is a context that we pass to all callbacks */ typedef struct sslctxparm_st { unsigned char *p12file; const char *pst; PKCS12 *p12; EVP_PKEY *pkey; X509 *usercert; STACK_OF(X509) * ca; CURL *curl; BIO *errorbio; int accesstype; int verbose; } sslctxparm; /* some helper function. */ static char *ia5string(ASN1_IA5STRING *ia5) { char *tmp; if(!ia5 || !ia5->length) return NULL; tmp = OPENSSL_malloc(ia5->length + 1); memcpy(tmp, ia5->data, ia5->length); tmp[ia5->length] = 0; return tmp; } /* A convenience routine to get an access URI. 
*/ static unsigned char *my_get_ext(X509 *cert, const int type, int extensiontype) { int i; STACK_OF(ACCESS_DESCRIPTION) * accessinfo; accessinfo = X509_get_ext_d2i(cert, extensiontype, NULL, NULL); if(!sk_ACCESS_DESCRIPTION_num(accessinfo)) return NULL; for(i = 0; i < sk_ACCESS_DESCRIPTION_num(accessinfo); i++) { ACCESS_DESCRIPTION * ad = sk_ACCESS_DESCRIPTION_value(accessinfo, i); if(OBJ_obj2nid(ad->method) == type) { if(ad->location->type == GEN_URI) { return ia5string(ad->location->d.ia5); } return NULL; } } return NULL; } /* This is an application verification call back, it does not perform any addition verification but tries to find a URL in the presented certificate. If found, this will become the URL to be used in the POST. */ static int ssl_app_verify_callback(X509_STORE_CTX *ctx, void *arg) { sslctxparm * p = (sslctxparm *) arg; int ok; if(p->verbose > 2) BIO_printf(p->errorbio, "entering ssl_app_verify_callback\n"); ok = X509_verify_cert(ctx); if(ok && ctx->cert) { unsigned char *accessinfo; if(p->verbose > 1) X509_print_ex(p->errorbio, ctx->cert, 0, 0); accessinfo = my_get_ext(ctx->cert, p->accesstype, NID_sinfo_access); if(accessinfo) { if(p->verbose) BIO_printf(p->errorbio, "Setting URL from SIA to: %s\n", accessinfo); curl_easy_setopt(p->curl, CURLOPT_URL, accessinfo); } else if(accessinfo = my_get_ext(ctx->cert, p->accesstype, NID_info_access)) { if(p->verbose) BIO_printf(p->errorbio, "Setting URL from AIA to: %s\n", accessinfo); curl_easy_setopt(p->curl, CURLOPT_URL, accessinfo); } } if(p->verbose > 2) BIO_printf(p->errorbio, "leaving ssl_app_verify_callback with %d\n", ok); return ok; } /* The SSL initialisation callback. 
The callback sets: - a private key and certificate - a trusted ca certificate - a preferred cipherlist - an application verification callback (the function above) */ static CURLcode sslctxfun(CURL *curl, void *sslctx, void *parm) { sslctxparm *p = (sslctxparm *) parm; SSL_CTX *ctx = (SSL_CTX *) sslctx; if(!SSL_CTX_use_certificate(ctx, p->usercert)) { BIO_printf(p->errorbio, "SSL_CTX_use_certificate problem\n"); goto err; } if(!SSL_CTX_use_PrivateKey(ctx, p->pkey)) { BIO_printf(p->errorbio, "SSL_CTX_use_PrivateKey\n"); goto err; } if(!SSL_CTX_check_private_key(ctx)) { BIO_printf(p->errorbio, "SSL_CTX_check_private_key\n"); goto err; } SSL_CTX_set_quiet_shutdown(ctx, 1); SSL_CTX_set_cipher_list(ctx, "RC4-MD5"); SSL_CTX_set_mode(ctx, SSL_MODE_AUTO_RETRY); X509_STORE_add_cert(SSL_CTX_get_cert_store(ctx), sk_X509_value(p->ca, sk_X509_num(p->ca)-1)); SSL_CTX_set_verify_depth(ctx, 2); SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, ZERO_NULL); SSL_CTX_set_cert_verify_callback(ctx, ssl_app_verify_callback, parm); return CURLE_OK; err: ERR_print_errors(p->errorbio); return CURLE_SSL_CERTPROBLEM; } int main(int argc, char **argv) { BIO* in = NULL; BIO* out = NULL; char *outfile = NULL; char *infile = NULL; int tabLength = 100; char *binaryptr; char *mimetype = NULL; char *mimetypeaccept = NULL; char *contenttype; const char **pp; unsigned char *hostporturl = NULL; BIO *p12bio; char **args = argv + 1; unsigned char *serverurl; sslctxparm p; char *response; CURLcode res; struct curl_slist *headers = NULL; int badarg = 0; binaryptr = malloc(tabLength); memset(&p, '\0', sizeof(p)); p.errorbio = BIO_new_fp(stderr, BIO_NOCLOSE); curl_global_init(CURL_GLOBAL_DEFAULT); /* we need some more for the P12 decoding */ OpenSSL_add_all_ciphers(); OpenSSL_add_all_digests(); ERR_load_crypto_strings(); while(*args && *args[0] == '-') { if(!strcmp (*args, "-in")) { if(args[1]) { infile = *(++args); } else badarg = 1; } else if(!strcmp (*args, "-out")) { if(args[1]) { outfile = *(++args); } else 
badarg = 1; } else if(!strcmp (*args, "-p12")) { if(args[1]) { p.p12file = *(++args); } else badarg = 1; } else if(strcmp(*args, "-envpass") == 0) { if(args[1]) { p.pst = getenv(*(++args)); } else badarg = 1; } else if(strcmp(*args, "-connect") == 0) { if(args[1]) { hostporturl = *(++args); } else badarg = 1; } else if(strcmp(*args, "-mimetype") == 0) { if(args[1]) { mimetype = *(++args); } else badarg = 1; } else if(strcmp(*args, "-acceptmime") == 0) { if(args[1]) { mimetypeaccept = *(++args); } else badarg = 1; } else if(strcmp(*args, "-accesstype") == 0) { if(args[1]) { p.accesstype = OBJ_obj2nid(OBJ_txt2obj(*++args, 0)); if(p.accesstype == 0) badarg = 1; } else badarg = 1; } else if(strcmp(*args, "-verbose") == 0) { p.verbose++; } else badarg = 1; args++; } if(!mimetype || !mimetypeaccept || !p.p12file) badarg = 1; if(badarg) { for(pp = curlx_usage; (*pp != NULL); pp++) BIO_printf(p.errorbio, "%s\n", *pp); BIO_printf(p.errorbio, "\n"); goto err; } /* set input */ in = BIO_new(BIO_s_file()); if(!in) { BIO_printf(p.errorbio, "Error setting input bio\n"); goto err; } else if(!infile) BIO_set_fp(in, stdin, BIO_NOCLOSE|BIO_FP_TEXT); else if(BIO_read_filename(in, infile) <= 0) { BIO_printf(p.errorbio, "Error opening input file %s\n", infile); BIO_free(in); goto err; } /* set output */ out = BIO_new(BIO_s_file()); if(!out) { BIO_printf(p.errorbio, "Error setting output bio.\n"); goto err; } else if(!outfile) BIO_set_fp(out, stdout, BIO_NOCLOSE|BIO_FP_TEXT); else if(BIO_write_filename(out, outfile) <= 0) { BIO_printf(p.errorbio, "Error opening output file %s\n", outfile); BIO_free(out); goto err; } p.errorbio = BIO_new_fp(stderr, BIO_NOCLOSE); p.curl = curl_easy_init(); if(!p.curl) { BIO_printf(p.errorbio, "Cannot init curl lib\n"); goto err; } p12bio = BIO_new_file(p.p12file, "rb"); if(!p12bio) { BIO_printf(p.errorbio, "Error opening P12 file %s\n", p.p12file); goto err; } p.p12 = d2i_PKCS12_bio(p12bio, NULL); if(!p.p12) { BIO_printf(p.errorbio, "Cannot decode P12 
structure %s\n", p.p12file); goto err; } p.ca = NULL; if(!(PKCS12_parse (p.p12, p.pst, &(p.pkey), &(p.usercert), &(p.ca) ) )) { BIO_printf(p.errorbio, "Invalid P12 structure in %s\n", p.p12file); goto err; } if(sk_X509_num(p.ca) <= 0) { BIO_printf(p.errorbio, "No trustworthy CA given.%s\n", p.p12file); goto err; } if(p.verbose > 1) X509_print_ex(p.errorbio, p.usercert, 0, 0); /* determine URL to go */ if(hostporturl) { size_t len = strlen(hostporturl) + 9; serverurl = malloc(len); snprintf(serverurl, len, "https://%s", hostporturl); } else if(p.accesstype) { /* see whether we can find an AIA or SIA for a given access type */ serverurl = my_get_ext(p.usercert, p.accesstype, NID_info_access); if(!serverurl) { int j = 0; BIO_printf(p.errorbio, "no service URL in user cert " "searching in others certificates\n"); for(j = 0; j<sk_X509_num(p.ca); j++) { serverurl = my_get_ext(sk_X509_value(p.ca, j), p.accesstype, NID_info_access); if(serverurl) break; serverurl = my_get_ext(sk_X509_value(p.ca, j), p.accesstype, NID_sinfo_access); if(serverurl) break; } } } if(!serverurl) { BIO_printf(p.errorbio, "no service URL in certificates," " check '-accesstype (AD_DVCS | ad_timestamping)'" " or use '-connect'\n"); goto err; } if(p.verbose) BIO_printf(p.errorbio, "Service URL: <%s>\n", serverurl); curl_easy_setopt(p.curl, CURLOPT_URL, serverurl); /* Now specify the POST binary data */ curl_easy_setopt(p.curl, CURLOPT_POSTFIELDS, binaryptr); curl_easy_setopt(p.curl, CURLOPT_POSTFIELDSIZE, (long)tabLength); /* pass our list of custom made headers */ contenttype = malloc(15 + strlen(mimetype)); snprintf(contenttype, 15 + strlen(mimetype), "Content-type: %s", mimetype); headers = curl_slist_append(headers, contenttype); curl_easy_setopt(p.curl, CURLOPT_HTTPHEADER, headers); if(p.verbose) BIO_printf(p.errorbio, "Service URL: <%s>\n", serverurl); { FILE *outfp; BIO_get_fp(out, &outfp); curl_easy_setopt(p.curl, CURLOPT_WRITEDATA, outfp); } res = curl_easy_setopt(p.curl, 
CURLOPT_SSL_CTX_FUNCTION, sslctxfun); if(res != CURLE_OK) BIO_printf(p.errorbio, "%d %s=%d %d\n", __LINE__, "CURLOPT_SSL_CTX_FUNCTION", CURLOPT_SSL_CTX_FUNCTION, res); curl_easy_setopt(p.curl, CURLOPT_SSL_CTX_DATA, &p); { char *ptr; int lu; int i = 0; while((lu = BIO_read(in, &binaryptr[i], tabLength-i)) >0) { i += lu; if(i == tabLength) { tabLength += 100; ptr = realloc(binaryptr, tabLength); /* should be more careful */ if(!ptr) { /* out of memory */ BIO_printf(p.errorbio, "out of memory (realloc returned NULL)\n"); goto fail; } binaryptr = ptr; ptr = NULL; } } tabLength = i; } /* Now specify the POST binary data */ curl_easy_setopt(p.curl, CURLOPT_POSTFIELDS, binaryptr); curl_easy_setopt(p.curl, CURLOPT_POSTFIELDSIZE, (long)tabLength); /* Perform the request, res will get the return code */ BIO_printf(p.errorbio, "%d %s %d\n", __LINE__, "curl_easy_perform", res = curl_easy_perform(p.curl)); { curl_easy_getinfo(p.curl, CURLINFO_CONTENT_TYPE, &response); if(mimetypeaccept && p.verbose) { if(!strcmp(mimetypeaccept, response)) BIO_printf(p.errorbio, "the response has a correct mimetype : %s\n", response); else BIO_printf(p.errorbio, "the response doesn\'t have an acceptable " "mime type, it is %s instead of %s\n", response, mimetypeaccept); } } /*** code d'erreur si accept mime ***, egalement code return HTTP != 200 ***/ /* free the header list*/ fail: curl_slist_free_all(headers); /* always cleanup */ curl_easy_cleanup(p.curl); BIO_free(in); BIO_free(out); return (EXIT_SUCCESS); err: BIO_printf(p.errorbio, "error"); exit(1); }
repos/gpt4all.zig/src/zig-libcurl/curl/docs/examples/persistent.c
/*************************************************************************** * _ _ ____ _ * Project ___| | | | _ \| | * / __| | | | |_) | | * | (__| |_| | _ <| |___ * \___|\___/|_| \_\_____| * * Copyright (C) 1998 - 2020, Daniel Stenberg, <[email protected]>, et al. * * This software is licensed as described in the file COPYING, which * you should have received as part of this distribution. The terms * are also available at https://curl.se/docs/copyright.html. * * You may opt to use, copy, modify, merge, publish, distribute and/or sell * copies of the Software, and permit persons to whom the Software is * furnished to do so, under the terms of the COPYING file. * * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY * KIND, either express or implied. * ***************************************************************************/ /* <DESC> * re-using handles to do HTTP persistent connections * </DESC> */ #include <stdio.h> #include <unistd.h> #include <curl/curl.h> int main(void) { CURL *curl; CURLcode res; curl_global_init(CURL_GLOBAL_ALL); curl = curl_easy_init(); if(curl) { curl_easy_setopt(curl, CURLOPT_VERBOSE, 1L); curl_easy_setopt(curl, CURLOPT_HEADER, 1L); /* get the first document */ curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/"); /* Perform the request, res will get the return code */ res = curl_easy_perform(curl); /* Check for errors */ if(res != CURLE_OK) fprintf(stderr, "curl_easy_perform() failed: %s\n", curl_easy_strerror(res)); /* get another document from the same server using the same connection */ curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/docs/"); /* Perform the request, res will get the return code */ res = curl_easy_perform(curl); /* Check for errors */ if(res != CURLE_OK) fprintf(stderr, "curl_easy_perform() failed: %s\n", curl_easy_strerror(res)); /* always cleanup */ curl_easy_cleanup(curl); } return 0; }
repos/gpt4all.zig/src/zig-libcurl/curl/docs/examples/httpput-postfields.c
/*************************************************************************** * _ _ ____ _ * Project ___| | | | _ \| | * / __| | | | |_) | | * | (__| |_| | _ <| |___ * \___|\___/|_| \_\_____| * * Copyright (C) 1998 - 2021, Daniel Stenberg, <[email protected]>, et al. * * This software is licensed as described in the file COPYING, which * you should have received as part of this distribution. The terms * are also available at https://curl.se/docs/copyright.html. * * You may opt to use, copy, modify, merge, publish, distribute and/or sell * copies of the Software, and permit persons to whom the Software is * furnished to do so, under the terms of the COPYING file. * * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY * KIND, either express or implied. * ***************************************************************************/ /* <DESC> * HTTP PUT using CURLOPT_POSTFIELDS * </DESC> */ #include <stdio.h> #include <fcntl.h> #include <sys/stat.h> #include <curl/curl.h> static const char olivertwist[]= "Among other public buildings in a certain town, which for many reasons " "it will be prudent to refrain from mentioning, and to which I will assign " "no fictitious name, there is one anciently common to most towns, great or " "small: to wit, a workhouse; and in this workhouse was born; on a day and " "date which I need not trouble myself to repeat, inasmuch as it can be of " "no possible consequence to the reader, in this stage of the business at " "all events; the item of mortality whose name is prefixed to the head of " "this chapter."; /* * This example shows a HTTP PUT operation that sends a fixed buffer with * CURLOPT_POSTFIELDS to the URL given as an argument. 
*/ int main(int argc, char **argv) { CURL *curl; CURLcode res; char *url; if(argc < 2) return 1; url = argv[1]; /* In windows, this will init the winsock stuff */ curl_global_init(CURL_GLOBAL_ALL); /* get a curl handle */ curl = curl_easy_init(); if(curl) { struct curl_slist *headers = NULL; /* default type with postfields is application/x-www-form-urlencoded, change it if you want */ headers = curl_slist_append(headers, "Content-Type: literature/classic"); curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers); /* pass on content in request body. When CURLOPT_POSTFIELDSIZE is not used, curl does strlen to get the size. */ curl_easy_setopt(curl, CURLOPT_POSTFIELDS, olivertwist); /* override the POST implied by CURLOPT_POSTFIELDS * * Warning: CURLOPT_CUSTOMREQUEST is problematic, especially if you want * to follow redirects. Be aware. */ curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, "PUT"); /* specify target URL, and note that this URL should include a file name, not only a directory */ curl_easy_setopt(curl, CURLOPT_URL, url); /* Now run off and do what you have been told! */ res = curl_easy_perform(curl); /* Check for errors */ if(res != CURLE_OK) fprintf(stderr, "curl_easy_perform() failed: %s\n", curl_easy_strerror(res)); /* always cleanup */ curl_easy_cleanup(curl); /* free headers */ curl_slist_free_all(headers); } curl_global_cleanup(); return 0; }
repos/gpt4all.zig/src/zig-libcurl/curl/docs/examples/debug.c
/*************************************************************************** * _ _ ____ _ * Project ___| | | | _ \| | * / __| | | | |_) | | * | (__| |_| | _ <| |___ * \___|\___/|_| \_\_____| * * Copyright (C) 1998 - 2020, Daniel Stenberg, <[email protected]>, et al. * * This software is licensed as described in the file COPYING, which * you should have received as part of this distribution. The terms * are also available at https://curl.se/docs/copyright.html. * * You may opt to use, copy, modify, merge, publish, distribute and/or sell * copies of the Software, and permit persons to whom the Software is * furnished to do so, under the terms of the COPYING file. * * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY * KIND, either express or implied. * ***************************************************************************/ /* <DESC> * Show how CURLOPT_DEBUGFUNCTION can be used. * </DESC> */ #include <stdio.h> #include <curl/curl.h> struct data { char trace_ascii; /* 1 or 0 */ }; static void dump(const char *text, FILE *stream, unsigned char *ptr, size_t size, char nohex) { size_t i; size_t c; unsigned int width = 0x10; if(nohex) /* without the hex output, we can fit more on screen */ width = 0x40; fprintf(stream, "%s, %10.10lu bytes (0x%8.8lx)\n", text, (unsigned long)size, (unsigned long)size); for(i = 0; i<size; i += width) { fprintf(stream, "%4.4lx: ", (unsigned long)i); if(!nohex) { /* hex not disabled, show it */ for(c = 0; c < width; c++) if(i + c < size) fprintf(stream, "%02x ", ptr[i + c]); else fputs(" ", stream); } for(c = 0; (c < width) && (i + c < size); c++) { /* check for 0D0A; if found, skip past and start a new line of output */ if(nohex && (i + c + 1 < size) && ptr[i + c] == 0x0D && ptr[i + c + 1] == 0x0A) { i += (c + 2 - width); break; } fprintf(stream, "%c", (ptr[i + c] >= 0x20) && (ptr[i + c]<0x80)?ptr[i + c]:'.'); /* check again for 0D0A, to avoid an extra \n if it's at width */ if(nohex && (i + c + 2 < size) && ptr[i 
+ c + 1] == 0x0D && ptr[i + c + 2] == 0x0A) { i += (c + 3 - width); break; } } fputc('\n', stream); /* newline */ } fflush(stream); } static int my_trace(CURL *handle, curl_infotype type, char *data, size_t size, void *userp) { struct data *config = (struct data *)userp; const char *text; (void)handle; /* prevent compiler warning */ switch(type) { case CURLINFO_TEXT: fprintf(stderr, "== Info: %s", data); /* FALLTHROUGH */ default: /* in case a new one is introduced to shock us */ return 0; case CURLINFO_HEADER_OUT: text = "=> Send header"; break; case CURLINFO_DATA_OUT: text = "=> Send data"; break; case CURLINFO_SSL_DATA_OUT: text = "=> Send SSL data"; break; case CURLINFO_HEADER_IN: text = "<= Recv header"; break; case CURLINFO_DATA_IN: text = "<= Recv data"; break; case CURLINFO_SSL_DATA_IN: text = "<= Recv SSL data"; break; } dump(text, stderr, (unsigned char *)data, size, config->trace_ascii); return 0; } int main(void) { CURL *curl; CURLcode res; struct data config; config.trace_ascii = 1; /* enable ascii tracing */ curl = curl_easy_init(); if(curl) { curl_easy_setopt(curl, CURLOPT_DEBUGFUNCTION, my_trace); curl_easy_setopt(curl, CURLOPT_DEBUGDATA, &config); /* the DEBUGFUNCTION has no effect until we enable VERBOSE */ curl_easy_setopt(curl, CURLOPT_VERBOSE, 1L); /* example.com is redirected, so we tell libcurl to follow redirection */ curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L); curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/"); res = curl_easy_perform(curl); /* Check for errors */ if(res != CURLE_OK) fprintf(stderr, "curl_easy_perform() failed: %s\n", curl_easy_strerror(res)); /* always cleanup */ curl_easy_cleanup(curl); } return 0; }
repos/gpt4all.zig/src/zig-libcurl/curl/docs/examples/imap-examine.c
/*************************************************************************** * _ _ ____ _ * Project ___| | | | _ \| | * / __| | | | |_) | | * | (__| |_| | _ <| |___ * \___|\___/|_| \_\_____| * * Copyright (C) 1998 - 2020, Daniel Stenberg, <[email protected]>, et al. * * This software is licensed as described in the file COPYING, which * you should have received as part of this distribution. The terms * are also available at https://curl.se/docs/copyright.html. * * You may opt to use, copy, modify, merge, publish, distribute and/or sell * copies of the Software, and permit persons to whom the Software is * furnished to do so, under the terms of the COPYING file. * * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY * KIND, either express or implied. * ***************************************************************************/ /* <DESC> * IMAP example showing how to obtain information about a folder * </DESC> */ #include <stdio.h> #include <curl/curl.h> /* This is a simple example showing how to obtain information about a mailbox * folder using libcurl's IMAP capabilities via the EXAMINE command. * * Note that this example requires libcurl 7.30.0 or above. */ int main(void) { CURL *curl; CURLcode res = CURLE_OK; curl = curl_easy_init(); if(curl) { /* Set username and password */ curl_easy_setopt(curl, CURLOPT_USERNAME, "user"); curl_easy_setopt(curl, CURLOPT_PASSWORD, "secret"); /* This is just the server URL */ curl_easy_setopt(curl, CURLOPT_URL, "imap://imap.example.com"); /* Set the EXAMINE command specifying the mailbox folder */ curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, "EXAMINE OUTBOX"); /* Perform the custom request */ res = curl_easy_perform(curl); /* Check for errors */ if(res != CURLE_OK) fprintf(stderr, "curl_easy_perform() failed: %s\n", curl_easy_strerror(res)); /* Always cleanup */ curl_easy_cleanup(curl); } return (int)res; }
repos/gpt4all.zig/src/zig-libcurl/curl/docs/examples/http2-upload.c
/*************************************************************************** * _ _ ____ _ * Project ___| | | | _ \| | * / __| | | | |_) | | * | (__| |_| | _ <| |___ * \___|\___/|_| \_\_____| * * Copyright (C) 1998 - 2021, Daniel Stenberg, <[email protected]>, et al. * * This software is licensed as described in the file COPYING, which * you should have received as part of this distribution. The terms * are also available at https://curl.se/docs/copyright.html. * * You may opt to use, copy, modify, merge, publish, distribute and/or sell * copies of the Software, and permit persons to whom the Software is * furnished to do so, under the terms of the COPYING file. * * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY * KIND, either express or implied. * ***************************************************************************/ /* <DESC> * Multiplexed HTTP/2 uploads over a single connection * </DESC> */ #include <stdio.h> #include <stdlib.h> #include <string.h> #include <fcntl.h> #include <sys/stat.h> #include <errno.h> /* somewhat unix-specific */ #include <sys/time.h> #include <unistd.h> /* curl stuff */ #include <curl/curl.h> #include <curl/mprintf.h> #ifndef CURLPIPE_MULTIPLEX /* This little trick will just make sure that we do not enable pipelining for libcurls old enough to not have this symbol. It is _not_ defined to zero in a recent libcurl header. 
*/ #define CURLPIPE_MULTIPLEX 0 #endif #define NUM_HANDLES 1000 struct input { FILE *in; size_t bytes_read; /* count up */ CURL *hnd; int num; }; static void dump(const char *text, int num, unsigned char *ptr, size_t size, char nohex) { size_t i; size_t c; unsigned int width = 0x10; if(nohex) /* without the hex output, we can fit more on screen */ width = 0x40; fprintf(stderr, "%d %s, %lu bytes (0x%lx)\n", num, text, (unsigned long)size, (unsigned long)size); for(i = 0; i<size; i += width) { fprintf(stderr, "%4.4lx: ", (unsigned long)i); if(!nohex) { /* hex not disabled, show it */ for(c = 0; c < width; c++) if(i + c < size) fprintf(stderr, "%02x ", ptr[i + c]); else fputs(" ", stderr); } for(c = 0; (c < width) && (i + c < size); c++) { /* check for 0D0A; if found, skip past and start a new line of output */ if(nohex && (i + c + 1 < size) && ptr[i + c] == 0x0D && ptr[i + c + 1] == 0x0A) { i += (c + 2 - width); break; } fprintf(stderr, "%c", (ptr[i + c] >= 0x20) && (ptr[i + c]<0x80)?ptr[i + c]:'.'); /* check again for 0D0A, to avoid an extra \n if it's at width */ if(nohex && (i + c + 2 < size) && ptr[i + c + 1] == 0x0D && ptr[i + c + 2] == 0x0A) { i += (c + 3 - width); break; } } fputc('\n', stderr); /* newline */ } } static int my_trace(CURL *handle, curl_infotype type, char *data, size_t size, void *userp) { char timebuf[60]; const char *text; struct input *i = (struct input *)userp; int num = i->num; static time_t epoch_offset; static int known_offset; struct timeval tv; time_t secs; struct tm *now; (void)handle; /* prevent compiler warning */ gettimeofday(&tv, NULL); if(!known_offset) { epoch_offset = time(NULL) - tv.tv_sec; known_offset = 1; } secs = epoch_offset + tv.tv_sec; now = localtime(&secs); /* not thread safe but we do not care */ curl_msnprintf(timebuf, sizeof(timebuf), "%02d:%02d:%02d.%06ld", now->tm_hour, now->tm_min, now->tm_sec, (long)tv.tv_usec); switch(type) { case CURLINFO_TEXT: fprintf(stderr, "%s [%d] Info: %s", timebuf, num, data); /* 
FALLTHROUGH */ default: /* in case a new one is introduced to shock us */ return 0; case CURLINFO_HEADER_OUT: text = "=> Send header"; break; case CURLINFO_DATA_OUT: text = "=> Send data"; break; case CURLINFO_SSL_DATA_OUT: text = "=> Send SSL data"; break; case CURLINFO_HEADER_IN: text = "<= Recv header"; break; case CURLINFO_DATA_IN: text = "<= Recv data"; break; case CURLINFO_SSL_DATA_IN: text = "<= Recv SSL data"; break; } dump(text, num, (unsigned char *)data, size, 1); return 0; } static size_t read_callback(char *ptr, size_t size, size_t nmemb, void *userp) { struct input *i = userp; size_t retcode = fread(ptr, size, nmemb, i->in); i->bytes_read += retcode; return retcode; } static void setup(struct input *i, int num, const char *upload) { FILE *out; char url[256]; char filename[128]; struct stat file_info; curl_off_t uploadsize; CURL *hnd; hnd = i->hnd = curl_easy_init(); i->num = num; curl_msnprintf(filename, 128, "dl-%d", num); out = fopen(filename, "wb"); if(!out) { fprintf(stderr, "error: could not open file %s for writing: %s\n", upload, strerror(errno)); exit(1); } curl_msnprintf(url, 256, "https://localhost:8443/upload-%d", num); /* get the file size of the local file */ if(stat(upload, &file_info)) { fprintf(stderr, "error: could not stat file %s: %s\n", upload, strerror(errno)); exit(1); } uploadsize = file_info.st_size; i->in = fopen(upload, "rb"); if(!i->in) { fprintf(stderr, "error: could not open file %s for reading: %s\n", upload, strerror(errno)); exit(1); } /* write to this file */ curl_easy_setopt(hnd, CURLOPT_WRITEDATA, out); /* we want to use our own read function */ curl_easy_setopt(hnd, CURLOPT_READFUNCTION, read_callback); /* read from this file */ curl_easy_setopt(hnd, CURLOPT_READDATA, i); /* provide the size of the upload */ curl_easy_setopt(hnd, CURLOPT_INFILESIZE_LARGE, uploadsize); /* send in the URL to store the upload as */ curl_easy_setopt(hnd, CURLOPT_URL, url); /* upload please */ curl_easy_setopt(hnd, CURLOPT_UPLOAD, 1L); 
/* please be verbose */ curl_easy_setopt(hnd, CURLOPT_VERBOSE, 1L); curl_easy_setopt(hnd, CURLOPT_DEBUGFUNCTION, my_trace); curl_easy_setopt(hnd, CURLOPT_DEBUGDATA, i); /* HTTP/2 please */ curl_easy_setopt(hnd, CURLOPT_HTTP_VERSION, CURL_HTTP_VERSION_2_0); /* we use a self-signed test server, skip verification during debugging */ curl_easy_setopt(hnd, CURLOPT_SSL_VERIFYPEER, 0L); curl_easy_setopt(hnd, CURLOPT_SSL_VERIFYHOST, 0L); #if (CURLPIPE_MULTIPLEX > 0) /* wait for pipe connection to confirm */ curl_easy_setopt(hnd, CURLOPT_PIPEWAIT, 1L); #endif } /* * Upload all files over HTTP/2, using the same physical connection! */ int main(int argc, char **argv) { struct input trans[NUM_HANDLES]; CURLM *multi_handle; int i; int still_running = 0; /* keep number of running handles */ const char *filename = "index.html"; int num_transfers; if(argc > 1) { /* if given a number, do that many transfers */ num_transfers = atoi(argv[1]); if(!num_transfers || (num_transfers > NUM_HANDLES)) num_transfers = 3; /* a suitable low default */ if(argc > 2) /* if given a file name, upload this! */ filename = argv[2]; } else num_transfers = 3; /* init a multi stack */ multi_handle = curl_multi_init(); for(i = 0; i<num_transfers; i++) { setup(&trans[i], i, filename); /* add the individual transfer */ curl_multi_add_handle(multi_handle, trans[i].hnd); } curl_multi_setopt(multi_handle, CURLMOPT_PIPELINING, CURLPIPE_MULTIPLEX); /* We do HTTP/2 so let's stick to one connection per host */ curl_multi_setopt(multi_handle, CURLMOPT_MAX_HOST_CONNECTIONS, 1L); do { CURLMcode mc = curl_multi_perform(multi_handle, &still_running); if(still_running) /* wait for activity, timeout or "nothing" */ mc = curl_multi_poll(multi_handle, NULL, 0, 1000, NULL); if(mc) break; } while(still_running); curl_multi_cleanup(multi_handle); for(i = 0; i<num_transfers; i++) { curl_multi_remove_handle(multi_handle, trans[i].hnd); curl_easy_cleanup(trans[i].hnd); } return 0; }
repos/gpt4all.zig/src/zig-libcurl/curl/docs/examples/httpput.c
/*************************************************************************** * _ _ ____ _ * Project ___| | | | _ \| | * / __| | | | |_) | | * | (__| |_| | _ <| |___ * \___|\___/|_| \_\_____| * * Copyright (C) 1998 - 2021, Daniel Stenberg, <[email protected]>, et al. * * This software is licensed as described in the file COPYING, which * you should have received as part of this distribution. The terms * are also available at https://curl.se/docs/copyright.html. * * You may opt to use, copy, modify, merge, publish, distribute and/or sell * copies of the Software, and permit persons to whom the Software is * furnished to do so, under the terms of the COPYING file. * * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY * KIND, either express or implied. * ***************************************************************************/ /* <DESC> * HTTP PUT with easy interface and read callback * </DESC> */ #include <stdio.h> #include <fcntl.h> #include <sys/stat.h> #include <curl/curl.h> /* * This example shows a HTTP PUT operation. PUTs a file given as a command * line argument to the URL also given on the command line. * * This example also uses its own read callback. 
* * Here's an article on how to setup a PUT handler for Apache: * http://www.apacheweek.com/features/put */ static size_t read_callback(char *ptr, size_t size, size_t nmemb, void *stream) { size_t retcode; curl_off_t nread; /* in real-world cases, this would probably get its data differently as this fread() stuff is exactly what the library already would do by default internally */ retcode = fread(ptr, size, nmemb, stream); nread = (curl_off_t)retcode; fprintf(stderr, "*** We read %" CURL_FORMAT_CURL_OFF_T " bytes from file\n", nread); return retcode; } int main(int argc, char **argv) { CURL *curl; CURLcode res; FILE *hd_src; struct stat file_info; char *file; char *url; if(argc < 3) return 1; file = argv[1]; url = argv[2]; /* get the file size of the local file */ if(stat(file, &file_info) != 0) return 1; /* get a FILE * of the same file, could also be made with fdopen() from the previous descriptor, but hey this is just an example! */ hd_src = fopen(file, "rb"); if(!hd_src) return 1; /* On Windows, this will init the winsock stuff */ curl_global_init(CURL_GLOBAL_ALL); /* get a curl handle */ curl = curl_easy_init(); if(curl) { /* we want to use our own read function */ curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_callback); /* enable uploading (implies PUT over HTTP) */ curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L); /* specify target URL, and note that this URL should include a file name, not only a directory */ curl_easy_setopt(curl, CURLOPT_URL, url); /* now specify which file to upload */ curl_easy_setopt(curl, CURLOPT_READDATA, hd_src); /* provide the size of the upload, we specifically typecast the value to curl_off_t since we must be sure to use the correct data size */ curl_easy_setopt(curl, CURLOPT_INFILESIZE_LARGE, (curl_off_t)file_info.st_size); /* Now run off and do what you have been told!
*/ res = curl_easy_perform(curl); /* Check for errors */ if(res != CURLE_OK) fprintf(stderr, "curl_easy_perform() failed: %s\n", curl_easy_strerror(res)); /* always cleanup */ curl_easy_cleanup(curl); } fclose(hd_src); /* close the local file */ curl_global_cleanup(); return 0; }
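The read callback in httpput.c above feeds libcurl from a `FILE *`, but the same contract works for any data source. A minimal sketch of the same technique reading from an in-memory buffer; the `membuf` struct and the function names are illustrative, not part of libcurl:

```c
#include <string.h>
#include <stddef.h>

/* Hypothetical holder for an in-memory upload body; not a libcurl type. */
struct membuf {
  const char *data;
  size_t len;
  size_t pos;
};

/* Same contract as the FILE *-based read_callback above: copy at most
   size*nmemb bytes into ptr and return how many bytes were copied.
   Returning 0 tells libcurl the upload body is complete. */
size_t mem_read_cb(char *ptr, size_t size, size_t nmemb, void *userp)
{
  struct membuf *b = (struct membuf *)userp;
  size_t room = size * nmemb;
  size_t left = b->len - b->pos;
  size_t n = (left < room) ? left : room;
  memcpy(ptr, b->data + b->pos, n);
  b->pos += n;
  return n;
}
```

It would be installed exactly like the file-based callback above, via CURLOPT_READFUNCTION and CURLOPT_READDATA, with CURLOPT_INFILESIZE_LARGE set to the buffer length.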
0
repos/gpt4all.zig/src/zig-libcurl/curl/docs
repos/gpt4all.zig/src/zig-libcurl/curl/docs/examples/progressfunc.c
/*************************************************************************** * _ _ ____ _ * Project ___| | | | _ \| | * / __| | | | |_) | | * | (__| |_| | _ <| |___ * \___|\___/|_| \_\_____| * * Copyright (C) 1998 - 2021, Daniel Stenberg, <[email protected]>, et al. * * This software is licensed as described in the file COPYING, which * you should have received as part of this distribution. The terms * are also available at https://curl.se/docs/copyright.html. * * You may opt to use, copy, modify, merge, publish, distribute and/or sell * copies of the Software, and permit persons to whom the Software is * furnished to do so, under the terms of the COPYING file. * * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY * KIND, either express or implied. * ***************************************************************************/ /* <DESC> * Use the progress callbacks, old and/or new one depending on available * libcurl version. * </DESC> */ #include <stdio.h> #include <curl/curl.h> #if LIBCURL_VERSION_NUM >= 0x073d00 /* In libcurl 7.61.0, support was added for extracting the time in plain microseconds. Older libcurl versions are stuck in using 'double' for this information so we complicate this example a bit by supporting either approach. 
*/ #define TIME_IN_US 1 /* microseconds */ #define TIMETYPE curl_off_t #define TIMEOPT CURLINFO_TOTAL_TIME_T #define MINIMAL_PROGRESS_FUNCTIONALITY_INTERVAL 3000000 #else #define TIMETYPE double #define TIMEOPT CURLINFO_TOTAL_TIME #define MINIMAL_PROGRESS_FUNCTIONALITY_INTERVAL 3 #endif #define STOP_DOWNLOAD_AFTER_THIS_MANY_BYTES 6000 struct myprogress { TIMETYPE lastruntime; /* type depends on version, see above */ CURL *curl; }; /* this is how the CURLOPT_XFERINFOFUNCTION callback works */ static int xferinfo(void *p, curl_off_t dltotal, curl_off_t dlnow, curl_off_t ultotal, curl_off_t ulnow) { struct myprogress *myp = (struct myprogress *)p; CURL *curl = myp->curl; TIMETYPE curtime = 0; curl_easy_getinfo(curl, TIMEOPT, &curtime); /* under certain circumstances it may be desirable for certain functionality to only run every N seconds, in order to do this the transaction time can be used */ if((curtime - myp->lastruntime) >= MINIMAL_PROGRESS_FUNCTIONALITY_INTERVAL) { myp->lastruntime = curtime; #ifdef TIME_IN_US fprintf(stderr, "TOTAL TIME: %" CURL_FORMAT_CURL_OFF_T ".%06ld\r\n", (curtime / 1000000), (long)(curtime % 1000000)); #else fprintf(stderr, "TOTAL TIME: %f \r\n", curtime); #endif } fprintf(stderr, "UP: %" CURL_FORMAT_CURL_OFF_T " of %" CURL_FORMAT_CURL_OFF_T " DOWN: %" CURL_FORMAT_CURL_OFF_T " of %" CURL_FORMAT_CURL_OFF_T "\r\n", ulnow, ultotal, dlnow, dltotal); if(dlnow > STOP_DOWNLOAD_AFTER_THIS_MANY_BYTES) return 1; return 0; } #if LIBCURL_VERSION_NUM < 0x072000 /* for libcurl older than 7.32.0 (CURLOPT_PROGRESSFUNCTION) */ static int older_progress(void *p, double dltotal, double dlnow, double ultotal, double ulnow) { return xferinfo(p, (curl_off_t)dltotal, (curl_off_t)dlnow, (curl_off_t)ultotal, (curl_off_t)ulnow); } #endif int main(void) { CURL *curl; CURLcode res = CURLE_OK; struct myprogress prog; curl = curl_easy_init(); if(curl) { prog.lastruntime = 0; prog.curl = curl; curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/"); #if 
LIBCURL_VERSION_NUM >= 0x072000 /* xferinfo was introduced in 7.32.0, no earlier libcurl versions will compile as they will not have the symbols around. If built with a newer libcurl, but running with an older libcurl: curl_easy_setopt() will fail in run-time trying to set the new callback, making the older callback get used. New libcurls will prefer the new callback and instead use that one even if both callbacks are set. */ curl_easy_setopt(curl, CURLOPT_XFERINFOFUNCTION, xferinfo); /* pass the struct pointer into the xferinfo function, note that this is an alias to CURLOPT_PROGRESSDATA */ curl_easy_setopt(curl, CURLOPT_XFERINFODATA, &prog); #else curl_easy_setopt(curl, CURLOPT_PROGRESSFUNCTION, older_progress); /* pass the struct pointer into the progress function */ curl_easy_setopt(curl, CURLOPT_PROGRESSDATA, &prog); #endif curl_easy_setopt(curl, CURLOPT_NOPROGRESS, 0L); res = curl_easy_perform(curl); if(res != CURLE_OK) fprintf(stderr, "%s\n", curl_easy_strerror(res)); /* always cleanup */ curl_easy_cleanup(curl); } return (int)res; }
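The xferinfo callback in progressfunc.c above prints raw byte counts. A common variant derives a percentage; here is a minimal sketch of that arithmetic (the helper name is ours, and `long long` stands in for `curl_off_t` to keep the sketch header-free). The important detail is guarding the division, because libcurl passes a total of 0 until the transfer size is known:

```c
/* Derive an integer percentage from xferinfo-style counters. libcurl
   reports total == 0 before the size is known, so guard the division
   rather than trusting the totals. */
int percent_done(long long now, long long total)
{
  if(total <= 0)
    return 0;               /* size not known yet */
  if(now >= total)
    return 100;             /* clamp, counters can briefly overshoot */
  return (int)((now * 100) / total);
}
```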
0
repos/gpt4all.zig/src/zig-libcurl/curl/docs
repos/gpt4all.zig/src/zig-libcurl/curl/docs/examples/http3-present.c
/*************************************************************************** * _ _ ____ _ * Project ___| | | | _ \| | * / __| | | | |_) | | * | (__| |_| | _ <| |___ * \___|\___/|_| \_\_____| * * Copyright (C) 1998 - 2020, Daniel Stenberg, <[email protected]>, et al. * * This software is licensed as described in the file COPYING, which * you should have received as part of this distribution. The terms * are also available at https://curl.se/docs/copyright.html. * * You may opt to use, copy, modify, merge, publish, distribute and/or sell * copies of the Software, and permit persons to whom the Software is * furnished to do so, under the terms of the COPYING file. * * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY * KIND, either express or implied. * ***************************************************************************/ /* <DESC> * Checks if HTTP/3 support is present in libcurl. * </DESC> */ #include <stdio.h> #include <curl/curl.h> int main(void) { curl_version_info_data *ver; curl_global_init(CURL_GLOBAL_ALL); ver = curl_version_info(CURLVERSION_NOW); if(ver->features & CURL_VERSION_HTTP2) printf("HTTP/2 support is present\n"); if(ver->features & CURL_VERSION_HTTP3) printf("HTTP/3 support is present\n"); if(ver->features & CURL_VERSION_ALTSVC) printf("Alt-svc support is present\n"); curl_global_cleanup(); return 0; }
0
repos/gpt4all.zig/src/zig-libcurl/curl/docs
repos/gpt4all.zig/src/zig-libcurl/curl/docs/examples/pop3-noop.c
/*************************************************************************** * _ _ ____ _ * Project ___| | | | _ \| | * / __| | | | |_) | | * | (__| |_| | _ <| |___ * \___|\___/|_| \_\_____| * * Copyright (C) 1998 - 2020, Daniel Stenberg, <[email protected]>, et al. * * This software is licensed as described in the file COPYING, which * you should have received as part of this distribution. The terms * are also available at https://curl.se/docs/copyright.html. * * You may opt to use, copy, modify, merge, publish, distribute and/or sell * copies of the Software, and permit persons to whom the Software is * furnished to do so, under the terms of the COPYING file. * * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY * KIND, either express or implied. * ***************************************************************************/ /* <DESC> * POP3 example showing how to perform a noop * </DESC> */ #include <stdio.h> #include <curl/curl.h> /* This is a simple example showing how to perform a noop using libcurl's POP3 * capabilities. * * Note that this example requires libcurl 7.26.0 or above. */ int main(void) { CURL *curl; CURLcode res = CURLE_OK; curl = curl_easy_init(); if(curl) { /* Set username and password */ curl_easy_setopt(curl, CURLOPT_USERNAME, "user"); curl_easy_setopt(curl, CURLOPT_PASSWORD, "secret"); /* This is just the server URL */ curl_easy_setopt(curl, CURLOPT_URL, "pop3://pop.example.com"); /* Set the NOOP command */ curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, "NOOP"); /* Do not perform a transfer as NOOP returns no data */ curl_easy_setopt(curl, CURLOPT_NOBODY, 1L); /* Perform the custom request */ res = curl_easy_perform(curl); /* Check for errors */ if(res != CURLE_OK) fprintf(stderr, "curl_easy_perform() failed: %s\n", curl_easy_strerror(res)); /* Always cleanup */ curl_easy_cleanup(curl); } return (int)res; }
0
repos/gpt4all.zig/src/zig-libcurl/curl/docs
repos/gpt4all.zig/src/zig-libcurl/curl/docs/examples/http3.c
/*************************************************************************** * _ _ ____ _ * Project ___| | | | _ \| | * / __| | | | |_) | | * | (__| |_| | _ <| |___ * \___|\___/|_| \_\_____| * * Copyright (C) 1998 - 2021, Daniel Stenberg, <[email protected]>, et al. * * This software is licensed as described in the file COPYING, which * you should have received as part of this distribution. The terms * are also available at https://curl.se/docs/copyright.html. * * You may opt to use, copy, modify, merge, publish, distribute and/or sell * copies of the Software, and permit persons to whom the Software is * furnished to do so, under the terms of the COPYING file. * * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY * KIND, either express or implied. * ***************************************************************************/ /* <DESC> * Very simple HTTP/3 GET * </DESC> */ #include <stdio.h> #include <curl/curl.h> int main(void) { CURL *curl; CURLcode res; curl = curl_easy_init(); if(curl) { curl_easy_setopt(curl, CURLOPT_URL, "https://example.com"); /* Forcing HTTP/3 will make the connection fail if the server is not accessible over QUIC + HTTP/3 on the given host and port. Consider using CURLOPT_ALTSVC instead! */ curl_easy_setopt(curl, CURLOPT_HTTP_VERSION, (long)CURL_HTTP_VERSION_3); /* Perform the request, res will get the return code */ res = curl_easy_perform(curl); /* Check for errors */ if(res != CURLE_OK) fprintf(stderr, "curl_easy_perform() failed: %s\n", curl_easy_strerror(res)); /* always cleanup */ curl_easy_cleanup(curl); } return 0; }
0
repos/gpt4all.zig/src/zig-libcurl/curl/docs
repos/gpt4all.zig/src/zig-libcurl/curl/docs/examples/href_extractor.c
/*************************************************************************** * _ _ ____ _ * Project ___| | | | _ \| | * / __| | | | |_) | | * | (__| |_| | _ <| |___ * \___|\___/|_| \_\_____| * * Copyright (C) 2012 - 2020, Daniel Stenberg, <[email protected]>, et al. * * This software is licensed as described in the file COPYING, which * you should have received as part of this distribution. The terms * are also available at https://curl.se/docs/copyright.html. * * You may opt to use, copy, modify, merge, publish, distribute and/or sell * copies of the Software, and permit persons to whom the Software is * furnished to do so, under the terms of the COPYING file. * * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY * KIND, either express or implied. * ***************************************************************************/ /* <DESC> * Uses the "Streaming HTML parser" to extract the href pieces in a streaming * manner from a downloaded HTML. * </DESC> */ /* * The HTML parser is found at https://github.com/arjunc77/htmlstreamparser */ #include <stdio.h> #include <stdlib.h> /* for EXIT_SUCCESS/EXIT_FAILURE */ #include <curl/curl.h> #include <htmlstreamparser.h> static size_t write_callback(void *buffer, size_t size, size_t nmemb, void *hsp) { size_t realsize = size * nmemb, p; for(p = 0; p < realsize; p++) { html_parser_char_parse(hsp, ((char *)buffer)[p]); if(html_parser_cmp_tag(hsp, "a", 1)) if(html_parser_cmp_attr(hsp, "href", 4)) if(html_parser_is_in(hsp, HTML_VALUE_ENDED)) { html_parser_val(hsp)[html_parser_val_length(hsp)] = '\0'; printf("%s\n", html_parser_val(hsp)); } } return realsize; } int main(int argc, char *argv[]) { char tag[1], attr[4], val[128]; CURL *curl; HTMLSTREAMPARSER *hsp; if(argc != 2) { printf("Usage: %s URL\n", argv[0]); return EXIT_FAILURE; } curl = curl_easy_init(); hsp = html_parser_init(); html_parser_set_tag_to_lower(hsp, 1); html_parser_set_attr_to_lower(hsp, 1); html_parser_set_tag_buffer(hsp, tag, sizeof(tag)); html_parser_set_attr_buffer(hsp, attr,
sizeof(attr)); html_parser_set_val_buffer(hsp, val, sizeof(val)-1); curl_easy_setopt(curl, CURLOPT_URL, argv[1]); curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_callback); curl_easy_setopt(curl, CURLOPT_WRITEDATA, hsp); curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L); curl_easy_perform(curl); curl_easy_cleanup(curl); html_parser_cleanup(hsp); return EXIT_SUCCESS; }
0
repos/gpt4all.zig/src/zig-libcurl/curl/docs
repos/gpt4all.zig/src/zig-libcurl/curl/docs/examples/hiperfifo.c
/*************************************************************************** * _ _ ____ _ * Project ___| | | | _ \| | * / __| | | | |_) | | * | (__| |_| | _ <| |___ * \___|\___/|_| \_\_____| * * Copyright (C) 1998 - 2021, Daniel Stenberg, <[email protected]>, et al. * * This software is licensed as described in the file COPYING, which * you should have received as part of this distribution. The terms * are also available at https://curl.se/docs/copyright.html. * * You may opt to use, copy, modify, merge, publish, distribute and/or sell * copies of the Software, and permit persons to whom the Software is * furnished to do so, under the terms of the COPYING file. * * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY * KIND, either express or implied. * ***************************************************************************/ /* <DESC> * multi socket API usage with libevent 2 * </DESC> */ /* Example application source code using the multi socket interface to download many files at once. Written by Jeff Pohlmeyer Requires libevent version 2 and a (POSIX?) system that has mkfifo(). This is an adaptation of libcurl's "hipev.c" and libevent's "event-test.c" sample programs. When running, the program creates the named pipe "hiper.fifo" Whenever there is input into the fifo, the program reads the input as a list of URL's and creates some new easy handles to fetch each URL via the curl_multi "hiper" API. Thus, you can try a single URL: % echo http://www.yahoo.com > hiper.fifo Or a whole bunch of them: % cat my-url-list > hiper.fifo The fifo buffer is handled almost instantly, so you can even add more URL's while the previous requests are still being downloaded. Note: For the sake of simplicity, URL length is limited to 1023 char's ! This is purely a demo app, all retrieved data is simply discarded by the write callback. 
*/ #include <stdio.h> #include <string.h> #include <stdlib.h> #include <sys/time.h> #include <time.h> #include <unistd.h> #include <sys/poll.h> #include <curl/curl.h> #include <event2/event.h> #include <event2/event_struct.h> #include <fcntl.h> #include <sys/stat.h> #include <errno.h> #include <sys/cdefs.h> #define MSG_OUT stdout /* Send info to stdout, change to stderr if you want */ /* Global information, common to all connections */ typedef struct _GlobalInfo { struct event_base *evbase; struct event fifo_event; struct event timer_event; CURLM *multi; int still_running; FILE *input; int stopped; } GlobalInfo; /* Information associated with a specific easy handle */ typedef struct _ConnInfo { CURL *easy; char *url; GlobalInfo *global; char error[CURL_ERROR_SIZE]; } ConnInfo; /* Information associated with a specific socket */ typedef struct _SockInfo { curl_socket_t sockfd; CURL *easy; int action; long timeout; struct event ev; GlobalInfo *global; } SockInfo; #define mycase(code) \ case code: s = __STRING(code) /* Die if we get a bad CURLMcode somewhere */ static void mcode_or_die(const char *where, CURLMcode code) { if(CURLM_OK != code) { const char *s; switch(code) { mycase(CURLM_BAD_HANDLE); break; mycase(CURLM_BAD_EASY_HANDLE); break; mycase(CURLM_OUT_OF_MEMORY); break; mycase(CURLM_INTERNAL_ERROR); break; mycase(CURLM_UNKNOWN_OPTION); break; mycase(CURLM_LAST); break; default: s = "CURLM_unknown"; break; mycase(CURLM_BAD_SOCKET); fprintf(MSG_OUT, "ERROR: %s returns %s\n", where, s); /* ignore this error */ return; } fprintf(MSG_OUT, "ERROR: %s returns %s\n", where, s); exit(code); } } /* Update the event timer after curl_multi library calls */ static int multi_timer_cb(CURLM *multi, long timeout_ms, GlobalInfo *g) { struct timeval timeout; (void)multi; timeout.tv_sec = timeout_ms/1000; timeout.tv_usec = (timeout_ms%1000)*1000; fprintf(MSG_OUT, "multi_timer_cb: Setting timeout to %ld ms\n", timeout_ms); /* * if timeout_ms is -1, just delete the timer * * For 
all other values of timeout_ms, this should set or *update* the timer * to the new value */ if(timeout_ms == -1) evtimer_del(&g->timer_event); else /* includes timeout zero */ evtimer_add(&g->timer_event, &timeout); return 0; } /* Check for completed transfers, and remove their easy handles */ static void check_multi_info(GlobalInfo *g) { char *eff_url; CURLMsg *msg; int msgs_left; ConnInfo *conn; CURL *easy; CURLcode res; fprintf(MSG_OUT, "REMAINING: %d\n", g->still_running); while((msg = curl_multi_info_read(g->multi, &msgs_left))) { if(msg->msg == CURLMSG_DONE) { easy = msg->easy_handle; res = msg->data.result; curl_easy_getinfo(easy, CURLINFO_PRIVATE, &conn); curl_easy_getinfo(easy, CURLINFO_EFFECTIVE_URL, &eff_url); fprintf(MSG_OUT, "DONE: %s => (%d) %s\n", eff_url, res, conn->error); curl_multi_remove_handle(g->multi, easy); free(conn->url); curl_easy_cleanup(easy); free(conn); } } if(g->still_running == 0 && g->stopped) event_base_loopbreak(g->evbase); } /* Called by libevent when we get action on a multi socket */ static void event_cb(int fd, short kind, void *userp) { GlobalInfo *g = (GlobalInfo*) userp; CURLMcode rc; int action = ((kind & EV_READ) ? CURL_CSELECT_IN : 0) | ((kind & EV_WRITE) ? 
CURL_CSELECT_OUT : 0); rc = curl_multi_socket_action(g->multi, fd, action, &g->still_running); mcode_or_die("event_cb: curl_multi_socket_action", rc); check_multi_info(g); if(g->still_running <= 0) { fprintf(MSG_OUT, "last transfer done, kill timeout\n"); if(evtimer_pending(&g->timer_event, NULL)) { evtimer_del(&g->timer_event); } } } /* Called by libevent when our timeout expires */ static void timer_cb(int fd, short kind, void *userp) { GlobalInfo *g = (GlobalInfo *)userp; CURLMcode rc; (void)fd; (void)kind; rc = curl_multi_socket_action(g->multi, CURL_SOCKET_TIMEOUT, 0, &g->still_running); mcode_or_die("timer_cb: curl_multi_socket_action", rc); check_multi_info(g); } /* Clean up the SockInfo structure */ static void remsock(SockInfo *f) { if(f) { if(event_initialized(&f->ev)) { event_del(&f->ev); } free(f); } } /* Assign information to a SockInfo structure */ static void setsock(SockInfo *f, curl_socket_t s, CURL *e, int act, GlobalInfo *g) { int kind = ((act & CURL_POLL_IN) ? EV_READ : 0) | ((act & CURL_POLL_OUT) ? 
EV_WRITE : 0) | EV_PERSIST; f->sockfd = s; f->action = act; f->easy = e; if(event_initialized(&f->ev)) { event_del(&f->ev); } event_assign(&f->ev, g->evbase, f->sockfd, kind, event_cb, g); event_add(&f->ev, NULL); } /* Initialize a new SockInfo structure */ static void addsock(curl_socket_t s, CURL *easy, int action, GlobalInfo *g) { SockInfo *fdp = calloc(1, sizeof(SockInfo)); fdp->global = g; setsock(fdp, s, easy, action, g); curl_multi_assign(g->multi, s, fdp); } /* CURLMOPT_SOCKETFUNCTION */ static int sock_cb(CURL *e, curl_socket_t s, int what, void *cbp, void *sockp) { GlobalInfo *g = (GlobalInfo*) cbp; SockInfo *fdp = (SockInfo*) sockp; const char *whatstr[]={ "none", "IN", "OUT", "INOUT", "REMOVE" }; fprintf(MSG_OUT, "socket callback: s=%d e=%p what=%s ", s, e, whatstr[what]); if(what == CURL_POLL_REMOVE) { fprintf(MSG_OUT, "\n"); remsock(fdp); } else { if(!fdp) { fprintf(MSG_OUT, "Adding data: %s\n", whatstr[what]); addsock(s, e, what, g); } else { fprintf(MSG_OUT, "Changing action from %s to %s\n", whatstr[fdp->action], whatstr[what]); setsock(fdp, s, e, what, g); } } return 0; } /* CURLOPT_WRITEFUNCTION */ static size_t write_cb(void *ptr, size_t size, size_t nmemb, void *data) { (void)ptr; (void)data; return size * nmemb; } /* CURLOPT_PROGRESSFUNCTION */ static int prog_cb(void *p, double dltotal, double dlnow, double ult, double uln) { ConnInfo *conn = (ConnInfo *)p; (void)ult; (void)uln; fprintf(MSG_OUT, "Progress: %s (%g/%g)\n", conn->url, dlnow, dltotal); return 0; } /* Create a new easy handle, and add it to the global curl_multi */ static void new_conn(char *url, GlobalInfo *g) { ConnInfo *conn; CURLMcode rc; conn = calloc(1, sizeof(ConnInfo)); conn->error[0]='\0'; conn->easy = curl_easy_init(); if(!conn->easy) { fprintf(MSG_OUT, "curl_easy_init() failed, exiting!\n"); exit(2); } conn->global = g; conn->url = strdup(url); curl_easy_setopt(conn->easy, CURLOPT_URL, conn->url); curl_easy_setopt(conn->easy, CURLOPT_WRITEFUNCTION, write_cb); 
curl_easy_setopt(conn->easy, CURLOPT_WRITEDATA, conn); curl_easy_setopt(conn->easy, CURLOPT_VERBOSE, 1L); curl_easy_setopt(conn->easy, CURLOPT_ERRORBUFFER, conn->error); curl_easy_setopt(conn->easy, CURLOPT_PRIVATE, conn); curl_easy_setopt(conn->easy, CURLOPT_NOPROGRESS, 0L); curl_easy_setopt(conn->easy, CURLOPT_PROGRESSFUNCTION, prog_cb); curl_easy_setopt(conn->easy, CURLOPT_PROGRESSDATA, conn); curl_easy_setopt(conn->easy, CURLOPT_FOLLOWLOCATION, 1L); fprintf(MSG_OUT, "Adding easy %p to multi %p (%s)\n", conn->easy, g->multi, url); rc = curl_multi_add_handle(g->multi, conn->easy); mcode_or_die("new_conn: curl_multi_add_handle", rc); /* note that the add_handle() will set a time-out to trigger very soon so that the necessary socket_action() call will be called by this app */ } /* This gets called whenever data is received from the fifo */ static void fifo_cb(int fd, short event, void *arg) { char s[1024]; long int rv = 0; int n = 0; GlobalInfo *g = (GlobalInfo *)arg; (void)fd; (void)event; do { s[0]='\0'; rv = fscanf(g->input, "%1023s%n", s, &n); s[n]='\0'; if(n && s[0]) { if(!strcmp(s, "stop")) { g->stopped = 1; if(g->still_running == 0) event_base_loopbreak(g->evbase); } else new_conn(s, arg); /* if we read a URL, go get it! 
*/ } else break; } while(rv != EOF); } /* Create a named pipe and tell libevent to monitor it */ static const char *fifo = "hiper.fifo"; static int init_fifo(GlobalInfo *g) { struct stat st; curl_socket_t sockfd; fprintf(MSG_OUT, "Creating named pipe \"%s\"\n", fifo); if(lstat (fifo, &st) == 0) { if((st.st_mode & S_IFMT) == S_IFREG) { errno = EEXIST; perror("lstat"); exit(1); } } unlink(fifo); if(mkfifo (fifo, 0600) == -1) { perror("mkfifo"); exit(1); } sockfd = open(fifo, O_RDWR | O_NONBLOCK, 0); if(sockfd == -1) { perror("open"); exit(1); } g->input = fdopen(sockfd, "r"); fprintf(MSG_OUT, "Now, pipe some URL's into > %s\n", fifo); event_assign(&g->fifo_event, g->evbase, sockfd, EV_READ|EV_PERSIST, fifo_cb, g); event_add(&g->fifo_event, NULL); return (0); } static void clean_fifo(GlobalInfo *g) { event_del(&g->fifo_event); fclose(g->input); unlink(fifo); } int main(int argc, char **argv) { GlobalInfo g; (void)argc; (void)argv; memset(&g, 0, sizeof(GlobalInfo)); g.evbase = event_base_new(); init_fifo(&g); g.multi = curl_multi_init(); evtimer_assign(&g.timer_event, g.evbase, timer_cb, &g); /* setup the generic multi interface options we want */ curl_multi_setopt(g.multi, CURLMOPT_SOCKETFUNCTION, sock_cb); curl_multi_setopt(g.multi, CURLMOPT_SOCKETDATA, &g); curl_multi_setopt(g.multi, CURLMOPT_TIMERFUNCTION, multi_timer_cb); curl_multi_setopt(g.multi, CURLMOPT_TIMERDATA, &g); /* we do not call any curl_multi_socket*() function yet as we have no handles added! */ event_base_dispatch(g.evbase); /* this, of course, will not get called since only way to stop this program is via ctrl-C, but it is here to show how cleanup /would/ be done. */ clean_fifo(&g); event_del(&g.timer_event); event_base_free(g.evbase); curl_multi_cleanup(g.multi); return 0; }
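Inside multi_timer_cb() in hiperfifo.c above, libcurl's millisecond timeout is converted into the struct timeval that evtimer_add() expects. That conversion is easy to get wrong, so here it is isolated as a sketch (the helper name is ours):

```c
#include <sys/time.h>

/* Convert a CURLMOPT_TIMERFUNCTION timeout in milliseconds into the
   struct timeval that evtimer_add() expects. Callers must handle the
   special value -1 (delete the timer) before calling this, as
   multi_timer_cb() above does. */
struct timeval ms_to_timeval(long timeout_ms)
{
  struct timeval tv;
  tv.tv_sec = timeout_ms / 1000;
  tv.tv_usec = (timeout_ms % 1000) * 1000;
  return tv;
}
```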
0
repos/gpt4all.zig/src/zig-libcurl/curl/docs
repos/gpt4all.zig/src/zig-libcurl/curl/docs/examples/multi-formadd.c
/*************************************************************************** * _ _ ____ _ * Project ___| | | | _ \| | * / __| | | | |_) | | * | (__| |_| | _ <| |___ * \___|\___/|_| \_\_____| * * Copyright (C) 1998 - 2021, Daniel Stenberg, <[email protected]>, et al. * * This software is licensed as described in the file COPYING, which * you should have received as part of this distribution. The terms * are also available at https://curl.se/docs/copyright.html. * * You may opt to use, copy, modify, merge, publish, distribute and/or sell * copies of the Software, and permit persons to whom the Software is * furnished to do so, under the terms of the COPYING file. * * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY * KIND, either express or implied. * ***************************************************************************/ /* <DESC> * using the multi interface to do a multipart formpost without blocking * </DESC> */ #include <stdio.h> #include <string.h> #include <sys/time.h> #include <curl/curl.h> int main(void) { CURL *curl; CURLM *multi_handle; int still_running = 0; struct curl_httppost *formpost = NULL; struct curl_httppost *lastptr = NULL; struct curl_slist *headerlist = NULL; static const char buf[] = "Expect:"; /* Fill in the file upload field. This makes libcurl load data from the given file name when curl_easy_perform() is called. 
*/ curl_formadd(&formpost, &lastptr, CURLFORM_COPYNAME, "sendfile", CURLFORM_FILE, "postit2.c", CURLFORM_END); /* Fill in the filename field */ curl_formadd(&formpost, &lastptr, CURLFORM_COPYNAME, "filename", CURLFORM_COPYCONTENTS, "postit2.c", CURLFORM_END); /* Fill in the submit field too, even if this is rarely needed */ curl_formadd(&formpost, &lastptr, CURLFORM_COPYNAME, "submit", CURLFORM_COPYCONTENTS, "send", CURLFORM_END); curl = curl_easy_init(); multi_handle = curl_multi_init(); /* initialize custom header list (stating that Expect: 100-continue is not wanted) */ headerlist = curl_slist_append(headerlist, buf); if(curl && multi_handle) { /* the URL that receives this POST */ curl_easy_setopt(curl, CURLOPT_URL, "https://www.example.com/upload.cgi"); curl_easy_setopt(curl, CURLOPT_VERBOSE, 1L); curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headerlist); curl_easy_setopt(curl, CURLOPT_HTTPPOST, formpost); curl_multi_add_handle(multi_handle, curl); do { CURLMcode mc = curl_multi_perform(multi_handle, &still_running); if(still_running) /* wait for activity, timeout or "nothing" */ mc = curl_multi_poll(multi_handle, NULL, 0, 1000, NULL); if(mc) break; } while(still_running); curl_multi_cleanup(multi_handle); /* always cleanup */ curl_easy_cleanup(curl); /* then cleanup the formpost chain */ curl_formfree(formpost); /* free slist */ curl_slist_free_all(headerlist); } return 0; }
0
repos/gpt4all.zig/src/zig-libcurl/curl/docs
repos/gpt4all.zig/src/zig-libcurl/curl/docs/examples/multithread.c
/*************************************************************************** * _ _ ____ _ * Project ___| | | | _ \| | * / __| | | | |_) | | * | (__| |_| | _ <| |___ * \___|\___/|_| \_\_____| * * Copyright (C) 1998 - 2021, Daniel Stenberg, <[email protected]>, et al. * * This software is licensed as described in the file COPYING, which * you should have received as part of this distribution. The terms * are also available at https://curl.se/docs/copyright.html. * * You may opt to use, copy, modify, merge, publish, distribute and/or sell * copies of the Software, and permit persons to whom the Software is * furnished to do so, under the terms of the COPYING file. * * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY * KIND, either express or implied. * ***************************************************************************/ /* <DESC> * A multi-threaded example that uses pthreads to fetch several files at once * </DESC> */ #include <stdio.h> #include <pthread.h> #include <curl/curl.h> #define NUMT 4 /* List of URLs to fetch. 
If you intend to use a SSL-based protocol here you might need to setup TLS library mutex callbacks as described here: https://curl.se/libcurl/c/threadsafe.html */ const char * const urls[NUMT]= { "https://curl.se/", "ftp://example.com/", "https://example.net/", "www.example" }; static void *pull_one_url(void *url) { CURL *curl; curl = curl_easy_init(); curl_easy_setopt(curl, CURLOPT_URL, url); curl_easy_perform(curl); /* ignores error */ curl_easy_cleanup(curl); return NULL; } /* int pthread_create(pthread_t *new_thread_ID, const pthread_attr_t *attr, void * (*start_func)(void *), void *arg); */ int main(int argc, char **argv) { pthread_t tid[NUMT]; int i; /* Must initialize libcurl before any threads are started */ curl_global_init(CURL_GLOBAL_ALL); for(i = 0; i< NUMT; i++) { int error = pthread_create(&tid[i], NULL, /* default attributes please */ pull_one_url, (void *)urls[i]); if(0 != error) fprintf(stderr, "Couldn't run thread number %d, errno %d\n", i, error); else fprintf(stderr, "Thread %d, gets %s\n", i, urls[i]); } /* now wait for all threads to terminate */ for(i = 0; i< NUMT; i++) { pthread_join(tid[i], NULL); fprintf(stderr, "Thread %d terminated\n", i); } curl_global_cleanup(); return 0; }
0
repos/gpt4all.zig/src/zig-libcurl/curl/docs
repos/gpt4all.zig/src/zig-libcurl/curl/docs/examples/altsvc.c
/*************************************************************************** * _ _ ____ _ * Project ___| | | | _ \| | * / __| | | | |_) | | * | (__| |_| | _ <| |___ * \___|\___/|_| \_\_____| * * Copyright (C) 1998 - 2020, Daniel Stenberg, <[email protected]>, et al. * * This software is licensed as described in the file COPYING, which * you should have received as part of this distribution. The terms * are also available at https://curl.se/docs/copyright.html. * * You may opt to use, copy, modify, merge, publish, distribute and/or sell * copies of the Software, and permit persons to whom the Software is * furnished to do so, under the terms of the COPYING file. * * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY * KIND, either express or implied. * ***************************************************************************/ /* <DESC> * HTTP with Alt-Svc support * </DESC> */ #include <stdio.h> #include <curl/curl.h> int main(void) { CURL *curl; CURLcode res; curl = curl_easy_init(); if(curl) { curl_easy_setopt(curl, CURLOPT_URL, "https://example.com"); /* cache the alternatives in this file */ curl_easy_setopt(curl, CURLOPT_ALTSVC, "altsvc.txt"); /* restrict which HTTP versions to use alternatives */ curl_easy_setopt(curl, CURLOPT_ALTSVC_CTRL, (long) CURLALTSVC_H1|CURLALTSVC_H2|CURLALTSVC_H3); /* Perform the request, res will get the return code */ res = curl_easy_perform(curl); /* Check for errors */ if(res != CURLE_OK) fprintf(stderr, "curl_easy_perform() failed: %s\n", curl_easy_strerror(res)); /* always cleanup */ curl_easy_cleanup(curl); } return 0; }
repos/gpt4all.zig/src/zig-libcurl/curl/docs/examples/adddocsref.pl
#!/usr/bin/env perl
#***************************************************************************
#                                  _   _ ____  _
#  Project                     ___| | | |  _ \| |
#                             / __| | | | |_) | |
#                            | (__| |_| |  _ <| |___
#                             \___|\___/|_| \_\_____|
#
# Copyright (C) 2004 - 2020, Daniel Stenberg, <[email protected]>, et al.
#
# This software is licensed as described in the file COPYING, which
# you should have received as part of this distribution. The terms
# are also available at https://curl.se/docs/copyright.html.
#
# You may opt to use, copy, modify, merge, publish, distribute and/or sell
# copies of the Software, and permit persons to whom the Software is
# furnished to do so, under the terms of the COPYING file.
#
# This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY
# KIND, either express or implied.
#
###########################################################################

# pass files as argument(s)

my $docroot="https://curl.se/libcurl/c";

for $f (@ARGV) {
    open(NEW, ">$f.new");
    open(F, "<$f");
    while(<F>) {
        my $l = $_;
        if($l =~ /\/\* \Q$docroot/) {
            # just ignore previously added refs
        }
        elsif($l =~ /^( *).*curl_easy_setopt\([^,]*, *([^ ,]*) *,/) {
            my ($prefix, $anc) = ($1, $2);
            $anc =~ s/_//g;
            print NEW "$prefix/* $docroot/curl_easy_setopt.html#$anc */\n";
            print NEW $l;
        }
        elsif($l =~ /^( *).*(curl_([^\(]*))\(/) {
            my ($prefix, $func) = ($1, $2);
            print NEW "$prefix/* $docroot/$func.html */\n";
            print NEW $l;
        }
        else {
            print NEW $l;
        }
    }
    close(F);
    close(NEW);
    system("mv $f $f.org");
    system("mv $f.new $f");
}
repos/gpt4all.zig/src/zig-libcurl/curl/docs/examples/urlapi.c
/***************************************************************************
 *                                  _   _ ____  _
 *  Project                     ___| | | |  _ \| |
 *                             / __| | | | |_) | |
 *                            | (__| |_| |  _ <| |___
 *                             \___|\___/|_| \_\_____|
 *
 * Copyright (C) 1998 - 2021, Daniel Stenberg, <[email protected]>, et al.
 *
 * This software is licensed as described in the file COPYING, which
 * you should have received as part of this distribution. The terms
 * are also available at https://curl.se/docs/copyright.html.
 *
 * You may opt to use, copy, modify, merge, publish, distribute and/or sell
 * copies of the Software, and permit persons to whom the Software is
 * furnished to do so, under the terms of the COPYING file.
 *
 * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY
 * KIND, either express or implied.
 *
 ***************************************************************************/
/* <DESC>
 * Set working URL with CURLU *.
 * </DESC>
 */
#include <stdio.h>
#include <curl/curl.h>

#if !CURL_AT_LEAST_VERSION(7, 80, 0)
#error "this example requires curl 7.80.0 or later"
#endif

int main(void)
{
  CURL *curl;
  CURLcode res;
  CURLU *urlp;
  CURLUcode uc;

  /* get a curl handle */
  curl = curl_easy_init();

  /* init Curl URL */
  urlp = curl_url();
  uc = curl_url_set(urlp, CURLUPART_URL,
                    "http://example.com/path/index.html", 0);
  if(uc) {
    fprintf(stderr, "curl_url_set() failed: %s", curl_url_strerror(uc));
    goto cleanup;
  }

  if(curl) {
    /* set urlp to use as working URL */
    curl_easy_setopt(curl, CURLOPT_CURLU, urlp);
    curl_easy_setopt(curl, CURLOPT_VERBOSE, 1L);

    res = curl_easy_perform(curl);
    /* Check for errors */
    if(res != CURLE_OK)
      fprintf(stderr, "curl_easy_perform() failed: %s\n",
              curl_easy_strerror(res));

    goto cleanup;
  }

cleanup:
  curl_url_cleanup(urlp);
  curl_easy_cleanup(curl);
  return 0;
}
repos/gpt4all.zig/src/zig-libcurl/curl/docs/examples/externalsocket.c
/*************************************************************************** * _ _ ____ _ * Project ___| | | | _ \| | * / __| | | | |_) | | * | (__| |_| | _ <| |___ * \___|\___/|_| \_\_____| * * Copyright (C) 1998 - 2021, Daniel Stenberg, <[email protected]>, et al. * * This software is licensed as described in the file COPYING, which * you should have received as part of this distribution. The terms * are also available at https://curl.se/docs/copyright.html. * * You may opt to use, copy, modify, merge, publish, distribute and/or sell * copies of the Software, and permit persons to whom the Software is * furnished to do so, under the terms of the COPYING file. * * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY * KIND, either express or implied. * ***************************************************************************/ /* <DESC> * An example demonstrating how an application can pass in a custom * socket to libcurl to use. This example also handles the connect itself. * </DESC> */ #include <stdio.h> #include <string.h> #include <stdlib.h> #include <curl/curl.h> #ifdef WIN32 #include <windows.h> #include <winsock2.h> #include <ws2tcpip.h> #define close closesocket #else #include <sys/types.h> /* socket types */ #include <sys/socket.h> /* socket definitions */ #include <netinet/in.h> #include <arpa/inet.h> /* inet (3) functions */ #include <unistd.h> /* misc. 
Unix functions */ #endif #include <errno.h> /* The IP address and port number to connect to */ #define IPADDR "127.0.0.1" #define PORTNUM 80 #ifndef INADDR_NONE #define INADDR_NONE 0xffffffff #endif static size_t write_data(void *ptr, size_t size, size_t nmemb, void *stream) { size_t written = fwrite(ptr, size, nmemb, (FILE *)stream); return written; } static int closecb(void *clientp, curl_socket_t item) { (void)clientp; printf("libcurl wants to close %d now\n", (int)item); return 0; } static curl_socket_t opensocket(void *clientp, curlsocktype purpose, struct curl_sockaddr *address) { curl_socket_t sockfd; (void)purpose; (void)address; sockfd = *(curl_socket_t *)clientp; /* the actual externally set socket is passed in via the OPENSOCKETDATA option */ return sockfd; } static int sockopt_callback(void *clientp, curl_socket_t curlfd, curlsocktype purpose) { (void)clientp; (void)curlfd; (void)purpose; /* This return code was added in libcurl 7.21.5 */ return CURL_SOCKOPT_ALREADY_CONNECTED; } int main(void) { CURL *curl; CURLcode res; struct sockaddr_in servaddr; /* socket address structure */ curl_socket_t sockfd; #ifdef WIN32 WSADATA wsaData; int initwsa = WSAStartup(MAKEWORD(2, 2), &wsaData); if(initwsa) { printf("WSAStartup failed: %d\n", initwsa); return 1; } #endif curl = curl_easy_init(); if(curl) { /* * Note that libcurl will internally think that you connect to the host * and port that you specify in the URL option. 
*/ curl_easy_setopt(curl, CURLOPT_URL, "http://99.99.99.99:9999"); /* Create the socket "manually" */ sockfd = socket(AF_INET, SOCK_STREAM, 0); if(sockfd == CURL_SOCKET_BAD) { printf("Error creating listening socket.\n"); return 3; } memset(&servaddr, 0, sizeof(servaddr)); servaddr.sin_family = AF_INET; servaddr.sin_port = htons(PORTNUM); servaddr.sin_addr.s_addr = inet_addr(IPADDR); if(INADDR_NONE == servaddr.sin_addr.s_addr) { close(sockfd); return 2; } if(connect(sockfd, (struct sockaddr *) &servaddr, sizeof(servaddr)) == -1) { close(sockfd); printf("client error: connect: %s\n", strerror(errno)); return 1; } /* no progress meter please */ curl_easy_setopt(curl, CURLOPT_NOPROGRESS, 1L); /* send all data to this function */ curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_data); /* call this function to get a socket */ curl_easy_setopt(curl, CURLOPT_OPENSOCKETFUNCTION, opensocket); curl_easy_setopt(curl, CURLOPT_OPENSOCKETDATA, &sockfd); /* call this function to close sockets */ curl_easy_setopt(curl, CURLOPT_CLOSESOCKETFUNCTION, closecb); curl_easy_setopt(curl, CURLOPT_CLOSESOCKETDATA, &sockfd); /* call this function to set options for the socket */ curl_easy_setopt(curl, CURLOPT_SOCKOPTFUNCTION, sockopt_callback); curl_easy_setopt(curl, CURLOPT_VERBOSE, 1); res = curl_easy_perform(curl); curl_easy_cleanup(curl); close(sockfd); if(res) { printf("libcurl error: %d\n", res); return 4; } } #ifdef WIN32 WSACleanup(); #endif return 0; }
repos/gpt4all.zig/src/zig-libcurl/curl/docs/examples/smooth-gtk-thread.c
/*************************************************************************** * _ _ ____ _ * Project ___| | | | _ \| | * / __| | | | |_) | | * | (__| |_| | _ <| |___ * \___|\___/|_| \_\_____| * * Copyright (C) 1998 - 2021, Daniel Stenberg, <[email protected]>, et al. * * This software is licensed as described in the file COPYING, which * you should have received as part of this distribution. The terms * are also available at https://curl.se/docs/copyright.html. * * You may opt to use, copy, modify, merge, publish, distribute and/or sell * copies of the Software, and permit persons to whom the Software is * furnished to do so, under the terms of the COPYING file. * * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY * KIND, either express or implied. * ***************************************************************************/ /* <DESC> * A multi threaded application that uses a progress bar to show * status. It uses Gtk+ to make a smooth pulse. * </DESC> */ /* * Written by Jud Bishop after studying the other examples provided with * libcurl. 
* * To compile (on a single line): * gcc -ggdb `pkg-config --cflags --libs gtk+-2.0` -lcurl -lssl -lcrypto * -lgthread-2.0 -dl smooth-gtk-thread.c -o smooth-gtk-thread */ #include <stdio.h> #include <gtk/gtk.h> #include <glib.h> #include <unistd.h> #include <pthread.h> #include <curl/curl.h> #define NUMT 4 pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER; int j = 0; gint num_urls = 9; /* Just make sure this is less than urls[]*/ const char * const urls[]= { "90022", "90023", "90024", "90025", "90026", "90027", "90028", "90029", "90030" }; size_t write_file(void *ptr, size_t size, size_t nmemb, FILE *stream) { return fwrite(ptr, size, nmemb, stream); } static void run_one(gchar *http, int j) { FILE *outfile = fopen(urls[j], "wb"); CURL *curl; curl = curl_easy_init(); if(curl) { printf("j = %d\n", j); /* Set the URL and transfer type */ curl_easy_setopt(curl, CURLOPT_URL, http); /* Write to the file */ curl_easy_setopt(curl, CURLOPT_WRITEDATA, outfile); curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_file); curl_easy_perform(curl); fclose(outfile); curl_easy_cleanup(curl); } } void *pull_one_url(void *NaN) { /* protect the reading and increasing of 'j' with a mutex */ pthread_mutex_lock(&lock); while(j < num_urls) { int i = j; j++; pthread_mutex_unlock(&lock); gchar *http = g_strdup_printf("https://example.com/%s", urls[i]); if(http) { run_one(http, i); g_free(http); } pthread_mutex_lock(&lock); } pthread_mutex_unlock(&lock); return NULL; } gboolean pulse_bar(gpointer data) { gdk_threads_enter(); gtk_progress_bar_pulse(GTK_PROGRESS_BAR (data)); gdk_threads_leave(); /* Return true so the function will be called again; * returning false removes this timeout function. */ return TRUE; } void *create_thread(void *progress_bar) { pthread_t tid[NUMT]; int i; /* Make sure I do not create more threads than urls. 
*/ for(i = 0; i < NUMT && i < num_urls ; i++) { int error = pthread_create(&tid[i], NULL, /* default attributes please */ pull_one_url, NULL); if(0 != error) fprintf(stderr, "Couldn't run thread number %d, errno %d\n", i, error); else fprintf(stderr, "Thread %d, gets %s\n", i, urls[i]); } /* Wait for all threads to terminate. */ for(i = 0; i < NUMT && i < num_urls; i++) { pthread_join(tid[i], NULL); fprintf(stderr, "Thread %d terminated\n", i); } /* This stops the pulsing if you have it turned on in the progress bar section */ g_source_remove(GPOINTER_TO_INT(g_object_get_data(G_OBJECT(progress_bar), "pulse_id"))); /* This destroys the progress bar */ gtk_widget_destroy(progress_bar); /* [Un]Comment this out to kill the program rather than pushing close. */ /* gtk_main_quit(); */ return NULL; } static gboolean cb_delete(GtkWidget *window, gpointer data) { gtk_main_quit(); return FALSE; } int main(int argc, char **argv) { GtkWidget *top_window, *outside_frame, *inside_frame, *progress_bar; /* Must initialize libcurl before any threads are started */ curl_global_init(CURL_GLOBAL_ALL); /* Init thread */ g_thread_init(NULL); gdk_threads_init(); gdk_threads_enter(); gtk_init(&argc, &argv); /* Base window */ top_window = gtk_window_new(GTK_WINDOW_TOPLEVEL); /* Frame */ outside_frame = gtk_frame_new(NULL); gtk_frame_set_shadow_type(GTK_FRAME(outside_frame), GTK_SHADOW_OUT); gtk_container_add(GTK_CONTAINER(top_window), outside_frame); /* Frame */ inside_frame = gtk_frame_new(NULL); gtk_frame_set_shadow_type(GTK_FRAME(inside_frame), GTK_SHADOW_IN); gtk_container_set_border_width(GTK_CONTAINER(inside_frame), 5); gtk_container_add(GTK_CONTAINER(outside_frame), inside_frame); /* Progress bar */ progress_bar = gtk_progress_bar_new(); gtk_progress_bar_pulse(GTK_PROGRESS_BAR (progress_bar)); /* Make uniform pulsing */ gint pulse_ref = g_timeout_add(300, pulse_bar, progress_bar); g_object_set_data(G_OBJECT(progress_bar), "pulse_id", GINT_TO_POINTER(pulse_ref)); 
gtk_container_add(GTK_CONTAINER(inside_frame), progress_bar); gtk_widget_show_all(top_window); printf("gtk_widget_show_all\n"); g_signal_connect(G_OBJECT (top_window), "delete-event", G_CALLBACK(cb_delete), NULL); if(!g_thread_create(&create_thread, progress_bar, FALSE, NULL)) g_warning("cannot create the thread"); gtk_main(); gdk_threads_leave(); printf("gdk_threads_leave\n"); return 0; }
repos/gpt4all.zig/src/zig-libcurl/curl/docs/examples/ftpupload.c
/*************************************************************************** * _ _ ____ _ * Project ___| | | | _ \| | * / __| | | | |_) | | * | (__| |_| | _ <| |___ * \___|\___/|_| \_\_____| * * Copyright (C) 1998 - 2021, Daniel Stenberg, <[email protected]>, et al. * * This software is licensed as described in the file COPYING, which * you should have received as part of this distribution. The terms * are also available at https://curl.se/docs/copyright.html. * * You may opt to use, copy, modify, merge, publish, distribute and/or sell * copies of the Software, and permit persons to whom the Software is * furnished to do so, under the terms of the COPYING file. * * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY * KIND, either express or implied. * ***************************************************************************/ #include <stdio.h> #include <string.h> #include <curl/curl.h> #include <sys/types.h> #include <sys/stat.h> #include <fcntl.h> #include <errno.h> #ifdef WIN32 #include <io.h> #else #include <unistd.h> #endif /* <DESC> * Performs an FTP upload and renames the file just after a successful * transfer. * </DESC> */ #define LOCAL_FILE "/tmp/uploadthis.txt" #define UPLOAD_FILE_AS "while-uploading.txt" #define REMOTE_URL "ftp://example.com/" UPLOAD_FILE_AS #define RENAME_FILE_TO "renamed-and-fine.txt" /* NOTE: if you want this example to work on Windows with libcurl as a DLL, you MUST also provide a read callback with CURLOPT_READFUNCTION. Failing to do so will give you a crash since a DLL may not use the variable's memory when passed in to it from an app like this. 
*/ static size_t read_callback(char *ptr, size_t size, size_t nmemb, void *stream) { curl_off_t nread; /* in real-world cases, this would probably get this data differently as this fread() stuff is exactly what the library already would do by default internally */ size_t retcode = fread(ptr, size, nmemb, stream); nread = (curl_off_t)retcode; fprintf(stderr, "*** We read %" CURL_FORMAT_CURL_OFF_T " bytes from file\n", nread); return retcode; } int main(void) { CURL *curl; CURLcode res; FILE *hd_src; struct stat file_info; curl_off_t fsize; struct curl_slist *headerlist = NULL; static const char buf_1 [] = "RNFR " UPLOAD_FILE_AS; static const char buf_2 [] = "RNTO " RENAME_FILE_TO; /* get the file size of the local file */ if(stat(LOCAL_FILE, &file_info)) { printf("Couldn't open '%s': %s\n", LOCAL_FILE, strerror(errno)); return 1; } fsize = (curl_off_t)file_info.st_size; printf("Local file size: %" CURL_FORMAT_CURL_OFF_T " bytes.\n", fsize); /* get a FILE * of the same file */ hd_src = fopen(LOCAL_FILE, "rb"); /* In windows, this will init the winsock stuff */ curl_global_init(CURL_GLOBAL_ALL); /* get a curl handle */ curl = curl_easy_init(); if(curl) { /* build a list of commands to pass to libcurl */ headerlist = curl_slist_append(headerlist, buf_1); headerlist = curl_slist_append(headerlist, buf_2); /* we want to use our own read function */ curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_callback); /* enable uploading */ curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L); /* specify target */ curl_easy_setopt(curl, CURLOPT_URL, REMOTE_URL); /* pass in that last of FTP commands to run after the transfer */ curl_easy_setopt(curl, CURLOPT_POSTQUOTE, headerlist); /* now specify which file to upload */ curl_easy_setopt(curl, CURLOPT_READDATA, hd_src); /* Set the size of the file to upload (optional). If you give a *_LARGE option you MUST make sure that the type of the passed-in argument is a curl_off_t. 
If you use CURLOPT_INFILESIZE (without _LARGE) you must make sure that to pass in a type 'long' argument. */ curl_easy_setopt(curl, CURLOPT_INFILESIZE_LARGE, (curl_off_t)fsize); /* Now run off and do what you have been told! */ res = curl_easy_perform(curl); /* Check for errors */ if(res != CURLE_OK) fprintf(stderr, "curl_easy_perform() failed: %s\n", curl_easy_strerror(res)); /* clean up the FTP commands list */ curl_slist_free_all(headerlist); /* always cleanup */ curl_easy_cleanup(curl); } fclose(hd_src); /* close the local file */ curl_global_cleanup(); return 0; }
repos/gpt4all.zig/src/zig-libcurl/curl/docs/examples/smtp-mime.c
/*************************************************************************** * _ _ ____ _ * Project ___| | | | _ \| | * / __| | | | |_) | | * | (__| |_| | _ <| |___ * \___|\___/|_| \_\_____| * * Copyright (C) 1998 - 2021, Daniel Stenberg, <[email protected]>, et al. * * This software is licensed as described in the file COPYING, which * you should have received as part of this distribution. The terms * are also available at https://curl.se/docs/copyright.html. * * You may opt to use, copy, modify, merge, publish, distribute and/or sell * copies of the Software, and permit persons to whom the Software is * furnished to do so, under the terms of the COPYING file. * * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY * KIND, either express or implied. * ***************************************************************************/ /* <DESC> * SMTP example showing how to send mime e-mails * </DESC> */ #include <stdio.h> #include <string.h> #include <curl/curl.h> /* This is a simple example showing how to send mime mail using libcurl's SMTP * capabilities. For an example of using the multi interface please see * smtp-multi.c. * * Note that this example requires libcurl 7.56.0 or above. 
*/ #define FROM "<[email protected]>" #define TO "<[email protected]>" #define CC "<[email protected]>" static const char *headers_text[] = { "Date: Tue, 22 Aug 2017 14:08:43 +0100", "To: " TO, "From: " FROM " (Example User)", "Cc: " CC " (Another example User)", "Message-ID: <dcd7cb36-11db-487a-9f3a-e652a9458efd@" "rfcpedant.example.org>", "Subject: example sending a MIME-formatted message", NULL }; static const char inline_text[] = "This is the inline text message of the e-mail.\r\n" "\r\n" " It could be a lot of lines that would be displayed in an e-mail\r\n" "viewer that is not able to handle HTML.\r\n"; static const char inline_html[] = "<html><body>\r\n" "<p>This is the inline <b>HTML</b> message of the e-mail.</p>" "<br />\r\n" "<p>It could be a lot of HTML data that would be displayed by " "e-mail viewers able to handle HTML.</p>" "</body></html>\r\n"; int main(void) { CURL *curl; CURLcode res = CURLE_OK; curl = curl_easy_init(); if(curl) { struct curl_slist *headers = NULL; struct curl_slist *recipients = NULL; struct curl_slist *slist = NULL; curl_mime *mime; curl_mime *alt; curl_mimepart *part; const char **cpp; /* This is the URL for your mailserver */ curl_easy_setopt(curl, CURLOPT_URL, "smtp://mail.example.com"); /* Note that this option is not strictly required, omitting it will result * in libcurl sending the MAIL FROM command with empty sender data. All * autoresponses should have an empty reverse-path, and should be directed * to the address in the reverse-path which triggered them. Otherwise, * they could cause an endless loop. See RFC 5321 Section 4.5.5 for more * details. */ curl_easy_setopt(curl, CURLOPT_MAIL_FROM, FROM); /* Add two recipients, in this particular case they correspond to the * To: and Cc: addressees in the header, but they could be any kind of * recipient. 
*/ recipients = curl_slist_append(recipients, TO); recipients = curl_slist_append(recipients, CC); curl_easy_setopt(curl, CURLOPT_MAIL_RCPT, recipients); /* Build and set the message header list. */ for(cpp = headers_text; *cpp; cpp++) headers = curl_slist_append(headers, *cpp); curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers); /* Build the mime message. */ mime = curl_mime_init(curl); /* The inline part is an alternative proposing the html and the text versions of the e-mail. */ alt = curl_mime_init(curl); /* HTML message. */ part = curl_mime_addpart(alt); curl_mime_data(part, inline_html, CURL_ZERO_TERMINATED); curl_mime_type(part, "text/html"); /* Text message. */ part = curl_mime_addpart(alt); curl_mime_data(part, inline_text, CURL_ZERO_TERMINATED); /* Create the inline part. */ part = curl_mime_addpart(mime); curl_mime_subparts(part, alt); curl_mime_type(part, "multipart/alternative"); slist = curl_slist_append(NULL, "Content-Disposition: inline"); curl_mime_headers(part, slist, 1); /* Add the current source program as an attachment. */ part = curl_mime_addpart(mime); curl_mime_filedata(part, "smtp-mime.c"); curl_easy_setopt(curl, CURLOPT_MIMEPOST, mime); /* Send the message */ res = curl_easy_perform(curl); /* Check for errors */ if(res != CURLE_OK) fprintf(stderr, "curl_easy_perform() failed: %s\n", curl_easy_strerror(res)); /* Free lists. */ curl_slist_free_all(recipients); curl_slist_free_all(headers); /* curl will not send the QUIT command until you call cleanup, so you * should be able to re-use this connection for additional messages * (setting CURLOPT_MAIL_FROM and CURLOPT_MAIL_RCPT as required, and * calling curl_easy_perform() again. It may not be a good idea to keep * the connection open for a very long time though (more than a few * minutes may result in the server timing out the connection), and you do * want to clean up in the end. */ curl_easy_cleanup(curl); /* Free multipart message. */ curl_mime_free(mime); } return (int)res; }
repos/gpt4all.zig/src/zig-libcurl/curl/docs/examples/evhiperfifo.c
/*************************************************************************** * _ _ ____ _ * Project ___| | | | _ \| | * / __| | | | |_) | | * | (__| |_| | _ <| |___ * \___|\___/|_| \_\_____| * * Copyright (C) 1998 - 2021, Daniel Stenberg, <[email protected]>, et al. * * This software is licensed as described in the file COPYING, which * you should have received as part of this distribution. The terms * are also available at https://curl.se/docs/copyright.html. * * You may opt to use, copy, modify, merge, publish, distribute and/or sell * copies of the Software, and permit persons to whom the Software is * furnished to do so, under the terms of the COPYING file. * * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY * KIND, either express or implied. * ***************************************************************************/ /* <DESC> * multi socket interface together with libev * </DESC> */ /* Example application source code using the multi socket interface to * download many files at once. * * This example features the same basic functionality as hiperfifo.c does, * but this uses libev instead of libevent. * * Written by Jeff Pohlmeyer, converted to use libev by Markus Koetter Requires libev and a (POSIX?) system that has mkfifo(). This is an adaptation of libcurl's "hipev.c" and libevent's "event-test.c" sample programs. When running, the program creates the named pipe "hiper.fifo" Whenever there is input into the fifo, the program reads the input as a list of URL's and creates some new easy handles to fetch each URL via the curl_multi "hiper" API. Thus, you can try a single URL: % echo http://www.yahoo.com > hiper.fifo Or a whole bunch of them: % cat my-url-list > hiper.fifo The fifo buffer is handled almost instantly, so you can even add more URL's while the previous requests are still being downloaded. Note: For the sake of simplicity, URL length is limited to 1023 char's ! 
This is purely a demo app, all retrieved data is simply discarded by the write callback. */ #include <stdio.h> #include <string.h> #include <stdlib.h> #include <sys/time.h> #include <time.h> #include <unistd.h> #include <sys/poll.h> #include <curl/curl.h> #include <ev.h> #include <fcntl.h> #include <sys/stat.h> #include <errno.h> #define DPRINT(x...) printf(x) #define MSG_OUT stdout /* Send info to stdout, change to stderr if you want */ /* Global information, common to all connections */ typedef struct _GlobalInfo { struct ev_loop *loop; struct ev_io fifo_event; struct ev_timer timer_event; CURLM *multi; int still_running; FILE *input; } GlobalInfo; /* Information associated with a specific easy handle */ typedef struct _ConnInfo { CURL *easy; char *url; GlobalInfo *global; char error[CURL_ERROR_SIZE]; } ConnInfo; /* Information associated with a specific socket */ typedef struct _SockInfo { curl_socket_t sockfd; CURL *easy; int action; long timeout; struct ev_io ev; int evset; GlobalInfo *global; } SockInfo; static void timer_cb(EV_P_ struct ev_timer *w, int revents); /* Update the event timer after curl_multi library calls */ static int multi_timer_cb(CURLM *multi, long timeout_ms, GlobalInfo *g) { DPRINT("%s %li\n", __PRETTY_FUNCTION__, timeout_ms); ev_timer_stop(g->loop, &g->timer_event); if(timeout_ms >= 0) { /* -1 means delete, other values are timeout times in milliseconds */ double t = timeout_ms / 1000; ev_timer_init(&g->timer_event, timer_cb, t, 0.); ev_timer_start(g->loop, &g->timer_event); } return 0; } /* Die if we get a bad CURLMcode somewhere */ static void mcode_or_die(const char *where, CURLMcode code) { if(CURLM_OK != code) { const char *s; switch(code) { case CURLM_BAD_HANDLE: s = "CURLM_BAD_HANDLE"; break; case CURLM_BAD_EASY_HANDLE: s = "CURLM_BAD_EASY_HANDLE"; break; case CURLM_OUT_OF_MEMORY: s = "CURLM_OUT_OF_MEMORY"; break; case CURLM_INTERNAL_ERROR: s = "CURLM_INTERNAL_ERROR"; break; case CURLM_UNKNOWN_OPTION: s = "CURLM_UNKNOWN_OPTION"; 
break; case CURLM_LAST: s = "CURLM_LAST"; break; default: s = "CURLM_unknown"; break; case CURLM_BAD_SOCKET: s = "CURLM_BAD_SOCKET"; fprintf(MSG_OUT, "ERROR: %s returns %s\n", where, s); /* ignore this error */ return; } fprintf(MSG_OUT, "ERROR: %s returns %s\n", where, s); exit(code); } } /* Check for completed transfers, and remove their easy handles */ static void check_multi_info(GlobalInfo *g) { char *eff_url; CURLMsg *msg; int msgs_left; ConnInfo *conn; CURL *easy; CURLcode res; fprintf(MSG_OUT, "REMAINING: %d\n", g->still_running); while((msg = curl_multi_info_read(g->multi, &msgs_left))) { if(msg->msg == CURLMSG_DONE) { easy = msg->easy_handle; res = msg->data.result; curl_easy_getinfo(easy, CURLINFO_PRIVATE, &conn); curl_easy_getinfo(easy, CURLINFO_EFFECTIVE_URL, &eff_url); fprintf(MSG_OUT, "DONE: %s => (%d) %s\n", eff_url, res, conn->error); curl_multi_remove_handle(g->multi, easy); free(conn->url); curl_easy_cleanup(easy); free(conn); } } } /* Called by libevent when we get action on a multi socket */ static void event_cb(EV_P_ struct ev_io *w, int revents) { DPRINT("%s w %p revents %i\n", __PRETTY_FUNCTION__, w, revents); GlobalInfo *g = (GlobalInfo*) w->data; CURLMcode rc; int action = ((revents & EV_READ) ? CURL_POLL_IN : 0) | ((revents & EV_WRITE) ? 
CURL_POLL_OUT : 0); rc = curl_multi_socket_action(g->multi, w->fd, action, &g->still_running); mcode_or_die("event_cb: curl_multi_socket_action", rc); check_multi_info(g); if(g->still_running <= 0) { fprintf(MSG_OUT, "last transfer done, kill timeout\n"); ev_timer_stop(g->loop, &g->timer_event); } } /* Called by libevent when our timeout expires */ static void timer_cb(EV_P_ struct ev_timer *w, int revents) { DPRINT("%s w %p revents %i\n", __PRETTY_FUNCTION__, w, revents); GlobalInfo *g = (GlobalInfo *)w->data; CURLMcode rc; rc = curl_multi_socket_action(g->multi, CURL_SOCKET_TIMEOUT, 0, &g->still_running); mcode_or_die("timer_cb: curl_multi_socket_action", rc); check_multi_info(g); } /* Clean up the SockInfo structure */ static void remsock(SockInfo *f, GlobalInfo *g) { printf("%s \n", __PRETTY_FUNCTION__); if(f) { if(f->evset) ev_io_stop(g->loop, &f->ev); free(f); } } /* Assign information to a SockInfo structure */ static void setsock(SockInfo *f, curl_socket_t s, CURL *e, int act, GlobalInfo *g) { printf("%s \n", __PRETTY_FUNCTION__); int kind = ((act & CURL_POLL_IN) ? EV_READ : 0) | ((act & CURL_POLL_OUT) ? 
                                  EV_WRITE : 0);
  f->sockfd = s;
  f->action = act;
  f->easy = e;
  if(f->evset)
    ev_io_stop(g->loop, &f->ev);
  ev_io_init(&f->ev, event_cb, f->sockfd, kind);
  f->ev.data = g;
  f->evset = 1;
  ev_io_start(g->loop, &f->ev);
}

/* Initialize a new SockInfo structure */
static void addsock(curl_socket_t s, CURL *easy, int action, GlobalInfo *g)
{
  SockInfo *fdp = calloc(1, sizeof(SockInfo));

  fdp->global = g;
  setsock(fdp, s, easy, action, g);
  curl_multi_assign(g->multi, s, fdp);
}

/* CURLMOPT_SOCKETFUNCTION */
static int sock_cb(CURL *e, curl_socket_t s, int what, void *cbp, void *sockp)
{
  DPRINT("%s e %p s %i what %i cbp %p sockp %p\n",
         __PRETTY_FUNCTION__, e, s, what, cbp, sockp);

  GlobalInfo *g = (GlobalInfo*) cbp;
  SockInfo *fdp = (SockInfo*) sockp;
  const char *whatstr[]={ "none", "IN", "OUT", "INOUT", "REMOVE"};

  fprintf(MSG_OUT,
          "socket callback: s=%d e=%p what=%s ", s, e, whatstr[what]);
  if(what == CURL_POLL_REMOVE) {
    fprintf(MSG_OUT, "\n");
    remsock(fdp, g);
  }
  else {
    if(!fdp) {
      fprintf(MSG_OUT, "Adding data: %s\n", whatstr[what]);
      addsock(s, e, what, g);
    }
    else {
      fprintf(MSG_OUT,
              "Changing action from %s to %s\n",
              whatstr[fdp->action], whatstr[what]);
      setsock(fdp, s, e, what, g);
    }
  }
  return 0;
}

/* CURLOPT_WRITEFUNCTION */
static size_t write_cb(void *ptr, size_t size, size_t nmemb, void *data)
{
  size_t realsize = size * nmemb;
  ConnInfo *conn = (ConnInfo*) data;
  (void)ptr;
  (void)conn;
  return realsize;
}

/* CURLOPT_PROGRESSFUNCTION */
static int prog_cb(void *p, double dltotal, double dlnow, double ult,
                   double uln)
{
  ConnInfo *conn = (ConnInfo *)p;
  (void)ult;
  (void)uln;

  fprintf(MSG_OUT, "Progress: %s (%g/%g)\n", conn->url, dlnow, dltotal);
  return 0;
}

/* Create a new easy handle, and add it to the global curl_multi */
static void new_conn(char *url, GlobalInfo *g)
{
  ConnInfo *conn;
  CURLMcode rc;

  conn = calloc(1, sizeof(ConnInfo));
  conn->error[0]='\0';

  conn->easy = curl_easy_init();
  if(!conn->easy) {
    fprintf(MSG_OUT, "curl_easy_init() failed, exiting!\n");
    exit(2);
  }
  conn->global = g;
  conn->url = strdup(url);
  curl_easy_setopt(conn->easy, CURLOPT_URL, conn->url);
  curl_easy_setopt(conn->easy, CURLOPT_WRITEFUNCTION, write_cb);
  curl_easy_setopt(conn->easy, CURLOPT_WRITEDATA, conn);
  curl_easy_setopt(conn->easy, CURLOPT_VERBOSE, 1L);
  curl_easy_setopt(conn->easy, CURLOPT_ERRORBUFFER, conn->error);
  curl_easy_setopt(conn->easy, CURLOPT_PRIVATE, conn);
  curl_easy_setopt(conn->easy, CURLOPT_NOPROGRESS, 0L);
  curl_easy_setopt(conn->easy, CURLOPT_PROGRESSFUNCTION, prog_cb);
  curl_easy_setopt(conn->easy, CURLOPT_PROGRESSDATA, conn);
  curl_easy_setopt(conn->easy, CURLOPT_LOW_SPEED_TIME, 3L);
  curl_easy_setopt(conn->easy, CURLOPT_LOW_SPEED_LIMIT, 10L);

  fprintf(MSG_OUT,
          "Adding easy %p to multi %p (%s)\n", conn->easy, g->multi, url);
  rc = curl_multi_add_handle(g->multi, conn->easy);
  mcode_or_die("new_conn: curl_multi_add_handle", rc);

  /* note that the add_handle() will set a time-out to trigger very soon so
     that the necessary socket_action() call will be called by this app */
}

/* This gets called whenever data is received from the fifo */
static void fifo_cb(EV_P_ struct ev_io *w, int revents)
{
  char s[1024];
  long int rv = 0;
  int n = 0;
  GlobalInfo *g = (GlobalInfo *)w->data;

  do {
    s[0]='\0';
    rv = fscanf(g->input, "%1023s%n", s, &n);
    s[n]='\0';
    if(n && s[0]) {
      new_conn(s, g); /* if we read a URL, go get it! */
    }
    else
      break;
  } while(rv != EOF);
}

/* Create a named pipe and tell libevent to monitor it */
static int init_fifo(GlobalInfo *g)
{
  struct stat st;
  static const char *fifo = "hiper.fifo";
  curl_socket_t sockfd;

  fprintf(MSG_OUT, "Creating named pipe \"%s\"\n", fifo);
  if(lstat (fifo, &st) == 0) {
    if((st.st_mode & S_IFMT) == S_IFREG) {
      errno = EEXIST;
      perror("lstat");
      exit(1);
    }
  }
  unlink(fifo);
  if(mkfifo (fifo, 0600) == -1) {
    perror("mkfifo");
    exit(1);
  }
  sockfd = open(fifo, O_RDWR | O_NONBLOCK, 0);
  if(sockfd == -1) {
    perror("open");
    exit(1);
  }
  g->input = fdopen(sockfd, "r");

  fprintf(MSG_OUT, "Now, pipe some URL's into > %s\n", fifo);
  ev_io_init(&g->fifo_event, fifo_cb, sockfd, EV_READ);
  ev_io_start(g->loop, &g->fifo_event);
  return (0);
}

int main(int argc, char **argv)
{
  GlobalInfo g;
  (void)argc;
  (void)argv;

  memset(&g, 0, sizeof(GlobalInfo));
  g.loop = ev_default_loop(0);
  init_fifo(&g);
  g.multi = curl_multi_init();
  ev_timer_init(&g.timer_event, timer_cb, 0., 0.);
  g.timer_event.data = &g;
  g.fifo_event.data = &g;
  curl_multi_setopt(g.multi, CURLMOPT_SOCKETFUNCTION, sock_cb);
  curl_multi_setopt(g.multi, CURLMOPT_SOCKETDATA, &g);
  curl_multi_setopt(g.multi, CURLMOPT_TIMERFUNCTION, multi_timer_cb);
  curl_multi_setopt(g.multi, CURLMOPT_TIMERDATA, &g);

  /* we do not call any curl_multi_socket*() function yet as we have no
     handles added! */

  ev_loop(g.loop, 0);
  curl_multi_cleanup(g.multi);
  return 0;
}
repos/gpt4all.zig/src/zig-libcurl/curl/docs/examples/htmltidy.c
/***************************************************************************
 *                                  _   _ ____  _
 *  Project                     ___| | | |  _ \| |
 *                             / __| | | | |_) | |
 *                            | (__| |_| |  _ <| |___
 *                             \___|\___/|_| \_\_____|
 *
 * Copyright (C) 1998 - 2021, Daniel Stenberg, <[email protected]>, et al.
 *
 * This software is licensed as described in the file COPYING, which
 * you should have received as part of this distribution. The terms
 * are also available at https://curl.se/docs/copyright.html.
 *
 * You may opt to use, copy, modify, merge, publish, distribute and/or sell
 * copies of the Software, and permit persons to whom the Software is
 * furnished to do so, under the terms of the COPYING file.
 *
 * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY
 * KIND, either express or implied.
 *
 ***************************************************************************/
/* <DESC>
 * Download a document and use libtidy to parse the HTML.
 * </DESC>
 */
/*
 * LibTidy => https://www.html-tidy.org/
 */

#include <stdio.h>
#include <tidy/tidy.h>
#include <tidy/tidybuffio.h>
#include <curl/curl.h>

/* curl write callback, to fill tidy's input buffer... */
uint write_cb(char *in, uint size, uint nmemb, TidyBuffer *out)
{
  uint r;
  r = size * nmemb;
  tidyBufAppend(out, in, r);
  return r;
}

/* Traverse the document tree */
void dumpNode(TidyDoc doc, TidyNode tnod, int indent)
{
  TidyNode child;
  for(child = tidyGetChild(tnod); child; child = tidyGetNext(child) ) {
    ctmbstr name = tidyNodeGetName(child);
    if(name) {
      /* if it has a name, then it's an HTML tag ... */
      TidyAttr attr;
      printf("%*.*s%s ", indent, indent, "<", name);
      /* walk the attribute list */
      for(attr = tidyAttrFirst(child); attr; attr = tidyAttrNext(attr) ) {
        printf("%s", tidyAttrName(attr));
        tidyAttrValue(attr)?printf("=\"%s\" ", tidyAttrValue(attr)):
          printf(" ");
      }
      printf(">\n");
    }
    else {
      /* if it does not have a name, then it's probably text, cdata, etc... */
      TidyBuffer buf;
      tidyBufInit(&buf);
      tidyNodeGetText(doc, child, &buf);
      printf("%*.*s\n", indent, indent, buf.bp?(char *)buf.bp:"");
      tidyBufFree(&buf);
    }
    dumpNode(doc, child, indent + 4); /* recursive */
  }
}

int main(int argc, char **argv)
{
  if(argc == 2) {
    CURL *curl;
    char curl_errbuf[CURL_ERROR_SIZE];
    TidyDoc tdoc;
    TidyBuffer docbuf = {0};
    TidyBuffer tidy_errbuf = {0};
    int err;

    curl = curl_easy_init();
    curl_easy_setopt(curl, CURLOPT_URL, argv[1]);
    curl_easy_setopt(curl, CURLOPT_ERRORBUFFER, curl_errbuf);
    curl_easy_setopt(curl, CURLOPT_NOPROGRESS, 0L);
    curl_easy_setopt(curl, CURLOPT_VERBOSE, 1L);
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_cb);

    tdoc = tidyCreate();
    tidyOptSetBool(tdoc, TidyForceOutput, yes); /* try harder */
    tidyOptSetInt(tdoc, TidyWrapLen, 4096);
    tidySetErrorBuffer(tdoc, &tidy_errbuf);
    tidyBufInit(&docbuf);

    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &docbuf);
    err = curl_easy_perform(curl);
    if(!err) {
      err = tidyParseBuffer(tdoc, &docbuf); /* parse the input */
      if(err >= 0) {
        err = tidyCleanAndRepair(tdoc); /* fix any problems */
        if(err >= 0) {
          err = tidyRunDiagnostics(tdoc); /* load tidy error buffer */
          if(err >= 0) {
            dumpNode(tdoc, tidyGetRoot(tdoc), 0); /* walk the tree */
            fprintf(stderr, "%s\n", tidy_errbuf.bp); /* show errors */
          }
        }
      }
    }
    else
      fprintf(stderr, "%s\n", curl_errbuf);

    /* clean-up */
    curl_easy_cleanup(curl);
    tidyBufFree(&docbuf);
    tidyBufFree(&tidy_errbuf);
    tidyRelease(tdoc);
    return err;
  }
  else
    printf("usage: %s <url>\n", argv[0]);

  return 0;
}
repos/gpt4all.zig/src/zig-libcurl/curl/docs/examples/imap-delete.c
/***************************************************************************
 *                                  _   _ ____  _
 *  Project                     ___| | | |  _ \| |
 *                             / __| | | | |_) | |
 *                            | (__| |_| |  _ <| |___
 *                             \___|\___/|_| \_\_____|
 *
 * Copyright (C) 1998 - 2020, Daniel Stenberg, <[email protected]>, et al.
 *
 * This software is licensed as described in the file COPYING, which
 * you should have received as part of this distribution. The terms
 * are also available at https://curl.se/docs/copyright.html.
 *
 * You may opt to use, copy, modify, merge, publish, distribute and/or sell
 * copies of the Software, and permit persons to whom the Software is
 * furnished to do so, under the terms of the COPYING file.
 *
 * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY
 * KIND, either express or implied.
 *
 ***************************************************************************/
/* <DESC>
 * IMAP example showing how to delete a folder
 * </DESC>
 */

#include <stdio.h>
#include <curl/curl.h>

/* This is a simple example showing how to delete an existing mailbox folder
 * using libcurl's IMAP capabilities.
 *
 * Note that this example requires libcurl 7.30.0 or above.
 */
int main(void)
{
  CURL *curl;
  CURLcode res = CURLE_OK;

  curl = curl_easy_init();
  if(curl) {
    /* Set username and password */
    curl_easy_setopt(curl, CURLOPT_USERNAME, "user");
    curl_easy_setopt(curl, CURLOPT_PASSWORD, "secret");

    /* This is just the server URL */
    curl_easy_setopt(curl, CURLOPT_URL, "imap://imap.example.com");

    /* Set the DELETE command specifying the existing folder */
    curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, "DELETE FOLDER");

    /* Perform the custom request */
    res = curl_easy_perform(curl);

    /* Check for errors */
    if(res != CURLE_OK)
      fprintf(stderr, "curl_easy_perform() failed: %s\n",
              curl_easy_strerror(res));

    /* Always cleanup */
    curl_easy_cleanup(curl);
  }

  return (int)res;
}
repos/gpt4all.zig/src/zig-libcurl/curl/docs/examples/ftp-wildcard.c
/***************************************************************************
 *                                  _   _ ____  _
 *  Project                     ___| | | |  _ \| |
 *                             / __| | | | |_) | |
 *                            | (__| |_| |  _ <| |___
 *                             \___|\___/|_| \_\_____|
 *
 * Copyright (C) 1998 - 2020, Daniel Stenberg, <[email protected]>, et al.
 *
 * This software is licensed as described in the file COPYING, which
 * you should have received as part of this distribution. The terms
 * are also available at https://curl.se/docs/copyright.html.
 *
 * You may opt to use, copy, modify, merge, publish, distribute and/or sell
 * copies of the Software, and permit persons to whom the Software is
 * furnished to do so, under the terms of the COPYING file.
 *
 * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY
 * KIND, either express or implied.
 *
 ***************************************************************************/
/* <DESC>
 * FTP wildcard pattern matching
 * </DESC>
 */
#include <curl/curl.h>
#include <stdio.h>

struct callback_data {
  FILE *output;
};

static long file_is_coming(struct curl_fileinfo *finfo,
                           struct callback_data *data,
                           int remains);

static long file_is_downloaded(struct callback_data *data);

static size_t write_it(char *buff, size_t size, size_t nmemb,
                       void *cb_data);

int main(int argc, char **argv)
{
  /* curl easy handle */
  CURL *handle;

  /* help data */
  struct callback_data data = { 0 };

  /* global initialization */
  int rc = curl_global_init(CURL_GLOBAL_ALL);
  if(rc)
    return rc;

  /* initialization of easy handle */
  handle = curl_easy_init();
  if(!handle) {
    curl_global_cleanup();
    return CURLE_OUT_OF_MEMORY;
  }

  /* turn on wildcard matching */
  curl_easy_setopt(handle, CURLOPT_WILDCARDMATCH, 1L);

  /* callback is called before download of concrete file started */
  curl_easy_setopt(handle, CURLOPT_CHUNK_BGN_FUNCTION, file_is_coming);

  /* callback is called after data from the file have been transferred */
  curl_easy_setopt(handle, CURLOPT_CHUNK_END_FUNCTION, file_is_downloaded);

  /* this callback will write contents into files */
  curl_easy_setopt(handle, CURLOPT_WRITEFUNCTION, write_it);

  /* put transfer data into callbacks */
  curl_easy_setopt(handle, CURLOPT_CHUNK_DATA, &data);
  curl_easy_setopt(handle, CURLOPT_WRITEDATA, &data);

  /* curl_easy_setopt(handle, CURLOPT_VERBOSE, 1L); */

  /* set an URL containing wildcard pattern (only in the last part) */
  if(argc == 2)
    curl_easy_setopt(handle, CURLOPT_URL, argv[1]);
  else
    curl_easy_setopt(handle, CURLOPT_URL, "ftp://example.com/test/*");

  /* and start transfer! */
  rc = curl_easy_perform(handle);

  curl_easy_cleanup(handle);
  curl_global_cleanup();
  return rc;
}

static long file_is_coming(struct curl_fileinfo *finfo,
                           struct callback_data *data,
                           int remains)
{
  printf("%3d %40s %10luB ", remains, finfo->filename,
         (unsigned long)finfo->size);

  switch(finfo->filetype) {
  case CURLFILETYPE_DIRECTORY:
    printf(" DIR\n");
    break;
  case CURLFILETYPE_FILE:
    printf("FILE ");
    break;
  default:
    printf("OTHER\n");
    break;
  }

  if(finfo->filetype == CURLFILETYPE_FILE) {
    /* do not transfer files >= 50B */
    if(finfo->size > 50) {
      printf("SKIPPED\n");
      return CURL_CHUNK_BGN_FUNC_SKIP;
    }

    data->output = fopen(finfo->filename, "wb");
    if(!data->output) {
      return CURL_CHUNK_BGN_FUNC_FAIL;
    }
  }

  return CURL_CHUNK_BGN_FUNC_OK;
}

static long file_is_downloaded(struct callback_data *data)
{
  if(data->output) {
    printf("DOWNLOADED\n");
    fclose(data->output);
    data->output = 0x0;
  }
  return CURL_CHUNK_END_FUNC_OK;
}

static size_t write_it(char *buff, size_t size, size_t nmemb,
                       void *cb_data)
{
  struct callback_data *data = cb_data;
  size_t written = 0;
  if(data->output)
    written = fwrite(buff, size, nmemb, data->output);
  else
    /* listing output */
    written = fwrite(buff, size, nmemb, stdout);
  return written;
}
repos/gpt4all.zig/src/zig-libcurl/curl/docs/examples/url2file.c
/***************************************************************************
 *                                  _   _ ____  _
 *  Project                     ___| | | |  _ \| |
 *                             / __| | | | |_) | |
 *                            | (__| |_| |  _ <| |___
 *                             \___|\___/|_| \_\_____|
 *
 * Copyright (C) 1998 - 2020, Daniel Stenberg, <[email protected]>, et al.
 *
 * This software is licensed as described in the file COPYING, which
 * you should have received as part of this distribution. The terms
 * are also available at https://curl.se/docs/copyright.html.
 *
 * You may opt to use, copy, modify, merge, publish, distribute and/or sell
 * copies of the Software, and permit persons to whom the Software is
 * furnished to do so, under the terms of the COPYING file.
 *
 * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY
 * KIND, either express or implied.
 *
 ***************************************************************************/
/* <DESC>
 * Download a given URL into a local file named page.out.
 * </DESC>
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <curl/curl.h>

static size_t write_data(void *ptr, size_t size, size_t nmemb, void *stream)
{
  size_t written = fwrite(ptr, size, nmemb, (FILE *)stream);
  return written;
}

int main(int argc, char *argv[])
{
  CURL *curl_handle;
  static const char *pagefilename = "page.out";
  FILE *pagefile;

  if(argc < 2) {
    printf("Usage: %s <URL>\n", argv[0]);
    return 1;
  }

  curl_global_init(CURL_GLOBAL_ALL);

  /* init the curl session */
  curl_handle = curl_easy_init();

  /* set URL to get here */
  curl_easy_setopt(curl_handle, CURLOPT_URL, argv[1]);

  /* Switch on full protocol/debug output while testing */
  curl_easy_setopt(curl_handle, CURLOPT_VERBOSE, 1L);

  /* disable progress meter, set to 0L to enable it */
  curl_easy_setopt(curl_handle, CURLOPT_NOPROGRESS, 1L);

  /* send all data to this function */
  curl_easy_setopt(curl_handle, CURLOPT_WRITEFUNCTION, write_data);

  /* open the file */
  pagefile = fopen(pagefilename, "wb");
  if(pagefile) {

    /* write the page body to this file handle */
    curl_easy_setopt(curl_handle, CURLOPT_WRITEDATA, pagefile);

    /* get it! */
    curl_easy_perform(curl_handle);

    /* close the output file */
    fclose(pagefile);
  }

  /* cleanup curl stuff */
  curl_easy_cleanup(curl_handle);

  curl_global_cleanup();

  return 0;
}
repos/gpt4all.zig/src/zig-libcurl/curl/docs/examples/imap-tls.c
/***************************************************************************
 *                                  _   _ ____  _
 *  Project                     ___| | | |  _ \| |
 *                             / __| | | | |_) | |
 *                            | (__| |_| |  _ <| |___
 *                             \___|\___/|_| \_\_____|
 *
 * Copyright (C) 1998 - 2021, Daniel Stenberg, <[email protected]>, et al.
 *
 * This software is licensed as described in the file COPYING, which
 * you should have received as part of this distribution. The terms
 * are also available at https://curl.se/docs/copyright.html.
 *
 * You may opt to use, copy, modify, merge, publish, distribute and/or sell
 * copies of the Software, and permit persons to whom the Software is
 * furnished to do so, under the terms of the COPYING file.
 *
 * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY
 * KIND, either express or implied.
 *
 ***************************************************************************/
/* <DESC>
 * IMAP example using TLS
 * </DESC>
 */

#include <stdio.h>
#include <curl/curl.h>

/* This is a simple example showing how to fetch mail using libcurl's IMAP
 * capabilities. It builds on the imap-fetch.c example adding transport
 * security to protect the authentication details from being snooped.
 *
 * Note that this example requires libcurl 7.30.0 or above.
 */
int main(void)
{
  CURL *curl;
  CURLcode res = CURLE_OK;

  curl = curl_easy_init();
  if(curl) {
    /* Set username and password */
    curl_easy_setopt(curl, CURLOPT_USERNAME, "user");
    curl_easy_setopt(curl, CURLOPT_PASSWORD, "secret");

    /* This will fetch message 1 from the user's inbox */
    curl_easy_setopt(curl, CURLOPT_URL,
                     "imap://imap.example.com/INBOX/;UID=1");

    /* In this example, we will start with a plain text connection, and upgrade
     * to Transport Layer Security (TLS) using the STARTTLS command. Be careful
     * of using CURLUSESSL_TRY here, because if TLS upgrade fails, the transfer
     * will continue anyway - see the security discussion in the libcurl
     * tutorial for more details. */
    curl_easy_setopt(curl, CURLOPT_USE_SSL, (long)CURLUSESSL_ALL);

    /* If your server does not have a valid certificate, then you can disable
     * part of the Transport Layer Security protection by setting the
     * CURLOPT_SSL_VERIFYPEER and CURLOPT_SSL_VERIFYHOST options to 0 (false).
     *
     *   curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 0L);
     *   curl_easy_setopt(curl, CURLOPT_SSL_VERIFYHOST, 0L);
     *
     * That is, in general, a bad idea. It is still better than sending your
     * authentication details in plain text though. Instead, you should get
     * the issuer certificate (or the host certificate if the certificate is
     * self-signed) and add it to the set of certificates that are known to
     * libcurl using CURLOPT_CAINFO and/or CURLOPT_CAPATH. See docs/SSLCERTS
     * for more information. */
    curl_easy_setopt(curl, CURLOPT_CAINFO, "/path/to/certificate.pem");

    /* Since the traffic will be encrypted, it is very useful to turn on debug
     * information within libcurl to see what is happening during the
     * transfer */
    curl_easy_setopt(curl, CURLOPT_VERBOSE, 1L);

    /* Perform the fetch */
    res = curl_easy_perform(curl);

    /* Check for errors */
    if(res != CURLE_OK)
      fprintf(stderr, "curl_easy_perform() failed: %s\n",
              curl_easy_strerror(res));

    /* Always cleanup */
    curl_easy_cleanup(curl);
  }

  return (int)res;
}
repos/gpt4all.zig/src/zig-libcurl/curl/docs/examples/10-at-a-time.c
/***************************************************************************
 *                                  _   _ ____  _
 *  Project                     ___| | | |  _ \| |
 *                             / __| | | | |_) | |
 *                            | (__| |_| |  _ <| |___
 *                             \___|\___/|_| \_\_____|
 *
 * Copyright (C) 1998 - 2020, Daniel Stenberg, <[email protected]>, et al.
 *
 * This software is licensed as described in the file COPYING, which
 * you should have received as part of this distribution. The terms
 * are also available at https://curl.se/docs/copyright.html.
 *
 * You may opt to use, copy, modify, merge, publish, distribute and/or sell
 * copies of the Software, and permit persons to whom the Software is
 * furnished to do so, under the terms of the COPYING file.
 *
 * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY
 * KIND, either express or implied.
 *
 ***************************************************************************/
/* <DESC>
 * Download many files in parallel, in the same thread.
 * </DESC>
 */

#include <errno.h>
#include <stdlib.h>
#include <string.h>
#ifndef WIN32
#  include <unistd.h>
#endif
#include <curl/curl.h>

static const char *urls[] = {
  "https://www.microsoft.com",
  "https://opensource.org",
  "https://www.google.com",
  "https://www.yahoo.com",
  "https://www.ibm.com",
  "https://www.mysql.com",
  "https://www.oracle.com",
  "https://www.ripe.net",
  "https://www.iana.org",
  "https://www.amazon.com",
  "https://www.netcraft.com",
  "https://www.heise.de",
  "https://www.chip.de",
  "https://www.ca.com",
  "https://www.cnet.com",
  "https://www.mozilla.org",
  "https://www.cnn.com",
  "https://www.wikipedia.org",
  "https://www.dell.com",
  "https://www.hp.com",
  "https://www.cert.org",
  "https://www.mit.edu",
  "https://www.nist.gov",
  "https://www.ebay.com",
  "https://www.playstation.com",
  "https://www.uefa.com",
  "https://www.ieee.org",
  "https://www.apple.com",
  "https://www.symantec.com",
  "https://www.zdnet.com",
  "https://www.fujitsu.com/global/",
  "https://www.supermicro.com",
  "https://www.hotmail.com",
  "https://www.ietf.org",
  "https://www.bbc.co.uk",
  "https://news.google.com",
  "https://www.foxnews.com",
  "https://www.msn.com",
  "https://www.wired.com",
  "https://www.sky.com",
  "https://www.usatoday.com",
  "https://www.cbs.com",
  "https://www.nbc.com/",
  "https://slashdot.org",
  "https://www.informationweek.com",
  "https://apache.org",
  "https://www.un.org",
};

#define MAX_PARALLEL 10 /* number of simultaneous transfers */
#define NUM_URLS sizeof(urls)/sizeof(char *)

static size_t write_cb(char *data, size_t n, size_t l, void *userp)
{
  /* take care of the data here, ignored in this example */
  (void)data;
  (void)userp;
  return n*l;
}

static void add_transfer(CURLM *cm, int i)
{
  CURL *eh = curl_easy_init();
  curl_easy_setopt(eh, CURLOPT_WRITEFUNCTION, write_cb);
  curl_easy_setopt(eh, CURLOPT_URL, urls[i]);
  curl_easy_setopt(eh, CURLOPT_PRIVATE, urls[i]);
  curl_multi_add_handle(cm, eh);
}

int main(void)
{
  CURLM *cm;
  CURLMsg *msg;
  unsigned int transfers = 0;
  int msgs_left = -1;
  int still_alive = 1;

  curl_global_init(CURL_GLOBAL_ALL);
  cm = curl_multi_init();

  /* Limit the amount of simultaneous connections curl should allow: */
  curl_multi_setopt(cm, CURLMOPT_MAXCONNECTS, (long)MAX_PARALLEL);

  for(transfers = 0; transfers < MAX_PARALLEL; transfers++)
    add_transfer(cm, transfers);

  do {
    curl_multi_perform(cm, &still_alive);

    while((msg = curl_multi_info_read(cm, &msgs_left))) {
      if(msg->msg == CURLMSG_DONE) {
        char *url;
        CURL *e = msg->easy_handle;
        curl_easy_getinfo(msg->easy_handle, CURLINFO_PRIVATE, &url);
        fprintf(stderr, "R: %d - %s <%s>\n",
                msg->data.result, curl_easy_strerror(msg->data.result), url);
        curl_multi_remove_handle(cm, e);
        curl_easy_cleanup(e);
      }
      else {
        fprintf(stderr, "E: CURLMsg (%d)\n", msg->msg);
      }
      if(transfers < NUM_URLS)
        add_transfer(cm, transfers++);
    }
    if(still_alive)
      curl_multi_wait(cm, NULL, 0, 1000, NULL);

  } while(still_alive || (transfers < NUM_URLS));

  curl_multi_cleanup(cm);
  curl_global_cleanup();

  return EXIT_SUCCESS;
}
repos/gpt4all.zig/src/zig-libcurl/curl/docs/examples/rtsp.c
/*
 * Copyright (c) 2011 - 2021, Jim Hollinger
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 *   * Redistributions of source code must retain the above copyright
 *     notice, this list of conditions and the following disclaimer.
 *   * Redistributions in binary form must reproduce the above copyright
 *     notice, this list of conditions and the following disclaimer in the
 *     documentation and/or other materials provided with the distribution.
 *   * Neither the name of Jim Hollinger nor the names of its contributors
 *     may be used to endorse or promote products derived from this
 *     software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
 * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
 * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
 * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
 * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
 * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
 * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 *
 */

/* <DESC>
 * A basic RTSP transfer
 * </DESC>
 */

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#if defined (WIN32)
#  include <conio.h>  /* _getch() */
#else
#  include <termios.h>
#  include <unistd.h>

static int _getch(void)
{
  struct termios oldt, newt;
  int ch;
  tcgetattr(STDIN_FILENO, &oldt);
  newt = oldt;
  newt.c_lflag &= ~( ICANON | ECHO);
  tcsetattr(STDIN_FILENO, TCSANOW, &newt);
  ch = getchar();
  tcsetattr(STDIN_FILENO, TCSANOW, &oldt);
  return ch;
}
#endif

#include <curl/curl.h>

#define VERSION_STR  "V1.0"

/* error handling macros */
#define my_curl_easy_setopt(A, B, C)                               \
  do {                                                             \
    res = curl_easy_setopt((A), (B), (C));                         \
    if(res != CURLE_OK)                                            \
      fprintf(stderr, "curl_easy_setopt(%s, %s, %s) failed: %d\n", \
              #A, #B, #C, res);                                    \
  } while(0)

#define my_curl_easy_perform(A)                                       \
  do {                                                                \
    res = curl_easy_perform(A);                                       \
    if(res != CURLE_OK)                                               \
      fprintf(stderr, "curl_easy_perform(%s) failed: %d\n", #A, res); \
  } while(0)

/* send RTSP OPTIONS request */
static void rtsp_options(CURL *curl, const char *uri)
{
  CURLcode res = CURLE_OK;
  printf("\nRTSP: OPTIONS %s\n", uri);
  my_curl_easy_setopt(curl, CURLOPT_RTSP_STREAM_URI, uri);
  my_curl_easy_setopt(curl, CURLOPT_RTSP_REQUEST, (long)CURL_RTSPREQ_OPTIONS);
  my_curl_easy_perform(curl);
}

/* send RTSP DESCRIBE request and write sdp response to a file */
static void rtsp_describe(CURL *curl, const char *uri,
                          const char *sdp_filename)
{
  CURLcode res = CURLE_OK;
  FILE *sdp_fp = fopen(sdp_filename, "wb");
  printf("\nRTSP: DESCRIBE %s\n", uri);
  if(!sdp_fp) {
    fprintf(stderr, "Could not open '%s' for writing\n", sdp_filename);
    sdp_fp = stdout;
  }
  else {
    printf("Writing SDP to '%s'\n", sdp_filename);
  }
  my_curl_easy_setopt(curl, CURLOPT_WRITEDATA, sdp_fp);
  my_curl_easy_setopt(curl, CURLOPT_RTSP_REQUEST, (long)CURL_RTSPREQ_DESCRIBE);
  my_curl_easy_perform(curl);
  my_curl_easy_setopt(curl, CURLOPT_WRITEDATA, stdout);
  if(sdp_fp != stdout) {
    fclose(sdp_fp);
  }
}

/* send RTSP SETUP request */
static void rtsp_setup(CURL *curl, const char *uri, const char *transport)
{
  CURLcode res = CURLE_OK;
  printf("\nRTSP: SETUP %s\n", uri);
  printf("      TRANSPORT %s\n", transport);
  my_curl_easy_setopt(curl, CURLOPT_RTSP_STREAM_URI, uri);
  my_curl_easy_setopt(curl, CURLOPT_RTSP_TRANSPORT, transport);
  my_curl_easy_setopt(curl, CURLOPT_RTSP_REQUEST, (long)CURL_RTSPREQ_SETUP);
  my_curl_easy_perform(curl);
}

/* send RTSP PLAY request */
static void rtsp_play(CURL *curl, const char *uri, const char *range)
{
  CURLcode res = CURLE_OK;
  printf("\nRTSP: PLAY %s\n", uri);
  my_curl_easy_setopt(curl, CURLOPT_RTSP_STREAM_URI, uri);
  my_curl_easy_setopt(curl, CURLOPT_RANGE, range);
  my_curl_easy_setopt(curl, CURLOPT_RTSP_REQUEST, (long)CURL_RTSPREQ_PLAY);
  my_curl_easy_perform(curl);

  /* switch off using range again */
  my_curl_easy_setopt(curl, CURLOPT_RANGE, NULL);
}

/* send RTSP TEARDOWN request */
static void rtsp_teardown(CURL *curl, const char *uri)
{
  CURLcode res = CURLE_OK;
  printf("\nRTSP: TEARDOWN %s\n", uri);
  my_curl_easy_setopt(curl, CURLOPT_RTSP_REQUEST, (long)CURL_RTSPREQ_TEARDOWN);
  my_curl_easy_perform(curl);
}

/* convert url into an sdp filename */
static void get_sdp_filename(const char *url, char *sdp_filename,
                             size_t namelen)
{
  const char *s = strrchr(url, '/');
  strcpy(sdp_filename, "video.sdp");
  if(s != NULL) {
    s++;
    if(s[0] != '\0') {
      snprintf(sdp_filename, namelen, "%s.sdp", s);
    }
  }
}

/* scan sdp file for media control attribute */
static void get_media_control_attribute(const char *sdp_filename,
                                        char *control)
{
  int max_len = 256;
  char *s = malloc(max_len);
  FILE *sdp_fp = fopen(sdp_filename, "rb");
  control[0] = '\0';
  if(sdp_fp != NULL) {
    while(fgets(s, max_len - 2, sdp_fp) != NULL) {
      sscanf(s, " a = control: %32s", control);
    }
    fclose(sdp_fp);
  }
  free(s);
}

/* main app */
int main(int argc, char * const argv[])
{
#if 1
  const char *transport = "RTP/AVP;unicast;client_port=1234-1235";  /* UDP */
#else
  /* TCP */
  const char *transport = "RTP/AVP/TCP;unicast;client_port=1234-1235";
#endif
  const char *range = "0.000-";
  int rc = EXIT_SUCCESS;
  char *base_name = NULL;

  printf("\nRTSP request %s\n", VERSION_STR);
  printf("    Project website: "
         "https://github.com/BackupGGCode/rtsprequest\n");
  printf("    Requires curl V7.20 or greater\n\n");

  /* check command line */
  if((argc != 2) && (argc != 3)) {
    base_name = strrchr(argv[0], '/');
    if(!base_name) {
      base_name = strrchr(argv[0], '\\');
    }
    if(!base_name) {
      base_name = argv[0];
    }
    else {
      base_name++;
    }
    printf("Usage:   %s url [transport]\n", base_name);
    printf("         url of video server\n");
    printf("         transport (optional) specifier for media stream"
           " protocol\n");
    printf("         default transport: %s\n", transport);
    printf("Example: %s rtsp://192.168.0.2/media/video1\n\n", base_name);
    rc = EXIT_FAILURE;
  }
  else {
    const char *url = argv[1];
    char *uri = malloc(strlen(url) + 32);
    char *sdp_filename = malloc(strlen(url) + 32);
    char *control = malloc(strlen(url) + 32);
    CURLcode res;
    get_sdp_filename(url, sdp_filename, strlen(url) + 32);
    if(argc == 3) {
      transport = argv[2];
    }

    /* initialize curl */
    res = curl_global_init(CURL_GLOBAL_ALL);
    if(res == CURLE_OK) {
      curl_version_info_data *data = curl_version_info(CURLVERSION_NOW);
      CURL *curl;
      fprintf(stderr, "    curl V%s loaded\n", data->version);

      /* initialize this curl session */
      curl = curl_easy_init();
      if(curl != NULL) {
        my_curl_easy_setopt(curl, CURLOPT_VERBOSE, 0L);
        my_curl_easy_setopt(curl, CURLOPT_NOPROGRESS, 1L);
        my_curl_easy_setopt(curl, CURLOPT_HEADERDATA, stdout);
        my_curl_easy_setopt(curl, CURLOPT_URL, url);

        /* request server options */
        snprintf(uri, strlen(url) + 32, "%s", url);
        rtsp_options(curl, uri);

        /* request session description and write response to sdp file */
        rtsp_describe(curl, uri, sdp_filename);

        /* get media control attribute from sdp file */
        get_media_control_attribute(sdp_filename, control);

        /* setup media stream */
        snprintf(uri, strlen(url) + 32, "%s/%s", url, control);
        rtsp_setup(curl, uri, transport);

        /* start playing media stream */
        snprintf(uri, strlen(url) + 32, "%s/", url);
        rtsp_play(curl, uri, range);
        printf("Playing video, press any key to stop ...");
        _getch();
        printf("\n");

        /* teardown session */
        rtsp_teardown(curl, uri);

        /* cleanup */
        curl_easy_cleanup(curl);
        curl = NULL;
      }
      else {
        fprintf(stderr, "curl_easy_init() failed\n");
      }
      curl_global_cleanup();
    }
    else {
      fprintf(stderr, "curl_global_init(%s) failed: %d\n",
              "CURL_GLOBAL_ALL", res);
    }
    free(control);
    free(sdp_filename);
    free(uri);
  }
  return rc;
}
repos/gpt4all.zig/src/zig-libcurl/curl/docs/examples/pop3-dele.c
/***************************************************************************
 *                                  _   _ ____  _
 *  Project                     ___| | | |  _ \| |
 *                             / __| | | | |_) | |
 *                            | (__| |_| |  _ <| |___
 *                             \___|\___/|_| \_\_____|
 *
 * Copyright (C) 1998 - 2020, Daniel Stenberg, <[email protected]>, et al.
 *
 * This software is licensed as described in the file COPYING, which
 * you should have received as part of this distribution. The terms
 * are also available at https://curl.se/docs/copyright.html.
 *
 * You may opt to use, copy, modify, merge, publish, distribute and/or sell
 * copies of the Software, and permit persons to whom the Software is
 * furnished to do so, under the terms of the COPYING file.
 *
 * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY
 * KIND, either express or implied.
 *
 ***************************************************************************/
/* <DESC>
 * POP3 example showing how to delete e-mails
 * </DESC>
 */

#include <stdio.h>
#include <curl/curl.h>

/* This is a simple example showing how to delete an existing mail using
 * libcurl's POP3 capabilities.
 *
 * Note that this example requires libcurl 7.26.0 or above.
 */
int main(void)
{
  CURL *curl;
  CURLcode res = CURLE_OK;

  curl = curl_easy_init();
  if(curl) {
    /* Set username and password */
    curl_easy_setopt(curl, CURLOPT_USERNAME, "user");
    curl_easy_setopt(curl, CURLOPT_PASSWORD, "secret");

    /* You can specify the message either in the URL or DELE command */
    curl_easy_setopt(curl, CURLOPT_URL, "pop3://pop.example.com/1");

    /* Set the DELE command */
    curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, "DELE");

    /* Do not perform a transfer as DELE returns no data */
    curl_easy_setopt(curl, CURLOPT_NOBODY, 1L);

    /* Perform the custom request */
    res = curl_easy_perform(curl);

    /* Check for errors */
    if(res != CURLE_OK)
      fprintf(stderr, "curl_easy_perform() failed: %s\n",
              curl_easy_strerror(res));

    /* Always cleanup */
    curl_easy_cleanup(curl);
  }

  return (int)res;
}
repos/gpt4all.zig/src/zig-libcurl/curl/docs/examples/smtp-ssl.c
/***************************************************************************
 *                                  _   _ ____  _
 *  Project                     ___| | | |  _ \| |
 *                             / __| | | | |_) | |
 *                            | (__| |_| |  _ <| |___
 *                             \___|\___/|_| \_\_____|
 *
 * Copyright (C) 1998 - 2021, Daniel Stenberg, <[email protected]>, et al.
 *
 * This software is licensed as described in the file COPYING, which
 * you should have received as part of this distribution. The terms
 * are also available at https://curl.se/docs/copyright.html.
 *
 * You may opt to use, copy, modify, merge, publish, distribute and/or sell
 * copies of the Software, and permit persons to whom the Software is
 * furnished to do so, under the terms of the COPYING file.
 *
 * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY
 * KIND, either express or implied.
 *
 ***************************************************************************/
/* <DESC>
 * SMTP example using SSL
 * </DESC>
 */

#include <stdio.h>
#include <string.h>
#include <curl/curl.h>

/* This is a simple example showing how to send mail using libcurl's SMTP
 * capabilities. It builds on the smtp-mail.c example to add authentication
 * and, more importantly, transport security to protect the authentication
 * details from being snooped.
 *
 * Note that this example requires libcurl 7.20.0 or above.
 */

#define FROM_MAIL "<[email protected]>"
#define TO_MAIL   "<[email protected]>"
#define CC_MAIL   "<[email protected]>"

static const char *payload_text =
  "Date: Mon, 29 Nov 2010 21:54:29 +1100\r\n"
  "To: " TO_MAIL "\r\n"
  "From: " FROM_MAIL "\r\n"
  "Cc: " CC_MAIL "\r\n"
  "Message-ID: <dcd7cb36-11db-487a-9f3a-e652a9458efd@"
  "rfcpedant.example.org>\r\n"
  "Subject: SMTP example message\r\n"
  "\r\n" /* empty line to divide headers from body, see RFC5322 */
  "The body of the message starts here.\r\n"
  "\r\n"
  "It could be a lot of lines, could be MIME encoded, whatever.\r\n"
  "Check RFC5322.\r\n";

struct upload_status {
  size_t bytes_read;
};

static size_t payload_source(char *ptr, size_t size, size_t nmemb,
                             void *userp)
{
  struct upload_status *upload_ctx = (struct upload_status *)userp;
  const char *data;
  size_t room = size * nmemb;

  if((size == 0) || (nmemb == 0) || ((size*nmemb) < 1)) {
    return 0;
  }

  data = &payload_text[upload_ctx->bytes_read];

  if(data) {
    size_t len = strlen(data);
    if(room < len)
      len = room;
    memcpy(ptr, data, len);
    upload_ctx->bytes_read += len;

    return len;
  }

  return 0;
}

int main(void)
{
  CURL *curl;
  CURLcode res = CURLE_OK;
  struct curl_slist *recipients = NULL;
  struct upload_status upload_ctx = { 0 };

  curl = curl_easy_init();
  if(curl) {
    /* Set username and password */
    curl_easy_setopt(curl, CURLOPT_USERNAME, "user");
    curl_easy_setopt(curl, CURLOPT_PASSWORD, "secret");

    /* This is the URL for your mailserver. Note the use of smtps:// rather
     * than smtp:// to request a SSL based connection. */
    curl_easy_setopt(curl, CURLOPT_URL, "smtps://mainserver.example.net");

    /* If you want to connect to a site who is not using a certificate that is
     * signed by one of the certs in the CA bundle you have, you can skip the
     * verification of the server's certificate. This makes the connection
     * A LOT LESS SECURE.
     *
     * If you have a CA cert for the server stored someplace else than in the
     * default bundle, then the CURLOPT_CAPATH option might come handy for
     * you. */
#ifdef SKIP_PEER_VERIFICATION
    curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 0L);
#endif

    /* If the site you are connecting to uses a different host name that what
     * they have mentioned in their server certificate's commonName (or
     * subjectAltName) fields, libcurl will refuse to connect. You can skip
     * this check, but this will make the connection less secure. */
#ifdef SKIP_HOSTNAME_VERIFICATION
    curl_easy_setopt(curl, CURLOPT_SSL_VERIFYHOST, 0L);
#endif

    /* Note that this option is not strictly required, omitting it will result
     * in libcurl sending the MAIL FROM command with empty sender data. All
     * autoresponses should have an empty reverse-path, and should be directed
     * to the address in the reverse-path which triggered them. Otherwise,
     * they could cause an endless loop. See RFC 5321 Section 4.5.5 for more
     * details.
     */
    curl_easy_setopt(curl, CURLOPT_MAIL_FROM, FROM_MAIL);

    /* Add two recipients, in this particular case they correspond to the
     * To: and Cc: addressees in the header, but they could be any kind of
     * recipient. */
    recipients = curl_slist_append(recipients, TO_MAIL);
    recipients = curl_slist_append(recipients, CC_MAIL);
    curl_easy_setopt(curl, CURLOPT_MAIL_RCPT, recipients);

    /* We are using a callback function to specify the payload (the headers
     * and body of the message). You could just use the CURLOPT_READDATA
     * option to specify a FILE pointer to read from. */
    curl_easy_setopt(curl, CURLOPT_READFUNCTION, payload_source);
    curl_easy_setopt(curl, CURLOPT_READDATA, &upload_ctx);
    curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);

    /* Since the traffic will be encrypted, it is very useful to turn on debug
     * information within libcurl to see what is happening during the
     * transfer */
    curl_easy_setopt(curl, CURLOPT_VERBOSE, 1L);

    /* Send the message */
    res = curl_easy_perform(curl);

    /* Check for errors */
    if(res != CURLE_OK)
      fprintf(stderr, "curl_easy_perform() failed: %s\n",
              curl_easy_strerror(res));

    /* Free the list of recipients */
    curl_slist_free_all(recipients);

    /* Always cleanup */
    curl_easy_cleanup(curl);
  }

  return (int)res;
}

repos/gpt4all.zig/src/zig-libcurl/curl/docs/examples/imap-ssl.c
/*************************************************************************** * _ _ ____ _ * Project ___| | | | _ \| | * / __| | | | |_) | | * | (__| |_| | _ <| |___ * \___|\___/|_| \_\_____| * * Copyright (C) 1998 - 2021, Daniel Stenberg, <[email protected]>, et al. * * This software is licensed as described in the file COPYING, which * you should have received as part of this distribution. The terms * are also available at https://curl.se/docs/copyright.html. * * You may opt to use, copy, modify, merge, publish, distribute and/or sell * copies of the Software, and permit persons to whom the Software is * furnished to do so, under the terms of the COPYING file. * * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY * KIND, either express or implied. * ***************************************************************************/ /* <DESC> * IMAP example using SSL * </DESC> */ #include <stdio.h> #include <curl/curl.h> /* This is a simple example showing how to fetch mail using libcurl's IMAP * capabilities. It builds on the imap-fetch.c example adding transport * security to protect the authentication details from being snooped. * * Note that this example requires libcurl 7.30.0 or above. */ int main(void) { CURL *curl; CURLcode res = CURLE_OK; curl = curl_easy_init(); if(curl) { /* Set username and password */ curl_easy_setopt(curl, CURLOPT_USERNAME, "user"); curl_easy_setopt(curl, CURLOPT_PASSWORD, "secret"); /* This will fetch message 1 from the user's inbox. Note the use of * imaps:// rather than imap:// to request a SSL based connection. */ curl_easy_setopt(curl, CURLOPT_URL, "imaps://imap.example.com/INBOX/;UID=1"); /* If you want to connect to a site who is not using a certificate that is * signed by one of the certs in the CA bundle you have, you can skip the * verification of the server's certificate. This makes the connection * A LOT LESS SECURE. 
* * If you have a CA cert for the server stored someplace else than in the * default bundle, then the CURLOPT_CAPATH option might come handy for * you. */ #ifdef SKIP_PEER_VERIFICATION curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 0L); #endif /* If the site you are connecting to uses a different host name that what * they have mentioned in their server certificate's commonName (or * subjectAltName) fields, libcurl will refuse to connect. You can skip * this check, but this will make the connection less secure. */ #ifdef SKIP_HOSTNAME_VERIFICATION curl_easy_setopt(curl, CURLOPT_SSL_VERIFYHOST, 0L); #endif /* Since the traffic will be encrypted, it is very useful to turn on debug * information within libcurl to see what is happening during the * transfer */ curl_easy_setopt(curl, CURLOPT_VERBOSE, 1L); /* Perform the fetch */ res = curl_easy_perform(curl); /* Check for errors */ if(res != CURLE_OK) fprintf(stderr, "curl_easy_perform() failed: %s\n", curl_easy_strerror(res)); /* Always cleanup */ curl_easy_cleanup(curl); } return (int)res; }
repos/gpt4all.zig/src/zig-libcurl/curl/docs/examples/smtp-multi.c
/*************************************************************************** * _ _ ____ _ * Project ___| | | | _ \| | * / __| | | | |_) | | * | (__| |_| | _ <| |___ * \___|\___/|_| \_\_____| * * Copyright (C) 1998 - 2021, Daniel Stenberg, <[email protected]>, et al. * * This software is licensed as described in the file COPYING, which * you should have received as part of this distribution. The terms * are also available at https://curl.se/docs/copyright.html. * * You may opt to use, copy, modify, merge, publish, distribute and/or sell * copies of the Software, and permit persons to whom the Software is * furnished to do so, under the terms of the COPYING file. * * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY * KIND, either express or implied. * ***************************************************************************/ /* <DESC> * SMTP example using the multi interface * </DESC> */ #include <string.h> #include <curl/curl.h> /* This is an example showing how to send mail using libcurl's SMTP * capabilities. It builds on the smtp-mail.c example to demonstrate how to use * libcurl's multi interface. 
*/ #define FROM_MAIL "<[email protected]>" #define TO_MAIL "<[email protected]>" #define CC_MAIL "<[email protected]>" static const char *payload_text = "Date: Mon, 29 Nov 2010 21:54:29 +1100\r\n" "To: " TO_MAIL "\r\n" "From: " FROM_MAIL "\r\n" "Cc: " CC_MAIL "\r\n" "Message-ID: <dcd7cb36-11db-487a-9f3a-e652a9458efd@" "rfcpedant.example.org>\r\n" "Subject: SMTP example message\r\n" "\r\n" /* empty line to divide headers from body, see RFC5322 */ "The body of the message starts here.\r\n" "\r\n" "It could be a lot of lines, could be MIME encoded, whatever.\r\n" "Check RFC5322.\r\n"; struct upload_status { size_t bytes_read; }; static size_t payload_source(char *ptr, size_t size, size_t nmemb, void *userp) { struct upload_status *upload_ctx = (struct upload_status *)userp; const char *data; size_t room = size * nmemb; if((size == 0) || (nmemb == 0) || ((size*nmemb) < 1)) { return 0; } data = &payload_text[upload_ctx->bytes_read]; if(data) { size_t len = strlen(data); if(room < len) len = room; memcpy(ptr, data, len); upload_ctx->bytes_read += len; return len; } return 0; } int main(void) { CURL *curl; CURLM *mcurl; int still_running = 1; struct curl_slist *recipients = NULL; struct upload_status upload_ctx = { 0 }; curl_global_init(CURL_GLOBAL_DEFAULT); curl = curl_easy_init(); if(!curl) return 1; mcurl = curl_multi_init(); if(!mcurl) return 2; /* This is the URL for your mailserver */ curl_easy_setopt(curl, CURLOPT_URL, "smtp://mail.example.com"); /* Note that this option is not strictly required, omitting it will result in * libcurl sending the MAIL FROM command with empty sender data. All * autoresponses should have an empty reverse-path, and should be directed * to the address in the reverse-path which triggered them. Otherwise, they * could cause an endless loop. See RFC 5321 Section 4.5.5 for more details. 
*/ curl_easy_setopt(curl, CURLOPT_MAIL_FROM, FROM_MAIL); /* Add two recipients, in this particular case they correspond to the * To: and Cc: addressees in the header, but they could be any kind of * recipient. */ recipients = curl_slist_append(recipients, TO_MAIL); recipients = curl_slist_append(recipients, CC_MAIL); curl_easy_setopt(curl, CURLOPT_MAIL_RCPT, recipients); /* We are using a callback function to specify the payload (the headers and * body of the message). You could just use the CURLOPT_READDATA option to * specify a FILE pointer to read from. */ curl_easy_setopt(curl, CURLOPT_READFUNCTION, payload_source); curl_easy_setopt(curl, CURLOPT_READDATA, &upload_ctx); curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L); /* Tell the multi stack about our easy handle */ curl_multi_add_handle(mcurl, curl); do { CURLMcode mc = curl_multi_perform(mcurl, &still_running); if(still_running) /* wait for activity, timeout or "nothing" */ mc = curl_multi_poll(mcurl, NULL, 0, 1000, NULL); if(mc) break; } while(still_running); /* Free the list of recipients */ curl_slist_free_all(recipients); /* Always cleanup */ curl_multi_remove_handle(mcurl, curl); curl_multi_cleanup(mcurl); curl_easy_cleanup(curl); curl_global_cleanup(); return 0; }
repos/gpt4all.zig/src/zig-libcurl/curl/docs/examples/getinmemory.c
/*************************************************************************** * _ _ ____ _ * Project ___| | | | _ \| | * / __| | | | |_) | | * | (__| |_| | _ <| |___ * \___|\___/|_| \_\_____| * * Copyright (C) 1998 - 2021, Daniel Stenberg, <[email protected]>, et al. * * This software is licensed as described in the file COPYING, which * you should have received as part of this distribution. The terms * are also available at https://curl.se/docs/copyright.html. * * You may opt to use, copy, modify, merge, publish, distribute and/or sell * copies of the Software, and permit persons to whom the Software is * furnished to do so, under the terms of the COPYING file. * * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY * KIND, either express or implied. * ***************************************************************************/ /* <DESC> * Shows how the write callback function can be used to download data into a * chunk of memory instead of storing it in a file. * </DESC> */ #include <stdio.h> #include <stdlib.h> #include <string.h> #include <curl/curl.h> struct MemoryStruct { char *memory; size_t size; }; static size_t WriteMemoryCallback(void *contents, size_t size, size_t nmemb, void *userp) { size_t realsize = size * nmemb; struct MemoryStruct *mem = (struct MemoryStruct *)userp; char *ptr = realloc(mem->memory, mem->size + realsize + 1); if(!ptr) { /* out of memory! 
*/ printf("not enough memory (realloc returned NULL)\n"); return 0; } mem->memory = ptr; memcpy(&(mem->memory[mem->size]), contents, realsize); mem->size += realsize; mem->memory[mem->size] = 0; return realsize; } int main(void) { CURL *curl_handle; CURLcode res; struct MemoryStruct chunk; chunk.memory = malloc(1); /* will be grown as needed by the realloc above */ chunk.size = 0; /* no data at this point */ curl_global_init(CURL_GLOBAL_ALL); /* init the curl session */ curl_handle = curl_easy_init(); /* specify URL to get */ curl_easy_setopt(curl_handle, CURLOPT_URL, "https://www.example.com/"); /* send all data to this function */ curl_easy_setopt(curl_handle, CURLOPT_WRITEFUNCTION, WriteMemoryCallback); /* we pass our 'chunk' struct to the callback function */ curl_easy_setopt(curl_handle, CURLOPT_WRITEDATA, (void *)&chunk); /* some servers do not like requests that are made without a user-agent field, so we provide one */ curl_easy_setopt(curl_handle, CURLOPT_USERAGENT, "libcurl-agent/1.0"); /* get it! */ res = curl_easy_perform(curl_handle); /* check for errors */ if(res != CURLE_OK) { fprintf(stderr, "curl_easy_perform() failed: %s\n", curl_easy_strerror(res)); } else { /* * Now, our chunk.memory points to a memory block that is chunk.size * bytes big and contains the remote file. * * Do something nice with it! */ printf("%lu bytes retrieved\n", (unsigned long)chunk.size); } /* cleanup curl stuff */ curl_easy_cleanup(curl_handle); free(chunk.memory); /* we are done with libcurl, so clean it up */ curl_global_cleanup(); return 0; }
repos/gpt4all.zig/src/zig-libcurl/curl/docs/examples/httpcustomheader.c
/*************************************************************************** * _ _ ____ _ * Project ___| | | | _ \| | * / __| | | | |_) | | * | (__| |_| | _ <| |___ * \___|\___/|_| \_\_____| * * Copyright (C) 1998 - 2021, Daniel Stenberg, <[email protected]>, et al. * * This software is licensed as described in the file COPYING, which * you should have received as part of this distribution. The terms * are also available at https://curl.se/docs/copyright.html. * * You may opt to use, copy, modify, merge, publish, distribute and/or sell * copies of the Software, and permit persons to whom the Software is * furnished to do so, under the terms of the COPYING file. * * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY * KIND, either express or implied. * ***************************************************************************/ /* <DESC> * HTTP request with custom modified, removed and added headers * </DESC> */ #include <stdio.h> #include <curl/curl.h> int main(void) { CURL *curl; CURLcode res; curl = curl_easy_init(); if(curl) { struct curl_slist *chunk = NULL; /* Remove a header curl would otherwise add by itself */ chunk = curl_slist_append(chunk, "Accept:"); /* Add a custom header */ chunk = curl_slist_append(chunk, "Another: yes"); /* Modify a header curl otherwise adds differently */ chunk = curl_slist_append(chunk, "Host: example.com"); /* Add a header with "blank" contents to the right of the colon. Note that we are then using a semicolon in the string we pass to curl! 
*/ chunk = curl_slist_append(chunk, "X-silly-header;"); /* set our custom set of headers */ curl_easy_setopt(curl, CURLOPT_HTTPHEADER, chunk); curl_easy_setopt(curl, CURLOPT_URL, "localhost"); curl_easy_setopt(curl, CURLOPT_VERBOSE, 1L); res = curl_easy_perform(curl); /* Check for errors */ if(res != CURLE_OK) fprintf(stderr, "curl_easy_perform() failed: %s\n", curl_easy_strerror(res)); /* always cleanup */ curl_easy_cleanup(curl); /* free the custom headers */ curl_slist_free_all(chunk); } return 0; }
repos/gpt4all.zig/src/zig-libcurl/curl/docs/examples/shared-connection-cache.c
/*************************************************************************** * _ _ ____ _ * Project ___| | | | _ \| | * / __| | | | |_) | | * | (__| |_| | _ <| |___ * \___|\___/|_| \_\_____| * * Copyright (C) 1998 - 2020, Daniel Stenberg, <[email protected]>, et al. * * This software is licensed as described in the file COPYING, which * you should have received as part of this distribution. The terms * are also available at https://curl.se/docs/copyright.html. * * You may opt to use, copy, modify, merge, publish, distribute and/or sell * copies of the Software, and permit persons to whom the Software is * furnished to do so, under the terms of the COPYING file. * * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY * KIND, either express or implied. * ***************************************************************************/ /* <DESC> * Connection cache shared between easy handles with the share interface * </DESC> */ #include <stdio.h> #include <curl/curl.h> static void my_lock(CURL *handle, curl_lock_data data, curl_lock_access laccess, void *useptr) { (void)handle; (void)data; (void)laccess; (void)useptr; fprintf(stderr, "-> Mutex lock\n"); } static void my_unlock(CURL *handle, curl_lock_data data, void *useptr) { (void)handle; (void)data; (void)useptr; fprintf(stderr, "<- Mutex unlock\n"); } int main(void) { CURLSH *share; int i; share = curl_share_init(); curl_share_setopt(share, CURLSHOPT_SHARE, CURL_LOCK_DATA_CONNECT); curl_share_setopt(share, CURLSHOPT_LOCKFUNC, my_lock); curl_share_setopt(share, CURLSHOPT_UNLOCKFUNC, my_unlock); /* Loop the transfer and cleanup the handle properly every lap. This will still reuse connections since the pool is in the shared object! 
*/ for(i = 0; i < 3; i++) { CURL *curl = curl_easy_init(); if(curl) { CURLcode res; curl_easy_setopt(curl, CURLOPT_URL, "https://curl.se/"); /* use the share object */ curl_easy_setopt(curl, CURLOPT_SHARE, share); /* Perform the request, res will get the return code */ res = curl_easy_perform(curl); /* Check for errors */ if(res != CURLE_OK) fprintf(stderr, "curl_easy_perform() failed: %s\n", curl_easy_strerror(res)); /* always cleanup */ curl_easy_cleanup(curl); } } curl_share_cleanup(share); return 0; }
repos/gpt4all.zig/src/zig-libcurl/curl/docs/examples/xmlstream.c
/*************************************************************************** * _ _ ____ _ * Project ___| | | | _ \| | * / __| | | | |_) | | * | (__| |_| | _ <| |___ * \___|\___/|_| \_\_____| * * Copyright (C) 1998 - 2021, Daniel Stenberg, <[email protected]>, et al. * * This software is licensed as described in the file COPYING, which * you should have received as part of this distribution. The terms * are also available at https://curl.se/docs/copyright.html. * * You may opt to use, copy, modify, merge, publish, distribute and/or sell * copies of the Software, and permit persons to whom the Software is * furnished to do so, under the terms of the COPYING file. * * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY * KIND, either express or implied. * ***************************************************************************/ /* <DESC> * Stream-parse a document using the streaming Expat parser. * </DESC> */ /* Written by David Strauss * * Expat => https://libexpat.github.io/ * * gcc -Wall -I/usr/local/include xmlstream.c -lcurl -lexpat -o xmlstream * */ #include <stdio.h> #include <stdlib.h> #include <string.h> #include <assert.h> #include <expat.h> #include <curl/curl.h> struct MemoryStruct { char *memory; size_t size; }; struct ParserStruct { int ok; size_t tags; size_t depth; struct MemoryStruct characters; }; static void startElement(void *userData, const XML_Char *name, const XML_Char **atts) { struct ParserStruct *state = (struct ParserStruct *) userData; state->tags++; state->depth++; /* Get a clean slate for reading in character data. */ free(state->characters.memory); state->characters.memory = NULL; state->characters.size = 0; } static void characterDataHandler(void *userData, const XML_Char *s, int len) { struct ParserStruct *state = (struct ParserStruct *) userData; struct MemoryStruct *mem = &state->characters; char *ptr = realloc(mem->memory, mem->size + len + 1); if(!ptr) { /* Out of memory. 
*/ fprintf(stderr, "Not enough memory (realloc returned NULL).\n"); state->ok = 0; return; } mem->memory = ptr; memcpy(&(mem->memory[mem->size]), s, len); mem->size += len; mem->memory[mem->size] = 0; } static void endElement(void *userData, const XML_Char *name) { struct ParserStruct *state = (struct ParserStruct *) userData; state->depth--; printf("%5lu %10lu %s\n", state->depth, state->characters.size, name); } static size_t parseStreamCallback(void *contents, size_t length, size_t nmemb, void *userp) { XML_Parser parser = (XML_Parser) userp; size_t real_size = length * nmemb; struct ParserStruct *state = (struct ParserStruct *) XML_GetUserData(parser); /* Only parse if we are not already in a failure state. */ if(state->ok && XML_Parse(parser, contents, real_size, 0) == 0) { int error_code = XML_GetErrorCode(parser); fprintf(stderr, "Parsing response buffer of length %lu failed" " with error code %d (%s).\n", real_size, error_code, XML_ErrorString(error_code)); state->ok = 0; } return real_size; } int main(void) { CURL *curl_handle; CURLcode res; XML_Parser parser; struct ParserStruct state; /* Initialize the state structure for parsing. */ memset(&state, 0, sizeof(struct ParserStruct)); state.ok = 1; /* Initialize a namespace-aware parser. */ parser = XML_ParserCreateNS(NULL, '\0'); XML_SetUserData(parser, &state); XML_SetElementHandler(parser, startElement, endElement); XML_SetCharacterDataHandler(parser, characterDataHandler); /* Initialize a libcurl handle. */ curl_global_init(CURL_GLOBAL_DEFAULT); curl_handle = curl_easy_init(); curl_easy_setopt(curl_handle, CURLOPT_URL, "https://www.w3schools.com/xml/simple.xml"); curl_easy_setopt(curl_handle, CURLOPT_WRITEFUNCTION, parseStreamCallback); curl_easy_setopt(curl_handle, CURLOPT_WRITEDATA, (void *)parser); printf("Depth Characters Closing Tag\n"); /* Perform the request and any follow-up parsing. 
*/ res = curl_easy_perform(curl_handle); if(res != CURLE_OK) { fprintf(stderr, "curl_easy_perform() failed: %s\n", curl_easy_strerror(res)); } else if(state.ok) { /* Expat requires one final call to finalize parsing. */ if(XML_Parse(parser, NULL, 0, 1) == 0) { int error_code = XML_GetErrorCode(parser); fprintf(stderr, "Finalizing parsing failed with error code %d (%s).\n", error_code, XML_ErrorString(error_code)); } else { printf(" --------------\n"); printf(" %lu tags total\n", state.tags); } } /* Clean up. */ free(state.characters.memory); XML_ParserFree(parser); curl_easy_cleanup(curl_handle); curl_global_cleanup(); return 0; }
repos/gpt4all.zig/src/zig-libcurl/curl/docs/examples/multi-post.c
/*************************************************************************** * _ _ ____ _ * Project ___| | | | _ \| | * / __| | | | |_) | | * | (__| |_| | _ <| |___ * \___|\___/|_| \_\_____| * * Copyright (C) 1998 - 2021, Daniel Stenberg, <[email protected]>, et al. * * This software is licensed as described in the file COPYING, which * you should have received as part of this distribution. The terms * are also available at https://curl.se/docs/copyright.html. * * You may opt to use, copy, modify, merge, publish, distribute and/or sell * copies of the Software, and permit persons to whom the Software is * furnished to do so, under the terms of the COPYING file. * * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY * KIND, either express or implied. * ***************************************************************************/ /* <DESC> * using the multi interface to do a multipart formpost without blocking * </DESC> */ #include <stdio.h> #include <string.h> #include <sys/time.h> #include <curl/curl.h> int main(void) { CURL *curl; CURLM *multi_handle; int still_running = 0; curl_mime *form = NULL; curl_mimepart *field = NULL; struct curl_slist *headerlist = NULL; static const char buf[] = "Expect:"; curl = curl_easy_init(); multi_handle = curl_multi_init(); if(curl && multi_handle) { /* Create the form */ form = curl_mime_init(curl); /* Fill in the file upload field */ field = curl_mime_addpart(form); curl_mime_name(field, "sendfile"); curl_mime_filedata(field, "multi-post.c"); /* Fill in the filename field */ field = curl_mime_addpart(form); curl_mime_name(field, "filename"); curl_mime_data(field, "multi-post.c", CURL_ZERO_TERMINATED); /* Fill in the submit field too, even if this is rarely needed */ field = curl_mime_addpart(form); curl_mime_name(field, "submit"); curl_mime_data(field, "send", CURL_ZERO_TERMINATED); /* initialize custom header list (stating that Expect: 100-continue is not wanted */ headerlist = 
curl_slist_append(headerlist, buf); /* what URL that receives this POST */ curl_easy_setopt(curl, CURLOPT_URL, "https://www.example.com/upload.cgi"); curl_easy_setopt(curl, CURLOPT_VERBOSE, 1L); curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headerlist); curl_easy_setopt(curl, CURLOPT_MIMEPOST, form); curl_multi_add_handle(multi_handle, curl); do { CURLMcode mc = curl_multi_perform(multi_handle, &still_running); if(still_running) /* wait for activity, timeout or "nothing" */ mc = curl_multi_poll(multi_handle, NULL, 0, 1000, NULL); if(mc) break; } while(still_running); curl_multi_cleanup(multi_handle); /* always cleanup */ curl_easy_cleanup(curl); /* then cleanup the form */ curl_mime_free(form); /* free slist */ curl_slist_free_all(headerlist); } return 0; }
repos/gpt4all.zig/src/zig-libcurl/curl/docs/examples/cacertinmem.c
/*************************************************************************** * _ _ ____ _ * Project ___| | | | _ \| | * / __| | | | |_) | | * | (__| |_| | _ <| |___ * \___|\___/|_| \_\_____| * * Copyright (C) 1998 - 2020, Daniel Stenberg, <[email protected]>, et al. * * This software is licensed as described in the file COPYING, which * you should have received as part of this distribution. The terms * are also available at https://curl.se/docs/copyright.html. * * You may opt to use, copy, modify, merge, publish, distribute and/or sell * copies of the Software, and permit persons to whom the Software is * furnished to do so, under the terms of the COPYING file. * * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY * KIND, either express or implied. * ***************************************************************************/ /* <DESC> * CA cert in memory with OpenSSL to get a HTTPS page. * </DESC> */ #include <openssl/err.h> #include <openssl/ssl.h> #include <curl/curl.h> #include <stdio.h> static size_t writefunction(void *ptr, size_t size, size_t nmemb, void *stream) { fwrite(ptr, size, nmemb, (FILE *)stream); return (nmemb*size); } static CURLcode sslctx_function(CURL *curl, void *sslctx, void *parm) { CURLcode rv = CURLE_ABORTED_BY_CALLBACK; /** This example uses two (fake) certificates **/ static const char mypem[] = "-----BEGIN CERTIFICATE-----\n" "MIIH0zCCBbugAwIBAgIIXsO3pkN/pOAwDQYJKoZIhvcNAQEFBQAwQjESMBAGA1UE\n" "AwwJQUNDVlJBSVoxMRAwDgYDVQQLDAdQS0lBQ0NWMQ0wCwYDVQQKDARBQ0NWMQsw\n" "CQYDVQQGEwJFUzAeFw0xMTA1MDUwOTM3MzdaFw0zMDEyMzEwOTM3MzdaMEIxEjAQ\n" "BgNVBAMMCUFDQ1ZSQUlaMTEQMA4GA1UECwwHUEtJQUNDVjENMAsGA1UECgwEQUND\n" "VjELMAkGA1UEBhMCRVMwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQCb\n" "qau/YUqXry+XZpp0X9DZlv3P4uRm7x8fRzPCRKPfmt4ftVTdFXxpNRFvu8gMjmoY\n" "HtiP2Ra8EEg2XPBjs5BaXCQ316PWywlxufEBcoSwfdtNgM3802/J+Nq2DoLSRYWo\n" "G2ioPej0RGy9ocLLA76MPhMAhN9KSMDjIgro6TenGEyxCQ0jVn8ETdkXhBilyNpA\n" 
"0KIV9VMJcRz/RROE5iZe+OCIHAr8Fraocwa48GOEAqDGWuzndN9wrqODJerWx5eH\n" "k6fGioozl2A3ED6XPm4pFdahD9GILBKfb6qkxkLrQaLjlUPTAYVtjrs78yM2x/47\n" "JyCpZET/LtZ1qmxNYEAZSUNUY9rizLpm5U9EelvZaoErQNV/+QEnWCzI7UiRfD+m\n" "AM/EKXMRNt6GGT6d7hmKG9Ww7Y49nCrADdg9ZuM8Db3VlFzi4qc1GwQA9j9ajepD\n" "vV+JHanBsMyZ4k0ACtrJJ1vnE5Bc5PUzolVt3OAJTS+xJlsndQAJxGJ3KQhfnlms\n" "tn6tn1QwIgPBHnFk/vk4CpYY3QIUrCPLBhwepH2NDd4nQeit2hW3sCPdK6jT2iWH\n" "7ehVRE2I9DZ+hJp4rPcOVkkO1jMl1oRQQmwgEh0q1b688nCBpHBgvgW1m54ERL5h\n" "I6zppSSMEYCUWqKiuUnSwdzRp+0xESyeGabu4VXhwOrPDYTkF7eifKXeVSUG7szA\n" "h1xA2syVP1XgNce4hL60Xc16gwFy7ofmXx2utYXGJt/mwZrpHgJHnyqobalbz+xF\n" "d3+YJ5oyXSrjhO7FmGYvliAd3djDJ9ew+f7Zfc3Qn48LFFhRny+Lwzgt3uiP1o2H\n" "pPVWQxaZLPSkVrQ0uGE3ycJYgBugl6H8WY3pEfbRD0tVNEYqi4Y7\n" "-----END CERTIFICATE-----\n" "-----BEGIN CERTIFICATE-----\n" "MIIFtTCCA52gAwIBAgIIYY3HhjsBggUwDQYJKoZIhvcNAQEFBQAwRDEWMBQGA1UE\n" "AwwNQUNFRElDT00gUm9vdDEMMAoGA1UECwwDUEtJMQ8wDQYDVQQKDAZFRElDT00x\n" "CzAJBgNVBAYTAkVTMB4XDTA4MDQxODE2MjQyMloXDTI4MDQxMzE2MjQyMlowRDEW\n" "MBQGA1UEAwwNQUNFRElDT00gUm9vdDEMMAoGA1UECwwDUEtJMQ8wDQYDVQQKDAZF\n" "RElDT00xCzAJBgNVBAYTAkVTMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKC\n" "AgEA/5KV4WgGdrQsyFhIyv2AVClVYyT/kGWbEHV7w2rbYgIB8hiGtXxaOLHkWLn7\n" "09gtn70yN78sFW2+tfQh0hOR2QetAQXW8713zl9CgQr5auODAKgrLlUTY4HKRxx7\n" "XBZXehuDYAQ6PmXDzQHe3qTWDLqO3tkE7hdWIpuPY/1NFgu3e3eM+SW10W2ZEi5P\n" "gvoFNTPhNahXwOf9jU8/kzJPeGYDdwdY6ZXIfj7QeQCM8htRM5u8lOk6e25SLTKe\n" "I6RF+7YuE7CLGLHdztUdp0J/Vb77W7tH1PwkzQSulgUV1qzOMPPKC8W64iLgpq0i\n" "5ALudBF/TP94HTXa5gI06xgSYXcGCRZj6hitoocf8seACQl1ThCojz2GuHURwCRi\n" "ipZ7SkXp7FnFvmuD5uHorLUwHv4FB4D54SMNUI8FmP8sX+g7tq3PgbUhh8oIKiMn\n" "MCArz+2UW6yyetLHKKGKC5tNSixthT8Jcjxn4tncB7rrZXtaAWPWkFtPF2Y9fwsZ\n" "o5NjEFIqnxQWWOLcpfShFosOkYuByptZ+thrkQdlVV9SH686+5DdaaVbnG0OLLb6\n" "zqylfDJKZ0DcMDQj3dcEI2bw/FWAp/tmGYI1Z2JwOV5vx+qQQEQIHriy1tvuWacN\n" "GHk0vFQYXlPKNFHtRQrmjseCNj6nOGOpMCwXEGCSn1WHElkQwg9naRHMTh5+Spqt\n" "r0CodaxWkHS4oJyleW/c6RrIaQXpuvoDs3zk4E7Czp3otkYNbn5XOmeUwssfnHdK\n" 
"Z05phkOTOPu220+DkdRgfks+KzgHVZhepA==\n" "-----END CERTIFICATE-----\n"; BIO *cbio = BIO_new_mem_buf(mypem, sizeof(mypem)); X509_STORE *cts = SSL_CTX_get_cert_store((SSL_CTX *)sslctx); int i; STACK_OF(X509_INFO) *inf; (void)curl; (void)parm; if(!cts || !cbio) { return rv; } inf = PEM_X509_INFO_read_bio(cbio, NULL, NULL, NULL); if(!inf) { BIO_free(cbio); return rv; } for(i = 0; i < sk_X509_INFO_num(inf); i++) { X509_INFO *itmp = sk_X509_INFO_value(inf, i); if(itmp->x509) { X509_STORE_add_cert(cts, itmp->x509); } if(itmp->crl) { X509_STORE_add_crl(cts, itmp->crl); } } sk_X509_INFO_pop_free(inf, X509_INFO_free); BIO_free(cbio); rv = CURLE_OK; return rv; } int main(void) { CURL *ch; CURLcode rv; curl_global_init(CURL_GLOBAL_ALL); ch = curl_easy_init(); curl_easy_setopt(ch, CURLOPT_VERBOSE, 0L); curl_easy_setopt(ch, CURLOPT_HEADER, 0L); curl_easy_setopt(ch, CURLOPT_NOPROGRESS, 1L); curl_easy_setopt(ch, CURLOPT_NOSIGNAL, 1L); curl_easy_setopt(ch, CURLOPT_WRITEFUNCTION, writefunction); curl_easy_setopt(ch, CURLOPT_WRITEDATA, stdout); curl_easy_setopt(ch, CURLOPT_HEADERFUNCTION, writefunction); curl_easy_setopt(ch, CURLOPT_HEADERDATA, stderr); curl_easy_setopt(ch, CURLOPT_SSLCERTTYPE, "PEM"); curl_easy_setopt(ch, CURLOPT_SSL_VERIFYPEER, 1L); curl_easy_setopt(ch, CURLOPT_URL, "https://www.example.com/"); /* Turn off the default CA locations, otherwise libcurl will load CA * certificates from the locations that were detected/specified at * build-time */ curl_easy_setopt(ch, CURLOPT_CAINFO, NULL); curl_easy_setopt(ch, CURLOPT_CAPATH, NULL); /* first try: retrieve page without ca certificates -> should fail * unless libcurl was built --with-ca-fallback enabled at build-time */ rv = curl_easy_perform(ch); if(rv == CURLE_OK) printf("*** transfer succeeded ***\n"); else printf("*** transfer failed ***\n"); /* use a fresh connection (optional) * this option seriously impacts performance of multiple transfers but * it is necessary order to demonstrate this example. 
recall that the * ssl ctx callback is only called _before_ an SSL connection is * established, therefore it will not affect existing verified SSL * connections already in the connection cache associated with this * handle. normally you would set the ssl ctx function before making * any transfers, and not use this option. */ curl_easy_setopt(ch, CURLOPT_FRESH_CONNECT, 1L); /* second try: retrieve page using cacerts' certificate -> will succeed * load the certificate by installing a function doing the necessary * "modifications" to the SSL CONTEXT just before link init */ curl_easy_setopt(ch, CURLOPT_SSL_CTX_FUNCTION, sslctx_function); rv = curl_easy_perform(ch); if(rv == CURLE_OK) printf("*** transfer succeeded ***\n"); else printf("*** transfer failed ***\n"); curl_easy_cleanup(ch); curl_global_cleanup(); return rv; }
repos/gpt4all.zig/src/zig-libcurl/curl/docs/examples/post-callback.c
/***************************************************************************
 *                                  _   _ ____  _
 *  Project                     ___| | | |  _ \| |
 *                             / __| | | | |_) | |
 *                            | (__| |_| |  _ <| |___
 *                             \___|\___/|_| \_\_____|
 *
 * Copyright (C) 1998 - 2020, Daniel Stenberg, <[email protected]>, et al.
 *
 * This software is licensed as described in the file COPYING, which
 * you should have received as part of this distribution. The terms
 * are also available at https://curl.se/docs/copyright.html.
 *
 * You may opt to use, copy, modify, merge, publish, distribute and/or sell
 * copies of the Software, and permit persons to whom the Software is
 * furnished to do so, under the terms of the COPYING file.
 *
 * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY
 * KIND, either express or implied.
 *
 ***************************************************************************/
/* <DESC>
 * Issue an HTTP POST and provide the data through the read callback.
 * </DESC>
 */
#include <stdio.h>
#include <string.h>
#include <curl/curl.h>

/* silly test data to POST */
static const char data[]="Lorem ipsum dolor sit amet, consectetur adipiscing "
  "elit. Sed vel urna neque. Ut quis leo metus. Quisque eleifend, ex at "
  "laoreet rhoncus, odio ipsum semper metus, at tempus ante urna in mauris. "
  "Suspendisse ornare tempor venenatis. Ut dui neque, pellentesque a varius "
  "eget, mattis vitae ligula. Fusce ut pharetra est. Ut ullamcorper mi ac "
  "sollicitudin semper. Praesent sit amet tellus varius, posuere nulla non, "
  "rhoncus ipsum.";

struct WriteThis {
  const char *readptr;
  size_t sizeleft;
};

static size_t read_callback(char *dest, size_t size, size_t nmemb, void *userp)
{
  struct WriteThis *wt = (struct WriteThis *)userp;
  size_t buffer_size = size*nmemb;

  if(wt->sizeleft) {
    /* copy as much as possible from the source to the destination */
    size_t copy_this_much = wt->sizeleft;
    if(copy_this_much > buffer_size)
      copy_this_much = buffer_size;
    memcpy(dest, wt->readptr, copy_this_much);

    wt->readptr += copy_this_much;
    wt->sizeleft -= copy_this_much;
    return copy_this_much; /* we copied this many bytes */
  }

  return 0; /* no more data left to deliver */
}

int main(void)
{
  CURL *curl;
  CURLcode res;

  struct WriteThis wt;

  wt.readptr = data;
  wt.sizeleft = strlen(data);

  /* In windows, this will init the winsock stuff */
  res = curl_global_init(CURL_GLOBAL_DEFAULT);
  /* Check for errors */
  if(res != CURLE_OK) {
    fprintf(stderr, "curl_global_init() failed: %s\n",
            curl_easy_strerror(res));
    return 1;
  }

  /* get a curl handle */
  curl = curl_easy_init();
  if(curl) {
    /* First set the URL that is about to receive our POST. */
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/index.cgi");

    /* Now specify we want to POST data */
    curl_easy_setopt(curl, CURLOPT_POST, 1L);

    /* we want to use our own read function */
    curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_callback);

    /* pointer to pass to our read function */
    curl_easy_setopt(curl, CURLOPT_READDATA, &wt);

    /* get verbose debug output please */
    curl_easy_setopt(curl, CURLOPT_VERBOSE, 1L);

    /*
      If you use POST to a HTTP 1.1 server, you can send data without knowing
      the size before starting the POST if you use chunked encoding. You
      enable this by adding a header like "Transfer-Encoding: chunked" with
      CURLOPT_HTTPHEADER. With HTTP 1.0 or without chunked transfer, you must
      specify the size in the request.
    */
#ifdef USE_CHUNKED
    {
      struct curl_slist *chunk = NULL;

      chunk = curl_slist_append(chunk, "Transfer-Encoding: chunked");
      res = curl_easy_setopt(curl, CURLOPT_HTTPHEADER, chunk);
      /* use curl_slist_free_all() after the *perform() call to free this
         list again */
    }
#else
    /* Set the expected POST size. If you want to POST large amounts of data,
       consider CURLOPT_POSTFIELDSIZE_LARGE */
    curl_easy_setopt(curl, CURLOPT_POSTFIELDSIZE, (long)wt.sizeleft);
#endif

#ifdef DISABLE_EXPECT
    /*
      Using POST with HTTP 1.1 implies the use of a "Expect: 100-continue"
      header. You can disable this header with CURLOPT_HTTPHEADER as usual.
      NOTE: if you want chunked transfer too, you need to combine these two
      since you can only set one list of headers with CURLOPT_HTTPHEADER. */

    /* A less good option would be to enforce HTTP 1.0, but that might also
       have other implications. */
    {
      struct curl_slist *chunk = NULL;

      chunk = curl_slist_append(chunk, "Expect:");
      res = curl_easy_setopt(curl, CURLOPT_HTTPHEADER, chunk);
      /* use curl_slist_free_all() after the *perform() call to free this
         list again */
    }
#endif

    /* Perform the request, res will get the return code */
    res = curl_easy_perform(curl);
    /* Check for errors */
    if(res != CURLE_OK)
      fprintf(stderr, "curl_easy_perform() failed: %s\n",
              curl_easy_strerror(res));

    /* always cleanup */
    curl_easy_cleanup(curl);
  }
  curl_global_cleanup();
  return 0;
}
repos/gpt4all.zig/src/zig-libcurl/curl/docs/examples/imap-lsub.c
/***************************************************************************
 *                                  _   _ ____  _
 *  Project                     ___| | | |  _ \| |
 *                             / __| | | | |_) | |
 *                            | (__| |_| |  _ <| |___
 *                             \___|\___/|_| \_\_____|
 *
 * Copyright (C) 1998 - 2020, Daniel Stenberg, <[email protected]>, et al.
 *
 * This software is licensed as described in the file COPYING, which
 * you should have received as part of this distribution. The terms
 * are also available at https://curl.se/docs/copyright.html.
 *
 * You may opt to use, copy, modify, merge, publish, distribute and/or sell
 * copies of the Software, and permit persons to whom the Software is
 * furnished to do so, under the terms of the COPYING file.
 *
 * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY
 * KIND, either express or implied.
 *
 ***************************************************************************/
/* <DESC>
 * IMAP example to list the subscribed folders
 * </DESC>
 */

#include <stdio.h>
#include <curl/curl.h>

/* This is a simple example showing how to list the subscribed folders within
 * an IMAP mailbox.
 *
 * Note that this example requires libcurl 7.30.0 or above.
 */

int main(void)
{
  CURL *curl;
  CURLcode res = CURLE_OK;

  curl = curl_easy_init();
  if(curl) {
    /* Set username and password */
    curl_easy_setopt(curl, CURLOPT_USERNAME, "user");
    curl_easy_setopt(curl, CURLOPT_PASSWORD, "secret");

    /* This is just the server URL */
    curl_easy_setopt(curl, CURLOPT_URL, "imap://imap.example.com");

    /* Set the LSUB command. Note the syntax is very similar to that of a
       LIST command. */
    curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, "LSUB \"\" *");

    /* Perform the custom request */
    res = curl_easy_perform(curl);

    /* Check for errors */
    if(res != CURLE_OK)
      fprintf(stderr, "curl_easy_perform() failed: %s\n",
              curl_easy_strerror(res));

    /* Always cleanup */
    curl_easy_cleanup(curl);
  }

  return (int)res;
}
repos/gpt4all.zig/src/zig-libcurl/curl/docs/examples/ftpuploadresume.c
/***************************************************************************
 *                                  _   _ ____  _
 *  Project                     ___| | | |  _ \| |
 *                             / __| | | | |_) | |
 *                            | (__| |_| |  _ <| |___
 *                             \___|\___/|_| \_\_____|
 *
 * Copyright (C) 1998 - 2020, Daniel Stenberg, <[email protected]>, et al.
 *
 * This software is licensed as described in the file COPYING, which
 * you should have received as part of this distribution. The terms
 * are also available at https://curl.se/docs/copyright.html.
 *
 * You may opt to use, copy, modify, merge, publish, distribute and/or sell
 * copies of the Software, and permit persons to whom the Software is
 * furnished to do so, under the terms of the COPYING file.
 *
 * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY
 * KIND, either express or implied.
 *
 ***************************************************************************/
/* <DESC>
 * Upload to FTP, resuming failed transfers.
 * </DESC>
 */

#include <stdlib.h>
#include <stdio.h>
#include <curl/curl.h>

/* parse headers for Content-Length */
static size_t getcontentlengthfunc(void *ptr, size_t size, size_t nmemb,
                                   void *stream)
{
  int r;
  long len = 0;

  r = sscanf(ptr, "Content-Length: %ld\n", &len);
  if(r)
    *((long *) stream) = len;

  return size * nmemb;
}

/* discard downloaded data */
static size_t discardfunc(void *ptr, size_t size, size_t nmemb, void *stream)
{
  (void)ptr;
  (void)stream;
  return size * nmemb;
}

/* read data to upload */
static size_t readfunc(char *ptr, size_t size, size_t nmemb, void *stream)
{
  FILE *f = stream;
  size_t n;

  if(ferror(f))
    return CURL_READFUNC_ABORT;

  n = fread(ptr, size, nmemb, f) * size;

  return n;
}

static int upload(CURL *curlhandle, const char *remotepath,
                  const char *localpath, long timeout, long tries)
{
  FILE *f;
  long uploaded_len = 0;
  CURLcode r = CURLE_GOT_NOTHING;
  int c;

  f = fopen(localpath, "rb");
  if(!f) {
    perror(NULL);
    return 0;
  }

  curl_easy_setopt(curlhandle, CURLOPT_UPLOAD, 1L);

  curl_easy_setopt(curlhandle, CURLOPT_URL, remotepath);

  if(timeout)
    curl_easy_setopt(curlhandle, CURLOPT_FTP_RESPONSE_TIMEOUT, timeout);

  curl_easy_setopt(curlhandle, CURLOPT_HEADERFUNCTION, getcontentlengthfunc);
  curl_easy_setopt(curlhandle, CURLOPT_HEADERDATA, &uploaded_len);

  curl_easy_setopt(curlhandle, CURLOPT_WRITEFUNCTION, discardfunc);

  curl_easy_setopt(curlhandle, CURLOPT_READFUNCTION, readfunc);
  curl_easy_setopt(curlhandle, CURLOPT_READDATA, f);

  /* disable passive mode */
  curl_easy_setopt(curlhandle, CURLOPT_FTPPORT, "-");
  curl_easy_setopt(curlhandle, CURLOPT_FTP_CREATE_MISSING_DIRS, 1L);

  curl_easy_setopt(curlhandle, CURLOPT_VERBOSE, 1L);

  for(c = 0; (r != CURLE_OK) && (c < tries); c++) {
    /* are we resuming? */
    if(c) { /* yes */
      /* determine the length of the file already written */

      /*
       * With NOBODY and NOHEADER, libcurl will issue a SIZE
       * command, but the only way to retrieve the result is
       * to parse the returned Content-Length header. Thus,
       * getcontentlengthfunc(). We need discardfunc() above
       * because HEADER will dump the headers to stdout
       * without it.
       */
      curl_easy_setopt(curlhandle, CURLOPT_NOBODY, 1L);
      curl_easy_setopt(curlhandle, CURLOPT_HEADER, 1L);

      r = curl_easy_perform(curlhandle);
      if(r != CURLE_OK)
        continue;

      curl_easy_setopt(curlhandle, CURLOPT_NOBODY, 0L);
      curl_easy_setopt(curlhandle, CURLOPT_HEADER, 0L);

      fseek(f, uploaded_len, SEEK_SET);

      curl_easy_setopt(curlhandle, CURLOPT_APPEND, 1L);
    }
    else { /* no */
      curl_easy_setopt(curlhandle, CURLOPT_APPEND, 0L);
    }

    r = curl_easy_perform(curlhandle);
  }

  fclose(f);

  if(r == CURLE_OK)
    return 1;
  else {
    fprintf(stderr, "%s\n", curl_easy_strerror(r));
    return 0;
  }
}

int main(void)
{
  CURL *curlhandle = NULL;

  curl_global_init(CURL_GLOBAL_ALL);
  curlhandle = curl_easy_init();

  upload(curlhandle, "ftp://user:[email protected]/path/file",
         "C:\\file", 0, 3);

  curl_easy_cleanup(curlhandle);
  curl_global_cleanup();

  return 0;
}
repos/gpt4all.zig/src/zig-libcurl/curl/docs/examples/ftpsget.c
/***************************************************************************
 *                                  _   _ ____  _
 *  Project                     ___| | | |  _ \| |
 *                             / __| | | | |_) | |
 *                            | (__| |_| |  _ <| |___
 *                             \___|\___/|_| \_\_____|
 *
 * Copyright (C) 1998 - 2021, Daniel Stenberg, <[email protected]>, et al.
 *
 * This software is licensed as described in the file COPYING, which
 * you should have received as part of this distribution. The terms
 * are also available at https://curl.se/docs/copyright.html.
 *
 * You may opt to use, copy, modify, merge, publish, distribute and/or sell
 * copies of the Software, and permit persons to whom the Software is
 * furnished to do so, under the terms of the COPYING file.
 *
 * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY
 * KIND, either express or implied.
 *
 ***************************************************************************/
#include <stdio.h>

#include <curl/curl.h>

/* <DESC>
 * Get a single file from an FTPS server.
 * </DESC>
 */

struct FtpFile {
  const char *filename;
  FILE *stream;
};

static size_t my_fwrite(void *buffer, size_t size, size_t nmemb,
                        void *stream)
{
  struct FtpFile *out = (struct FtpFile *)stream;
  if(!out->stream) {
    /* open file for writing */
    out->stream = fopen(out->filename, "wb");
    if(!out->stream)
      return -1; /* failure, cannot open file to write */
  }
  return fwrite(buffer, size, nmemb, out->stream);
}


int main(void)
{
  CURL *curl;
  CURLcode res;
  struct FtpFile ftpfile = {
    "yourfile.bin", /* name to store the file as if successful */
    NULL
  };

  curl_global_init(CURL_GLOBAL_DEFAULT);

  curl = curl_easy_init();
  if(curl) {
    /*
     * You better replace the URL with one that works! Note that we use an
     * FTP:// URL with standard explicit FTPS. You can also do FTPS:// URLs
     * if you want to do the rarer kind of transfers: implicit.
     */
    curl_easy_setopt(curl, CURLOPT_URL,
                     "ftp://user@server/home/user/file.txt");
    /* Define our callback to get called when there's data to be written */
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, my_fwrite);
    /* Set a pointer to our struct to pass to the callback */
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &ftpfile);

    /* We activate SSL and we require it for both control and data */
    curl_easy_setopt(curl, CURLOPT_USE_SSL, CURLUSESSL_ALL);

    /* Switch on full protocol/debug output */
    curl_easy_setopt(curl, CURLOPT_VERBOSE, 1L);

    res = curl_easy_perform(curl);

    /* always cleanup */
    curl_easy_cleanup(curl);

    if(CURLE_OK != res) {
      /* we failed */
      fprintf(stderr, "curl told us %d\n", res);
    }
  }

  if(ftpfile.stream)
    fclose(ftpfile.stream); /* close the local file */

  curl_global_cleanup();

  return 0;
}
repos/gpt4all.zig/src/zig-libcurl/curl/docs/examples/sslbackend.c
/***************************************************************************
 *                                  _   _ ____  _
 *  Project                     ___| | | |  _ \| |
 *                             / __| | | | |_) | |
 *                            | (__| |_| |  _ <| |___
 *                             \___|\___/|_| \_\_____|
 *
 * Copyright (C) 1998 - 2020, Daniel Stenberg, <[email protected]>, et al.
 *
 * This software is licensed as described in the file COPYING, which
 * you should have received as part of this distribution. The terms
 * are also available at https://curl.se/docs/copyright.html.
 *
 * You may opt to use, copy, modify, merge, publish, distribute and/or sell
 * copies of the Software, and permit persons to whom the Software is
 * furnished to do so, under the terms of the COPYING file.
 *
 * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY
 * KIND, either express or implied.
 *
 ***************************************************************************/
/* <DESC>
 * Shows HTTPS usage with client certs and optional ssl engine use.
 * </DESC>
 */
#include <assert.h>
#include <ctype.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#include <curl/curl.h>

/*
 * An SSL-enabled libcurl is required for this sample to work (at least one
 * SSL backend has to be configured).
 *
 * **** This example only works with libcurl 7.56.0 and later! ****
 */

int main(int argc, char **argv)
{
  const char *name = argc > 1 ? argv[1] : "openssl";
  CURLsslset result;

  if(!strcmp("list", name)) {
    const curl_ssl_backend **list;
    int i;

    result = curl_global_sslset((curl_sslbackend)-1, NULL, &list);
    assert(result == CURLSSLSET_UNKNOWN_BACKEND);

    for(i = 0; list[i]; i++)
      printf("SSL backend #%d: '%s' (ID: %d)\n",
             i, list[i]->name, list[i]->id);

    return 0;
  }
  else if(isdigit((int)(unsigned char)*name)) {
    int id = atoi(name);

    result = curl_global_sslset((curl_sslbackend)id, NULL, NULL);
  }
  else
    result = curl_global_sslset((curl_sslbackend)-1, name, NULL);

  if(result == CURLSSLSET_UNKNOWN_BACKEND) {
    fprintf(stderr, "Unknown SSL backend id: %s\n", name);
    return 1;
  }

  assert(result == CURLSSLSET_OK);

  printf("Version with SSL backend '%s':\n\n\t%s\n", name, curl_version());

  return 0;
}
repos/gpt4all.zig/src/zig-libcurl/curl/docs/examples/smtp-vrfy.c
/***************************************************************************
 *                                  _   _ ____  _
 *  Project                     ___| | | |  _ \| |
 *                             / __| | | | |_) | |
 *                            | (__| |_| |  _ <| |___
 *                             \___|\___/|_| \_\_____|
 *
 * Copyright (C) 1998 - 2021, Daniel Stenberg, <[email protected]>, et al.
 *
 * This software is licensed as described in the file COPYING, which
 * you should have received as part of this distribution. The terms
 * are also available at https://curl.se/docs/copyright.html.
 *
 * You may opt to use, copy, modify, merge, publish, distribute and/or sell
 * copies of the Software, and permit persons to whom the Software is
 * furnished to do so, under the terms of the COPYING file.
 *
 * This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY
 * KIND, either express or implied.
 *
 ***************************************************************************/
/* <DESC>
 * SMTP example showing how to verify an e-mail address
 * </DESC>
 */

#include <stdio.h>
#include <string.h>
#include <curl/curl.h>

/* This is a simple example showing how to verify an e-mail address from an
 * SMTP server.
 *
 * Notes:
 *
 * 1) This example requires libcurl 7.34.0 or above.
 * 2) Not all email servers support this command and even if your email
 *    server does support it, it may respond with a 252 response code even
 *    though the address does not exist.
 */

int main(void)
{
  CURL *curl;
  CURLcode res;
  struct curl_slist *recipients = NULL;

  curl = curl_easy_init();
  if(curl) {
    /* This is the URL for your mailserver */
    curl_easy_setopt(curl, CURLOPT_URL, "smtp://mail.example.com");

    /* Note that the CURLOPT_MAIL_RCPT takes a list, not a char array */
    recipients = curl_slist_append(recipients, "<[email protected]>");
    curl_easy_setopt(curl, CURLOPT_MAIL_RCPT, recipients);

    /* Perform the VRFY */
    res = curl_easy_perform(curl);

    /* Check for errors */
    if(res != CURLE_OK)
      fprintf(stderr, "curl_easy_perform() failed: %s\n",
              curl_easy_strerror(res));

    /* Free the list of recipients */
    curl_slist_free_all(recipients);

    /* curl will not send the QUIT command until you call cleanup, so you
     * should be able to re-use this connection for additional requests. It
     * may not be a good idea to keep the connection open for a very long
     * time though (more than a few minutes may result in the server timing
     * out the connection) and you do want to clean up in the end.
     */
    curl_easy_cleanup(curl);
  }

  return 0;
}