The io-pkt manager is the Neutrino network manager. It's provided in three versions: io-pkt-v4, io-pkt-v4-hc, and io-pkt-v6-hc. It's a process that runs outside the kernel, in application space. Along with providing the network manager framework, it has built-in support for TCP/IP (TCP, UDP, IPv4, IPv6), PPP services, and a variety of other services. DLLs can be mounted to extend its functionality.

Because io-pkt executes in application space, it's possible to launch more than one instance of it (see the io-pkt documentation). This ability is limited only by the management of the hardware interfaces, as typically only one driver instance (in one io-pkt instance) manages a given hardware interface.

io-pkt defines three driver interface APIs: native, shim, and BSD. Shim and BSD are typically used only to support legacy drivers and aren't intended for new driver development. Drivers using any of these interfaces can be mounted and unloaded (unmounted or destroyed) dynamically.

To load a driver when io-pkt starts:

io-pkt-v4-hc -d name_or_full_path_of_driver [option[,option,...]] [-d ...]

where name is the unique part of the driver's file name (for example, speedo for devnp-speedo.so). Each unique driver instance (a different option set or a different driver) is specified with another -d option on the command line.

To load a driver into a running io-pkt:

mount -T io-pkt [-o option[,option,...]] full_path_of_driver

You don't have to specify the devnp-shim.so DLL on the command line of either io-pkt or mount; it's loaded automatically if needed.

To destroy an interface:

ifconfig interface_name destroy

Once all the interfaces managed by a driver have been destroyed, the driver DLL is unloaded, unless the driver takes special actions to stay resident. You can also unload a driver by unmounting it:

umount /dev/io-net/interface_name
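Putting the load and unload commands together, a hypothetical session might look like the following. The driver name speedo comes from the example above; the interface name wm0 and the pci= option strings are illustrative assumptions, since real names and options depend on your driver:

```
# Start io-pkt with two instances of the same driver, each with its own options
io-pkt-v4-hc -d speedo pci=0 -d speedo pci=1

# Later, mount an additional driver into the running io-pkt
mount -T io-pkt /lib/dll/devnp-speedo.so

# Destroy one interface; when a driver's last interface is destroyed,
# its DLL is unloaded (unless the driver takes steps to stay resident)
ifconfig wm0 destroy
```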
As described in the System Architecture guide under Networking Architecture, Threading Model, io-pkt is a multithreaded process. We recommend that you read that section before continuing. From that section, we have at least the following thread types:

Stack context

The stack context is single-threaded, so you must never block it. If you block it, all the operations it performs (such as the resource manager) are blocked until it's released. If, during testing of your driver, the ifconfig utility becomes blocked on io-pkt and doesn't terminate with the expected output, there's a good chance that you've blocked the stack context in your driver.

While single-threaded, the stack context can manage blocking operations via pseudo-threading. A "stack" is maintained per pseudo-thread; if a pseudo-thread is going to block, it's put to sleep, to be woken when the required condition is met. Only pseudo-threads within the stack context can yield execution to each other. You can't use the sleep and wake routines outside the stack context; this includes functions that call these routines. If you use these functions outside the stack context, io-pkt can become unstable or fault. For more information on blocking and interacting with the stack context, see the "Stack context" section below.

Main thread

This is the thread created at io-pkt process startup to initialize io-pkt. It's generally idle after io-pkt is initialized and its worker POSIX threads are started. It will never be the stack context. While generally idle, there's a way to leverage it for network-driver blocking operations if needed (see blockop in the later sections).

io-pkt worker threads

These are threads created by io-pkt to service interrupts. As discussed in the "Threading Model" section of the Networking Architecture guide, one thread is created per CPU. You can use an io-pkt option to create more or fewer threads for unusual conditions, but the optimal configuration is one POSIX thread per CPU. These threads carry the default name io-pkt gives any thread it manages, so the name alone doesn't identify one of them (see "User-created io-pkt-managed threads" below). The io-pkt worker threads are the only threads that can execute the stack-context code, and only one of them can execute that code at a time. The stack context may also migrate between the worker threads, depending on the circumstances.

User-created io-pkt-managed threads

These are POSIX threads created by a dynamically loaded library (a driver or another module) or by an internal io-pkt service. They're created and managed by an io-pkt-specific POSIX thread API and can't execute the stack-context code. An example of an internal io-pkt service is the PPP read thread (identified as such in the pidin threads output). Threads like these are typically created to handle blocking operations (such as a blocking read() in the PPP case), which keeps the stack context from becoming blocked.

User-created io-pkt-managed threads should always be given a thread name, to make them easy to identify while debugging. If they aren't named, they can be hard to distinguish from one another, as well as from the io-pkt worker threads, because by default they all use the same naming convention. While these threads can't perform operations that manipulate the pseudo-threads of the stack context, they can allocate and free mbufs, clusters, and other memory objects via io-pkt's memory management. They can't, however, perform a memory allocation as an M_WAITOK operation (they must always use M_NOWAIT), because M_WAITOK would engage the pseudo-thread code in the stack context.
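To make the allocation constraint concrete, here's a minimal sketch of how a user-created, io-pkt-managed thread might allocate a packet buffer. It uses the classic BSD mbuf macros that io-pkt inherits, where the non-blocking/blocking flags are spelled M_DONTWAIT/M_WAIT; the helper name and its failure policy are illustrative assumptions, not part of the io-pkt API:

```c
#include <sys/mbuf.h>

/* Hypothetical helper: allocate an mbuf cluster from a user-created,
 * io-pkt-managed thread. Outside the stack context the allocation must
 * never block (no M_WAIT/M_WAITOK), so the failure path is mandatory. */
static struct mbuf *
sam_alloc_rx_buf(void)
{
    struct mbuf *m;

    MGETHDR(m, M_DONTWAIT, MT_DATA);   /* non-blocking; may yield NULL */
    if (m == NULL)
        return NULL;                   /* caller drops/retries and counts the failure */

    MCLGET(m, M_DONTWAIT);             /* attach a cluster, again non-blocking */
    if ((m->m_flags & M_EXT) == 0) {   /* cluster allocation failed */
        m_freem(m);
        return NULL;
    }
    return m;
}
```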
User-created threads (libc)

These are threads created with the default libc POSIX thread API. We don't recommend creating threads this way, because they have no io-pkt context associated with them: they can't allocate memory using io-pkt's memory management, and they can't be managed via io-pkt's thread-synchronization mechanisms. Typically, such threads exist because of the integration of third-party code or library functions that create threads for specific tasks (for example, the USB insertion and removal event thread in libusbdi), or because of legacy io-net drivers that created receive threads. If a thread is created with this API, it should operate in a manner that abstracts it from io-pkt API functions, so that those functions are never performed by this thread. For example, if an mbuf or cluster buffer needs to be created and managed, this thread could modify the data in the buffer, but it couldn't allocate or free the buffer. Thread management (starting, terminating, and synchronizing) must also be done by user code, because io-pkt isn't aware of this thread. As with the io-pkt-managed threads, these threads should be named.

When coding a new io-pkt driver or porting existing driver code, consider how best to integrate it with io-pkt. io-pkt is optimized such that the preferred driver architecture doesn't require the creation of any driver-specific POSIX threads; this minimizes thread switching in high-bandwidth situations (including forwarding between interfaces).

Most io-pkt driver callback functions can potentially be called from the stack context. When that's the case, any time spent in your driver is time that potentially blocks other network operations. Whether you need to create a thread or take other special steps depends on a few considerations: does your driver perform blocking operations, does your hardware manage functions beyond networking, and is the hardware integration complicated? If so, you may have to consider some of the advanced topics described later in this appendix. If not, you should be able to integrate your driver as close to the optimized architecture as possible, without additional threads.

Examples of blocking include a read operation to the USB stack (io-usb) by a USB network driver, or a read operation to a serial port or other character-based interface (for example, the io-pkt PPP code). In these cases, we don't know when RX data will arrive, so the operation could block indefinitely. Another example is sending a message to another process when you expect an immediate response.

An example of hardware managing additional functions is hardware that services a multipurpose bus: Ethernet frames may be just one type of data passed on this bus, encapsulated within bus-specific framing, even though the primary data passed is network data. You may want to create a resource manager within io-pkt to allow other types of data to be passed on the bus along with the TCP/IP traffic; in this case, you're optimizing the TCP/IP traffic over the other frame types. An architectural alternative is to create a dedicated process to manage the bus, and require the io-pkt driver to perform message passing to communicate with the bus manager. This would be a more system-wide bus-sharing design.

An example of complicated hardware integration is an interface with limited or no support for optimizations such as DMA and descriptor rings. It may require multiple operations to obtain packet data, where each suboperation requires its own interrupt, or multiple status requests. This can be time-consuming and complicated to integrate, and a thread dedicated to managing the hardware RX (and potentially TX) path may be needed. See the Writing Network Drivers for io-pkt appendix and the accompanying sample driver, sam.c.

Restarting transmission

One item often overlooked in io-pkt drivers is restarting transmission after a resource conflict or resource exhaustion, or while the link state is down. When io-pkt calls the driver's if_start() callback, it expects the TX queue to be drained. If it isn't, io-pkt won't call this callback again until a new packet is added to the output queue. Also, if the link state is down, the TX queue can fill with packets to be sent; when the link is restored, io-pkt waits until the next packet transmission to call if_start(), and only then are the queued packets transmitted. This behavior is often left unmanaged, and at runtime it can be misinterpreted as a lost or dropped frame that was retransmitted, when in fact another packet simply happened to be sent shortly afterward, causing if_start() to run again.

Differences between the interface up/down state and the link up/down state

The first place to start is the difference between the interface up/down state and the link up/down state.
The interface up/down state is reflected in the interface flags (IFF_UP and IFF_DOWN) and can be viewed with ifconfig. The link up/down state is reflected in the media flags; it can be viewed with ifconfig under the "media:" heading, and with nicinfo under the heading "Link is down|up." These states are managed independently of each other: one can be up while the other is down, and vice versa.

The interface state is set via ifconfig. It defaults to down until the interface is explicitly configured up, or until an IP address is assigned to the interface. It's considered an advisory state, because it reflects whether the user has set the interface up or down, regardless of the link's state. If the interface state is marked down, TX packets are dropped (their memory is freed) without being queued, and the application can receive the error ENETDOWN. Likewise, RX packets are dropped by ether_input() (which is called by the driver on RX).

The link state is set by the driver itself, based on the status of the physical link. If the link state is down, no RX packets arrive, but on TX the behavior is driver-specific. The MII code may update the status to io-pkt, as displayed by the routing socket, ifconfig, and nicinfo, but otherwise io-pkt takes no specific action. On TX (provided the interface state is up), the packet is added to the interface send queue (if the queue isn't already full) and your driver's if_start() callback is called; what happens to the send queue after that is driver-specific.

Managing the TX queue

As we saw in the driver sample above, on TX the if_start() driver callback obtains packets to transmit from the ifp->if_snd queue. Packets are added to the send queue regardless of the link state or other hardware resource issues. One of the first things done in if_start() is to set the interface flag IFF_OACTIVE, which indicates that the driver is actively attempting to transmit data. This is a driver-level flag; it isn't limited to the context of the if_start() callback itself. While this flag is set, io-pkt won't call the if_start() callback again.

The significance of this is what happens when there aren't enough resources to transmit a packet, or the link state is down. What should be done? If nothing is done, the driver clears IFF_OACTIVE and if_start() returns; the packets remain on the send queue, and if_start() isn't called again until there's another packet to be sent, at which point everything is evaluated as before. If the link remains down, the send queue can fill, and applications can start getting ENOBUFS errors. The driver may first exhaust its TX descriptors; it all depends on how the driver was coded. You can also get into this state while the link is up, simply because the hardware couldn't transmit packets quickly enough and the TX descriptors were exhausted.

We probably want the driver to resume transmission when the hardware or descriptor ring is ready, rather than wait until io-pkt adds another packet to the send queue. You must decide what to do when packets can't be transmitted: whether to leave them in the queue, for how long, and how often the driver should attempt to send them. These parameters are specific to the driver implementation, but here's how they can be applied: a timer can be enabled with a callback function that executes the if_start() callback.
So, for example, if the hardware isn't ready:

```c
static void
sam_kick_tx(void *arg)
{
    sam_dev_t *sam = arg;

    NW_SIGLOCK(&sam->ecom.ec_if.if_snd_ex, sam->iopkt);
    sam_start(&sam->ecom.ec_if);
}

...

void
sam_start(struct ifnet *ifp)
{
    .....

    if (callout_pending(&sam->tx_callout))
        callout_stop(&sam->tx_callout);

    ifp->if_flags_tx |= IFF_OACTIVE;  /* Actively sending data on the interface */

    .....

    if (detected_issue) {
        /* Resources aren't ready or something else is wrong. */
        /* Set a callback to try again later; the actual timeout value
         * can be configurable or vary based on the implementation. */
        callout_msec(&sam->tx_callout, 2, sam_kick_tx, sam);

        /* Leave IFF_OACTIVE set so the stack doesn't call us again. */
        NW_SIGUNLOCK(&ifp->if_snd_ex, sam->iopkt);
        return;
    }

    ...

    /* Successful execution of sam_start() */
    ifp->if_flags_tx &= ~IFF_OACTIVE;
    NW_SIGUNLOCK(&ifp->if_snd_ex, sam->iopkt);
    return;
}
```

You can also make a similar call when your MII code detects that the link has come up. In this case, you may perform some queries to determine whether there's data to be sent; you may want to check both the transmit descriptor list and the interface send queue:

```c
...
sam->cfg.flags &= ~NIC_FLAG_LINK_DOWN;
if_link_state_change(ifp, LINK_STATE_UP);

if (data_in_tx_desc || !IFQ_IS_EMPTY(&ifp->if_snd)) {
    /* There is some data to send */
    if (callout_pending(&sam->tx_callout))
        callout_stop(&sam->tx_callout);

    /* Timer not needed; calling the if_start() callback directly. */
    NW_SIGLOCK(&ifp->if_snd_ex, sam->iopkt);
    sam_start(ifp);
}
...
```

If you set this timer, it should be stopped when an ifconfig interface_name down occurs, or whenever else the if_stop() driver callback is executed. When this occurs, the following can be called early in if_stop():

```c
static void
sam_stop(struct ifnet *ifp, int disable)
{
    ...
    /* Lock out the transmit side */
    NW_SIGLOCK(&ifp->if_snd_ex, sam->iopkt);

    if (callout_pending(&sam->tx_callout)) {
        callout_stop(&sam->tx_callout);
        /* We aren't in if_start(), as it stops the callout */
        ifp->if_flags_tx &= ~IFF_OACTIVE;
    }

    for (i = 0; i < 10; i++) {
        if ((ifp->if_flags_tx & IFF_OACTIVE) == 0)
            break;
        NW_SIGUNLOCK(&ifp->if_snd_ex, sam->iopkt);
        delay(50);
        NW_SIGLOCK(&ifp->if_snd_ex, sam->iopkt);
    }

    if (i < 10) {
        ifp->if_flags_tx &= ~IFF_RUNNING;
        NW_SIGUNLOCK(&ifp->if_snd_ex, sam->iopkt);
    } else {
        /* Heavy load or bad luck. Try the big gun. */
        quiesce_all();
        ifp->if_flags_tx &= ~IFF_RUNNING;
        unquiesce_all();
    }
    ...

    /* Mark the interface as down and cancel the watchdog timer. */
    ifp->if_flags &= ~(IFF_RUNNING | IFF_OACTIVE);
    ifp->if_timer = 0;
    return;
}
```

The last point is stale data: packets that have accumulated in the send queue but can't be sent. How long should the driver keep attempting to retransmit this data, and when should the queue be flushed? You probably want to consider flushing the queue, because you probably don't want to send packets that have sat in the send queue for extended periods of time; the data is likely out of date.

Above, we've seen how to use a timer to resume transmission, or how to use a link-state change to resume transmission. This is based on the idea that TX issues are sporadic and short-lived. A decision may also have to be made about when to declare the data stale, and about how to stop data from being queued. We can flush the send queue, but we also want io-pkt to stop queuing packets, or the send queue will just fill up again. Based on some kind of timing, if TX hasn't resumed, you can decide to purge the send queue. This can be managed at a higher level or at the driver level.
If it's managed at the higher level, marking the interface down by clearing the IFF_UP interface flag causes the send queue to be purged. At the driver level, you can perform the same operation with:

```c
IFQ_PURGE(&ifp->if_snd);
```

If the interface remains down, no new packets are added to the send queue. If the interface is marked up, io-pkt continues to add packets to the send queue, so if the interface remains up and TX hasn't resumed at the hardware level, periodic purging may be needed.
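As a minimal sketch of driver-level periodic purging: the callout and IFQ_PURGE() calls are the same interfaces used elsewhere in this appendix, while the tx_stalled flag, the second callout (tx_callout2), and the one-second period are illustrative assumptions:

```c
/* Hypothetical staleness timer, armed by the driver when TX stalls.
 * If transmission hasn't resumed by the time it fires, the queued
 * packets are declared stale and dropped; the timer is re-armed for
 * as long as the stall persists. Callouts run in the stack context. */
static void
sam_purge_stale_tx(void *arg)
{
    sam_dev_t    *sam = arg;
    struct ifnet *ifp = &sam->ecom.ec_if;

    NW_SIGLOCK(&ifp->if_snd_ex, sam->iopkt);
    if (sam->tx_stalled) {
        IFQ_PURGE(&ifp->if_snd);                /* drop the stale packets */
        callout_msec(&sam->tx_callout2, 1000,   /* keep purging while stalled */
            sam_purge_stale_tx, sam);
    }
    NW_SIGUNLOCK(&ifp->if_snd_ex, sam->iopkt);
}
```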
Blocking Operations

io-pkt is optimized to minimize thread switching and, as mentioned in the architecture discussion above, driver API callback functions can be called from the stack context. Because the stack context is single-threaded, we can't have blocking operations performed within it. If a blocking operation occurs, it blocks the stack context (the io-pkt resource manager and protocol processing) for the duration of the block.

What defines blocking? Essentially, any time spent in the driver API callback functions is potentially time during which io-pkt can't service the resource manager (applications), timers, or the processing associated with its supported protocols. Time spent in the driver API callbacks should be as short as possible. Some examples to consider:

- Many QNX Neutrino function calls result in a message being sent to another resource manager. If the message won't get an immediate response (a blocking read() or write(), for example), you can block io-pkt. The typical example is a read operation: if it's a blocking read, the call may not return until there's data to read.
- If a resource is already locked, can it potentially stay locked for a long period of time, blocking the callback function while the driver waits to acquire the lock?
- Does your hardware require a service that can take a long period of time, such as loading firmware?

Block Op

If the duration of the blocking scenario is known and is within a few seconds or less, you can use the blockop services. Essentially, blockop offloads an operation that may take some time to the io-pkt main thread (which is typically idle). Note that blockop is a shared service and may have multiple operations scheduled. It's meant for occasional time-consuming operations (such as a firmware upload that occurs once), not for indefinite, long-term, or repetitive operations. It's a convenience service that handles the complicated management of the stack context's pseudo-thread handling; because it performs these kinds of operations, it must be called from within the stack context. The callback function, however, isn't called from the stack context and shouldn't perform any operations that require the stack context or buffer management.

The example below is taken from the PPP data-link shutdown processing. In this case, the close() call to the serial-port resource manager takes an unusually long, but predictable, amount of time to get a reply, blocking the close(). Since this is called in the stack context, it blocks other io-pkt operations until close() returns. The code moves the execution of close() into the main io-pkt thread, and pseudo-thread switches to other operations until the callback function returns: qnxppp_ttydetach() pseudo-thread switches at blockop_dispatch() and resumes from the same point once qnxppp_tty_close_blockop() returns.

```c
#include <blockop.h>

struct ppp_close_blockop {
    int qnxsc_pppfdrd;
    int qnxsc_pppfdrd2;
    int qnxsc_pppfdwr;
};

void qnxppp_tty_close_blockop(void *arg);

....

void
qnxppp_tty_close_blockop(void *arg)
{
    struct ppp_close_blockop *pcb = arg;

    if (pcb->qnxsc_pppfdrd != -1)
        close(pcb->qnxsc_pppfdrd);
    if (pcb->qnxsc_pppfdrd2 != -1)
        close(pcb->qnxsc_pppfdrd2);
    if (pcb->qnxsc_pppfdwr != -1)
        close(pcb->qnxsc_pppfdwr);
}

int
qnxppp_ttydetach(...)
{
    struct ppp_close_blockop pcb;
    struct bop_dispatch bop;

    .....

    pcb.qnxsc_pppfdrd = sc->qnxsc_pppfdrd;
    pcb.qnxsc_pppfdrd2 = sc->qnxsc_pppfdrd2;
    pcb.qnxsc_pppfdwr = sc->qnxsc_pppfdwr;

    bop.bop_func = qnxppp_tty_close_blockop;
    bop.bop_arg = &pcb;
    bop.bop_prio = curproc->p_ctxt.info.priority;
    blockop_dispatch(&bop);

    ....

    return;
}
```

Thread Creation

As stated above, several types of threads can exist in an instance of io-pkt. The two types created by driver or module developers are user-created threads that are either tracked (nw_pthread_create()) or not tracked (pthread_create()) by io-pkt. Regardless of how they're created, all POSIX threads created in io-pkt should be named, for easier debugging. If your code creates a thread directly, you should create a tracked thread as described below. If you're calling library functions that create threads on your behalf, you must manage those threads in your module code, because io-pkt isn't aware of their existence. As stated in the io-pkt architecture section, threads that aren't tracked can't allocate or free an mbuf or cluster, and can't call functions that manipulate the stack context's pseudo-threading.

All tracked POSIX threads must register a quiesce callback function (described below). If your thread doesn't register a quiesce callback function, io-pkt can end up in a deadlock.

In the sample below, nw_pthread_create() is used the same way as pthread_create(), except for some considerations in the initialization function. The first is naming the thread for easier debugging. The other is setting up the mechanism for the thread's quiesce handling, used when io-pkt requires all threads to block for an exclusive operation; this is required of all threads created with nw_pthread_create().

Threads can be terminated via the quiesce_block handling, or with nw_pthread_reap(tid), where tid is the thread ID of your tracked thread. nw_pthread_reap() can't be called by the thread specified by the tid argument (that is, a thread can't reap itself). Both nw_pthread_create() and nw_pthread_reap() must be called from the stack context.

Below is an example in which the user-created tracked thread creates a resource manager. The structure of your driver can differ, but the main point is that your quiesce callback function must cause your tracked thread to call quiesce_block().

```c
#include <nw_thread.h>

static int
sam_thread_init(void *arg)
{
    struct nw_work_thread *wtp;
    sam_dev_t *sam = (sam_dev_t *)arg;

    pthread_setname_np(gettid(), "sam workthread");

    wtp = WTP;

    ...

    if ((sam->code = pulse_attach(sam->dpp, MSG_FLAG_ALLOC_PULSE, 0,
        sam_pulse_func, NULL)) == -1) {
        log(LOG_ERR, "sam: pulse_attach(): %s", strerror(errno));
        return errno;
    }

    if ((sam->coid = message_connect(sam->dpp, MSG_FLAG_SIDE_CHANNEL)) == -1) {
        pulse_detach(sam->dpp, sam->code, 0);
        log(LOG_ERR, "sam: message_connect(): %s", strerror(errno));
        return errno;
    }

    wtp->quiesce_callout = sam_thread_quiesce;
    wtp->quiesce_arg = sam;

    ...
    return EOK;
}

static int
sam_pulse_func(message_context_t *ctp, int code, unsigned flags, void *handle)
{
    /* If the die argument is 1, the user thread terminates in quiesce_block() */
    quiesce_block(ctp->msg->pulse.value.sigval_int);
    return 0;
}

static void
sam_thread_quiesce(void *arg, int die)
{
    sam_dev_t *sam = (sam_dev_t *)arg;

    MsgSendPulse(sam->coid, SIGEV_PULSE_PRIO_INHERIT, sam->code, die);
}

static void *
sam_thread(void *arg)
{
    sam_dev_t *sam = (sam_dev_t *)arg;
    dispatch_context_t *ctp;

    if ((ctp = dispatch_context_alloc(sam->dpp)) == NULL) {
        ...
        return NULL;
    }

    while (1) {
        if ((ctp = dispatch_block(ctp)) == NULL) {
            ...
            break;
        }
        dispatch_handler(ctp);
    }
    return NULL;
}

...

/* Likely in the sam_attach() interface-attach driver callback function. */
/* We need a thread to handle a blocking or other special-circumstance scenario. */
if (nw_pthread_create(&sam->worker_tid, NULL,
    sam_thread, sam, 0, sam_thread_init, sam) != EOK) {
    log(LOG_ERR, "sam: nw_pthread_create() failed\n");
    /* Clean up and likely return -1 */
}

/* Likely in the sam_detach() interface-detach driver callback function. */
...
if (nw_pthread_reap(sam->worker_tid))
    log(LOG_ERR, "%s(): nw_pthread_reap() failed\n", __FUNCTION__);
...
```

Quiesce handling

Quiesce handling is required for all threads created by nw_pthread_create(). The purpose of this functionality is to let io-pkt quiesce (quiet) all threads for an exclusive operation; it also provides a mechanism for terminating the thread. The basic structure of the mechanism is the quiesce callback function provided by the driver (see the example above) and the quiesce_block() io-pkt function that the tracked thread is required to call.

The quiesce callback function is executed by io-pkt (it's called from the stack context via the quiesce_all() function). This callback provides some kind of mechanism to trigger the tracked thread to call quiesce_block() with the die argument that was passed to the callback. This argument determines whether the thread blocks (die = 0) or terminates (die = 1). If the tracked thread doesn't call quiesce_block(), io-pkt (and thus the stack context) remains blocked in quiesce_all() until it does, because quiesce_all() is intended to block all worker threads until unquiesce_all() is called to resume the tracked threads. The unquiesce_all() function must also be called from the stack context.

Also, if die is 0, your thread blocks for a short period of time. You may have hardware-integration issues to consider that could be affected by this blocking, so you may want some code around the quiesce_block() call, such as disabling and re-enabling interrupts or handling other hardware considerations. These considerations are implementation-specific.

Continuing from the example above: the callback function sends a pulse to a channel managed by the tracked thread (its resource manager). That pulse triggers another callback function that's executed by the tracked thread, and this function calls quiesce_block() with the provided die argument.

Periodic timers

Network drivers frequently need periodic timers to perform housekeeping functions such as maintaining links and harvesting transmit descriptors. The preferred way to set up a periodic timer is via the callout API provided by io-pkt, which calls a user-defined function after a specified period of time. You can call the callout_* functions from io-pkt driver API callbacks, or from io-pkt threads created with nw_pthread_create().
The callout function is called from the stack context. The callout data type is struct callout, and the API includes the functions used below: callout_init(), callout_msec(), callout_stop(), and callout_pending(). Here's an example:

```c
struct sam_dev {
    ...
    /* Declare a callout in your driver's device structure; */
    /* it's unique to this interface. */
    struct callout my_callout;
    ...
};

static void
my_function(void *arg)
{
    struct sam_dev *sam = arg;

    /* Do something when the timer expires. */

    /* Arm the callout again if my_function() should be called
     * at a regular interval. */
    callout_msec(&sam->my_callout, 5, my_function, sam);
}

/* Before the callout is used, it must be initialized; this can be done
 * in the if_init() or if_attach() callback, for example. */
callout_init(&sam->my_callout);

/* Once initialized, it can be used. Call my_function() in 5 ms: */
callout_msec(&sam->my_callout, 5, my_function, sam);

/* Cancel the callout: */
callout_stop(&sam->my_callout);

/* Check whether the callout is armed: */
if (callout_pending(&sam->my_callout)) {
    /* action if pending */
} else {
    /* action if not pending */
}
```

Driver doesn't use an RX interrupt

When the driver isn't notified via an interrupt that a packet has arrived, you need to mimic this functionality in your driver. There are different approaches, with different limitations. In the thread you create with nw_pthread_create(), you can either call if_input() directly, or simulate the ISR.

Calling if_input() directly has limitations: your interface won't be able to support the fastforward feature or bridging between interfaces, if you're considering those features for the future. You prepare the mbuf in the same manner as the sample driver's "process interrupt" function, and end by calling if_input(). Calling if_input() from your nw_pthread causes the packet to be queued, and an event triggers the main io-pkt threads to process the packet.

The other method allows fastforward and bridging to work as in other io-pkt drivers. In this case, you enqueue your packets in your nw_pthread and trigger the event directly in your code, causing your "process interrupt" io-pkt callback to execute just as it would if an ISR had occurred. In the "process interrupt" callback, you dequeue the packet from your internal driver queue, prepare the mbuf in the same manner as the sample, and call if_input(). Here, if_input() executes in the io-pkt callback rather than in your nw_pthread. For this, you define a "process interrupt" callback, along with an "enable interrupt" callback, as you would with an ISR. The difference is in how interrupt_queue() is applied.

In this design, you have a queue that's accessed by two different threads: the one receiving packets from the hardware, and the "process interrupt" callback passing packets to the upper layers of io-pkt. You want a mutex protecting this queue so that it's modified by only one thread at a time, and you also want to protect the event-notification mechanism that interrupt_queue() is using.

In your driver thread, lock the mutex, check whether the queue is full, and if not, enqueue the packet on your internal queue. Then call interrupt_queue() from your thread. If the returned evp (event structure) isn't NULL, send the event yourself from your thread:

```c
MsgSendPulse(evp->sigev_coid, evp->sigev_priority, evp->sigev_code,
    (int)evp->sigev_value.sival_ptr);
```

Once you've done this, unlock your mutex. The remainder of your function is hardware or descriptor management.
io-pkt will now schedule your "process interrupt" callback to execute. In that callback, loop, dequeueing packets until the queue is empty: first lock the mutex for your internal queue and attempt to dequeue a packet. If you dequeued one, unlock the mutex, call if_input() (provided the mbuf is prepared as required), and go back to the top of the loop. If there's no packet to dequeue (IF_DEQUEUE() returns NULL), break out of the loop and return without unlocking the mutex. You don't want to unlock the mutex here, because you don't want your receive thread to call interrupt_queue() at this point; if it did, interrupt_queue() would return NULL, because you're still processing packets. Instead, unlock the internal mutex in the "enable interrupt" callback. This way, a new evp structure is returned when your receive thread can continue, that is, once you're finished processing packets.

Your mbuf can be prepared either in your receive thread or in the "process interrupt" callback; it just depends on whether you want to store fully formed mbufs in your internal queue, or partial buffers to be formatted later.

In the receive thread:

```c
struct sigevent *evp;

pthread_mutex_lock(&driv->rx_mutex);
if (IF_QFULL(&driv->rx_queue)) {
    m_freem(m);
    ifp->if_ierrors++;
    ...->stats.rx_failed_allocs++;
} else {
    IF_ENQUEUE(&driv->rx_queue, m);
}

if (!driv->rx_running) {
    /* rx_running mimics interrupt masking.
     * This is for future compatibility when using interrupt_queue(). */
    driv->rx_running = 1;
    evp = interrupt_queue(driv->iopkt, &driv->inter);
    if (evp != NULL) {
        MsgSendPulse(evp->sigev_coid, evp->sigev_priority,
            evp->sigev_code, (int)evp->sigev_value.sival_ptr);
    }
}
pthread_mutex_unlock(&driv->rx_mutex);
```

In the main code:

```c
int
your_process_interrupt(void *arg, struct nw_work_thread *wtp)
{
    driver_dev_t *driv = arg;
    struct ifnet *ifp;
    struct mbuf  *m;

    ifp = &driv->ecom.ec_if;

    while (1) {
        pthread_mutex_lock(&driv->rx_mutex);
        IF_DEQUEUE(&driv->rx_queue, m);
        if (m != NULL) {
            pthread_mutex_unlock(&driv->rx_mutex);

            ... Prepare the mbuf if needed ...

            (*ifp->if_input)(ifp, m);
        } else {
            /* Leave the mutex locked to prevent any enqueues;
             * unlock it in the "enable interrupt" callback. */
            break;
        }
    }

    return 1;
}

int
your_enable_interrupt(void *arg)
{
    driver_dev_t *driv = arg;

    ...

    driv->rx_running = 0;
    pthread_mutex_unlock(&driv->rx_mutex);

    return 1;
}
```
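For comparison, here's a minimal sketch of the first method described above: calling if_input() directly from the receive thread (which rules out fastforward and bridging). The mbuf preparation uses the usual BSD packet-header fields; the get_packet_from_hw() call and the surrounding loop are illustrative assumptions standing in for your hardware access:

```c
/* Hypothetical receive loop in a thread created with nw_pthread_create():
 * each received frame is wrapped in an mbuf and handed straight to the
 * stack with if_input(), which queues it for the io-pkt worker threads. */
static void *
driv_rx_thread(void *arg)
{
    driver_dev_t *driv = arg;
    struct ifnet *ifp = &driv->ecom.ec_if;
    struct mbuf  *m;
    int           len;

    for (;;) {
        /* Blocking read from the hardware or a lower-level service;
         * this is exactly why the work lives in its own thread. */
        m = get_packet_from_hw(driv, &len);    /* hypothetical */
        if (m == NULL)
            continue;

        /* Standard mbuf preparation before handing the packet up */
        m->m_pkthdr.rcvif = ifp;
        m->m_pkthdr.len   = len;
        m->m_len          = len;

        ifp->if_ipackets++;
        (*ifp->if_input)(ifp, m);
    }
    return NULL;
}
```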
DataGridPro API

The API documentation of the DataGridPro React component. Learn more about the props, slots, and CSS customization points.

Import

import { DataGridPro } from '@mui/x-data-grid-pro';

Component name

The name MuiDataGridPro can be used when providing default props or style overrides in the theme.

Props

The ref is forwarded to the root element.

Slots

You can use the slots API to override nested components or to inject extra props.

CSS

You can override the style of the component thanks to one of these customization points:

- With a rule name of the classes object prop.
- With a global class name.
- With a theme and an overrides property.

If that's not sufficient, you can check the implementation of the component style for more detail.
How to use HTTP GET request in Java ME

From Nokia Developer Wiki

This example demonstrates how to establish an HttpConnection and use it to send a GET request to a web server.

Article Metadata

- Code Example Source file: Media:HttpGET.zip
- Tested with SDK: Nokia SDK for Java 2.0, Nokia Asha SDK 1.0 (beta)
- Devices: Nokia Asha 311, Nokia Asha 501
- Created: senthilkumar05 (18 Dec 2007)
- Updated: grift (03 Apr 2013)
- Reviewed: skalogir (09 May 2013)
- Last edited: hamishwillee (25 Jul 2013)

Code Example

```java
import javax.microedition.midlet.*;
import javax.microedition.lcdui.*;
import javax.microedition.io.*;
import java.io.*;

public class HttpGET extends MIDlet implements CommandListener {
    /*
     * The default value for the URL string; the user can change it to
     * some other URL within the application.
     */
    private static String defaultURL = "";

    // GUI components for the user to enter a web URL
    private Display myDisplay = null;
    private Form mainScreen;
    private TextField requestField;

    // GUI components for displaying web page content
    private Form resultScreen;
    private StringItem resultField;

    // the "send" button used on mainScreen
    Command sendCommand = new Command("SEND", Command.OK, 1);
    // the "back" button used on resultScreen
    Command backCommand = new Command("BACK", Command.OK, 1);

    public HttpGET() {
        // initializing the GUI components for entering web URLs
        myDisplay = Display.getDisplay(this);
        mainScreen = new Form("Type in a URL:");
        requestField = new TextField(null, defaultURL, 100, TextField.URL);
        mainScreen.append(requestField);
        mainScreen.addCommand(sendCommand);
        mainScreen.setCommandListener(this);
    }

    public void startApp() {
        myDisplay.setCurrent(mainScreen);
    }

    public void pauseApp() {
    }

    public void destroyApp(boolean unconditional) {
    }

    public void commandAction(Command c, Displayable s) {
        // when the user clicks the "send" button on mainScreen
        if (c == sendCommand) {
            // retrieving the web URL that the user entered
            new Thread() {
                private String urlstring = requestField.getString();

                public void run() {
                    // sending a GET request to the web server
                    String resultstring = sendGetRequest(urlstring);
                    // displaying the page content retrieved from the web server
                    resultScreen = new Form("GET Result:");
                    resultField = new StringItem(null, resultstring);
                    resultScreen.append(resultField);
                    resultScreen.addCommand(backCommand);
                    resultScreen.setCommandListener(new CommandListener() {
                        public void commandAction(Command c, Displayable s) {
                            notifyDestroyed();
                        }
                    });
                    myDisplay.setCurrent(resultScreen);
                }
            }.start();
        } else if (c == backCommand) {
            // do it all over again
            requestField.setString(defaultURL);
            myDisplay.setCurrent(mainScreen);
        }
    }

    // send a GET request to the web server
    public String sendGetRequest(String urlstring) {
        HttpConnection hc = null;
        InputStream dis = null;
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        try {
            /*
             * Opening an HTTP connection to the web server; when the
             * connection is opened, the default request method is GET.
             */
            hc = (HttpConnection) Connector.open(urlstring);
            // establishing an input stream from the connection
            dis = hc.openInputStream();
            byte[] buf = new byte[512];
            // reading the response from the web server in 512-byte chunks
            int red;
            while ((red = dis.read(buf)) != -1) {
                baos.write(buf, 0, red);
            }
        } catch (IOException ioe) {
            return "ERROR";
        } finally {
            try { if (hc != null) hc.close(); } catch (IOException ignored) {}
            try { if (dis != null) dis.close(); } catch (IOException ignored) {}
        }
        return new String(baos.toByteArray());
    }
}
```

Comments

Grift - Updated to work with Nokia 2.0 SDK

I tried to use this code with the latest version of the Nokia 2.0 SDK and found that there were compile-time errors when I tried to create a project for the code. I have updated the code and created a sample project.

grift 00:32, 3 April 2013 (EEST)

Hamishwillee - Thanks for the upload and updates

Hi Grift. Thanks for adding the example code and fixing this. Can you please confirm what devices you tested this on? I will then update the article metadata appropriately.

FYI, I also added an update timestamp to the article metadata. The update is for major revisions and changes to the article, and it's important to include the timestamp so that people know the article is current at the date you put in, not just when it was created. It's usual also to update the ArticleMetadata with the SDK, devices tested, and versions, so that people know it works on current devices. Sometimes you'll come across an old article that you don't need to update but which works on current devices; in this case, you can make the same updates to the article metadata but instead update the "reviewed-by" and "reviewed-timestamp" fields (again, so people know the article is still valid). Thanks very much! Regards, H

hamishwillee 01:37, 3 April 2013 (EEST)

Grift - The GET and POST examples I updated were both tested on the Nokia Asha 311 through RDA.

Thanks for your advice on updating articles. Now that I have some idea of what I am doing, there are a few more broken examples I have come across that I will look at when I get time :)

grift 00:11, 4 April 2013 (EEST)

Hamishwillee - Thanks

Hi Grift. Any improvements you can do would be great; feel free to message me (or email) to review. We're trying to formalise the format of "How To" articles to some extent; for an idea of structure, see How to detect if an app is running in Kid's Corner. The idea is to make it easier for people to understand what they need to include in a "good" FAQ, e.g., remembering import/using statements, special capabilities or signing required, and providing a code example. It's not worth retrofitting this to older articles systematically, but if you're there anyway, it might be worth assessing whether using this structure makes sense. The most important thing is that the code works :-) For general advice, you might want to check out Help:Wiki Article Review Checklist.

PS: I added the Asha 311 to the metadata as shown. Regards, H

hamishwillee 01:28, 4 April 2013 (EEST)
ListPaging Plugin ported to 2.0

Greetings,

I had quite a few issues getting the ListPaging plugin to work properly, and ended up refactoring the plugin so that it could work with 2.0 (it looked like it was referencing a lot of 1.1 methods and properties that aren't present anymore).

Note: This currently only supports auto paging. I haven't tried it with the button.

To use it, just add {ptype: 'paging'}

ListPaging.js.zip

Please keep in mind that I'm very new to Sencha Touch and ExtJS, so there is still quite a bit of room for improvement, but this is what I used to get it to work for me (well... I have a different namespace in my local app). Any feedback is greatly valued and appreciated, as I will be continuing development on this (albeit lightly) for a month or so.

Reply:
Thank you! Every little bit you work on something, especially something within the framework, the more you will learn! Currently the ListPaging plugin isn't fully updated to work with ST2, so your troubles with it are justified.

Reply (Glad to help):
Glad I could be of any assistance. I'll probably be doing a lot of development in Sencha Touch 2.0 in the next few months, so hopefully I can contribute more to the community than just something like this.

Reply (Sweet :)):
Sweet, thank you sir. Looking forward to trying it out. I finished watching your introductory speech at SenchaCon 2011 a few days ago; it was very insightful.

Reply:
I was on an equal amount of sleep when I wrote that grammatical catastrophe of a post regarding your speech at SenchaCon. Your speech was well done, and provided a good deal of insight that the current examples and docs don't quite provide. Looking forward to trying PR4 today.

Reply:
Cool. I just overhauled the MVC docs/guides for PR4, so hopefully most of the stuff from the talk translated across well. I'm sure I've missed things though, so let me know if anything is confusing.

Reply:
Using lists and AJAX proxies is a bit weak. I could probably help out with that, as I got it working cleanly (with my AutoPagination plugin, and the applied fix for loading ptype-aliased objects from the factory), but it won't be compatible with the PR4 release, just PR3.
By Rexford A. Nyarko

Technological innovations have made it very easy to own an online shop or e-commerce business and to integrate products or services into websites or mobile apps. One thing you need to ensure as an online seller is that you're offering your customers or users a range of payment options.

The Rapyd Collect API is a service offered by Rapyd that allows you to collect payments from customers online or within your app, including one-time, recurring, subscription, or business-to-business payments. It provides flexibility for your customers by offering access to almost any payment method in almost any country, without the need to set up or manage extra infrastructure.

In this article, you'll learn how to integrate payments into a Flutter app using the Rapyd Collect API via their Hosted Checkout Page. The only prerequisite for the integration is having a Rapyd account.

The example app used in this tutorial is a simple Flutter mobile application that allows users to donate money to a cause, built using a UI design from Dribbble artist Emir Abiyyu. As your use case might be different, this article focuses mainly on the integration, not on building the entire app from scratch. The complete source code can be found on GitHub.

Setting Up Your Rapyd Account

As stated earlier, the only prerequisite for integrating the Collect API into your application is a Rapyd account, so you first need to go through the simple process of setting one up. You'll need to verify your email by clicking the confirmation link sent to the email address you provided, and also complete multifactor authentication with a mobile number.

Enable Sandbox Mode

After you create your Rapyd account and log in, you have to enable sandbox mode. This provides a sandbox environment for exploring Rapyd's functions and products, including tests and simulations you can try without altering your account. To enable sandbox mode, flip the sandbox switch at the bottom of the main navigation on the left side of the dashboard, as shown in the image.

Access and Secret Keys

To use any of Rapyd's APIs, you need the access and secret keys as part of the authentication credentials. To find these keys, go to Developers > Credential Details in the navigation menu on the left of the dashboard. Copy and save them for use later in this tutorial.

Creating and Customizing a Rapyd Checkout Page

To ensure users have a seamless transition between your website and the checkout page, you should customize your Rapyd checkout page to match the theme of your website, for example by adding your company or app logo and thematic colors for the buttons. Rapyd provides basic customizations to help you achieve this. These can be found in the Settings menu in the left navigation of the dashboard, under Branding. There are various options to customize the look and feel of your checkout page, and Rapyd has an extensive guide to customizations.

For this tutorial, a logo was added, the payment options were changed to remove "bank redirect" and "cash", a color preference for the buttons was set to match the mobile app, and the wording of the call to action was changed from "Place your order" to "Donate". The following image shows the final mobile view after the changes.

Creating or Preparing the Flutter App for Integration

If you already have a Flutter application you want to work with, you can skip to the next section. If you prefer to follow along with the prebuilt example app for this tutorial, please continue reading.
Clone App

From your terminal or command prompt with Git installed, run the command below to clone the project files from GitHub:

$ git clone

Change Branch

The default branch for this repo is the dev branch, which contains the completed code for this tutorial. To follow along, you can switch to the basic branch, which contains only the basic app code without the Collect API integrated:

$ git checkout basic

Initial App Run

You can run the application on any supported connected device with the following command:

$ flutter run

You should see a user interface identical to the one below.

Understanding the Current Basic App

As you can see from the previous screenshots, the app consists of three main screens: the main or home screen, the Details screen, and the Donate screen. The app is not dynamic or data-driven; it's just a collection of widgets that build the UI from the artwork referenced earlier. The only functionality here is the ability to navigate between screens.

From the main screen, tapping on the main widget takes you to the Details screen, which presents more information about the cause. The only functional element on this screen is the Donate Now button at the bottom, which takes you to the Donate page.

The Donate screen provides four donation value options, which you can select by tapping, or you can directly enter an amount in the Enter Price Manually text box below the options. The Pay & Confirm button (currently just capturing the values) will later take you to the checkout screen, where you can see the Rapyd Collect API in action; however, the checkout page hasn't been created yet in this part of the tutorial, and it's the focus of the next section.

Generating the Rapyd Checkout Page

To start the integration, you first make a basic HTTP POST request to the Collect API to create the checkout page, and then display it. In this section, you'll write the code to make that request.

Creating a Class

Create a directory called payment under the lib directory of your project, and in that directory, create a new file rapyd.dart. In the following lines of code, replace the keys with the respective values from your dashboard, and place the code into the file to create a class and declare some constants:

```dart
import 'dart:convert';
import 'dart:math';

class Rapyd {
  // Declaring variables
  final String _ACCESS_KEY = "YOUR-ACCESS-KEY";
  final String _SECRET_KEY = "YOUR-SECRET-KEY";
  final String _BASEURL = "";

  final double amount;

  Rapyd(this.amount);
}
```

The constructor of this class requires amount to be passed to it. This will be the amount selected or typed on the Donate screen of the app.

Adding the Necessary Packages

You need to add the following packages to your app: http for making the request, crypto for the cryptographic algorithms, and convert for text-encoding functions.

Add the http package to your application:

$ flutter pub add http

Add the crypto package:

$ flutter pub add crypto

Add the convert package:

$ flutter pub add convert

Then add the imports for the packages to the top of the rapyd.dart file:

```dart
import 'package:convert/convert.dart';
import 'package:http/http.dart' as http;
import 'package:crypto/crypto.dart';
```

Generating Random String for Salt

According to the documentation, each request is accompanied by a random string of eight to sixteen characters containing digits, letters, and special characters, used as a salt.
This is achieved with the code below, which you paste into the class definition right after the constructor:

```dart
//1. Generating a random string of the given length as a salt for each request
String _getRandString(int len) {
  var values = List<int>.generate(len, (i) => Random.secure().nextInt(256));
  return base64Url.encode(values);
}
```

Building the Request Body

The body holds the key-value pairs that are passed along in the HTTP request and used to configure the checkout page you're creating. Paste the following snippet below the previous method:

```dart
//2. Generating the request body
Map<String, String> _getBody() {
  return <String, String>{
    "amount": amount.toString(),
    "currency": "USD",
    "country": "US",
    "complete_checkout_url": "",
    "cancel_checkout_url": ""
  };
}
```

Here, you specify the amount, currency, and country as required by the API. Two non-required options are also defined, cancel_checkout_url and complete_checkout_url, which determine the pages the user is redirected to when they cancel or complete the transaction. You can also define these from the Client Portal. Note that there's a complete list of options that can be specified when creating the checkout page.

Generating the Signature

The Rapyd API documentation describes securing requests by signing them with a signature generated from a number of values using encoding and cryptographic functions. Below is the code implementing each step:

```dart
//3. Generating the signature
String _getSignature(String httpMethod, String urlPath, String salt,
    String timestamp, String bodyString) {
  // Concatenating the values into one string, per the Rapyd documentation
  String sigString = httpMethod +
      urlPath +
      salt +
      timestamp +
      _ACCESS_KEY +
      _SECRET_KEY +
      bodyString;

  // Passing the concatenated string through HMAC with the SHA-256 algorithm
  Hmac hmac = Hmac(sha256, utf8.encode(_SECRET_KEY));
  Digest digest = hmac.convert(utf8.encode(sigString));
  var ss = hex.encode(digest.bytes);

  // Base64-encoding the result and returning it
  return base64UrlEncode(ss.codeUnits);
}
```

Building the Headers

The values generated earlier are combined into a set of request headers used to authenticate and secure the request:

```dart
//4. Generating headers
Map<String, String> _getHeaders(String urlEndpoint, {String body = ""}) {
  // Generate a random string of length 16
  String salt = _getRandString(16);

  // Calculating the current unix timestamp in seconds
  String timestamp = (DateTime.now().toUtc().millisecondsSinceEpoch / 1000)
      .round()
      .toString();

  // Generating the signature for the request according to the docs
  String signature = _getSignature("post", urlEndpoint, salt, timestamp, body);

  // Returning a map containing the headers and generated values
  return <String, String>{
    "access_key": _ACCESS_KEY,
    "signature": signature,
    "salt": salt,
    "timestamp": timestamp,
    "Content-Type": "application/json",
  };
}
```

Making the Request to Create the Checkout Page

Finally, you write the function that makes the HTTP request. When successful, the method returns a Map with the created checkout page's details. If the request fails, the function throws an error with a Map of error details from the API service. Both are used later in this tutorial:

```dart
//5. Making the POST request
Future<Map> createCheckoutPage() async {
  final responseURL = Uri.parse("$_BASEURL/v1/checkout");
  final String body = jsonEncode(_getBody());

  // Making the POST request with headers and body
  var response = await http.post(
    responseURL,
    headers: _getHeaders("/v1/checkout", body: body),
    body: body,
  );

  Map repBody = jsonDecode(response.body) as Map;

  // Return the data if the request was successful
  if (response.statusCode == 200) {
    return repBody["data"] as Map;
  }

  // Throw an error if the request was unsuccessful
  throw repBody["status"] as Map;
}
```

Retrieving and Displaying the Rapyd Checkout Page

After successfully creating the checkout page, you need to display it. You can let the user complete the process in a web browser on the same device running the application, or you can embed a web view in the application, which provides a seamless feel and more control, and keeps the user in your app.

Create the CheckoutScreen Widget

In the screens directory inside the lib directory of your project, there's an empty file called CheckoutScreen.dart. In this file, use the following snippet to create a stateful widget that accepts an instance of the Rapyd class and returns a Scaffold widget:

```dart
import 'package:donation/payment/rapyd.dart';
import 'package:donation/widgets/back_button.dart';
import 'package:flutter/material.dart';
import 'package:webview_flutter/webview_flutter.dart';

class CheckOutScreen extends StatefulWidget {
  final Rapyd rapyd;

  const CheckOutScreen({super.key, required this.rapyd});

  @override
  State<StatefulWidget> createState() {
    return _CheckOutScreenState();
  }
}

class _CheckOutScreenState extends State<CheckOutScreen> {
  @override
  Widget build(BuildContext context) {
    return Scaffold(
      resizeToAvoidBottomInset: true,
      appBar: AppBar(
        leading: CustomBackButton(),
        title: const Align(
          child: Text("Checkout",
              textAlign: TextAlign.center,
              style: TextStyle(
                  color: Colors.black, fontWeight: FontWeight.bold)),
        ),
        backgroundColor: Colors.white,
        actions: const [
          SizedBox(
            width: 55,
          ),
        ],
        elevation: 0,
      ),
    );
  }
}
```

Initialize Request to Create Checkout Page

In the _CheckOutScreenState class, declare a Future<Map> variable and initialize it by calling the createCheckoutPage() method of the Rapyd object passed to this widget. This means that once the CheckoutScreen is launched, the request to create the checkout page is executed and the result is stored in the createdCheckoutPage variable:

```dart
...
late Future<Map> createdCheckoutPage;

@override
void initState() {
  super.initState();
  createdCheckoutPage = widget.rapyd.createCheckoutPage();
}
...
```

Adding Navigation to the Open Checkout Page

Once you've created the basic widgets for the checkout screen, you need to modify the button on the DonateScreen widget so that, when clicked, it also navigates to the CheckoutScreen widget, passing the value to be paid in an instance of the Rapyd class as a parameter. To do this, first add the following imports to the lib/screens/DonateScreen.dart file:

```dart
import 'package:donation/payment/rapyd.dart';
import 'package:donation/screens/CheckoutScreen.dart';
```

Next, in the same file, replace the proceed method of the DonateScreen widget class with the code in the following snippet:

```dart
void proceed(context) {
  if (selected == 0) {
    selectedAmount = double.parse(controller.text);
  }
  Rapyd rapyd = Rapyd(selectedAmount);
  Navigator.push(context,
      MaterialPageRoute(builder: (context) => CheckOutScreen(rapyd: rapyd)));
}
```

In the same file, you should also modify the definition of the MaterialButton widget for the Pay & Confirm button by passing the context to the proceed method call, as seen below:

```dart
...
onPressed: () {
  proceed(context);
},
...
```
Using a FutureBuilder

At this point, users see no indication of the process running to create the checkout page when the checkout screen is launched. Since the API request may take a few seconds, it’s good practice to display a busy or loading status. You can use a FutureBuilder to show the user a progress indicator until the method call completes and returns a result. The FutureBuilder also enables you to show an appropriate widget based on the returned result: if the request is successful, a web page is loaded to continue the process, and if the request is unsuccessful, the user is shown an error widget. Add the following to the body of the Scaffold widget:

```dart
body: FutureBuilder(
  future: createdCheckoutPage,
  builder: (BuildContext context, AsyncSnapshot<dynamic> snapshot) {
    switch (snapshot.connectionState) {
      case ConnectionState.waiting:
        return const Center(child: CircularProgressIndicator());
      default:
        if (snapshot.hasError) {
          return const Center(child: Text('Some error occurred!'));
        } else {
          return Text("Success");
        }
    }
  },
)
```

Adding the webview_flutter Package

To display a web page within the app without navigating outside the app, you use a web view plug-in, which can be added with the following command:

```
$ flutter pub add "webview_flutter"
```

For Android builds, you’re required to set the minimum minSdkVersion in android/app/build.gradle to 19, as seen below:

```
android {
    defaultConfig {
        minSdkVersion 19 //previously: flutter.minSdkVersion
    }
}
```

Displaying the Page in a Webview

Now add the WebView widget to display the checkout page by passing it the redirect_url value from the successful response of the request made earlier. This can be done by replacing the return Text("Success"); line with the following:

```dart
return WebView(
  initialUrl: snapshot.data["redirect_url"].toString(),
  javascriptMode: JavascriptMode.unrestricted,
);
```

Returning to the App Donate Page from the Webview

A user may decide to complete the payment or cancel it. Once they cancel or complete the payment, you need to return to the DonateScreen. You can use the onPageStarted parameter of the WebView widget to detect changes to the URL during a redirect from the checkout page: if the URL contains the value of cancel_checkout_url or complete_checkout_url, the app exits the webview. The onPageStarted parameter should look like the following:

```dart
onPageStarted: (url) {
  //Exit the webview once the current url matches either the completed or canceled checkout urls
  if (url.contains(snapshot.data["complete_checkout_url"])) {
    Navigator.pop(context);
  } else if (url.contains(snapshot.data["cancel_checkout_url"])) {
    Navigator.pop(context);
  }
},
```
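For reference, here is how the pieces above might fit together inside the FutureBuilder's success branch. This is a sketch that assumes the webview_flutter 3.x WebView API used throughout this tutorial; it simply combines the initialUrl, javascriptMode, and onPageStarted fragments shown separately above:

```dart
if (snapshot.hasError) {
  return const Center(child: Text('Some error occurred!'));
} else {
  return WebView(
    // URL of the hosted checkout page created by the Rapyd API
    initialUrl: snapshot.data["redirect_url"].toString(),
    javascriptMode: JavascriptMode.unrestricted,
    onPageStarted: (url) {
      // Leave the webview once Rapyd redirects to either final URL.
      if (url.contains(snapshot.data["complete_checkout_url"]) ||
          url.contains(snapshot.data["cancel_checkout_url"])) {
        Navigator.pop(context);
      }
    },
  );
}
```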
Running the App

Now, using either a virtual or physically connected device, you can run the app using the following command:

```
$ flutter run
```

You should now have a functioning app with the Rapyd Collect API integrated and working, and a hosted checkout page.

Conclusion

In this tutorial, you learned how to accept payments from users in your Flutter app using the Rapyd Collect API. You also learned how to customize your checkout page with elements like your logo, colors, and payment options. You saw how to generate the signature and headers needed to make secure, authenticated requests to the Rapyd API. Finally, you learned how to display the checkout web page within your app, exit the web page seamlessly, and return to your app screens.

The Rapyd Collect API is a payment solution that gives your customers and app users a wide range of ways to pay for your products and services across the globe. Integration is simple, secure, and flexible.
ffprobe [options] [‘input_file’]

ffprobe gathers information from multimedia streams and prints it in human- and machine-readable fashion. Its output consists of one or more sections, whose form is defined by the writer selected with the ‘print_format’ option. Sections may contain other nested sections, and are identified by a name (which may be shared by other sections) and a unique name. See the output of the ‘sections’ option. Metadata tags stored in the container or in the streams are recognized and printed in the corresponding "FORMAT", "STREAM" or "PROGRAM_STREAM" section.

Set the log level.

Force format to use.

Set the output printing format. writer_name specifies the name of the writer, and writer_options specifies the options to be passed to the writer. For example, for printing the output in JSON format, specify: For more details on the available output printing formats, see the Writers section below.

Print sections structure and section information, and exit. The output is not meant to be parsed by a machine.

Select only the streams specified by stream_specifier. This option affects only the options related to streams (e.g. show_streams, show_packets, etc.). For example, to show only audio streams, you can use the command: To show only video packets belonging to the video stream with index 1:

Show payload data, as a hexadecimal and ASCII dump. Coupled with ‘-show_packets’, it will dump the packets’ data. Coupled with ‘-show_streams’, it will dump the codec extradata. The dump is printed as the "data" field. It may contain newlines.

Show information about the error found when trying to probe the input. The error information is printed within a section with name "ERROR".

This option is deprecated, use show_entries instead. Set a list of entries to show. For example, to show only the index and type of each stream, and the PTS time, duration time, and stream index of the packets, you can specify the argument: To show all the entries in the section "format", but only the codec type in the section "stream", specify the argument: To show all the tags in the stream and format sections: To show only the title tag (if available) in the stream sections:

Show information about each packet contained in the input multimedia stream. The information for each single packet is printed within a dedicated section with name "PACKET".

Show information about each frame and subtitle contained in the input multimedia stream. The information for each single frame is printed within a dedicated section with name "FRAME" or "SUBTITLE".

Show information about each media stream contained in the input multimedia stream. Each media stream's information is printed within a dedicated section with name "STREAM".

Show information about programs and their streams contained in the input multimedia stream. Each media stream's information is printed within a dedicated section with name "PROGRAM_STREAM".

Show information about chapters stored in the format. Each chapter is printed within a dedicated section with name "CHAPTER".

Count the number of frames per stream and report it in the corresponding stream section.

Count the number of packets per stream and report it in the corresponding stream section.

Read only the specified intervals. A start or end point prefixed with ‘+’ is interpreted as an offset relative to the current position; otherwise it is interpreted as an absolute position. A few examples follow: seek to position 01:30 (1 minute and thirty seconds) and read packets until position 01:45; read packets after seeking to position 01:23; read from the start until position 02:30.

Show private data, that is data depending on the format of the particular shown element. This option is enabled by default, but you may need to disable it for specific uses, for example when creating XSD-compliant XML output.

Show information related to program version.
Version information is printed within a section with name "PROGRAM_VERSION".

Show information related to library versions. Version information for each library is printed within a section with name "LIBRARY_VERSION".

Show information related to program and library versions. This is the equivalent of setting both ‘-show_program_version’ and ‘-show_library_versions’ options.

Force bitexact output, useful to produce output which is not dependent on the specific build.

Read input_file.

Set string validation mode. The following values are accepted. The writer will fail immediately in case an invalid string (UTF-8) sequence or code point is found in the input. This is especially useful to validate input metadata. Any validation error will be ignored. This will result in possibly broken output, especially with the json or xml writer. The writer will substitute invalid UTF-8 sequences or code points with the string specified with the ‘string_validation_replacement’. Default value is ‘replace’.

Set replacement string to use in case ‘string_validation’ is set to ‘replace’. In case the option is not specified, the writer will assume the empty string, that is it will remove the invalid sequences from the input strings.

A description of the currently available writers follows.

Default format. Print each section in the form: Metadata tags are printed as a line in the corresponding FORMAT, STREAM or PROGRAM_STREAM section, and are prefixed by the string "TAG:". A description of the accepted options follows. If set to 1 specify not to print the key of each field. Default value is 0. If set to 1 specify not to print the section header and footer. Default value is 0.

Compact and CSV format. The csv writer is equivalent to compact, but supports different defaults. Each section is printed on a single line. If no option is specified, the output has the form: Metadata tags are printed in the corresponding "format" or "stream" section. A metadata tag key, if printed, is prefixed by the string "tag:". The description of the accepted options follows. Specify the character to use for separating fields in the output line. It must be a single printable character; it is "|" by default ("," for the csv writer). If set to 1 specify not to print the key of each field. Its default value is 0 (1 for the csv writer). Set the escape mode to use, default to "c" ("csv" for the csv writer). It can assume one of the following values: perform C-like escaping; perform CSV-like escaping, as described in RFC4180, where strings containing a newline (’\n’), a carriage return (’\r’), a double quote (’"’), or SEP are enclosed in double-quotes; perform no escaping. Print the section name at the begin of each line if the value is 1, disable it with value set to 0. Default value is 1. Separator character used to separate the chapter, the section name, IDs and potential tags in the printed field key. Default value is ‘.’.

INI format output. Print output in an INI based format. The following conventions are adopted: This writer accepts options as a list of key=value pairs, separated by ":". The description of the accepted options follows.

JSON based format. Each section is printed using JSON notation. The description of the accepted options follows. If set to 1 enable compact output, that is each section will be printed on a single line. Default value is 0. For more information about JSON, see http://www.json.org/.
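As an illustration of the JSON writer from the caller's side, here is a short Python sketch (not part of the FFmpeg manual itself); it assumes ffprobe is on the PATH and that a file named media.mp4 exists:

```python
import json
import subprocess

# Run ffprobe with the JSON writer; -v error keeps the log output quiet.
cmd = [
    "ffprobe", "-v", "error",
    "-print_format", "json",
    "-show_format", "-show_streams",
    "media.mp4",  # hypothetical input file
]
out = subprocess.run(cmd, capture_output=True, check=True, text=True).stdout
info = json.loads(out)

# The "format" and "streams" sections map directly to JSON objects.
print("container duration:", info["format"]["duration"])
for stream in info["streams"]:
    print(stream["index"], stream["codec_type"], stream.get("codec_name"))
```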
XML based format. The XML output is described in the XML schema description file ‘ffprobe.xsd’. The output can be validated using the ‘ffprobe.xsd’ schema only when no special global output options (‘unit’, ‘prefix’, ‘byte_binary_prefix’, ‘sexagesimal’ etc.) are specified. The description of the accepted options follows. If set to 1 specify if the output should be fully qualified. Default value is 0. This is required for generating an XML file which can be validated through an XSD file. If set to 1 perform more checks for ensuring that the output is XSD compliant. Default value is 0. This option automatically sets ‘fully_qualified’ to 1. For more information about the XML format, see http://www.w3.org/XML/.

ffprobe supports Timecode extraction.

This section documents the syntax and formats employed by the FFmpeg libraries and tools. A ’ itself cannot be quoted, so you may need to close the quote and escape it. Note that you may need to add a second level of escaping when using the command line or a script, which depends on the syntax of the adopted shell language. The function av_get_token defined in ‘libavutil/avstring.h’ can be used to parse a token quoted or escaped according to the rules defined above. The tool ‘tools/ffescape’ in the FFmpeg source tree can be used to automatically quote or escape a string in a script. For example, the string Crime d'Amour contains the ’ special character, which needs to be escaped when quoting it. To include a literal \ you can use either escaping or quoting.

The accepted syntax is: If the value is "now" it takes the current time. Time is local time unless Z is appended, in which case it is interpreted as UTC. If the year-month-day part is not specified it takes the current year-month-day.

There are two accepted syntaxes for expressing time duration. In the first, HH expresses the number of hours, MM the number of minutes for a maximum of 2 digits, and SS the number of seconds for a maximum of 2 digits; the m at the end expresses the decimal part of SS. In the second, S expresses the number of seconds, with the optional decimal part m. In both expressions, the optional ‘-’ indicates negative duration. The following examples are all valid time durations: ‘55’ (55 seconds), ‘12:03:45’ (12 hours, 03 minutes and 45 seconds), ‘23.189’ (23.189 seconds).
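To make the duration syntax above concrete, here is a small, illustrative Python parser for the two accepted forms. This is a sketch, not FFmpeg code, and it is deliberately a little more permissive than the spec about digit counts:

```python
import re

def parse_duration(s: str) -> float:
    """Parse an FFmpeg-style duration: [-][HH:]MM:SS[.m...] or [-]S+[.m...]."""
    m = re.fullmatch(r'(-?)(?:(?:(\d{1,2}):)?(\d{1,2}):)?(\d+(?:\.\d+)?)', s)
    if m is None:
        raise ValueError(f"invalid duration: {s!r}")
    sign, hh, mm, ss = m.groups()
    seconds = float(ss) + 60 * int(mm or 0) + 3600 * int(hh or 0)
    return -seconds if sign else seconds

# The three examples from the text, plus a negative duration.
for example in ("55", "12:03:45", "23.189", "-02:30"):
    print(example, "->", parse_duration(example))
```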
Specify the size of the sourced video; it may be a string of the form widthxheight, or the name of a size abbreviation. The following abbreviations are recognized:

ntsc: 720x480
pal: 720x576
qntsc: 352x240
qpal: 352x288
sntsc: 640x480
spal: 768x576
film: 352x240
ntsc-film: 352x240
2kflat: 1998x1080
2kscope: 2048x858
4k: 4096x2160
4kflat: 3996x2160
4kscope: 4096x1716
nhd: 640x360
hqvga: 240x160
wqvga: 400x240
fwqvga: 432x240
hvga: 480x320
qhd: 960x540

The same abbreviation names also stand for the following frame rates:

ntsc: 30000/1001
pal: 25/1
qntsc: 30000/1001
qpal: 25/1
sntsc: 30000/1001
spal: 25/1
film: 24/1
ntsc-film: 24000/1001

The following color values are recognized:

0xF0F8FF 0xFAEBD7 0x00FFFF 0x7FFFD4 0xF0FFFF 0xF5F5DC 0xFFE4C4 0x000000 0xFFEBCD 0x0000FF 0x8A2BE2 0xA52A2A 0xDEB887 0x5F9EA0 0x7FFF00 0xD2691E 0xFF7F50 0x6495ED 0xFFF8DC 0xDC143C 0x00FFFF 0x00008B 0x008B8B 0xB8860B 0xA9A9A9 0x006400 0xBDB76B 0x8B008B 0x556B2F 0xFF8C00 0x9932CC 0x8B0000 0xE9967A 0x8FBC8F 0x483D8B 0x2F4F4F 0x00CED1 0x9400D3 0xFF1493 0x00BFFF 0x696969 0x1E90FF 0xB22222 0xFFFAF0 0x228B22 0xFF00FF 0xDCDCDC 0xF8F8FF 0xFFD700 0xDAA520 0x808080 0x008000 0xADFF2F 0xF0FFF0 0xFF69B4 0xCD5C5C 0x4B0082 0xFFFFF0 0xF0E68C 0xE6E6FA 0xFFF0F5 0x7CFC00 0xFFFACD 0xADD8E6 0xF08080 0xE0FFFF 0xFAFAD2 0x90EE90 0xD3D3D3 0xFFB6C1 0xFFA07A 0x20B2AA 0x87CEFA 0x778899 0xB0C4DE 0xFFFFE0 0x00FF00 0x32CD32 0xFAF0E6 0xFF00FF 0x800000 0x66CDAA 0x0000CD 0xBA55D3 0x9370D8 0x3CB371 0x7B68EE 0x00FA9A 0x48D1CC 0xC71585 0x191970 0xF5FFFA 0xFFE4E1 0xFFE4B5 0xFFDEAD 0x000080 0xFDF5E6 0x808000 0x6B8E23 0xFFA500 0xFF4500 0xDA70D6 0xEEE8AA 0x98FB98 0xAFEEEE 0xD87093 0xFFEFD5 0xFFDAB9 0xCD853F 0xFFC0CB 0xDDA0DD 0xB0E0E6 0x800080 0xFF0000 0xBC8F8F 0x4169E1 0x8B4513 0xFA8072 0xF4A460 0x2E8B57 0xFFF5EE 0xA0522D 0xC0C0C0 0x87CEEB 0x6A5ACD 0x708090 0xFFFAFA 0x00FF7F 0x4682B4 0xD2B48C 0x008080 0xD8BFD8 0xFF6347 0x40E0D0 0xEE82EE 0xF5DEB3 0xFFFFFF 0xF5F5F5 0xFFFF00 0x9ACD32

A channel layout specifies the spatial disposition of the channels in a multi-channel audio stream. To specify a channel layout, FFmpeg makes use of a special syntax. Individual channels are identified by an id, as given by the table below:

FL: front left
FR: front right
FC: front center
LFE: low frequency
BL: back left
BR: back right
FLC: front left-of-center
FRC: front right-of-center
BC: back center
SL: side left
SR: side right
TC: top center
TFL: top front left
TFC: top front center
TFR: top front right
TBL: top back left
TBC: top back center
TBR: top back right
DL: downmix left
DR: downmix right
WL: wide left
WR: wide right
SDL: surround direct left
SDR: surround direct right
LFE2: low frequency 2

Standard channel layout compositions can be specified by using the following identifiers:

mono: FC
stereo: FL+FR
2.1: FL+FR+LFE
3.0: FL+FR+FC
3.0(back): FL+FR+BC
4.0: FL+FR+FC+BC
quad: FL+FR+BL+BR
quad(side): FL+FR+SL+SR
3.1: FL+FR+FC+LFE
5.0: FL+FR+FC+BL+BR
5.0(side): FL+FR+FC+SL+SR
4.1: FL+FR+FC+LFE+BC
5.1: FL+FR+FC+LFE+BL+BR
5.1(side): FL+FR+FC+LFE+SL+SR
6.0: FL+FR+FC+BC+SL+SR
6.0(front): FL+FR+FLC+FRC+SL+SR
hexagonal: FL+FR+FC+BL+BR+BC
6.1: FL+FR+FC+LFE+BC+SL+SR
6.1(back): FL+FR+FC+LFE+BL+BR+BC
6.1(front): FL+FR+LFE+FLC+FRC+SL+SR
7.0: FL+FR+FC+BL+BR+SL+SR
7.0(front): FL+FR+FC+FLC+FRC+SL+SR
7.1: FL+FR+FC+LFE+BL+BR+SL+SR
7.1(wide): FL+FR+FC+LFE+BL+BR+FLC+FRC
7.1(wide-side): FL+FR+FC+LFE+FLC+FRC+SL+SR
octagonal: FL+FR+FC+BL+BR+BC+SL+SR
downmix: DL+DR

A custom channel layout can be specified as a sequence of terms, separated by ’+’ or ’|’. Each term can be: the name of a standard channel layout; the name of a single channel; a number of channels in decimal (see av_get_default_channel_layout); or a channel layout mask in hexadecimal (see the AV_CH_* macros in ‘libavutil/channel_layout.h’).

When evaluating an arithmetic expression, FFmpeg uses an internal formula evaluator, implemented through the ‘libavutil/eval.h’ interface. The following functions are available:

Compute absolute value of x.
Compute arccosine of x.
Compute arcsine of x.
Compute arctangent of x.
Return 1 if x is greater than or equal to min and lesser than or equal to max, 0 otherwise.
Round the value of expression expr upwards to the nearest integer. For example, "ceil(1.5)" is "2.0".
Compute cosine of x.
Compute hyperbolic cosine of x.
Return 1 if x and y are equivalent, 0 otherwise.
Compute exponential of x (with base e, the Euler’s number).
Round the value of expression expr downwards to the nearest integer.
For example, "floor(-1.5)" is "-2.0". Compute Gauss function of x, corresponding to exp(-x*x/2) / sqrt(2*PI). Return the greatest common divisor of x and y. If both x and y are 0 or either or both are less than zero then behavior is undefined. Return 1 if x is greater than y, 0 otherwise. Return 1 if x is greater than or equal to y, 0 otherwise. This function is similar to the C function with the same name; it returns "sqrt(x*x + y*y)", the length of the hypotenuse of a right triangle with sides of length x and y, or the distance of the point (x, y) from the origin. Evaluate x, and if the result is non-zero return the result of the evaluation of y, return 0 otherwise. Evaluate x, and if the result is non-zero return the evaluation result of y, otherwise the evaluation result of z. Evaluate x, and if the result is zero return the result of the evaluation of y, return 0 otherwise. Evaluate x, and if the result is zero return the evaluation result of y, otherwise the evaluation result of z. Return 1.0 if x is +/-INFINITY, 0.0 otherwise. Return 1.0 if x is NAN, 0.0 otherwise. Allow to load the value of the internal variable with number var, which was previously stored with st(var, expr). The function returns the loaded value. Compute natural logarithm of x. Return 1 if x is lesser than y, 0 otherwise. Return 1 if x is lesser than or equal to y, 0 otherwise. Return the maximum between x and y. Return the maximum between x and y. Compute the remainder of division of x by y. Return 1.0 if expr is zero, 0.0 otherwise. Compute the power of x elevated y, it is equivalent to "(x)^(y)". Print the value of expression t with loglevel l. If l is not specified then a default log level is used. Returns the value of the expression printed. Prints t with loglevel l Return a pseudo random value between 0.0 and 1.0. x is the index of the internal variable which will be used to save the seed/state.. Compute sine of x. Compute hyperbolic sine of x. Compute the square root of expr. This is equivalent to "(expr)^.5". Compute expression 1/(1 + exp(4*x)). Allow to. Compute tangent of x. Compute hyperbolic tangent of x.. Return the current (wallclock) time in seconds. Round the value of expression expr towards zero to the nearest integer. For example, "trunc(-1.5)" is "-1.0". Evaluate expression expr while the expression cond is non-zero, and returns the value of the last expr evaluation, or NAN if cond was always false. The following constants are available: area of the unit disc, approximately 3.14 exp(1) (Euler’s number), approximately 2.718 golden ratio (1+sqrt(5))/2, approximately 1.618 Assuming that an expression is considered "true" if it has a non-zero value, note that: * works like AND + works like OR For example the construct: is equivalent. 10^-24 / 2^-80 10^-21 / 2^-70 10^-18 / 2^-60 10^-15 / 2^-50 10^-12 / 2^-40 10^-9 / 2^-30 10^-6 / 2^-20 10^-3 / 2^-10 10^-2 10^-1 10^2 10^3 / 2^10 10^3 / 2^10 10^6 / 2^20 10^9 / 2^30 10^12 / 2^40 10^15 / 2^40 10^18 / 2^50 10^21 / 2^60 10^24 / 2^70 When FFmpeg is configured with --enable-opencl, it is possible to set the options for the global OpenCL context. The list of supported options follows: Set build options used to compile the registered kernels. See reference "OpenCL Specification Version: 1.2 chapter 5.6.4". Select the index of the platform to run OpenCL code. The specified index must be one of the indexes in the device list which can be obtained with ffmpeg -opencl_bench or av_opencl_get_device_list(). Select the index of the device used to run OpenCL code. 
The specified index must be one of the indexes in the device list which can be obtained with ffmpeg -opencl_bench or av_opencl_get_device_list().

libavcodec provides some generic global options, which can be set on all the encoders and decoders. In addition each codec may support so-called private options, which are specific to a given codec. Sometimes, a global option may only affect a specific kind of codec, and may be nonsensical or ignored by another; options may also be set through the ‘libavutil/opt.h’ API for programmatic use. The list of supported options follows:

Set bitrate in bits/s. Default value is 200K.

Set audio bitrate (in bits/s). Default value is 128K.

Set generic flags. Possible values: use four motion vectors per macroblock (mpeg4); use 1/4 pel motion compensation; use loop filter; use fixed qscale; use gmc; always try a mb with mv=<0,0>; use internal 2pass ratecontrol in first pass mode; use internal 2pass ratecontrol in second pass mode; only decode/encode grayscale; do not draw edges; set error[?] variables during encoding; normalize adaptive quantization; use interlaced DCT; force low delay; place global headers in extradata instead of every keyframe; use only bitexact stuff (except (I)DCT); apply H263 advanced intra coding / mpeg4 ac prediction; deprecated, use mpegvideo private options instead; deprecated, use mpegvideo private options instead; apply interlaced motion estimation; use closed gop.

Set motion estimation method. Possible values: zero motion estimation (fastest); full motion estimation (slowest); EPZS motion estimation (default); esa motion estimation (alias for full); tesa motion estimation; dia motion estimation (alias for epzs); log motion estimation; phods motion estimation; X1 motion estimation; hex motion estimation; umh motion estimation; iter motion estimation.

Set extradata size.

Set codec time base. It is the fundamental unit of time (in seconds) in terms of which frame timestamps are represented. For fixed-fps content, timebase should be 1 / frame_rate and timestamp increments should be identically 1.

Set the group of picture size. Default value is 12.

Set audio sampling rate (in Hz).

Set number of audio channels.

Set cutoff bandwidth.

Set the frame number.

Set video quantizer scale compression (VBR). It is used as a constant in the ratecontrol equation. Recommended range for default rc_eq: 0.0-1.0.

Set video quantizer scale blur (VBR).

Set min video quantizer scale (VBR). Must be included between -1 and 69, default value is 2.

Set max video quantizer scale (VBR). Must be included between -1 and 1024, default value is 31.

Set max difference between the quantizer scale (VBR).

Set max number of B frames between non-B-frames. Must be an integer between -1 and 16. 0 means that B-frames are disabled. If a value of -1 is used, it will choose an automatic value depending on the encoder. Default value is 0.

Set qp factor between P and B frames.

Set ratecontrol method.

Set strategy to choose between I/P/B-frames.

Set RTP payload size in bytes.

Workaround not auto detected encoder bugs. Possible values: some old lavc generated msmpeg4v3 files (no autodetection); Xvid interlacing bug (autodetected if fourcc==XVIX); (autodetected if fourcc==UMP4); padding bug (autodetected); illegal vlc bug (autodetected per fourcc); old standard qpel (autodetected per fourcc/version); direct-qpel-blocksize bug (autodetected per fourcc/version); edge padding bug (autodetected per fourcc/version); workaround various bugs in microsoft broken decoders;
truncated frames.

Set single coefficient elimination threshold for luminance (negative values also consider DC coefficient).

Set single coefficient elimination threshold for chrominance (negative values also consider DC coefficient).

Specify how strictly to follow the standards. Possible values: strictly conform to an older, more strict version of the spec or reference software; strictly conform to all the things in the spec no matter what consequences; allow unofficial extensions; allow non-standardized experimental things, experimental (unfinished/work in progress/not well tested) decoders and encoders. Note: experimental decoders can pose a security risk, do not use this for decoding untrusted input.

Set QP offset between P and B frames.

Set error detection flags. Possible values: verify embedded CRCs; detect bitstream specification deviations; detect improper bitstream length; abort decoding on minor error detection; ignore decoding errors, and continue decoding (this is useful if you want to analyze the content of a video and thus want everything to be decoded no matter what; this option will not result in a video that is pleasing to watch in case of errors); consider things that violate the spec and have not been seen in the wild as errors; consider all spec non-compliances as errors; consider things that a sane encoder should not do as an error.

Use MPEG quantizers instead of H.263.

How to keep quantizer between qmin and qmax (0 = clip, 1 = use differentiable function).

Set experimental quantizer modulation.

Set experimental quantizer modulation.

Set max bitrate tolerance (in bits/s). Requires bufsize to be set.

Set min bitrate tolerance (in bits/s). Most useful in setting up a CBR encode. It is of little use otherwise.

Set ratecontrol buffer size (in bits).

Currently useless.

Set QP factor between P and I frames.

Set QP offset between P and I frames.

Set initial complexity for 1-pass encoding.

Set DCT algorithm. Possible values: autoselect a good one (default); fast integer; accurate integer; floating point AAN DCT.

Compress bright areas stronger than medium ones.

Set temporal complexity masking.

Set spatial complexity masking.

Set inter masking.

Compress dark areas stronger than medium ones.

Select IDCT implementation. Possible values: automatically pick an IDCT compatible with the simple one; floating point AAN IDCT.

Set error concealment strategy. Possible values: iterative motion vector (MV) search (slow); use strong deblock filter for damaged MBs; favor predicting from the previous frame instead of the current.

Set prediction method. Possible values:

Set sample aspect ratio.

Print specific debug info. Possible values: picture info; rate control; macroblock (MB) type; per-block quantization parameter (QP); motion vector; error recognition; memory management control operations (H.264); visualize quantization parameter (QP), lower QP are tinted greener; visualize block types; picture buffer allocations; threading operations.

Visualize motion vectors (MVs). Possible values: forward predicted MVs of P-frames; forward predicted MVs of B-frames; backward predicted MVs of B-frames.

Set full pel me compare function. Set sub pel me compare function. Set macroblock compare function. Set interlaced DCT compare function. Set diamond type/size for motion estimation.

Set amount of motion predictors from the previous frame.

Set pre motion estimation.

Set pre motion estimation pre-pass.

Set sub pel motion estimation quality.

Set limit motion vectors range (1023 for DivX player).

Set intra quant bias.

Set inter quant bias.

Possible values: variable length coder / huffman coder; arithmetic coder; raw (no encoding); run-length coder; deflate-based coder.

Set context model.
Set macroblock decision algorithm (high quality mode). Possible values: use mbcmp (default); use fewest bits; use best rate distortion.

Set scene change threshold.

Set min lagrange factor (VBR).

Set max lagrange factor (VBR).

Set noise reduction.

Set number of bits which should be loaded into the rc buffer before decoding starts.

Possible values: allow non-spec-compliant speedup tricks; deprecated, use mpegvideo private options instead; skip bitstream encoding; ignore cropping information from sps; place global headers at every keyframe instead of in extradata; frame data might be split into multiple chunks; show all frames before the first keyframe; deprecated, use mpegvideo private options instead; deprecated, use mpegvideo private options instead.

Possible values: detect a good number of threads.

Set motion estimation threshold.

Set macroblock threshold.

Set intra_dc_precision.

Set nsse weight.

Set number of macroblock rows at the top which are skipped.

Set number of macroblock rows at the bottom which are skipped.

Possible values:

Possible values:

Decode at 1= 1/2, 2=1/4, 3=1/8 resolutions.

Set frame skip threshold.

Set frame skip factor.

Set frame skip exponent. Negative values behave identically to the corresponding positive ones, except that the score is normalized. Positive values exist primarily for compatibility reasons and are not so useful.

Set frame skip compare function.

Increase the quantizer for macroblocks close to borders.

Set min macroblock lagrange factor (VBR).

Set max macroblock lagrange factor (VBR).

Set motion estimation bitrate penalty compensation (1.0 = 256).

Make the decoder discard processing depending on the frame type selected by the option value. ‘skip_loop_filter’ skips frame loop filtering, ‘skip_idct’ skips frame IDCT/dequantization, ‘skip_frame’ skips decoding. Possible values: discard no frame; discard useless frames like 0-sized frames; discard all non-reference frames; discard all bidirectional frames; discard all frames except keyframes; discard all frames. Default value is ‘default’.

Refine the two motion vectors used in bidirectional macroblocks.

Downscale frames for dynamic B-frame decision.

Set minimum interval between IDR-frames.

Set reference frames to consider for motion compensation.

Set chroma qp offset from luma.

Set rate-distortion optimal quantization.

Set value multiplied by qscale for each frame and added to scene_change_score.

Adjust sensitivity of b_frame_strategy 1.

Set GOP timecode frame start number, in non-drop-frame format.

Set desired number of audio channels.

Possible values:

Possible values:

Set the log level offset.

Number of slices, used in parallelized encoding.

Select which multithreading methods to use. Use of ‘frame’ will increase decoding delay by one frame per thread, so clients which cannot provide future frames should not use it. Possible values: decode more than one part of a single frame at once (multithreading using slices works only when the video was encoded with slices); decode more than one frame at once. Default value is ‘slice+frame’.

Set audio service type. Possible values: Main Audio Service; Effects; Visually Impaired; Hearing Impaired; Dialogue; Commentary; Emergency; Voice Over; Karaoke.

Set sample format audio decoders should prefer. Default value is none.

Set the input subtitles character encoding.

Set/override the field order of the video.
Possible values: progressive video; interlaced video, top field coded and displayed first; interlaced video, bottom field coded and displayed first; interlaced video, top coded first, bottom displayed first; interlaced video, bottom coded first, top displayed first.

Set to 1 to disable processing alpha (transparency). This works like the ‘gray’ flag in the ‘flags’ option which skips chroma information instead of alpha. Default is 0.

Decoders are configured elements in FFmpeg which allow the decoding of multimedia streams. A description of some of the currently available video decoders follows.

Raw video decoder. This decoder decodes rawvideo streams. Specify the assumed field type of the input video: the video is assumed to be progressive (default); bottom-field-first is assumed; top-field-first is assumed.

Internal wave synthesizer. This decoder generates wave patterns according to predefined sequences. Its use is purely internal and the format of the data it accepts is not publicly documented.

libgsm decoder wrapper.

libilbc decoder wrapper. Enable the enhancement of the decoded audio when set to 1. The default value is 0 (disabled).

libopencore-amrnb decoder wrapper.

libopencore-amrwb decoder wrapper.

libopus decoder wrapper. libopus allows libavcodec to decode the Opus Interactive Audio Codec. Requires the presence of the libopus headers and library during configuration. You need to explicitly configure the build with --enable-libopus.

This codec decodes the bitmap subtitles used in DVDs; the same subtitles can also be found in VobSub file pairs and in some Matroska files.

List of teletext page numbers to decode. You may use the special * string to match all pages. Pages that do not match the specified list are dropped. Default value is *.

Discards the top teletext line. Default value is 1.

Specifies the format of the decoded subtitles. The teletext decoder is capable of decoding the teletext pages to bitmaps or to simple text; you should use "bitmap" for teletext pages, because certain graphics and colors cannot be expressed in simple text. You might use "text" for teletext-based subtitles if your application can handle simple text-based subtitles. Default value is bitmap.

X offset of generated bitmaps, default is 0.

Y offset of generated bitmaps, default is 0.

Chops leading and trailing spaces and removes empty lines from the generated text. This option is useful for teletext-based subtitles where empty spaces may be present at the start or at the end of the lines, or empty lines may be present between the subtitle lines because of double-sized teletext characters. Default value is 1.

Sets the display duration of the decoded teletext pages or subtitles in milliseconds. Default value is 30000, which is 30 seconds.

Force transparent background of the generated teletext bitmaps. Default value is 0, which means an opaque (black) background.

Remove zero padding at the end of a packet.

Convert MJPEG/AVI1 packets to full JPEG/JFIF packets. MJPEG is a video codec wherein each video frame is essentially a JPEG image. The individual frames can be extracted without loss, e.g. by:

Damages the contents of packets without damaging the container. Can be used for fuzzing or testing error resilience/concealment.

Possible values: Shift timestamps to make them non-negative. Also note that this affects only leading negative timestamps, and not non-monotonic negative timestamps. Shift timestamps so that the first timestamp is 0. Enables shifting when required by the target format. Disables shifting of timestamps.
When shifting is enabled, all output timestamps are shifted by the same amount. Audio, video, and subtitle desynching and relative timestamp differences are preserved compared to how they would have been without shifting.

Set the output time offset. offset must be a time duration specification; see (ffmpeg-utils)time duration syntax. The offset is added by the muxer to the output timestamps. Specifying a positive offset means that the corresponding streams are delayed by the time duration specified in offset. Default value is 0 (meaning that no offset is applied).

Protocols are configured elements in FFmpeg that enable access to resources that require specific protocols.

Read BluRay playlist. The accepted options are: BluRay angle; start chapter (1...N); playlist to read (BDMV/PLAYLIST/?????.mpls). Examples: Read longest playlist from BluRay mounted to /mnt/bluray: Read angle 2 of playlist 4 from BluRay mounted to /mnt/bluray, start from chapter 2:

Caching wrapper for input stream. Cache the input stream to a temporary file. It brings seeking capability to live streams.

Physical concatenation protocol: read and seek from many resources in sequence as if they were a unique resource. To read a sequence of files with ffplay use the command: Note that you may need to escape the character "|" which is special for many shells.

AES-encrypted stream reading protocol. The accepted options are: Set the AES decryption key binary block from given hexadecimal representation. Set the AES decryption initialization vector binary block from given hexadecimal representation.

Accepted URL formats:

Data in-line in the URI. See http://en.wikipedia.org/wiki/Data_URI_scheme. For example, to convert a GIF file given inline with ffmpeg:
Ruby/GNUstep (aka "RIGS") is a Ruby interface to the GNUstep development environment and indirectly to the Objective C language. It allows you to quickly prototype GNUstep applications from Ruby.

LICENSE: LGPL2 or later (framework)
LICENSE: GPL2 or later (examples)

Author: Laurent Julliard <[email protected]>
WWW:

NOTE: FreshPorts displays only information on required and default dependencies. Optional dependencies are not covered.

No installation instructions: this port has been deleted. The package name of this deleted port was: ruby18-gnustep

No options to configure

Number of commits found: 53

- retire devel/ruby-gnustep
- mark DEPRECATED
- reset MAINTAINER
- mark BROKEN
- Mark as broken with Ruby 1.9 Approved by: portmgr
- drop MD5 add LICENSE_COMB LICENSE GPLv2
- fix build for custom configurations cleanup pkg-plist after gnustep-make
- fix build with gnustep-make-1.13.0
- use more of bsd.gnustep.mk
- make portlint happier
- use @dirrmtry
- fix build with gcc4.1
- fix MASTER_SITE_SUBDIR
- change path on MASTER_SITE
- update to 0.2.2
- cleanup Makefile and use bsd.gnustep.mk
- fix build with gcc34
- add SHA checksum
- take MAINTAINERSHIP
- With portmgr hat on, reset maintainership of knu's ports since he has been inactive more than 6 months. We hope to see him back sometime.
- cleanup Makefile location Approved by: knu (implicit)
- update to gnustep-back 0.10 Approved by: knu (implicit)
- At Kris's request, back out the MACHINE_ARCH spelling correction until after 5.4-RELEASE.
- Assist getting more ports working on AMD64 by obeying the Ports Collection documentation and use 'ARCH' rather than 'MACHINE_ARCH'. ;-)
- cleanup obsolete defs
- support for GNUSTEP_WITH_BASE_GCC Approved by: (implicit by knu)
- add SIZE
- Bump PORTREVISION on all ports that depend on gettext to aid with upgrading. (Part 2)
- Use the FIND and XARGS macros introduced in bsd.port.mk 1.391.
- Layout for GnuSTEP 1.8.0
- Update for GNUstep 1.7.3
- preserve MAKE_ENV Approved by: knu (implicit)
- flat layout Approved by: knu
- make more dependencys configureable Approved by: knu
- Add WITH_GNUSTEP_DEVEL
- resolve namespace confict Approved by: knu
- use lang/gnustep-objc so libobjc.so is be used PR: 50479 Approved by: knu
- drop gnustep-objc in favor of gcc32 in STABLE Approved by: knu
- De-pkg-comment.
- Add dependency to ffcall (was previous supplied with gnustep-objc) Approved by: knu
- unbreak INDEX (was still broken)
- migrate dependency gnustep-xgps -> gnustep-back
- don't depend on gnustep-objc in CURRENT
- Fix PLIST as we are here PR: 47351 Approved by: knu
- Catch up with the Ruby Application Archive's URL scheme change.
- Use RUBY_MOD*.
- Make the port build with the latest GNUstep port. (Although I'm not sure if it works 100% functionally)
- Update to 0.2.1.
- Update to 0.2.0.
- Add ruby-gnustep, a Ruby interface to the GNUstep development environment.
In this tutorial we will learn the basics of developing a Java application using Couchbase.

Getting the JAR Files

Get the latest Java client JAR files from here. You can get the JAR files from the Maven repository if you are using the Maven build system. Otherwise, you can download the ZIP file and get the JAR files from there. In client API version 1.3.2, you will need to add these JAR files to your class path and build path:

- commons-codec-1.5.jar
- couchbase-client-1.2.0.jar
- httpcore-4.1.1.jar
- httpcore-nio-4.1.1.jar
- jettison-1.1.jar
- netty-3.5.5.Final.jar
- spymemcached-2.10.0.jar

In addition, you will need a JSON library that can serialize Java objects into JSON and back. I recommend using Gson. This, you can download from here. Gson needs only one JAR file, for example: gson-2.2.4.jar.

If you are writing a web application, add all of these JAR files to the WEB-INF/lib folder.

Opening a Connection

The steps to open a new connection are as follows:

```java
List<URI> hosts = Arrays.asList(new URI(""));

// Name of the Bucket to connect to
String bucket = "default";

// Password of the bucket (empty string if none)
String password = "";

// Connect to the cluster
CouchbaseClient client = new CouchbaseClient(hosts, bucket, password);
```

To close the connection:

```java
client.shutdown();
```

Connection Pooling?

There is no need for connection pooling with Couchbase. The client library uses multiplexing: a single socket is used to serve multiple different channels of communication. A web application can open a single connection and use it from multiple threads. Couchbase doesn’t have any notion of a transaction, so the operations performed by one thread don’t need to be distinguished from the operations of another thread.

If you are using CDI, you can create a singleton class to simplify connection management.

```java
@ApplicationScoped
public class ConnectionManager {
    Logger logger = Logger.getLogger(getClass().getName());
    private CouchbaseClient client;

    @PostConstruct
    public void init() {
        try {
            logger.info("Opening couchbase connection.");
            List<URI> hosts = Arrays.asList(new URI(""));
            // Name of the Bucket to connect to
            String bucket = "default";
            // Password of the bucket (empty string if none)
            String password = "";
            // Connect to the cluster
            client = new CouchbaseClient(hosts, bucket, password);
        } catch (Exception e) {
            client = null;
            throw new IllegalStateException(e);
        }
    }

    @PreDestroy
    public void destroy() {
        logger.info("Closing couchbase connection.");
        if (client != null) {
            client.shutdown();
            client = null;
        }
    }

    public CouchbaseClient getClient() {
        return client;
    }
}
```

Getting the connection from any CDI-managed class, including Servlets, JSF managed beans, and EJBs, becomes easy.

```java
@WebServlet("/DemoServlet")
public class DemoServlet extends HttpServlet {
    private static final long serialVersionUID = 1L;

    @Inject
    ConnectionManager cm;

    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        CouchbaseClient c = cm.getClient();
        // No need to close the connection here.
    }
}
```

You don’t have to close the connection. The connection stays open for the life of the web application. The @PreDestroy method of the ConnectionManager class will close the connection when the application shuts down.

Storing Documents

In our example, we will use the following document syntax:
```
{
  "id": "ABC1234",
  "type": "Story",
  "user": "billybob",
  "genre": "ghost",
  "text": "It was a dark and stormy night..."
}
{
  "id": "MNO567",
  "type": "Story",
  "user": "billybob",
  "genre": "adventure",
  "text": "The ship sailed at night..."
}
{
  "id": "XYZ890",
  "type": "Story",
  "user": "jane13",
  "genre": "ghost",
  "text": "The house stood at the end of a narrow road..."
}
```

The corresponding Java class will be:

```java
public class Story implements Serializable {
    private static final long serialVersionUID = 2137881606866283873L;

    private String id;
    private String type;
    private String user;
    private String genre;
    private String text;

    // Getters and setters....
}
```

A couple of things to note here:

- Every document in Couchbase must have a unique ID. This ID is stored separate from the document itself and you don’t have to include the ID within the document. I find it convenient to see the ID within the document. This is why I have the “id” field within the document as well as in the Java class.
- I highly recommend adding a type field that describes the nature of the document. This serves the same purpose as a table name in a relational database: it is a namespace. Couchbase doesn’t have a notion of table names. To distinguish different types of documents, the type field comes in handy. I usually set the type to the same name as the Java class – “Story” in our case.
- The Java class needs to be serializable for Gson to serialize objects into JSON and back. If you have a variable that should not be serialized, mark it as transient.

We add (insert) a new Story document like this:

```java
CouchbaseClient c = ...;

Story s = new Story();
s.setId(UUID.randomUUID().toString());
s.setGenre("ghost");
s.setType("Story");
s.setUser("billybob");
s.setText("It was a dark and stormy night...");

Gson gson = new Gson();
String json = gson.toJson(s);
c.add(s.getId(), json);
```

The add() method is used to insert a new document. The replace() method is used to update an existing document; if the document does not exist, replace() fails. There is also the set() method, which implicitly does an add() if the document doesn’t exist and otherwise does a replace(). To delete a document, call the delete() method of the client.

Looking Up a Document

You can look up a document's JSON string using the get() method, then convert the JSON into a Java object.

```java
CouchbaseClient c = ...;

Gson gson = new Gson();
String json = (String) c.get("1ac900aa-6092-4a4a-958b-bb08f8076ebf");
Story s = gson.fromJson(json, Story.class);
```

That’s all for today. I will do another tutorial on querying using views.
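Before the next installment, here is one more small sketch covering the update and delete calls mentioned above. The document ID is hypothetical, OperationFuture comes from net.spy.memcached.internal, and exception handling is omitted:

```java
CouchbaseClient c = ...;
Gson gson = new Gson();

// Update an existing Story (fails if the ID is not present).
// The second argument is the expiration in seconds; 0 means never expire.
String id = "1ac900aa-6092-4a4a-958b-bb08f8076ebf"; // hypothetical ID
String json = (String) c.get(id);
Story s = gson.fromJson(json, Story.class);
s.setGenre("adventure");
OperationFuture<Boolean> updated = c.replace(id, 0, gson.toJson(s));

// Delete the document; block on the future to confirm the result.
OperationFuture<Boolean> deleted = c.delete(id);
System.out.println("updated=" + updated.get() + ", deleted=" + deleted.get());
```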
Created on 2012-09-14 15:05 by dabeaz, last changed 2015-08-08 11:45 by skrah. This issue is now closed. I've been playing with the interaction of ctypes and memoryviews and am curious about intended behavior. Consider the following: >>> import ctypes >>> d = ctypes.c_double() >>> m = memoryview(d) >>> m.ndim 0 >>> m.shape () >>> m.readonly False >>> m.itemsize 8 >>> As you can see, you have a memory view for the ctypes double object. However, the fact that it has a 0-dimension and no shape seems to cause all sorts of weird behavior. For instance, indexing and slicing don't work: >>> m[0] Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: invalid indexing of 0-dim memory >>> m[:] Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: invalid indexing of 0-dim memory >>> As such, you can't really seem to do anything interesting with the resulting memory view. For example, you can't pull data out of it. Nor can you overwrite the contents (i.e., replacing the contents with an 8-byte byte string). Attempting to cast the memory view to something else doesn't work either. >>> d = ctypes.c_double() >>> m = memoryview(d) >>> m2 = m.cast('c') Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: memoryview: source format must be a native single character format prefixed with an optional '@' >>> I must be missing something really obvious here. Is there no way to get access to the memory behind a ctypes object? You can still read the underlying representation: >>> d = ctypes.c_double(0.6) >>> m = memoryview(d) >>> bytes(m) b'333333\xe3?' >>> d.value = 0.7 >>> bytes(m) b'ffffff\xe6?' I don't want to read the representation by copying it into a bytes object. I want direct access to the underlying memory--including the ability to modify it. As it stands now, it's completely useless. 0-dim memory is indexed by x[()]. The ctypes example has an additional problem, because format="<d" is not yet implemented in memoryview. Only native single character formats in struct module syntax are implemented, and "<d" in struct module syntax means "standard size, little endian". To demonstrate 0-dim indexing, here's an example using _testbuffer: >>> x = ndarray(3.14, shape=[], format='d', flags=ND_WRITABLE) >>> x[()] 3.14 >>> tau = 6.28 >>> x[()] = tau >>> x[()] 6.28 >>> m = memoryview(x) >>> m[()] 6.28 >>> m[()] = 100.111 >>> m[()] 100.111 BTW, if c_double means "native machine double", then ctypes should fill in Py_buffer.format with "d" and not "<d" in order to be PEP-3118 compatible. Even with the <d format, I'm not sure why it can't be cast to simple byte-view. None of that seems to work at all. The decision was made in order to be able to cast back and forth between known formats. Otherwise one would be able to cast from '<d' to 'B' but not from 'B' to '<d'. Python 3.4 will have support for all formats in struct module syntax, but all non-native formats will be *far* slower than the native ones. You can still pack/unpack directly using the struct module: >>> import ctypes, struct >>> d = ctypes.c_double() >>> m = memoryview(d) >>> struct.pack_into(m.format, m, 0, 22.7) >>> struct.unpack_from(m.format, m, 0)[0] 22.7 I don't think memoryviews should be imposing any casting restrictions at all. It's low level. Get out of the way. So you want to be able to segfault the core interpreter using the builtins? 
No, I want to be able to access the raw bytes sitting behind a memoryview as bytes without all of this casting and reinterpretation. Just show me the raw bytes. Not doubles, not ints, not structure packing, not copying into byte strings, or whatever. Is this really impossible? It sure seems so. Just to be specific, why is something like this not possible? >>> d = ctypes.c_double() >>> m = memoryview(d) >>> m[0:8] = b'abcdefgh' >>> d.value 8.540883223036124e+194 >>> (Doesn't have to be exactly like this, but what's wrong with overwriting bytes with bytes of a compatible size?). I should add that 0-dim indexing doesn't work as described either: >>> import ctypes >>> d = ctypes.c_double() >>> m = memoryview(d) >>> m[()] Traceback (most recent call last): File "<stdin>", line 1, in <module> NotImplementedError: memoryview: unsupported format <d >>> Please read msg170482. It even contains a copy and paste example!. There's probably a bigger discussion about memoryviews for a rainy day. However, the number one thing that would save all of this in my book would be to make sure cast('B') is universally supported regardless of format including endianness--especially in the standard library. For example, being able to do this: >>> a = array.array('d',[1.0, 2.0, 3.0, 4.0]) >>> m = memoryview(a).cast('B') >>> m[0:4] = b'\x00\x01\x02\x03' >>> a array('d', [1.0000000112050316, 2.0, 3.0, 4.0]) >>> Right now, it doesn't work for ctypes. For example: >>> import ctypes >>> a = (ctypes.c_double * 4)(1,2,3,4) >>> a <__main__.c_double_Array_4 object at 0x1006a7cb0> >>> m = memoryview(a).cast('B') Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: memoryview: source format must be a native single character format prefixed with an optional '@' >>> As some background, being able to work with a "byte" view of memory is important for a lot of problems involving I/O, data interchange, and related problems where being able to accurately construct/deconstruct the underlying memory buffers is more useful than actually interpreting their contents. One followup note---I think it's fine to punt on cast('B') if the memoryview is non-contiguous. That's a rare case that's probably not as common. We? You don't need `raw=True`, `.cast('b')` already must do this. But unfortunately, is is not implemented yet. In my experience, I tend to only use memoryview() for “bytes-like” buffers (but see Issue 23756 about clarifying what this means). Example from /Lib/_compression.py:67: def readinto(self, b): with memoryview(b) as view, view.cast("B") as byte_view: data = self.read(len(byte_view)) byte_view[:len(data)] = data return len(data) Fixing cast("B") or adding a memoryview(raw=True) mode could probably help when all you want is a byte buffer. A. Yuriy: cast() does not do this. What's requested is that e.g. a single float is represented as a bytes object instead of a float. Thus, you'd be able to do: m[0] = b'\x00\x00\x00\x01' This has other implications, for example, two NaNs would compare equal. Hence the suggestion memoryview(raw=True). Here is a patch that allows any “C-contiguous” memoryview() to be cast to a byte view. Apart from the test that was explicitly checking that this wasn’t supported, the rest of the test suite still passes. I basically removed the check that was generating the “source format must be a native single character” error. If two NANs are represented by the same byte sequence, I would expect their byte views to compare equal, which is the case with my patch. 
The question is whether we want this behavior. Assuming. The proposal sounds reasonable to me. If people are content with writing m[124:128] = b'abcd' and accept that tolist() etc. won't represent the original structure of the object, then let's do it. On the bright side, it is less work. -- I'll review the patch.. Ok, shall we sneak this past Larry for 3.5? Why not :) New changeset e33f2b8b937f by Stefan Krah in branch '3.5': Issue #15944: memoryview: Allow arbitrary formats when casting to bytes. New changeset c7c4b8411037 by Stefan Krah in branch 'default': Merge #15944. Done. Thanks for the patch.
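For anyone landing here later: with the changesets above merged, the byte-level access requested at the top of the issue works on Python 3.5 and newer. A minimal sketch, using the same example value discussed in the thread:

```python
import ctypes

d = ctypes.c_double()
m = memoryview(d).cast('B')   # now allowed even though d's format is '<d'

# Overwrite the raw bytes of the double in place.
m[:] = b'abcdefgh'
print(d.value)    # 8.540883223036124e+194 on little-endian IEEE-754 machines

# Read the raw bytes back.
print(bytes(m))   # b'abcdefgh'
```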
Daisuke Oyama
Faculty of Economics, University of Tokyo

This notebook demonstrates via a simple example how to use the DiscreteDP module.

```python
from __future__ import division, print_function
import numpy as np
from scipy import sparse
from quantecon.markov import DiscreteDP
```

Let us consider the following two-state dynamic program, taken from Puterman (2005), Section 3.1, pp.33-35; see also Example 6.2.1, pp.155-156.

There are two possible states, $0$ and $1$. At state $0$, you may choose either "stay", say action $0$, or "move", action $1$. At state $1$, there is no way to move, so that you can only stay, i.e., $0$ is the only available action. (You may alternatively distinguish between the action "stay" at state $0$ and that at state $1$, and call the latter action $2$; but here we choose to refer to both actions as action $0$.)

At state $0$, if you choose action $0$ (stay), then you receive a reward $5$, and in the next period the state will remain at $0$ with probability $1/2$, but it moves to $1$ with probability $1/2$. If you choose action $1$ (move), then you receive a reward $10$, and the state in the next period will be $1$ with probability $1$. At state $1$, where the only action you can take is $0$ (stay), you receive a reward $-1$, and the state will remain at $1$ with probability $1$.

You want to maximize the sum of discounted expected reward flows with discount factor $\beta \in [0, 1)$.

The optimization problem consists of:

- the state space: $S = \{0, 1\}$;
- the action space: $A = \{0, 1\}$;
- the set of feasible state-action pairs $\mathit{SA} = \{(0, 0), (0, 1), (1, 0)\} \subset S \times A$;
- the reward function $r\colon \mathit{SA} \to \mathbb{R}$, where
  $$r(0, 0) = 5,\ r(0, 1) = 10,\ r(1, 0) = -1;$$
- the transition probability function $q \colon \mathit{SA} \to \Delta(S)$, where
  $$\begin{aligned} &(q(0 | 0, 0), q(1 | 0, 0)) = (1/2, 1/2), \\ &(q(0 | 0, 1), q(1 | 0, 1)) = (0, 1), \\ &(q(0 | 1, 0), q(1 | 1, 0)) = (0, 1); \end{aligned}$$
- the discount factor $\beta \in [0, 1)$.

The Bellman equation for this problem is:

$$\begin{aligned} v(0) &= \max \left\{5 + \beta \left(\frac{1}{2} v(0) + \frac{1}{2} v(1)\right), 10 + \beta v(1)\right\}, \\ v(1) &= (-1) + \beta v(1). \end{aligned}$$

This problem is simple enough to solve by hand: the optimal value function $v^*$ is given by

$$\begin{aligned} &v(0) = \begin{cases} \dfrac{5 - 5.5 \beta}{(1 - 0.5 \beta) (1 - \beta)} & \text{if $\beta > \frac{10}{11}$} \\ \dfrac{10 - 11 \beta}{1 - \beta} & \text{otherwise}, \end{cases}\\ &v(1) = -\frac{1}{1 - \beta}, \end{aligned}$$

and the optimal policy function $\sigma^*$ is given by

$$\begin{aligned} &\sigma^*(0) = \begin{cases} 0 & \text{if $\beta > \frac{10}{11}$} \\ 1 & \text{otherwise}, \end{cases}\\ &\sigma^*(1) = 0. \end{aligned}$$

```python
def v_star(beta):
    v = np.empty(2)
    v[1] = -1 / (1 - beta)
    if beta > 10/11:
        v[0] = (5 - 5.5*beta) / ((1 - 0.5*beta) * (1 - beta))
    else:
        v[0] = (10 - 11*beta) / (1 - beta)
    return v
```

We want to solve this problem numerically by using the DiscreteDP class. We will set $\beta = 0.95$ ($> 10/11$), for which the analytical solution is $\sigma^* = (0, 0)$ and

```python
v_star(beta=0.95)
```
```
array([ -8.57142857, -20.        ])
```

There are two ways to represent the data for instantiating a DiscreteDP object. Let $n$, $m$, and $L$ denote the numbers of states, actions, and feasible state-action pairs, respectively; in the above example, $n = 2$, $m = 2$, and $L = 3$.
- DiscreteDP(R, Q, beta) with parameters R, Q, and beta, where R[s, a] is the reward for action a when the state is s and Q[s, a, s'] is the probability that the state in the next period is s' when the current state is s and the action chosen is a.
- DiscreteDP(R, Q, beta, s_indices, a_indices) with parameters R, Q, beta, s_indices, and a_indices, where the pairs (s_indices[0], a_indices[0]), ..., (s_indices[L-1], a_indices[L-1]) enumerate feasible state-action pairs, R[i] is the reward for action a_indices[i] when the state is s_indices[i], and Q[i, s'] is the probability that the state in the next period is s' when the current state is s_indices[i] and the action chosen is a_indices[i].

Let us illustrate the two formulations by the simple example at the outset.

The first formulation is straightforward when the number of feasible actions is constant across states, so that the set of feasible state-action pairs is naturally represented by the product $S \times A$, while any problem can actually be represented in this way by defining the reward R[s, a] to be $-\infty$ when action a is infeasible under state s.

To apply this approach to the current example, we consider the effectively equivalent problem in which at both states $0$ and $1$, both actions $0$ (stay) and $1$ (move) are available, but action $1$ yields a reward $-\infty$ at state $1$.

The reward array R is an $n \times m$ 2-dimensional array:

```python
R = [[5, 10],
     [-1, -float('inf')]]
```

The transition probability array Q is an $n \times m \times n$ 3-dimensional array:

```python
Q = [[(0.5, 0.5), (0, 1)],
     [(0, 1), (0.5, 0.5)]]  # Probabilities in Q[1, 1] are arbitrary
```

Note that the transition probabilities for action $(s, a) = (1, 1)$ are arbitrary, since $a = 1$ is infeasible at $s = 1$ in the original problem.

Let us set the discount factor $\beta$ to be $0.95$:

```python
beta = 0.95
```

We are ready to create a DiscreteDP instance:

```python
ddp = DiscreteDP(R, Q, beta)
```

When the number of feasible actions varies across states, it can be inefficient in terms of memory usage to extend the domain by treating infeasible actions as "feasible but yielding reward $-\infty$". The second formulation takes the set of feasible state-action pairs as is, defining R to be a 1-dimensional array of length L and Q to be a 2-dimensional array of shape (L, n), where L is the number of feasible state-action pairs.

First, we have to list all the feasible state-action pairs. For our example, they are: $(s, a) = (0, 0), (0, 1), (1, 0)$.

We have arrays s_indices and a_indices of length $3$ that contain the indices of states and actions, respectively:

```python
s_indices = [0, 0, 1]  # State indices
a_indices = [0, 1, 0]  # Action indices
```

The reward vector R is a length $L$ 1-dimensional array:

```python
# Rewards for (s, a) = (0, 0), (0, 1), (1, 0), respectively
R = [5, 10, -1]
```

The transition probability array Q is an $L \times n$ 2-dimensional array:

```python
# Probability vectors for (s, a) = (0, 0), (0, 1), (1, 0), respectively
Q = [(0.5, 0.5), (0, 1), (0, 1)]
```

For the discount factor, set $\beta = 0.95$ as before:

```python
beta = 0.95
```

Now create a DiscreteDP instance:

```python
ddp_sa = DiscreteDP(R, Q, beta, s_indices, a_indices)
```

Importantly, this formulation allows us to represent the transition probability array Q as a scipy.sparse matrix (of any format), which is useful for large and sparse problems.
For example, let us convert the above ndarray Q to the Coordinate (coo) format:

import scipy.sparse
Q = scipy.sparse.coo_matrix(Q)

Pass it to DiscreteDP with the other parameters:

ddp_sparse = DiscreteDP(R, Q, beta, s_indices, a_indices)

Internally, the matrix Q is converted to the Compressed Sparse Row (csr) format:

ddp_sparse.Q
<3x2 sparse matrix of type '<class 'numpy.float64'>'
    with 4 stored elements in Compressed Sparse Row format>

ddp_sparse.Q.toarray()
array([[ 0.5,  0.5],
       [ 0. ,  1. ],
       [ 0. ,  1. ]])

Now let us solve our model. Currently, DiscreteDP supports the following solution algorithms: policy iteration, value iteration, and modified policy iteration. (The methods are the same across the formulations.)

We solve the model first by policy iteration, which gives the exact solution:

v_init = [0, 0]  # Initial value function, optional (default=max_a r(s, a))
res = ddp.solve(method='policy_iteration', v_init=v_init)

res contains the information about the solution result:

res
method: 'policy iteration'
max_iter: 250
mc: Markov chain with transition matrix
P =
[[ 0.5  0.5]
 [ 0.   1. ]]
sigma: array([0, 0])
v: array([ -8.57142857, -20.        ])
num_iter: 2

The optimal policy function:

res.sigma
array([0, 0])

The optimal value function:

res.v
array([ -8.57142857, -20.        ])

This coincides with the analytical solution:

v_star(beta)
array([ -8.57142857, -20.        ])

np.allclose(res.v, v_star(beta))
True

The number of iterations:

res.num_iter
2

Verify that the value of the policy [0, 0] is actually equal to the optimal value v:

ddp.evaluate_policy(res.sigma)
array([ -8.57142857, -20.        ])

ddp.evaluate_policy(res.sigma) == res.v
array([ True,  True], dtype=bool)

res.mc is the controlled Markov chain given by the optimal policy [0, 0]:

res.mc
Markov chain with transition matrix
P =
[[ 0.5  0.5]
 [ 0.   1. ]]

Next, solve the model by value iteration, which returns an $\varepsilon$-optimal solution for a specified value of $\varepsilon$:

epsilon = 1e-2   # Convergence tolerance, optional (default=1e-3)
v_init = [0, 0]  # Initial value function, optional (default=max_a r(s, a))
res_vi = ddp.solve(method='value_iteration', v_init=v_init, epsilon=epsilon)

res_vi
method: 'value iteration'
max_iter: 250
mc: Markov chain with transition matrix
P =
[[ 0.5  0.5]
 [ 0.   1. ]]
sigma: array([0, 0])
v: array([ -8.5665053 , -19.99507673])
epsilon: 0.01
num_iter: 162

The computed policy function res_vi.sigma is an $\varepsilon$-optimal policy, and the value function res_vi.v is an $\varepsilon/2$-approximation of the true optimal value function.

np.abs(v_star(beta) - res_vi.v).max()
0.0049232745189442539

Finally, solve the model by modified policy iteration:

epsilon = 1e-2   # Convergence tolerance, optional (default=1e-3)
v_init = [0, 0]  # Initial value function, optional (default=max_a r(s, a))
res_mpi = ddp.solve(method='modified_policy_iteration', v_init=v_init, epsilon=epsilon)

res_mpi
mc: Markov chain with transition matrix
P =
[[ 0.5  0.5]
 [ 0.   1. ]]
sigma: array([0, 0])
v: array([ -8.57142826, -19.99999965])
epsilon: 0.01
max_iter: 250
method: 'modified policy iteration'
k: 20
num_iter: 3

Modified policy iteration also returns an $\varepsilon$-optimal policy function and an $\varepsilon/2$-approximate value function:

np.abs(v_star(beta) - res_mpi.v).max()
3.4711384344632279e-07
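To wrap up, the three solvers can be compared against the analytical solution in one pass. This is a minimal sketch, not part of the original notebook, reusing the result objects and the v_star function defined above:

for r in [res, res_vi, res_mpi]:
    # Each result exposes the method name, iteration count, and value function.
    print(r.method, r.num_iter, np.abs(v_star(beta) - r.v).max())

Policy iteration reaches the exact solution in the fewest iterations here, while value iteration needs many more sweeps to meet the same tolerance; modified policy iteration sits in between.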
http://nbviewer.jupyter.org/github/QuantEcon/QuantEcon.notebooks/blob/master/ddp_intro_py.ipynb
From: David Abrahams (dave_at_[hidden]) Date: 2005-10-12 23:20:54 "Andreas Huber" <ahd6974-spamgroupstrap_at_[hidden]> writes: > - that HTML is somewhat standard for boost documentation. Yep. > - that prepending identifiers with underscores makes your code > non-portable. Only if they're in the global namespace or followed by a capital letter, right? > - to add swap(). That's a nice thing to do. > - to provide the strong exception guarantee for all functions. That's bad advice, in general. Provide the basic guarantee for all functions. Provide the strong guarantee where possible without loss of efficiency. > - to put everything in established standard directories (e.g. doc, > test). Yep. > - to add bjam files for you tests. Yep. > Feel free to have a look at code, docs & tests of other boost libraries > to get a feeling what is commonly considered sufficient. Good idea. -- Dave Abrahams Boost Consulting Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2005/10/95366.php
Eclipse Foundation Releases Eclipse Photon IDE

The Eclipse Foundation has released the latest version of the Eclipse IDE. Eclipse Photon brings support for both Java 10 and Java EE 8, improvements for PHP development tools, dark theme UI improvements, and more.

Java 10 is completely supported for Eclipse Java Development Tools (JDT), and enables developers to use local variable type inference (JEP 286), such as suggesting code completion where var is allowed and Quick Assist to convert from an explicit type to var.

Eclipse Photon has also added a feature to convert a non-modular Java project to a module by creating a module-info.java. Developers can also create a module by pasting a code snippet that represents module-info.java directly into a source folder to create a module-info.java file. The following code can be copy-pasted to demonstrate this:

import java.sql.Driver;

module hello {
    exports org.example;
    requires java.sql;
    provides Driver with org.example.DriverImpl;
}

The Java Editor has been improved in Eclipse Photon in several ways. The Java syntax coloring has been improved when using the dark theme by reducing bold style usage and changing some colors that were too close to each other. Furthermore, it is now possible to escape non-ASCII characters when pasting into a string literal. To enable it, click Java > Editor > Typing > Escape text when pasting into a string literal and check Use Unicode escape syntax for non-ASCII characters. Non-ASCII characters are then replaced by Unicode escape sequences when pasted into a string.

Eclipse Photon enables developers to sort library entries alphabetically in the Package Explorer; to enable it, open the Java > Appearance preference page and check Sort library entries alphabetically in Package Explorer.

The Java Compiler in Eclipse Photon contains a new preference named "Compiler Compliance does not match used JRE", which indicates the severity of the problem reported when a project uses a JRE that does not match the compiler compliance level selected (e.g. a project using JRE 1.8 as JRE System Library, while the compiler compliance is set to 1.7). In addition, there is new support for running Java annotation processing on test sources, and an experimental feature was added to allow regex usage when searching for module declarations.

The Java Formatter profile preference page was simplified, with all preferences presented in an expandable tree instead of multiple tabs. To see it, open Java > Code Style > Formatter > Edit. There is a new option "align Javadoc tags in columns" under Comments > Javadoc.
One of the new choices, "Align descriptions, grouped by type", lines up the descriptions of Javadoc tags in columns, grouped by tag type.

Debugging with Eclipse Photon has become more productive, thanks to a series of new features:

- Advanced source lookup provides correct source lookup when the runtime class path is not known in advance
- Launch configuration prototypes for Java launch configurations
- The debugger now listens to thread name changes: the Java debugger adds a new breakpoint in the JVM and notifies the Debug view when it is hit
- Values are now displayed for method exit and exception breakpoints: the last method result (per return or throw) observed during Step Into, Step Over or Step Return is shown as the first line in the Variables view
- In the Breakpoints view, a new sort-by option has been added to sort by age

The PHP Development Tools have received a series of enhancements, such as validation support for unused/unassigned variables, validation for scalars in break/continue, and validation of static operations for PHP 7 or higher. Additionally, the PHP Explorer has been replaced by the Project Explorer. A complete list of PHP features is available in the PHP section on the Eclipse Photon New and Noteworthy page.

According to the Eclipse Foundation, this release includes 85 projects with more than 73 million lines of code, with contributions by 620 developers, 246 of whom are Eclipse committers.

More information about Eclipse Photon can be found on the Eclipse Photon New and Noteworthy page. Eclipse Photon can be downloaded from the Eclipse downloads page.
https://www.infoq.com/news/2018/07/eclipse-photon
Basic Language Processing with NLTK

In this post, we explore some basic text processing using the Natural Language Toolkit (NLTK). We will be grabbing the most popular nouns from a list of text documents.

We will need to start by downloading a couple of NLTK packages for language processing. punkt is used for tokenising sentences and averaged_perceptron_tagger is used for tagging words with their parts of speech (POS). We also need to add this directory to the NLTK data path.

import os
import nltk

# Create NLTK data directory
NLTK_DATA_DIR = './nltk_data'
if not os.path.exists(NLTK_DATA_DIR):
    os.makedirs(NLTK_DATA_DIR)
nltk.data.path.append(NLTK_DATA_DIR)

# Download packages and store in directory above
nltk.download('punkt', download_dir=NLTK_DATA_DIR)
nltk.download('averaged_perceptron_tagger', download_dir=NLTK_DATA_DIR)

True

The documents to be analysed here are in the docs folder; we see that there are 6 text documents.

DOCS_DIR = './docs'
files = os.listdir(DOCS_DIR)
files

['doc1.txt', 'doc2.txt', 'doc3.txt', 'doc4.txt', 'doc5.txt', 'doc6.txt']

For now we will analyse the first document, doc1.txt.

import io

path = os.path.join(DOCS_DIR, 'doc1.txt')
with io.open(path, encoding='utf-8') as f:
    text_from_file = ' '.join(f.read().splitlines())

text_from_file now holds the whole corpus of text from doc1.txt. If we tokenise this block of text by sentences, we are left with a list of sentences.

# 5th sentence
sentences = nltk.sent_tokenize(text_from_file)
fifth_sentence = sentences[4]
print(fifth_sentence)

In the face of despair, you believe there can be hope.

We can also tokenise the words within the sentence:

# 3rd word from 5th sentence
words_in_fifth_sentence = nltk.word_tokenize(fifth_sentence)
third_word_fifth_sentence = words_in_fifth_sentence[2]
third_word_fifth_sentence

u'face'

Now we will tag the parts of speech of this sentence:

# Part of speech tagging
nltk.pos_tag(words_in_fifth_sentence)

[(u'In', 'IN'),
 (u'the', 'DT'),
 (u'face', 'NN'),
 (u'of', 'IN'),
 (u'despair', 'NN'),
 (u',', ','),
 (u'you', 'PRP'),
 (u'believe', 'VBP'),
 (u'there', 'EX'),
 (u'can', 'MD'),
 (u'be', 'VB'),
 (u'hope', 'VBN'),
 (u'.', '.')]

We see that the words are now tuples of the given word and a tag for the part of speech. In the list above, the following abbreviations can be observed:

- IN – Preposition
- DT – Determiner
- NN – Noun
- PRP – Personal pronoun
- EX – Existential there
- MD – Modal
- VB – Verb base form
- VBN – Verb past participle

Our aim is to extract the most popular nouns from all the sentences across all the documents.

# Get all sentences from all documents
all_text = ''
for file_name in files:
    path = os.path.join(DOCS_DIR, file_name)
    with io.open(path, encoding='utf-8') as f:
        text_from_file = ' '.join(f.read().splitlines())
    all_text += text_from_file
sentences = nltk.sent_tokenize(all_text)

# Tag parts of speech
words = []
for sentence in sentences:
    words.extend(nltk.word_tokenize(sentence))
tagged_words = nltk.pos_tag(words)

# Count nouns (defaultdict(int) starts every count at 0)
from collections import defaultdict

freqs = defaultdict(int)
nouns = [word[0].lower() for word in tagged_words if word[1] == 'NN']
for noun in nouns:
    freqs[noun] += 1

# Get the 25 most common words
import operator

sorted_by_most_common = sorted(freqs.items(), key=operator.itemgetter(1), reverse=True)
for word, count in sorted_by_most_common[:25]:
    print("{} was found {} times.".format(word.capitalize(), count))

Country was found 60 times.
Time was found 58 times.
Government was found 45 times.
War was found 40 times.
Promise was found 33 times.
Today was found 29 times.
World was found 29 times.
Care was found 27 times.
Health was found 23 times.
Work was found 23 times.
Way was found 21 times.
Nation was found 21 times.
Corruption was found 20 times.
Job was found 19 times.
Party was found 19 times.
Year was found 18 times.
Security was found 18 times.
Economy was found 17 times.
Part was found 17 times.
Progress was found 17 times.
Change was found 15 times.
Generation was found 15 times.
Man was found 14 times.
Threat was found 14 times.
Life was found 14 times.
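As an aside, the counting and sorting above can be collapsed into a few lines with collections.Counter. This is a sketch rather than part of the original post; it assumes the tagged_words list built earlier:

from collections import Counter

# Counter tallies the nouns, and most_common() sorts them, in one step
noun_counts = Counter(word.lower() for word, tag in tagged_words if tag == 'NN')
for word, count in noun_counts.most_common(25):
    print("{} was found {} times.".format(word.capitalize(), count))

nltk.FreqDist works the same way, since it is a Counter subclass.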
https://benalexkeen.com/basic-language-processing-with-nltk/
Member since 01-26-2016

09-04-2019 07:20 AM

Without reviewing the logs, if I can guess at the issue: if you are submitting the Spark job from an Oozie shell action, then I would suspect the problem is Oozie's behavior of setting the environment variable HADOOP_CONF_DIR behind the scenes. There is an internal Jira that tracks this behavior/work. The KB [1] explains this a bit (even though it is reported for the hive-site.xml, I think it may influence the HBase client conf as well). Try working around the problem by following the instructions in the KB [1] and see if it helps.

Thanks
Lingesh

[1]: ...

03-04-2019 03:30 AM

The observed error is nothing to do with HBase authorization; rather, it simply states that the HMaster has not yet completed its initialization after startup. You can check the HMaster WebUI (look at the section called "Tasks" and search for something like "Startup") and see if it got stuck at any stage, or check the HMaster role logs to see where it is stuck. There are multiple areas which may interfere with HMaster initialization; the RegionServer WAL (distributed) splitting stage is the most common. In older versions of CDH, the system table 'hbase:namespace' could not get assigned within a reasonable time frame (due to the unavailability of features like HBASE-14674, HBASE-14190, HBASE-14729, etc.). Also, keep in mind that the cell-level TTL feature requires "hfile.format.version" to be set to "3".

03-04-2019 03:18 AM

The error snippets posted do not indicate a FATAL event that could interrupt the main thread of HMaster or RS. Do you see any FATAL events in their respective role logs before the crash? If not, check the standard output logs of these roles and see if they record any OOM situation.

03-04-2019 02:47 AM

...

03-04-2019 02:35 AM

The shared stderr snippet looks normal. What do you see in the role logs (under the /var/log/hbase folder) during the startup failure? Can you share the complete stack trace if there is any?

02-08-2019 05:06 AM

Not sure which version of CDH you met with this issue. Note that t...

02-08-2019 04:50 AM

If the exported HFiles are getting deleted in the target, and if you can also confirm it's the Master's HFileCleaner thread which is deleting them, then there is some problem at the initial stage of ExportSnapshot where the snapshot Manifest/References are copied over. Check if there are any warnings/errors reported in the console logs. Also, check that the manifest exists in the target cluster.

02-06-2019 01:56 AM

...

02-06-2019 01:47 AM

Thanks for sharing the steps to resolve the issue. Yes, indeed every NN/DN in each cluster should have access to the other cluster's nodes and vice versa, since ExportSnapshot is more of an HDFS distcp operation where the majority of the work involves copying the HFiles (associated with the snapshot) in a distributed fashion from the source to the target (similar to distcp). It would be helpful if you could share the complete stack trace of the exception, which would also help in understanding the flow during the failure. Again, thanks for taking the time to post the solution.

There is currently work in progress to certify some of the OpenJDK versions against the Cloudera suite of products.
We can expect the official announcement once the testing and certification are complete for future CM/CDH releases.

01-15-2018 11:36 PM

...
https://community.cloudera.com/t5/user/viewprofilepage/user-id/14340/user-messages-feed/latest-contributions
When I was doing SQL-type work last year I was serializing datasets and sending them over a network. The serialization used something called XSLT (I think) to translate the serialized XML into a more readable format, and back. The actual output XML was used in conjunction with a Visual Studio tool called xsd.exe to generate a schema file and then C# classes. I think I used:

xsd Data.xml
xsd /classes /namespace:YourNamespaceHere Data.xsd

This type of technique could be automated with a shell script, I am sure, and maybe generated/executed at runtime. I am not a very experienced developer so I can't say for sure. But if you are interested it might be an idea to look into the xsd.exe documentation on MSDN. Either way, it was for a business application and I am not sure if it is efficient enough to use in a real game.
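The automation the poster alludes to could be scripted in a few lines. The sketch below is illustrative only; it assumes xsd.exe is on the PATH and reuses the file and namespace names from the post:

import subprocess

def generate_classes(xml_path, namespace):
    # Step 1: infer an .xsd schema from the sample XML document.
    subprocess.run(["xsd", xml_path], check=True)
    # Step 2: generate C# classes from the inferred schema.
    xsd_path = xml_path.rsplit(".", 1)[0] + ".xsd"
    subprocess.run(["xsd", "/classes", "/namespace:" + namespace, xsd_path], check=True)

generate_classes("Data.xml", "YourNamespaceHere")

One caveat: xsd.exe writes its output to the current working directory by default, so a real script would likely pass /outputdir or set the working directory explicitly.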
http://www.gamedev.net/user/190484-bdcroteau/?tab=posts
direct.task.Task

from direct.task.Task import TaskManager, checkLeak, loop, print_exc_plus, sequence

This module defines a Python-level wrapper around the C++ AsyncTaskManager interface. It replaces the old full-Python implementation of the Task system. For more information about the task system, consult the Tasks and Event Handling page in the programming manual.

Task
    alias of panda3d.core.PythonTask

class TaskManager
    Bases: object

    add(self, funcOrTask, name=None, sort=None, extraArgs=None, priority=None, uponDeath=None, appendTask=False, taskChain=None, owner=None)
        Add a new task to the taskMgr. The task will begin executing immediately, or next frame if its sort value has already passed this frame.

        Parameters:
        - funcOrTask – either an existing Task object (not already added to the task manager), or a callable function object. If this is a function, a new Task object will be created and returned. You may also pass in a coroutine object.
        - name (str) – the name to assign to the Task. Required, unless you are passing in a Task object that already has a name.
        - extraArgs (list) – the list of arguments to pass to the task function. If this is omitted, the list is just the task object itself.
        - appendTask (bool) – if this is true, then the task object itself will be appended to the end of the extraArgs list before calling the function.
        - sort (int) – the sort value to assign the task. The default sort is 0. Within a particular task chain, it is guaranteed that the tasks with a lower sort value will all run before tasks with a higher sort value run.
        - priority (int) – the priority at which to run the task. The default priority is 0. Higher priority tasks are run sooner, and/or more often. For historical purposes, if you specify a priority without also specifying a sort, the priority value is understood to actually be a sort value. (Previously, there was no priority value, only a sort value, and it was called "priority".)
        - uponDeath (callable) – a function to call when the task terminates, either because it has run to completion, or because it has been explicitly removed.
        - taskChain (str) – the name of the task chain to assign the task to.
        - owner – an optional Python object that is declared as the "owner" of this task for maintenance purposes. The owner must have two methods: owner._addTask(self, task), which is called when the task begins, and owner._clearTask(self, task), which is called when the task terminates. This is all the owner means.

        Returns: the new Task object that has been added, or the original Task object that was passed in.

    doMethodLater(self, delayTime, funcOrTask, name, extraArgs=None, sort=None, priority=None, taskChain=None, uponDeath=None, appendTask=False, owner=None)
        Adds a task to be performed at some time in the future. This is identical to add(), except that the specified delayTime is applied to the Task object first, which means that the task will not begin executing until at least the indicated delayTime (in seconds) has elapsed. After delayTime has elapsed, the task will become active, and will run in the soonest possible frame thereafter. If you wish to specify a task that will run in the next frame, use a delayTime of 0.

    getCurrentTask(self)
        Returns the task currently executing on this thread, or None if this is being called outside of the task manager.
    getTasksMatching(self, taskPattern)
        Returns a list of all tasks, active or sleeping, with a name that matches the pattern, which can include standard shell globbing characters like *, ?, and [].

    getTasksNamed(self, taskName)
        Returns a list of all tasks, active or sleeping, with the indicated name.

    hasTaskChain(self, chainName)
        Returns true if a task chain with the indicated name has already been defined, or false otherwise. Note that setupTaskChain() will implicitly define a task chain if it has not already been defined, or modify an existing one if it has, so in most cases there is no need to check this method first.

    hasTaskNamed(self, taskName)
        Returns true if there is at least one task, active or sleeping, with the indicated name.

    remove(self, taskOrName)
        Removes a task from the task manager. The task is stopped, almost as if it had returned task.done. (But if the task is currently executing, it will finish out its current frame before being removed.) You may specify either an explicit Task object, or the name of a task. If you specify a name, all tasks with the indicated name are removed. Returns the number of tasks removed.

    removeTasksMatching(self, taskPattern)
        Removes all tasks whose names match the pattern, which can include standard shell globbing characters like *, ?, and []. See also remove(). Returns the number of tasks removed.

    run(self)
        Starts the task manager running. Does not return until an exception is encountered (including KeyboardInterrupt).

    setupTaskChain(self, chainName, numThreads=None, tickClock=None, threadPriority=None, frameBudget=None, frameSync=None, timeslicePriority=None)
        Defines a new task chain. Each task chain executes tasks potentially in parallel with all of the other task chains (if numThreads is more than zero). When a new task is created, it may be associated with any of the task chains, by name (or you can move a task to another task chain with task.setTaskChain()). You can have any number of task chains, but each must have a unique name.

        numThreads is the number of threads to allocate for this task chain. If it is 1 or more, then the tasks on this task chain will execute in parallel with the tasks on other task chains. If it is greater than 1, then the tasks on this task chain may execute in parallel with themselves (within tasks of the same sort value).

        If tickClock is True, then this task chain will be responsible for ticking the global clock each frame (and thereby incrementing the frame counter). There should be just one task chain responsible for ticking the clock, and usually it is the default, unnamed task chain.

        threadPriority specifies the priority level to assign to threads on this task chain. It may be one of TPLow, TPNormal, TPHigh, or TPUrgent. This is passed to the underlying threading system to control the way the threads are scheduled.

        frameBudget is the maximum amount of time (in seconds) to allow this task chain to run per frame. Set it to -1 to mean no limit (the default). It's not directly related to threadPriority.

        frameSync is False in the default mode, in which each task runs exactly once each frame, round-robin style, regardless of the task's priority value; or True to change the meaning of priority so that certain tasks are run less often, in proportion to their time used and to their priority value. See AsyncTaskManager.setTimeslicePriority() for more.

    step(self)
        Invokes the task manager for one frame, and then returns.
        Normally, this executes each task exactly once, though task chains that are in sub-threads or that have frame budgets might execute their tasks differently.

gather()
    Creates a new future that returns done() when all of the contained futures are done. Calling cancel() on the returned future will result in all contained futures that have not yet finished being cancelled.

print_exc_plus()
    Print the usual traceback information, followed by a listing of all the local variables in each frame.
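A minimal usage sketch of the API above may help; the names here (the tick function, the task names, the five-second cutoff) are illustrative choices, not part of the reference:

from direct.showbase.ShowBase import ShowBase
from direct.task import Task

base = ShowBase()

def tick(task):
    # task.time is the elapsed time (in seconds) since this task started.
    if task.time > 5.0:
        return Task.done  # stop this task after five seconds
    return Task.cont      # otherwise, run again next frame

# Run every frame, starting immediately.
base.taskMgr.add(tick, "tickTask")
# Schedule the same function to start two seconds from now.
base.taskMgr.doMethodLater(2.0, tick, "delayedTick")

base.run()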
https://docs.panda3d.org/1.10/python/reference/direct.task.Task
Description

AbsoluteTimeDateFormatter's caching of the "to the second" timestamp string is not thread-safe. It is possible for one thread to pass the check (that this timestamp matches the currently cached "to the second" timestamp), but then end up using an incorrect "to the second" timestamp string if another thread has changed it in the meantime.

In our organization, we see this bug fairly regularly because we have a mix of "real time" loggers that immediately write out log lines and "batching" loggers that defer logging to a background task that runs every second. We therefore regularly see log lines where the timestamp is off by a second or two.

The following unit test demonstrates the bug (the garbled spots in the original attachment have been reconstructed to the obvious intent):

[TestFixture]
[Explicit]
public class Log4netTimestampBug
{
    /// <summary>
    /// This test demonstrates a bug with the log4net default time formatter (Iso8601DateFormatter)
    /// where the logged timestamp can be seconds off from the actual input timestamp.
    /// The bug is caused by a race condition in the base class AbsoluteTimeDateFormatter,
    /// because this class caches the timestamp string to the second, but it is possible for
    /// the timestamp as written by a different thread to "sneak in" and be used by another
    /// thread erroneously (the checking and usage of this string is not done under a lock,
    /// only its modification).
    /// </summary>
    [Test]
    public void Test()
    {
        var now = DateTime.Now;
        var times = Enumerable.Range(1, 1000000).Select(i => now.AddMilliseconds(i)).ToList();
        var sb1 = new StringBuilder();
        var sb2 = new StringBuilder();
        var task1 = Task.Run(() => WriteAllTheTimes(times, new StringWriter(sb1)));
        var task2 = Task.Delay(50).ContinueWith(t => WriteAllTheTimes(times, new StringWriter(sb2)));
        Task.WaitAll(task1, task2);

        var task1Strings = GetTimeStrings(sb1);
        var task2Strings = GetTimeStrings(sb2);
        var diffs = Enumerable.Range(0, times.Count)
            .Where(i => task1Strings[i] != task2Strings[i]).ToList();

        Console.WriteLine("found {0} instances where the formatted timestamps are not the same", diffs.Count);
        Console.WriteLine();

        var diffToLookAt = diffs.FirstOrDefault(i => i - 10 > 0 && i + 10 < times.Count);
        if (diffToLookAt != 0)
        {
            Console.WriteLine("Example Diff:");
            Console.WriteLine();
            Console.WriteLine("Index    Original Timestamp    Task 1 Format    Task 2 Format");
            for (int i = diffToLookAt - 10; i < diffToLookAt + 10; i++)
            {
                Console.WriteLine("{0} {1} {2} {3} {4}",
                    i,
                    times[i].ToString("yyyy-MM-dd HH:mm:ss,fff"),
                    task1Strings[i],
                    task2Strings[i],
                    i == diffToLookAt ? "**** DIFF HERE ****" : "");
            }
        }

        CollectionAssert.AreEqual(task1Strings, task2Strings);
    }

    private static List<string> GetTimeStrings(StringBuilder sb1)
    {
        return sb1.ToString()
            .Split(new[] { '\r', '\n' }, StringSplitOptions.RemoveEmptyEntries).ToList();
    }

    private static void WriteAllTheTimes(IEnumerable<DateTime> times, TextWriter writer)
    {
        var formatter = new Iso8601DateFormatter();
        foreach (var t in times)
        {
            formatter.FormatDate(t, writer);
            writer.WriteLine();
        }
    }
}

Activity

The bug is in AbsoluteTimeDateFormatter, so it affects its inheritors – Iso8601DateFormatter and DateTimeDateFormatter – as well. SimpleDateFormatter is not affected.

The race condition occurs in AbsoluteTimeDateFormatter.FormatDate – the input time, rounded to the nearest second, is compared to the "last time to the second". If they match, the "last time string" is written to the stream. However, these two operations are not done under a lock, so in between the check and the write, another thread can change the "last time string". In typical operations, this isn't a problem.
Our issue is that we have some loggers that batch logging events and defer writing to a background thread, and some that write logs in "real time". This means that simultaneously, we have logging events that are "fresh" being written and logging events that are one or two seconds old being written. This set-up causes us to regularly see "wrong" timestamps in our log files due to this race condition. I've tried to roughly replicate this scenario in the unit test. Thanks for your help.

Switching the static state to instance state would help a lot. The race condition would still exist, but it would be restricted to a single instance. It is much more reasonable to expect that a single instance will only see a monotonically increasing sequence of timestamps than it is to expect the whole application to process a monotonically increasing sequence of timestamps.

I suspected that the lock in there would be sewed too tight. I've committed a fix as of revision 1483375. Can you confirm that it fixes your issue?

I've committed a second fix as 1483378. Please look at both revisions as being a single patch. Sorry for the noise.

Thanks for having a look. I have a feeling that people will get concerned about the performance implications of locking the entire method. I share that concern a bit, as this is a static lock object, so you are effectively synchronizing all calls to this method across the entire application. Have you considered removing the locking and instead marking all the mutable static fields ThreadStatic? I could also be convinced that locking the whole method is okay if all the static state were changed to instance state – that would also have fixed LOG4NET-323, without requiring the dictionary of strings.

Regarding ThreadStatic: I did not do performance timings, thus I do not know if it would perform better. My belief is that this attribute could tear down performance in heavily cross-threaded environments by practically disabling the cache. Generally I do not think it is wise to have code behaving one way in multi threading and another way in single threading. It makes the code terribly hard to maintain, and therefore I would not want to set foot on that road unless there are good arguments to do so.

Regarding the drop of static: changing the cache to be instance-specific would practically make it ineffective, since the caching would happen in every formatter instance, whereas it needs to be done only once every second. So that would have a performance impact too, which is probably bigger than a pure lock. The way Stefan fixed LOG4NET-323 seems to be perfectly fine to me.

It is widely known that thread safety does not come for free. If you believe that the performance impact is not negligible, I would like to encourage you to do some performance timings for these scenarios:

- "Rev 1483378" in Single Thread
- "Rev 1380139 + ThreadStatic" in Single Thread
- "Rev 1483378" in Multi Thread
- "Rev 1380139 + ThreadStatic" in Multi Thread
The Hashtable is cleared and read from outside the lock, which can lead to potential race conditions since the Hashtable is not thread-safe. I have created my own implementation of IDateFormatter that behaves identically to Iso8601DateFormatter, but avoids the bugs noted in this issue. My performance testing shows that it performs nearly identically to Iso8601DateFormatter in typical scenarios, and actually performs substantially better in multi-threaded scenarios. I chose to make the cache state ThreadStatic as that performs marginally better than the other options (static, instance) in my performance testing, but the other options also perform well and behave equivalently. Code is below. If you would like to see my performance testing code, I can send you that as well, but it uses some of my own custom components that would require unpacking to post in copy-paste friendly form. public class StandardDateFormatter : IDateFormatter { // Using ThreadStatic is a micro-optimization. Making it static or instance state also works. // ThreadStatic performs marginally better in scenarios where the same instance of the formatter // is being hit from multiple threads that are using different timestamps. // Performance is roughly equivalent in single-threaded scenarios. [ThreadStatic] private static Tuple<long, string> _lastTicksToTheSecond; public void FormatDate(DateTime dateToFormat, TextWriter writer) { var ticksToTheSecond = dateToFormat.Ticks - dateToFormat.Ticks % TimeSpan.TicksPerSecond; var lastToTheSecond = _lastTicksToTheSecond; string toTheSecondString; if (lastToTheSecond != null && lastToTheSecond.Item1 == ticksToTheSecond) { toTheSecondString = lastToTheSecond.Item2; } else { toTheSecondString = dateToFormat.ToString("yyyy-MM-dd HH:mm:ss"); _lastTicksToTheSecond = Tuple.Create(ticksToTheSecond, toTheSecondString); } writer.Write(toTheSecondString); writer.Write(','); var millisecond = dateToFormat.Millisecond; if (millisecond < 100) writer.Write('0'); if (millisecond < 10) writer.Write('0'); writer.Write(millisecond); } } I'm afraid svn revision 1486883 has reverted the locking that Dominik had introduced. I'll bring them back for 1.2.13 and also verify we don't access the dictionary outside of a lock. brought back the big lock with svn revision 1542664 I'll keep this ticket open as I'd like to look into different approaches to solve it in 1.3.0. svn revision 1670334 uses ThreadStatic for log4net 1.3.0 - in my (limited) tests the TreadStatic version has been a lot faster than the one using locks on Mono 3.2 and both versions showed about the same performance on .NET 4.0. Is this issue reproducable with one or more of the following formatters, too?
https://issues.apache.org/jira/browse/LOG4NET-376
Dead Simple Python: Data Typing and Immutability

Jason C. McDonald · Jan 17, 2019 · Updated on Jan 31, 2019

I received a lovely comment on this series from Damian Rivas...

    I just read the first two parts that are currently released. I gotta say, with Python being around the 5th language I dive into, I really appreciate this style of teaching! It's hard finding teaching material that doesn't start with "what is a variable" lol.

Hate to disappoint, Damian, but I couldn't avoid variables forever!

Okay, okay, I'm not going to bore y'all with explanation #2,582,596 of variables. You're smart people, I'm sure you know all about them by now. But now that we're all set up to write code, I think it's worth touching on what a variable is in Python. While we're at it, we'll take a look at functions, strings, and all that other dull, boring stuff...which may well turn out not to be that boring under the hood.

There's a lot of information here, but I believe it makes the most sense when understood together. Welcome to Python. Please mind the furniture on your way down the rabbit hole.

Where's The Datatypes?!?

In summer 2011, I sat on the porch swing in Seattle, Washington and logged onto Freenode IRC. I had just decided to switch languages from Visual Basic .NET to Python, and I had some questions. I joined #python and jumped right in.

    How do you declare the data type of a variable in Python?

Within moments, I received a response which I consider to be my first true induction into the bizarre world of programming.

    <_habnabit> you're a data type

He and the rest of the room regulars were quick to fill me in. Python is a dynamically typed language, meaning I don't have to go and tell the language what sort of information goes in a variable. I don't even have to use a special "variable declaration" keyword. I just assign.

netWorth = 52348493767.50

At that precise moment, Python became my all-time favorite language.

Before we get carried away, however, I must point out that Python is still a strongly-typed language.

Um, dynamically typed? Strongly typed? What does all that mean?

Dynamically typed: the data type of a variable (object) is determined at run time. Contrast with "statically typed," where we declare the object's data type initially. (C++ is statically typed.)

Strongly typed: the language has strict rules about what you can do to different data types, such as adding an integer and a string together. Contrast with "weakly typed," where the language will let you do practically anything, and it'll figure it out for you. (Javascript is weakly typed.)

(If you want a more advanced explanation, see Why is Python a dynamic language and also a strongly typed language).

So, to put that in other terms: Python variables have data types, but the language automatically figures out what that data type is. So, we can reassign a variable to contain whatever data we want...

netWorth = 52348493767.50
netWorth = "52.3B"

...but we are limited on what we can do to it...

netWorth = 52348493767.50
netWorth = netWorth + "1 billion"

>>> Traceback (most recent call last):
>>>   File "<stdin>", line 1, in <module>
>>> TypeError: unsupported operand type(s) for +: 'float' and 'str'

If we ever need to know what type of variable something is, we can use the type() function. That will print out what class the variable is an instance of. (In Python, everything is an object, so get your object-oriented hat on.)
netWorth = 52348493767.50
type(netWorth)

>>> <class 'float'>

We may actually want to check what the datatype is before we do something with it. For that, we can pair the type() function with the is operator, like this:

if type(netWorth) is float:
    swimInMoneyBin()

However, in many cases, it may be better to use isinstance() instead of type(), as that will account for subclasses and inheritance (object-oriented programming, anyone?) Bonus, the function itself returns True or False...

if isinstance(netWorth, float):
    swimInMoneyBin()

The Immutable Truth

Since I just introduced that is operator, we'd better clear something up: is and == do not do the same thing!

A lot of Python novices discover that this works...

richestDuck = "Scrooge McDuck"

if richestDuck is "Scrooge McDuck":
    print("I am the richest duck in the world!")

if richestDuck is not "Glomgold":
    print("Please, Glomgold is only the SECOND richest duck in the world.")

>>> I am the richest duck in the world!
>>> Please, Glomgold is only the SECOND richest duck in the world.

"Oh, that's cool!" said a certain young developer in Seattle. "So, in Python, I just use is and is not for comparisons."

WRONG, WRONG, WRONG.

Those conditional statements worked, but not for the reason I thought. This faulty logic surrounding is falls apart as soon as you try this...

nephews = ["Huey", "Dewey", "Louie"]

if nephews is ["Huey", "Dewey", "Louie"]:
    print("Scrooge's nephews identified.")
else:
    print("Not recognized.")

>>> Not recognized.

"Wait, WHAT?"

You might poke at this a bit, even confirming that nephews is nephews evaluates to True. So what in dismal downs is going on?

The trouble is, the is operator checks to see if the two operands are the same instance, and Python has these funny things called immutable types. Oversimplifying it, when you have something like an integer or a string, only one of that piece of data actually exists in the program's memory at once. Earlier, when I created the string "Scrooge McDuck", there was only one in existence (isn't there always?) If I say...

richestDuck = "Scrooge McDuck"
adventureCapitalist = "Scrooge McDuck"

...we would say that both richestDuck and adventureCapitalist are bound to this one instance of "Scrooge McDuck" in memory. They're like a couple of sign posts, both pointing to the exact same thing, of which we only have one.

To put that another way, if you're familiar with pointers, this is a little like that (without the scary sharp edges). You can have two pointers to the same place in memory. If we changed one of those variables, say richestDuck = "Glomgold", we'd be rebinding richestDuck to point to something different in memory. (We'd also be full of beans for claiming Glomgold is that rich.)

Mutable types, on the other hand, can store the same data multiple times in memory. Lists, like ["Huey", "Dewey", "Louie"], are one of those mutable types, which is why the is operator reported what it did earlier. The two lists, although they contained the exact same information, were not the same instance.

Technical Note: You should be aware that immutability isn't actually related to sharing only one instance of a thing, although that's a common side effect. It's a useful way to imagine it, but don't rely on it to always be so. Multiple instances can exist. Run this in an interactive terminal to see what I mean...

a = 5
b = 5
a is b
>>> True

a = 500
b = 500
a is b
>>> False

a = 500; b = 500; a is b
>>> True

The actual truth behind immutability is a lot more complicated. My Freenode #python friend Ned Batchelder (nedbat) has an awesome talk about all this, which you should totally check out.

So, what are we supposed to use instead of is? You'll be happy to know, it's just good old fashioned ==.
nephews = ["Huey", "Dewey", "Louie"]

if nephews == ["Huey", "Dewey", "Louie"]:
    print("Scrooge's nephews identified.")
else:
    print("Not recognized.")

>>> Scrooge's nephews identified.

As a rule, you should always use == (etc.) for comparing values, and is for comparing instances. Meaning, although they appear to work the same, the earlier example should actually read...

richestDuck = "Scrooge McDuck"

if richestDuck == "Scrooge McDuck":
    print("I am the richest duck in the world!")

if richestDuck != "Glomgold":
    print("Please, Glomgold is only the SECOND richest duck in the world.")

>>> I am the richest duck in the world!
>>> Please, Glomgold is only the SECOND richest duck in the world.

There's one semi-exception...

license = "1245262"

if license is None:
    print("Why is Launchpad allowed to drive, again?")

It's somewhat common to check for a non-value with foo is None because there's only one None in existence. Of course, we could also just do this the shorthand way...

if not license:
    print("Why is Launchpad allowed to drive, again?")

Either way is fine, although the latter is considered the cleaner, more "Pythonic" way to do it.

Word of Warning: Hungarian Notation

When I was still new to the language, I got the "brilliant" idea to use Systems Hungarian notation to remind me of my intended data types.

intFoo = 6
fltBar = 6.5
strBaz = "Hello, world."

Turns out, that idea was neither original nor brilliant. To begin with, Systems Hungarian notation is a rancid misunderstanding of Apps Hungarian notation, itself the clever idea of Microsoft developer Charles Simonyi. In Apps Hungarian, we use a short abbreviation at the start of a variable name to remind us of the purpose of that variable. He used this, for example, in his development work on Microsoft Excel, wherein he would use row at the start of any variable relating to rows, and col at the start of any variable relating to columns. This makes the code more readable and seriously helps with preventing name conflicts (rowIndex vs colIndex, for example). To this day, I use Apps Hungarian in GUI development work, to distinguish between types and purposes of widgets.

Systems Hungarian, however, misses the entire point of this, and prepends an abbreviation of the data type to the variable, such as intFoo or strBaz. In a statically typed language, it's bright-blazingly redundant, but in Python, it might feel like a good idea.

The reason it isn't a good idea, however, is that it robs you of the advantages of a dynamically typed language! We can store a number in a variable one moment, and then turn around and store a string in it the next. So long as we're doing this in some fashion that makes sense in the code, this can unlock a LOT of potential that statically typed languages lack. But if we're mentally locking ourselves into one pre-determined type per variable, we're effectively treating Python like a statically typed language, hobbling ourselves in the process.

All that to say, Systems Hungarian has no place in your Python coding. Frankly, it doesn't have a place in any coding. Eschew it from your arsenal immediately, and let's never speak of this again.

Casting Call

Let's take a break from the brain-bending of immutability, and touch on something a little easier to digest: type casting.

No, not the kind of type casting that landed David Tennant the voice role of Scrooge McDuck....although he is completely awesome in that role.
I'm talking about converting data from one data type to another, and in Python, that's about as easy as it gets, at least with our standard types. For example, to convert an integer or float to a string, we can just use the str() function.

netWorth = 52348493767.50
richestDuck = "Scrooge McDuck"

print(richestDuck + " has a net worth of $" + str(netWorth))

>>> Scrooge McDuck has a net worth of $52348493767.5

Within that print(...) statement, I was able to concatenate (combine) all three pieces into one string to be printed, because all three pieces were strings. print(richestDuck + " has a net worth of $" + netWorth) would have failed with a TypeError because Python is strongly-typed (remember?), and you can't combine a float and a string outright.

You may be a bit confused, because this works...

print(netWorth)

>>> 52348493767.5

That's because the print(...) function automatically handles the type conversion in the background. But it can't do anything about that + operator - that happens before the data is handed to print(...) - so we have to do the conversion there ourselves.

Naturally, if you're writing a class, you'll need to define those functions yourself, but that's beyond the scope of this article. (Hint, __str__() and __int__() handle casting the object to a string or integer, respectively.)

Hanging By A...String

While we're on the subject of strings, there's a few things to know about them. Most confusing of all, perhaps, is that there are multiple ways of defining a string literal...

housekeeper = "Mrs. Beakley"
housekeeper = 'Mrs. Beakley'
housekeeper = """Mrs. Beakley"""

We can wrap a literal in single quotes '...', double quotes "...", or triple quotes """...""", and Python will treat it (mostly) the same way. There's something special about that third option, but we'll come back to it.

The Python style guide, PEP 8, addresses the use of single and double quotes. This comes in handy when we deal with something like this...

quote = "\"I am NOT your secretary,\" shouted Mrs. Beakley."
quote = '"I am NOT your secretary," shouted Mrs. Beakley.'

Obviously, that second option is much more readable. The backslash before the quotes means we want that literal character, rather than having Python treat it like the boundary of a string. However, because the quotes we wrap the string in have to match, if we wrap in single quotes, Python will just assume the double quotes are characters in the string.

The only time we'd really need those backslashes would be if we had both types of quotes in the string at once.

print("Scrooge's \"money bin\" is really a huge building.")

>>> Scrooge's "money bin" is really a huge building.

Personally, in cases like that, I prefer to use (and escape) the double quotes, because they don't escape my attention like an apostrophe will tend to do. But remember, we also have those triple quotes ("""), which we could use here too.

print("""Scrooge's "money bin" is really a huge building.""")

>>> Scrooge's "money bin" is really a huge building.

Before you start wrapping all your strings in triple quotes for convenience, however, remember that I said there was something special about them. In fact, there's two things.

First, triple quotes are multiline. In other words, I can use them to do this...

print("""\
Where do you suppose
Scrooge keeps his
Number One Dime?""")

>>> Where do you suppose
>>> Scrooge keeps his
>>> Number One Dime?

Everything, including newlines and leading whitespace, is literal in triple quotes.
The only exception is if we escape something using a backslash ( \ ), like I did with that newline at the beginning. We typically do that, just to make the code cleaner.

The built-in textwrap module has some tools for working with multi-line strings, including ones that allow you to have "proper" indentation without it being included (textwrap.dedent).

The other special use of triple quotes is in creating docstrings, which provide basic documentation for modules, classes, and functions.

def swimInMoney():
    """
    If you're not Scrooge McDuck, please don't try this.
    Gold hurts when you fall into it from forty feet.
    """
    pass

These are often mistaken for comments, but they're actually valid code that is evaluated by Python. A docstring must appear on the first line of whatever it's about (such as a function), and has to be wrapped in triple quotes.

Later, we can access that docstring in one of two ways, both shown here:

# This always works
print(swimInMoney.__doc__)

# This works in the interactive shell only
help(swimInMoney)

Special String Types

I want to briefly touch on two other types of strings Python offers. Actually, they're not really different types of strings - they're all immutable instances of the class str - but the string literal is processed a bit differently by the language.

Raw strings are preceded with an r, such as...

print(r"I love backslashes: \ Aren't they cool?")

In a raw string, the backslash is treated like a literal character. Nothing can be "escaped" inside of a raw string. This has implications for what type of quotes you use, so beware.

print("A\nB")
>>> A
>>> B

print(r"A\nB")
>>> A\nB

print(r"\"")
>>> \"

This is particularly useful for regular expression patterns, where you're likely to have plenty of backslashes that you want as part of the pattern, not interpreted out by Python before it gets there. Always use raw strings for regular expression patterns.

Gotcha Alert: If the backslash is the last character in your raw string, it'll still act to escape out your closing quote, and create a syntax error as a result. That has to do with Python's own language lexing rules, not with strings.

The other "type" of string is a formatted string, or f-string, which is new as of Python 3.6. It allows you to insert the values of variables into a string in a very pretty way, without having to bother with concatenation or conversion like we did earlier.

We precede the string with an f. Inside, we can substitute our variables by wrapping them in {...}. We put it all together like this...

netWorth = 52348493767.50
richestDuck = "Scrooge McDuck"

print(f"{richestDuck} has a net worth of ${netWorth}.")

>>> Scrooge McDuck has a net worth of $52348493767.5.

You're not just limited to variables in those curly braces ({...}) either! You can actually put just about any valid Python code in there, including math, function calls, expressions...whatever you need.

Compared to the older str.format() methods and % formatting (neither of which I'll be covering here), f-strings are much faster. That's because they're evaluated before the code is run.

Formatted strings were defined by PEP 498, so go there for more information.

Functions

While we're getting basic stuff out of the way, let's talk a bit about Python functions. I won't sport with your intelligence by redefining "functions" yet again. It'll suffice to provide a basic example.
def grapplingHook(direction, angle, battleCry):
    print(f"Direction = {direction}, Angle = {angle}, Battle Cry = {battleCry}")

grapplingHook(43.7, 90, "")

def says we're defining a function, and then we provide the name, and the names of the arguments in parentheses.

Yawn

Let's make this a bit more interesting. (The following works in Python 3.6 and later.)

def grapplingHook(direction: float, angle: float, battleCry: str = ""):
    print(f"Direction = {direction}, Angle = {angle}, Battle Cry = {battleCry}")

grapplingHook(angle=90, direction=43.7)

Believe it or not, that's valid Python! There's a lot of nifty little goodies in there, so let's break it down.

Calling Functions

When we call a function, we can obviously provide the arguments in the order they appear in the function definition, like in the first example: grapplingHook(43.7, 90, ""). However, if we want, we can actually specify which argument we're passing which values to. This makes our code more readable in many cases: grapplingHook(angle=90, direction=43.7). Bonus, we don't actually have to pass the arguments in order, so long as they all have a value.

Default Arguments

Speaking of which, did you notice that I left out the value for battleCry in that second call, and it didn't get mad at me? That's because I provided a default value for the argument in the function definition...

def grapplingHook(direction, angle, battleCry = ""):

In this case, if no value is provided for battleCry, then the empty string "" is used. I could actually put whatever value I wanted there: "Yaargh", None, or whatever.

It's pretty common to use None as a default value, so you can then check if the argument has a value specified, like this...

def grapplingHook(direction, angle, battleCry = None):
    if battleCry:
        print(battleCry)

But then, if you're just going to do something like this instead...

def grapplingHook(direction, angle, battleCry = None):
    if not battleCry:
        battleCry = ""
    print(battleCry)

...at that point, you might as well just give battleCry that default value of "" from the start.

Gotcha Alert: Default arguments are evaluated once, and shared between all function calls. This has weird implications for mutable types, like an empty list []. Immutable stuff is fine for default arguments, but you should avoid mutable default arguments.

Gotcha Alert: You must list all your required arguments (those WITHOUT default values) before your optional arguments (those WITH default values). (direction=0, angle, battleCry = None) is NOT okay, because the optional argument direction comes before the required angle.

Type Hinting and Function Annotations

If you're familiar with statically typed languages like Java and C++, this might make you a little excited...

def grapplingHook(direction: float, angle: float, battleCry: str = "") -> None:

But this doesn't do what you think it does! We can provide type hints in Python 3.6 and later, which offer exactly that: hints about what data type should be passed in. Similarly, the -> None part before the colon ( : ) hints at the return type. However...

- Python will not throw an error if you pass the wrong type.
- Python will not try to convert to that type.
- Python will actually just ignore those hints and move on as if they aren't there.

So what's the point? Type hinting does have a few advantages, but the most immediate is documentation. The function definition now shows what type of information it wants, which is especially helpful when your IDE auto-magically shows hints as you type arguments in.
Some IDEs and tools may even warn you if you're doing something weird like, say, passing a string to something type-hinted as an integer; PyCharm is very good at this, in fact! Static type checkers like Mypy also do this. I'm not going into those tools here, but suffice it to say, they exist.

I should make it extra clear, those type hints above are a type of function annotation, which has all sorts of neat use cases. Those are defined in more detail in PEP 3107.

There are a bunch more ways you can use type hinting, even beyond function definitions, with the typing module that was added in Python 3.5.

Overloaded Functions?

As you might guess, since Python is dynamically typed, we don't have much of a need for overloaded functions. Thus, Python doesn't even provide them! You generally can only have one version. If you define a function with the same name multiple times, the last version we defined will just shadow (hide) all the others.

Thus, if you want your function to be able to handle many different inputs, you'll need to take advantage of Python's dynamically typed nature.

def grapplingHook(direction, angle, battleCry: str = ""):
    if isinstance(direction, str):
        # Handle direction as a string stating a cardinal direction...
        if direction == "north":
            pass
        elif direction == "south":
            pass
        elif direction == "east":
            pass
        elif direction == "west":
            pass
        else:
            # Throw some sort of error about an invalid direction.
            raise ValueError("Invalid direction.")
    else:
        # Handle direction as an angle.
        pass

Note, I left the type hints out above, as I'm handling multiple possibilities. That was honestly a terrible example, but you get the idea.

Gotcha Alert: Now, while that was perfectly valid, it is almost always a "code smell" - a sign of poor design. You should try to avoid isinstance() as much as possible, unless it is absolutely, positively, the best way to solve your problem...and you may go an entire career without that ever being the case!

Return Types

If you're new to Python, you may have also noticed something missing: a return type. We don't actually specify one outright: we simply return something if we need to. If we want to leave the function mid-execution without returning anything, we can just say return.

def landPlane():
    if getPlaneStatus() == "on fire":
        return
    else:
        # attempt landing, and then...
        return treasure

A bare return is the same as saying return None, while return treasure will return whatever the value of treasure is. (By the way, that code won't work, since I never defined treasure. It's just a silly example.)

This convention makes it easy for us to handle optional returns:

treasure = landPlane()
if treasure:
    storeInMoneyBin(treasure)

NoneType is truly a wonderful thing.

Gotcha Alert: You'll notice, all the other functions in this guide lacked return statements. A function automatically returns None if it reaches the end without finding a return statement; no need to tack one on the end.
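To see that implicit None in action, here's a quick illustration (mine, not from the original article):

def sayHello():
    print("Hello!")
    # No return statement, so Python returns None automatically.

result = sayHello()
print(result)

>>> Hello!
>>> None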
- is checks if the operands are the same instance of an object, while == compares values. Don't confuse them.
- Systems Hungarian notation (e.g. intFoo) is a bad idea. Please don't do that.
- You can wrap strings in single ('...') or double quotes ("..."). Triple-quote strings ("""...""") are for multiline strings. They can also be used for docstrings, documenting a function, class, or module.
- Raw strings (r"\n") treat any backslash as literal. This makes them great for regular expression patterns.
- Formatted strings (f"1 + 1 = {1+1}") let us magically substitute the result of some code into a string.
- Default values can be specified for function arguments, making them optional arguments. All optional arguments should come AFTER required arguments.
- Type hinting lets you "hint" what type of data should be passed into a function argument, but this will be treated as a suggestion, not a rule.

As usual, you can find out lots more about these topics in the Python documentation:

- Python Reference: Built-in Types
- Python Reference: Built-in Types - Text Sequence Type (str)
- Python Reference: Built-in Functions - isinstance()
- Python Reference: Expressions - Identity comparisons
- Python Reference: textwrap
- PEP 498: Literal String Interpolation (f-strings)
- PEP 3107: Function Annotations
- Python Wiki: Why is Python a dynamic language and also a strongly typed language
- Ned Batchelder: Facts and Myths about Python names and values (PyCon 2015) - YouTube
- StackOverflow: What are type hints in Python 3.5
- dev.to: 3 Tricky Python Nuances

(Thanks, as usual, to deniska and grym from Freenode IRC for the edits.)

is is not the same as ==. Thank you. I've had to point this out to a few developers more than once. Also, thank you for pointing out that type hinting is only that: it's hinting at what you ought to pass in, and it is not enforced.

Thanks, Jason, for this amazing series. A minor correction: I think the output of this code isn't accurate. It should only produce:

You're correct that == and is are not the same thing. However, that code does indeed work as I posted (I tested it). Strings are immutable, meaning that richestDuck is "Scrooge McDuck" evaluates to True on the basis that richestDuck is bound directly to the immutable string literal "Scrooge McDuck".

A huge shout out to you for such an awesome series. <3
Pointers. Part 3. Unmanaged pointers and arrays. Pointer to structure. Pointer to class

This topic is based on the theme: "Pointers. Managed and unmanaged pointers".

Contents

- How to declare an unmanaged pointer (*) to an array of integers? Example
- How can we define an unmanaged pointer (*) to an array of real numbers?
- Ways to assign an unmanaged pointer (*) the value of the address of a certain element of the array. Examples
- How to access the array items using an unmanaged (*) pointer? Examples
- How to access the items of a two-dimensional array through a pointer? Examples
- Access to items of a multidimensional array through a pointer. Examples
- How to describe an unmanaged (*) pointer to a structure? Example
- How to define an unmanaged (*) pointer to a class? Example
- Example of defining a managed pointer (^) to a class

1. How to declare an unmanaged pointer (*) to an array of integers? Example

To set up a pointer to an array, you must assign this pointer the address of the first element of the array. You can also, if desired, assign the pointer the address of the i-th element of the array.

Example. Suppose we are given an array of integers and a pointer to an int:

```cpp
// Setting a pointer to an array of integers
int M[20]; // Array M of 20 integers
int *p;    // pointer to int, value of p is undefined
```

To set up a pointer to the first element of the array, there are two ways:

```cpp
// way №1
p = &M[0]; // Pointer p points to the first element of the array

// way №2
p = M; // The same as way №1
```

2. How can we define an unmanaged pointer (*) to an array of real numbers?

For real numbers, a pointer to an array is set exactly as for integers:

```cpp
// Setting a pointer to an array of real numbers
float X[20]; // array X of 20 real numbers
float *p;    // Pointer to float, p is undefined

// way №1
p = &X[0];

// way №2
p = X;

// Access to an array element via the pointer
*p = -4.5f; // X[0] = -4.5
```

3. Ways to assign an unmanaged pointer (*) the value of the address of a certain element of the array. Examples

Example 1. Methods for assigning to a pointer the address of the first item of a one-dimensional array.

```cpp
// Access to array elements through the pointer
int A[20]; // array
int *p;    // pointer

// way 1
p = A; // p points to array A

// way 2
p = &A[0]; // p points to array A
```

Example 2. Methods for assigning the address of the i-th element of a one-dimensional array.

```cpp
int A[20]; // array
int *p;    // pointer
int i;

i = 3;

// way 1
p = &A[i]; // pointer p points to the i-th element of the array

// way 2
p = A + i;
```

4. How to access the array items using an unmanaged (*) pointer? Examples

Example 1. Suppose we are given an array A containing 10 real numbers. Using the pointer, you need to change the values of the elements of the array with indices 0, 2, 7.

```cpp
// one-dimensional array and pointer
float A[10]; // array of 10 real numbers
float *p;    // pointer to float

p = &A[0]; // p points to A

*p = 13.6;        // A[0] = 13.6
*(p + 2) = 50.25; // A[2] = 50.25
*(p + 7) = -3.2;  // A[7] = -3.2
```

Figure 1 shows the result of the above example.

Figure 1. The result of Example 1

Example 2. Zeroing an array of integers using a pointer to the array.

```cpp
// Access to array elements through the pointer
int A[20]; // array
int *p;    // pointer
int i;

for (i = 0; i < 20; i++)
    A[i] = i;

p = A; // p points to A

// way 1
for (i = 0; i < 20; i++)
    *(p + i) = 0;

// way 2 - the name of the array is also a pointer
for (i = 0; i < 20; i++)
    *(A + i) = 0;

// way 3
for (i = 0; i < 20; i++)
    p[i] = 0;
```
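As a self-contained illustration of section 4 (my own sketch, not from the original article), the three access forms are interchangeable:

```cpp
#include <cstdio>

int main()
{
    int A[5] = { 10, 20, 30, 40, 50 };
    int *p = A; // p points to A[0]

    for (int i = 0; i < 5; i++)
    {
        // *(p + i), p[i] and *(A + i) all denote the same element A[i]
        printf("%d %d %d\n", *(p + i), p[i], *(A + i));
    }
    return 0;
}
```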
5. How to access the items of a two-dimensional array through a pointer? Examples

For a pointer to point to a two-dimensional array, the pointer must be assigned the address of the first item of the two-dimensional array. There are several ways to do this. For a pointer to point to a row of a two-dimensional array, you need to assign it the address of that row.

Example 1. Given a two-dimensional array M of integers and pointers p1, p2, p3. Implement access to the items of array M through the pointers.

```cpp
// pointers to a two-dimensional array
int M[6][9];       // two-dimensional array M of integers
int *p1, *p2, *p3; // pointers to int

p1 = (int *)M;     // p1 = &M[0], p1 points to the row with index 0
p2 = (int *)M[2];  // p2 points to the row with index 2
p3 = (int *)&M[5]; // p3 points to the row with index 5

// Access to a specific item of the array through a pointer
*p1 = 39;  // M[0][0] = 39
*p2 = 88;  // M[2][0] = 88
*p3 = 100; // M[5][0] = 100

*(p1 + 3) = 27;  // M[0][3] = 27
*(p2 + 5) = -13; // M[2][5] = -13
*(p3 + 8) = 19;  // M[5][8] = 19
```

Figure 2 shows the result of the above code.

Figure 2. Two-dimensional array and pointer

Example 2. A two-dimensional array of real numbers of size 3×4 is given. Using a pointer, you need to get access to the array item at position (2, 3).

```cpp
// A pointer to a specific item
double B[3][4]; // array
double *pb;     // pointer to double

pb = &B[2][3]; // pb points to the item of array B with index (2,3)
*pb = 8.55;    // B[2][3] = 8.55
```

6. Access to items of a multidimensional array through a pointer. Examples

Example. Description of a pointer to a three-dimensional array of size 2×3×5. Access to an element of the array through the pointer.

```cpp
// pointer to a three-dimensional array
float A[2][3][5]; // three-dimensional array
float *p;         // pointer to float

p = (float *)A;  // p points to the first item of the array
p = &A[0][0][0]; // the same

// Access to the array element with index [1][0][3]
// (for an array of size 2×3×5, the first index may be at most 1)
p = &A[1][0][3];
*p = -30; // A[1][0][3] = -30
```

7. How to describe an unmanaged (*) pointer to a structure? Example

Example 1. Outside the class description, we declare a new type, the structure ARRAY_INTS:

```cpp
typedef struct Array_Ints {
    int n;
    int A[20];
} ARRAY_INTS;
```

Then, in some class method (for example, the click event handler of a button), you can use the structure:

```cpp
ARRAY_INTS AI;   // structure variable AI
ARRAY_INTS *pAI; // pointer to the structure

pAI = &AI; // pAI points to a structure of ARRAY_INTS type

// access the fields of the structure using the variable AI
AI.n = 2;
AI.A[0] = 25;
AI.A[1] = 33;

// access the fields of the structure using the pointer pAI
pAI->n = 3;      // AI.n = 3
pAI->A[0] = 28;  // AI.A[0] = 28
pAI->A[1] = 345; // AI.A[1] = 345
pAI->A[2] = -33; // AI.A[2] = -33
```

Example 2. Allocating memory for an ARRAY_INTS structure (see the previous example) and accessing the structure fields through a pointer:

```cpp
// Example of allocating memory for a pointer to a structure
ARRAY_INTS *pAI2; // A pointer to an ARRAY_INTS structure

pAI2 = (ARRAY_INTS *)malloc(sizeof(ARRAY_INTS)); // memory allocation

// Filling the fields
pAI2->n = 2;
pAI2->A[0] = 82;
pAI2->A[1] = 34;

free(pAI2); // release the memory when done
```
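To tie section 7 together, here is a compilable sketch (names are mine) showing that arrow access through the pointer and dot access through the variable reach the same memory:

```cpp
#include <cstdio>

typedef struct Array_Ints {
    int n;
    int A[20];
} ARRAY_INTS;

int main()
{
    ARRAY_INTS ai;
    ARRAY_INTS *p = &ai; // pointer to the structure

    p->n = 1;     // same as ai.n = 1
    p->A[0] = 25; // same as ai.A[0] = 25

    printf("%d %d\n", ai.n, ai.A[0]); // prints: 1 25
    return 0;
}
```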
8. How to define an unmanaged (*) pointer to a class? Example

Suppose the module "MyClass.h" contains the following description:

```cpp
// Class definition in the module "MyClass.h"
class MyClass
{
private:
    int x; // class fields
    int y;

public:
    // class methods
    void SetXY(int nx, int ny);
    int GetX(void);
    int GetY(void);
    MyClass(void);
};
```

The "MyClass.cpp" module provides the implementation of the class methods:

```cpp
// Implementation of class methods in the module "MyClass.cpp"
#include "StdAfx.h"
#include "MyClass.h"

MyClass::MyClass(void)
{
    x = 0;
    y = 0;
}

void MyClass::SetXY(int nx, int ny)
{
    x = nx;
    y = ny;
}

int MyClass::GetX(void)
{
    return x;
}

int MyClass::GetY(void)
{
    return y;
}
```

To access the fields and methods of the class through a pointer, you can write the following code:

```cpp
// Access to methods and fields of a class through a pointer
MyClass *p; // pointer to a class

// Creating the object; new (unlike malloc) also calls the constructor
p = new MyClass();

// Access to class methods through the pointer
p->SetXY(5, 6);    // Call the SetXY() method of the class
int x = p->GetX(); // x = 5
int y;
y = p->GetY();     // y = 6

delete p; // release the object
```

9. Example of defining a managed pointer (^) to a class

In Visual C++, if an application is created to run in the CLR environment, you can describe a managed pointer to a class. In this case, the class must be declared with the ref qualifier. Memory for the pointer is allocated with the gcnew operator.

Example. Let there be given a class that is described in the module "MyClass2.h":

```cpp
ref class MyClass2
{
private:
    int x;
    int y;

public:
    void SetXY(int nx, int ny);
    int GetX(void);
    int GetY(void);
    MyClass2(void);
};
```

The implementation of the class methods in the module "MyClass2.cpp" is as follows:

```cpp
#include "StdAfx.h"
#include "MyClass2.h"

MyClass2::MyClass2(void)
{
    x = 0;
    y = 0;
}

void MyClass2::SetXY(int nx, int ny)
{
    x = nx;
    y = ny;
}

int MyClass2::GetX(void)
{
    return x;
}

int MyClass2::GetY(void)
{
    return y;
}
```

Then you can use a managed pointer to the class as follows:

```cpp
// An example of class access via a managed pointer
MyClass2 ^p; // managed pointer to a class
int x, y;

p = gcnew MyClass2; // Allocating memory for the pointer

p->SetXY(-8, 32); // Calling the SetXY() method via pointer p
x = p->GetX();    // x = -8
y = p->GetY();    // y = 32
```

Related topics

- Pointers. General concepts. Pointer types. Managed and unmanaged pointers. Pointers to a function. Examples of the use of pointers
- Pointers. Unmanaged pointers. Operations on pointers. Pointer to type void. Memory allocation. A null pointer. The operation of getting the address &
- Pointers and strings. Using pointers when converting strings
- Pointers. Memory allocation for a pointer. Arrays of pointers to basic types, functions, structures, classes
- Pointers. Composite managed and native data types. Managed pointers (^) in the CLR. Memory allocation. The ref and value qualifiers. Managed pointers to structures and classes
- Arrays. Array definition. One-dimensional arrays. Initializing an array
- Arrays. Two-dimensional arrays. Arrays of strings. Multidimensional arrays
- Structures. Composite data types. The template of a structure. Structural variable. Structures in the CLR. Declaring and initializing a structured variable
- Structures. Memory allocation for a structure. Nested structures. Arrays of native structures
The code below works with Node.js 4.4:

```js
"use strict";

const test = (res) => {
    return (data) => {
        return res.json({"message": "testing"});
    };
};

module.exports = test;
```

Yes, you can use const like that. const means "the value of this variable cannot be changed", and the interpreter will complain if you try to assign a new value to it.

Is the code above "correctly written using ES6"? Depends what you mean... for example, ES6 uses export instead of module.exports, but what you've written is not wrong. After all, it works. ES6 is not a different language - it's JavaScript with some new features. It's up to you to decide how many of those features you want to use.
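For comparison, here is a sketch of the same module written with ES6 export syntax. (Note that Node.js 4.4 has no native support for ES modules, so this would need a transpiler such as Babel.)

```js
"use strict";

const test = (res) => {
    return (data) => {
        return res.json({"message": "testing"});
    };
};

export default test;
```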
Purpose

One of the paradigms of modern architecture is service orientation (SOA), with its SAP flavour ESA (enterprise service architecture). An important principle of SOA is implementation transparency, i.e. it doesn't matter in which language a service is implemented. This allows us to combine services from different specialised environments into composite applications or processes.

PHP is a scripting language with an always-growing userbase, having its strengths in a number of areas in which it can complement ABAP well (e.g. XML schema validation, small-footprint graphics manipulation, cryptology, ...). Integrating PHP and ABAP therefore adds a valuable opportunity for those creating state-of-the-art solutions around the Netweaver platform. In this weblog I'll give a simple tutorial to show how to make use of webservice technologies to integrate PHP and ABAP. In my opinion, the features of webservices and SOAP make the use of the good old PHPRFC library (which did such a good job for many years) obsolete. One interface technology less.

Prerequisites

The examples shown here require ABAP WAS 6.40+ and PHP 5.x. The SOAP libraries in PHP are not enabled by default, so we have to enable them by adding extension=php_soap.dll to the php.ini file. For development we don't want the WSDLs to be cached once downloaded, so we have to set soap.wsdl_cache_enabled=0 in php.ini.

Calling an ABAP webservice from PHP

First we have to create the webservice in ABAP. We create a very simple webservice, accepting one input parameter and returning a parameter based on it. So, we create a function module ZTW_FUB1:

```abap
FUNCTION ZTW_FUB1.
*"---------------------------------------------------------------------
*"*"Lokale Schnittstelle:
*"  IMPORTING
*"     VALUE(P_IN) TYPE STRING
*"  EXPORTING
*"     VALUE(P_OUT) TYPE STRING
*"---------------------------------------------------------------------
  concatenate 'ABAP says: Hello' space p_in into p_out separated by space.
  concatenate p_out '!' into p_out.
ENDFUNCTION.
```

This is probably the most simple function module to test the functionality. It expects a name, say Sue, and returns "ABAP says: Hello Sue!" We remote-enable, save and activate it.

Next we create a webservice based on it. Since this is a bit different for different versions of ABAP WAS, please consult the documentation for it. Regardless of the version it's quite simple: just right-click the function module and find the appropriate action, or create a new object of type webservice interface. Do not forget to release the service for the SOAP runtime. When everything is set up, we get the WSDL (webservice description language; 'the service description') at the corresponding URL on tonysub, the name of our ABAP host. Just view it in your browser if you have never seen one 🙂

The next thing is to create a client in PHP to consume this service. SOAP support is built in for PHP 5.x, so we can immediately start to code our client:

```php
<?php
$wsdlurl  = ""; // set this to the WSDL URL of the service
$login    = "bcuser";
$password = "minisap";
?>
Your name: ...
Debug? : ...
<?php
$client = new SoapClient($wsdlurl, array('login' => $login,
                                         'password' => $password,
                                         'trace' => true,
                                         'exceptions' => true));
try {
    echo $client->ZtwFub1(array('PIn' => $name))->POut;
} catch (SoapFault $e) {
    echo 'Caught an Error: (' . $e->faultcode . ') - ' .
         $e->faultstring;
}
if (isset($debug)) {
    echo "----\n";
    echo "Request :\n" . htmlspecialchars($client->__getLastRequest()) . "\n";
    echo "Response:\n" . htmlspecialchars($client->__getLastResponse()) . "\n";
}
?>
```

There's 'a lot' of coding masking the simplicity of the webservice call, so here are the crucial parts again. The constructor:

```php
$client = new SoapClient($wsdlurl, array('login' => $login,
                                         'password' => $password,
                                         'trace' => true,
                                         'exceptions' => true));
```

And the actual service call (and display of the result):

```php
echo $client->ZtwFub1(array('PIn' => $name))->POut;
```

The rest of the coding shows:

- an HTML form letting you enter your name and decide if you want to see debugging information
- SOAP error handling
- debugging of the SOAP communication

Here is what we've got so far. An input screen to input your name:

And the result showing the answer of the webservice:

Calling a PHP webservice from ABAP

First we create a webservice in PHP. This is not as easy as one might expect from PHP. The reason for this is that, using the standard SOAP functions, you need a WSDL first to instantiate a webservice. There are libraries on the internet allowing you to autogenerate a WSDL based on some PHP functions, but those libraries are not too elaborate yet. One good solution is to adapt an existing WSDL: mainly we adopt some namespaces to point to our own namespace urn:tonysub/soapexample1 and the SOAP:address location to point at our webservice. We save this as soapserver1.wsdl to our web directory. Having the WSDL, it's very easy to create the webservice in PHP:

```php
<?php
function ZtwFub1($name)
{
    return (array('POut' => "PHP says: Hello " . $name->PIn . "!"));
}

$server = new SoapServer("soapserver1.wsdl");
$server->addFunction("ZtwFub1");
$server->handle();
?>
```

That's the PHP webservice. We could test it by simply pointing the client above to the new WSDL location (and clearing login and password for this example). But we quickly move on to test it from the ABAP side.

We go to SE80 again and create a so-called webservice proxy object using the same WSDL. We follow the wizard and get a proxy object, e.g. ZTW_CO_ZTW_WSX. DDIC structures for the input and output parameters are created, here ZTW_ZTW_FUB1 and ZTW_ZTW_FUB1RESPONSE. Finally we have to go to transaction LPCONFIG and create a logical port for our proxy object. So, the only thing left to do is write a report to make use of this proxy object:

```abap
*&---------------------------------------------------------------------*
*& Report ZTW_CALL_WS
*&---------------------------------------------------------------------*
REPORT ZTW_CALL_WS.

data: my_client type ref to ztw_co_ztw_wsx,
      my_result type ztw_ztw_fub1response,
      my_input  type ztw_ztw_fub1.

parameters: my_name(20) type c.

my_input-pin = my_name.
create object my_client.

call method my_client->ZTW_FUB1
  exporting
    INPUT = my_input
  importing
    OUTPUT = my_result.

write: / my_result-pout.
```

Voilà, we end up with the following. An input screen to enter a name:

And the result showing the answer of the PHP webservice:

Greetings, Blag.

Regards, Sumith

This one too... PHP + IDocs. Cheers, Kathir

I don't speak or write English well, sorry! I created a WS in ABAP exactly per your indications, and I created the application in PHP, but it does not consume the service... it does not continue past this line:

```php
$login, 'password' => $password, 'trace' => true, 'exceptions' => true));
try {
    echo $client->ZtwFub1(array('P_IN' => $name))->P_Out;
} catch (SoapFault $e) {
    echo 'Caught an Error: [' . $e->faultcode . '] - ' .
        $e->faultstring;
}
if (isset($debug)) {
    echo " ---------------------------------------------------------------- \n";
    echo "Request :\n" . htmlspecialchars($client->__getLastRequest()) . "\n";
    echo "Response:\n" . htmlspecialchars($client->__getLastResponse()) . "\n";
    echo " ";
}
?>
```

The settings in php.ini were modified too. I also attempted it with "nusoap" and it does not work. Can you give me a hand? Thank you!!!

Is your PHP version 5.x? (Do you know PHP?) Did you implement the webservice in ABAP? What is your error message? regards, anton

No problem; I simply eliminated lines that were not necessary in my case. Thank you for your help!!! Regards!

The blog states the requirement as 6.40+; AFAIK 'minisap' was the name of the demo versions accompanying ABAP programming books some years ago, the last of which was a 4.6 system. Later test systems were (and still are) available for download at SDN as trial versions. So: 4.6 systems do not support webservices at all; 6.20 systems have a very early version of the SOAP runtime (webservice environment), which is no fun to work with; 6.40+ systems support webservices with growing functionality and maturity. If you have your suitable system set up, the rest is fairly easy. Go to SE80, find the function module you want to expose, right-click it and find the menu entry to create a webservice; you'll be supported by a wizard. If you're done with that, you have to configure your service, which differs from version to version; see the documentation for that and join the SOA forum for advanced questions (after you searched for solutions yourself 😉 ). Hope it helps, anton

First of all, thx for being so fast and accurate in your explanations. ...well, I succeeded in creating the web service [I named it ZHELLO] in SAP Netweaver 7.0 [ABAP Trial Version]; however, I don't know how to find out the name of the ABAP host, in order to get the 'service description' in the browser [http://......]. So, please, any suggestion is very welcome. Kind Regards!

Hello Federico! I think your problem is that you are searching for the content in the header instead of the response body. Instead of re-reading my own old blog, I quickly made you a working example of reading T005. Just try it out and tell me if it works for you: take the function module, create a webservice from it, and call it in PHP as shown (hope you get the formatting right, it's a bit of a pain here). But it should give you the required table. regards, anton

That was precisely the solution that was implemented, but it did not work with a WS made by a supplier outside SAP. I asked the provider to generate the WS you put as an example and it worked fine. Taking your example, the supplier made the corrections in their WS to return data as a table type and now it works. Thanks for the help, the example and your time.
Regards, Federico

PHP SoapClient works well for the default sap-client and ONLY WSDL 1.1; if the user's password is for a different client, it seems to fail. Say 300 and 600 are two SAP clients and 300 is the default. The WSDL for 600 works, but with a user belonging to 600 I cannot log in, as there is no provision for sap-client in:

```php
$client = new SoapClient($wsdl, array("login" => $user, "password" => $passwd));
```

It works fine if the user belongs to 300 (the default).

PHP SoapClient does NOT support WSDL 2.0 as generated by the ECC6 wizard. It only supports WSDL 1.1 as prevailed in the 4.7 era; luckily, that is still supported. WSO2 PHP SOAP does support WSDL 2.0, but I think it best a) to stick to SAP PHP RFC, or b) to use the SoapClient WSDL 1.1 route. It is time for an official SAP PHP that is as good as JCo! Is SAP listening?

Hi Jay, if you use the URL parameter &(?)sap-client=nnn in your service call, you can access the service on a specific SAP client. I can't re-check your info right now, but I know that I have called numerous webservices using a PHP client against any ECC6 system version. It always worked easily. So either 2.0 is not the default version of generated services, or the SoapClient of a reasonably recent PHP client supports 2.0. Will re-check when time permits. regards, anton
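For completeness, anton's suggestion would look roughly like this in the client (the URL shape here is assumed, not taken from the post):

```php
// Append sap-client to the service/WSDL URL so the logon is checked
// against client 600 instead of the default client.
$wsdl = "http://host:8000/...?sap-client=600"; // hypothetical URL
$client = new SoapClient($wsdl, array("login" => $user, "password" => $passwd));
```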
Android API Demos for Studio

When learning to program Android apps, nothing beats seeing working example code. Unfortunately, rapid changes in the Android Operating System (OS), and in Google's preferred tools, mean that it is sometimes hard to find good working samples. (Hence the free Android Studio example projects provided here on Tek Eye.) Google does provide plenty of sample projects and code, even though the right piece of code can sometimes be hard to find. This article is about running the old Android API Demos sample app in Android Studio.

What Ever Happened to the Old Android Sample Projects

Earlier versions of the Android Software Development Kit (SDK) shipped samples that could be loaded and run in the preferred Integrated Development Environment (IDE) of the time, the Eclipse IDE. Each new release of the SDK would usually introduce new sample projects, mainly to demonstrate a new Application Programming Interface (API). When Google switched to the new Android Studio IDE, the Eclipse samples were moved to a legacy folder and new Gradle-based samples were added. With the release of Android Nougat (API 24) the samples moved online.

The legacy samples are useful because they cover many of the basic programming functions that new Android developers need to know about. In particular, the API Demos legacy sample provides demos and code for many of the fundamental built-in Android classes. The API Demos were even installed by default in some Android Virtual Devices (AVDs).

What if you want to run the legacy API Demos app? It can be done, but it is a bit painful. You need to get hold of the Android samples from Android Jelly Bean MR1 (API 17). The legacy folder has the API Demos Eclipse project. Then you can import the Eclipse project, sort out the Gradle syncing, rename a file, and begin working through a list of errors reported in Studio. Plus refactor the API Demos namespace if you don't want to remove the existing API Demos app in an AVD. Alternatively, just use the Studio-compatible API Demos project that is available here on Tek Eye.

Running the Android API Demos in Studio

The Android API Demos project for Studio will load and run. It is not perfect, due to deprecated code and other Android changes over the years. But at least it will compile and run, and most of the demos work. This is great if you want to see some of the fundamental Android classes in action.

Download the API Demos zip file. Extract the code to a directory on the PC (it will remain in that location). Then use the Android Studio Import Project option to load it up (selecting the build.gradle file). Wait for Studio and Gradle to do their thing. Click OK on the message "Gradle settings for this project are not configured yet" (unless you want to configure Gradle manually). Accept the sync message if it appears. Use the status messages at the bottom of the Studio window to monitor Gradle progress. Once loaded, the API Demos should run on a suitable AVD, or on an Android device configured for development (otherwise resolve any errors listed).

Known Issue

On some versions of Android after API 19 (Android KitKat), the API Demos app will fail with an exception on loading: java.lang.RuntimeException: Package manager has died. The error reported is !!! FAILED BINDER TRANSACTION !!! when a call is made to queryIntentActivities in the ApiDemos.java file. It appears a change in later versions of Android (APIs 21 to 23, Lollipop and Marshmallow) limits the number of Activities that can be defined in AndroidManifest.xml.
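For context, the failing call is the kind of activity query sketched below (a rough reconstruction, not the exact code from the sample):

```java
import android.content.Intent;
import android.content.pm.PackageManager;
import android.content.pm.ResolveInfo;
import java.util.List;

// Inside an Activity: ApiDemos lists every activity in the app tagged
// with the sample-code category; with thousands of manifest entries the
// returned list can exceed the binder transaction buffer on APIs 21-23.
PackageManager pm = getPackageManager();
Intent mainIntent = new Intent(Intent.ACTION_MAIN, null);
mainIntent.addCategory(Intent.CATEGORY_SAMPLE_CODE);
List<ResolveInfo> list = pm.queryIntentActivities(mainIntent, 0);
```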
The manifest file for the API Demos app is over 3000 lines long. If about half the defined activities are removed from AndroidManifest.xml, the app will run on APIs 21 to 23. Fortunately, as well as on Android KitKat and earlier versions, API Demos will load and run on Android Nougat and Oreo, APIs 24 and later.

A few of the many demos in the API Demos app will also throw exceptions, due to changes to the Android API over the years. The project was a straight conversion to a Studio project from the original Eclipse code. Some messaging (MMS) code examples were commented out due to later SDK incompatibilities, but no other code changes have been made. At least the project loads, compiles and runs in Studio (on the right API AVD or device), providing access to lots of API demo code.

See Also

- See the other Android Studio example projects to learn Android app programming.
- For a full list of the articles on Tek Eye, see the full site Index.

Author: Daniel S. Fowler
The days of the Wild West are coming to their end in the world of Python testing. It was not many years ago that nearly every project built with Python seemed to have its own idioms and practices for writing and running tests. But now, the frontier is finally beginning to close. The community is rallying around a few leading solutions that are bringing convenience and common standards to the test suites of hundreds of popular projects.

This is the first in a series of three articles that will serve as a guide to the new testing frameworks. In this article, you will be introduced to three popular testing frameworks and see the radically simpler test style that the newest generation of tools is encouraging. The second article, Discovering and selecting tests, will step back and look at the larger question of how these frameworks automate the task of finding and cataloging your project's tests in the first place. Finally, Test reporting with a Python test framework will look at the powerful features these frameworks provide for viewing the results of your test runs. By learning the common idioms of these three frameworks, you will not only be better prepared to read through other programmers' Python packages, but to build elegant and powerful test suites for your own applications as well.

The candidates: Three Python testing frameworks

There are three Python testing frameworks that seem to be in use on large code bases today. Taking them in chronological order, they are:

zope.testing

As usual, the developers working on the Zope project seem to have been early innovators. They needed a uniform way to discover and run tests across their large code base, and their answer was the zope.testing package, which remains heavily used to this day. The zope.testing package only supports traditional Python test styles like unittest and doctest, and not the radically simpler styles permitted by the more recent frameworks. But it does offer a powerful system of layers, with which whole directories full of tests can rely on common setup code that creates once, for the layer (rather than once for each test), the environment in which the tests need to run.

py.test

It was in 2004 that Holger Krekel renamed his std package, whose name was often confused with that of the Standard Library that ships with Python, to the (only slightly less confusing) name 'py'. Though the package contains several sub-packages, it is now known almost entirely for its py.test framework. The py.test framework sets a new standard for Python testing, and is popular with many developers today. The elegant and Pythonic idioms it introduced for test writing have made it possible for test suites to be written in a far more compact style than was possible before, as you shall see below.

nose

The nose project was released in 2005, the year after py.test received its modern guise. It was written by Jason Pellerin to support the same test idioms that had been pioneered by py.test, but in a package that is easier to install and maintain. Though py.test has in several ways caught up, and today is quite easy to install, nose has retained its reputation for being very sleek and easy to use. At Python conventions, it is now common to see developers wearing black T-shirts showing the nosetests command, followed by the field of periods with which it denotes successful tests.
Interest in nose continues to increase, and one often sees posts on other project mailing lists in which the local developers ask the project leads when their project will be permitted to make the switch to nose. Of the three projects, it looks like nose might well become the standard, with py.test having a smaller but loyal community and zope.testing remaining popular only for projects built atop the Zope framework. But all are actively maintained, and each has some unique features. Keep reading, and learn about the features and differences among the three, so that you can make the right choice for your own projects.

The testing revolution

The py.test framework transformed the world of Python testing by accepting plain Python functions as tests, instead of insisting that tests be packaged inside of larger and heavier-weight test classes. Since the nose framework supports the same idiom, these patterns are likely to become more and more popular. Imagine that you want to check whether the Python truth values True and False are really, as Python promises, equivalent to the Boolean numbers 1 and 0. Either py.test or nose will accept and run the following few lines of code as valid tests that answer this question:

```python
# test_new.py - simple test functions

def testTrue():
    assert True == 1

def testFalse():
    assert False == 0
```

In contrast to the simplicity of the above example, you will find that older documentation about Python testing is replete with verbose example tests that all go something like this:

```python
# test_old.py - the old way of doing things

import unittest

class TruthTest(unittest.TestCase):
    def testTrue(self):
        assert True == 1
    def testFalse(self):
        assert False == 0

if __name__ == '__main__':
    unittest.main()
```

Experienced users of unittest might try to argue that the above example should use the testing methods that my new TruthTest class has inherited from TestCase. For example, they would encourage me to use assertEqual() instead of an assert statement that tests manually for equality, in which case the test would indeed use self instead of ignoring it:

```python
# alternate version of the testTrue method
...
    def testTrue(self):
        self.assertEqual(True, 1)
...
```

There are three responses to this recommendation. First, calling a method hurts readability. While the assertEqual() method name does indicate that the two values are being tested for equality, the code still does not look like a comparison in the way that the Python == operator looks like a comparison to someone familiar with the language. Second, as you will see in the third article in this series, the new testing frameworks now know how to introspect assert statements to inspect the condition that made the test fail, which means that a bare assert statement can now lead to test failure messages that are just as informative as the results of calling the old methods like assertEqual(). Finally, even if assertEqual() were still necessary, it would surely be simpler and more Pythonic to import such a function from a testing module, instead of using class inheritance merely to make functions available! You will see below, in fact, that when both py.test and nose want to make additional routines available to support tests, they simply define them as functions and expect users to import them into their code.

Of course, when authors actually need setup and teardown routines that cache state for later use in test cases, unittest subclasses still make eminent sense, and both py.test and nose fully support them.
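As a quick sketch of that idiom (mine, not from the article; the helper name is invented), a test class can prepare shared state before each test:

```python
import unittest

class RecordTest(unittest.TestCase):
    def setUp(self):
        # Runs before each test method; cache the state the tests need.
        self.records = load_test_records()  # hypothetical helper

    def testCount(self):
        assert len(self.records) == 3
```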
And many Python tests these days are written as doctests, which are supported by Python's standard library and need not make use of either functions or classes:

```
Doctest For The Above Example
-----------------------------

The truth values in Python, named "True" and "False", are equivalent
to the Boolean numbers one and zero.

>>> True == 1
True
>>> False == 0
True
```

But when programmers want to write simple test code without all the verbiage involved in a doctest, then test functions are a wonderful way to write. Above all, test functions vastly enhance what might be called the writability of tests. Instead of making each programmer remember, re-invent, or copy the test scaffolding from the last test he wrote, the new conventions enable a Python programmer to write tests as one usually writes Python code: by simply opening an empty file, and typing!

Framework-specific conveniences

Both py.test and nose provide special routines that make writing tests easier. You might say that they each allow you to write tests using their own particular dialect of convenience functions. This can make test writing simpler and less error-prone, and also result in test code that is shorter and more readable. But using these routines also carries an important consequence: your tests are then tied to the framework whose functions you are using. The trade-off is one of convenience versus compatibility. If you write all of your tests from the ground up using only the clunky standard Python unittest module, then they will work under any testing framework you choose. Going a step further, if you adopt the simpler and sleeker practice of writing test functions (as described above), then your tests will at least work under both py.test and nose. But if you start using features peculiar to one testing framework, then a good deal of rewriting might be necessary in the future if another one of the frameworks develops important new features and you decide to migrate.

Both py.test and nose provide an alternative for the assertRaises() method of TestCase. The version provided by py.test is a bit fancier, because it can also accept a string to execute, which is more powerful because you can test expressions that raise exceptions rather than only function calls:

```python
# conveniences.py
import math
import py.test

py.test.raises(OverflowError, math.log, 0)
py.test.raises(ValueError, math.sqrt, -1)
py.test.raises(ZeroDivisionError, "1 / 0")

import nose.tools
nose.tools.assert_raises(OverflowError, math.log, 0)
nose.tools.assert_raises(ValueError, math.sqrt, -1)
# No equivalent for third example!
```

Beyond the testing of exceptions, however, the two frameworks part ways. The only other py.test convenience seems to be a function to determine whether a particular call triggers a DeprecationWarning:

```python
py.test.deprecated_call(my.old.function, arg1, arg2)
```

On the other hand, nose seems to have a quite rich set of assertion functions, both for cases where you want to avoid a bare assert statement, and where you need to do something more complicated.
You should consult its documentation for details, but here is a quick synopsis of the possibilities offered by nose.tools:

```python
# nose.tools support functions for writing tests
assert_almost_equal(first, second, places=7, msg=None)
assert_almost_equals(first, second, places=7, msg=None)
assert_equal(first, second, msg=None)
assert_equals(first, second, msg=None)
assert_false(expr, msg=None)
assert_not_almost_equal(first, second, places=7, msg=None)
assert_not_almost_equals(first, second, places=7, msg=None)
assert_not_equal(first, second, msg=None)
assert_not_equals(first, second, msg=None)
assert_true(expr, msg=None)
eq_(a, b, msg=None)
ok_(expr, msg=None)
```

The routines above that check for an approximate value are especially important when dealing with floating-point results, if you want to write tests flexible enough to succeed on Python implementations with subtle differences in their handling of floating point.

Distributed testing

Tests seem to get run more and more often these days. The practice of continuous testing has now been adopted in many shops, where project tests are run with every check-in to the team's version-control system. And as test-driven development grows in popularity, many developers now write and run the tests for a new module before they even bring up their editor to start writing the module's code.

If tests take a long time to run, then they can become an important roadblock to developer productivity. It is therefore an advantage to be able to bring as much computing power as possible to bear against the task of running tests. On a small scale, this can mean running multiple testing processes to take advantage of all of the CPU cores on your machine. For larger projects, whole farms of test machines are configured, either using dedicated servers ready to run tests in parallel, or even using the combined idle time of all of the developers' workstations together.

In the area of parallel and distributed testing, the three testing frameworks this article looks at have quite significant differences:

- The zope.testing command line has a -j option that specifies that several testing processes should be started instead of all tests being done in the same process. Since each process can run on a different CPU core, running -j 4 on a four-CPU machine would allow all four CPUs to be active in running tests at once.
- The nose project reports that they have support for parallel tests now committed to their project trunk, but normal users will have to wait for the next release before trying it out.
- The py.test tool not only supports a multiprocessing option (-n) for running on several CPU cores like zope.testing, but it actually has the tools to distribute tests among an entire farm of test servers.

Of these three frameworks, py.test looks like the clear leader in this area. Not only can you give it multiple --tx options, each describing an environment or remote server on which you want to run tests, but it actually supports distributing tests for two quite different reasons! With --dist=load, it will use your server farm for the traditional task of spreading your running tests across several machines to reduce the time you spend waiting. But with --dist=each, it does something more sophisticated; it will make sure that each test gets run on each of the different testing environments that you have made available to py.test. This means that py.test can simultaneously test your product on multiple versions of the Python interpreter, and on multiple operating systems.
This makes py.test a very strong contender if your project supports multiple platforms and you want a testing solution that will support you out of the box, without requiring you to write your own scripts for copying tests to several different platforms and running them. Customization and extensibility All three testing frameworks provide ways for both individual users and for whole projects to select the behaviors and options they want from their test framework. - The zope.testingmodule is, in Zope packages, often called by a buildoutrecipe that specifies default options. This means that developers running the tests will get a uniform set of results, while still being free to specify their own command-line switches when the behaviors selected at the project level do not fit their needs. - Per-user customization is supposed by the noseframework through a nose.cfgor a .nosercfile in a user's home directory, where he can specify his own personal preferences for how test results are displayed. - Per-project options can be provided for either framework. The py.testframework will detect conftest.pyfiles in any project which it is testing, where it will look for per-project options like whether to detect and run doctests and for the patterns that it should use to detect test files and functions in the first place. The noseframework, on the other hand, looks for a project-wide setup.cfgfile, which is an already-standard way of providing information about a Python package, and looks for a [nosetests]section inside of it. And, going beyond what can be accomplished by varying their configuration, both py.test and nose provide support for plug-ins, user-supplied modules that can install new command-line options and add new behaviors to both tools. Conclusion Adopting one of the new generation of Python testing frameworks will provide concise idioms and uniform testing techniques that, in the past, every Python project had to supply for itself. The next article will begin to examine the larger testing machinery that each framework implements, the techniques that they use to examine your project in search of test modules and test files. Visit the next article to read more. Resources Learn -: Describes how to use the Distutils to make Python modules and extensions. - PEAK Setuptools - Wiki of Python testing tools - 2004 IBM DeveloperWorks article on unittest and doctest -.
This blog post is part of a series about how Windows Phone 8.1 affects developers. It talks about how we can use the text-to-speech functionality, and is written by Alexander Persson at Jayway; it was originally posted here. The series is written in collaboration with Microsoft evangelist Peter Bryntesson; check out his blog here.

Introduction

Using text-to-speech (TTS) in your app can make it stand out from the crowd as a cool feature. But it can also come in handy when you want to make your app accessible to people who have a hard time reading the often small fonts on the screen.

Speak text

In Windows Phone 8.0 it was really easy to get TTS to work. You just added a few lines of code and it was all set:

```csharp
private async Task TextToSpeach(string textToRead)
{
    using (var speech = new SpeechSynthesizer())
    {
        await speech.SpeakTextAsync(textToRead);
    }
}
```

This code will just take your parameter and speak the text. You had to turn on the Speech recognition capability in your app manifest or you would get an Access Denied error. But overall it's not that much code.

In Windows Phone 8.1 we have a new API that works like it does in Windows Store apps. Notice that we have to enable the Microphone capability.

```csharp
private async Task TextToSpeach(string textToRead)
{
    using (var speech = new SpeechSynthesizer())
    {
        var voiceStream = await speech.SynthesizeTextToStreamAsync(textToRead);
        player.SetSource(voiceStream, voiceStream.ContentType);
        player.Play();
    }
}
```

A little bit more code, and another namespace than in Windows Phone 8.0. Before we are able to speak our text we must turn it into a stream. We then set this stream as the source of our MediaElement (named player in XAML) and play it. So, a few more lines, and we now also require a MediaElement to be able to play the text. The nice thing about this is that we now get a stream, and we can do a lot of fun things with a stream. It's possible to save it to disk, share it with another player app, or use it in some awesome audio mixer application.

Get all voices installed

As a result of the new namespaces, the API for getting the installed voices has also changed. In Windows Phone 8.0 we had:

```csharp
var voices = InstalledVoices.All;
```

And in Windows Phone 8.1 we have:

```csharp
var voices = SpeechSynthesizer.AllVoices;
```

Both return an IReadOnlyList of VoiceInformation, so the result is the same on both platforms.

Getting the voice and setting a new one

It's possible to get the current voice we are using, and it's done similarly on both platforms.

In Windows Phone 8.0:

```csharp
using (var speech = new SpeechSynthesizer())
{
    var currentVoice = speech.GetVoice();
}
```

In Windows Phone 8.1:

```csharp
using (var speech = new SpeechSynthesizer())
{
    var currentVoice = speech.Voice;
}
```

To set a new voice (in this example, the first female voice), in Windows Phone 8.0:

```csharp
using (var speech = new SpeechSynthesizer())
{
    speech.SetVoice(InstalledVoices.All
        .First(i => i.Gender == VoiceGender.Female));
}
```

And in Windows Phone 8.1:

```csharp
using (var speech = new SpeechSynthesizer())
{
    speech.Voice = SpeechSynthesizer.AllVoices
        .First(i => i.Gender == VoiceGender.Female);
}
```

Summary

This post described some of the new text-to-speech features and how the API changed from Windows Phone 8.0. As we could see, the API hasn't changed that much besides how we play the text aloud. In Windows Phone 8.1 we now require a MediaElement, but we benefit from getting a stream that we can do whatever we want with.
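Since we get a stream back, saving the synthesized speech to a file is one obvious trick. The sketch below is my own (the file name, and the assumption that the stream holds WAV data, are not from the original post):

```csharp
using System.IO;
using System.Threading.Tasks;
using Windows.Media.SpeechSynthesis;
using Windows.Storage;

private async Task SaveTextToSpeechAsync(string textToRead)
{
    using (var speech = new SpeechSynthesizer())
    {
        SpeechSynthesisStream voiceStream =
            await speech.SynthesizeTextToStreamAsync(textToRead);

        // Create a file in the app's local folder and copy the audio into it.
        StorageFile file = await ApplicationData.Current.LocalFolder
            .CreateFileAsync("speech.wav", CreationCollisionOption.ReplaceExisting);

        using (Stream source = voiceStream.AsStreamForRead())
        using (Stream target = await file.OpenStreamForWriteAsync())
        {
            await source.CopyToAsync(target);
        }
    }
}
```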
Please tell me WP 8.1 added the ability to run TTS while background audio is playing… I tried TTS in a test app and it would not work.

I copied the code from your article, enabled the Microphone capability and added a MediaElement. It failed to talk on the 8.1 emulator. (I can get TTS to work on the 8.0 emulator.) I tried playing around with the code but it will not work for me. Is there a problem with TTS on 8.1 or is it my coding/setup? (I am using VS 2013 Ultimate update 2, and I am coding a (not Silverlight) 8.1 app.) Apart from my problem, this is a useful article. There is no example of TTS in the 8.1 samples from MS. David
Your message dated Thu, 11 Apr 2002 14:01:00 -0400
with message-id <[email protected]>
and subject line Bug#61149: fixed in sfio 2000-1
has caused the attached Bug report to be marked as done.

Received: (qmail 16302 invoked from network); 27 Mar 2000 07:21:47 -0000
Received: from c454468-a.frmt1.sfba.home.com (HELO tytlal.z.streaker.org) ([email protected]) by master.debian.org with SMTP; 27 Mar 2000 07:21:47 -0000
Received: from chip by tytlal.z.streaker.org with local (Exim 3.12 #1 (Debian)) id 12ZTqN-0000oB-00 for <[email protected]>; Sun, 26 Mar 2000 23:21:47 -0800
Date: Sun, 26 Mar 2000 23:21:47 -0800
From: Chip Salzenberg <[email protected]>
To: [email protected]
Subject: Conflict in L_* macros between sfio and stdio
Message-ID: <[email protected]>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
User-Agent: Mutt/1.1.9i
Sender: Chip Salzenberg <[email protected]>

Package: sfio-dev
Version: 1999-3

Among other useful things, /usr/include/sfio/stdio.h defines some limit macros: L_cuserid, L_ctermid, and L_tmpnam. However, if a program that has included <sfio/stdio.h> later includes <limits.h>, that leads to inclusion of <bits/xopen_lim.h>, which then includes <bits/stdio_lim.h>, which redefines L_tmpnam differently from sfio. In any case, since sfio as built for Debian doesn't have its own tmpnam function, it shouldn't be using a different L_tmpnam definition from the one in <bits/stdio_lim.h>.

Suggested solution: Since we know that Debian is using glibc, portability to non-glibc targets isn't our concern. I suggest the following patch:

--- stdio.h.distrib  Mon Dec 13 23:34:16 1999
+++ stdio.h          Sun Mar 26 23:18:48 2000
@@ -11,11 +11,23 @@

 #include <sfio.h>

+#if defined __USE_SVID || defined __USE_XOPEN
+/* Default path prefix for `tempnam' and `tmpnam'. */
+# define P_tmpdir "/tmp"
+#endif
+
+/* Get the values:
+   L_tmpnam     How long an array of chars must be to be passed to `tmpnam'.
+   TMP_MAX      The minimum number of unique filenames generated by tmpnam
+                (and tempnam when it uses tmpnam's name space),
+                or tempnam (the two are separate).
+   L_ctermid    How long an array to pass to `ctermid'.
+   L_cuserid    How long an array to pass to `cuserid'.
+   FOPEN_MAX    Minimum number of files that can be open at once.
+   FILENAME_MAX Maximum length of a filename.  */
+#include <bits/stdio_lim.h>
+
 #define _IOFBF 0
 #define _IONBF 1
 #define _IOLBF 2
-#define L_ctermid 32
-#define L_cuserid 32
-#define P_tmpdir "/tmp/"
-#define L_tmpnam (sizeof(P_tmpdir)+32)

 #define fpos_t Sfoff_t

--
Chip Salzenberg - a.k.a. - <[email protected]>
"I wanted to play hopscotch with the impenetrable mystery of existence, but he stepped in a wormhole and had to go in early."
// MST3K

---------------------------------------
Received: (at 61149-close) by bugs.debian.org; 11 Apr 2002 18:09:31 +0000
>From [email protected] Thu Apr 11 13:09:31 2002
Return-path: <[email protected]>
Received: from auric.debian.org [206.246.226.45] (mail) by master.debian.org with esmtp (Exim 3.12 #1 (Debian)) id 16vj0l-0007jI-00; Thu, 11 Apr 2002 13:09:31 -0500
Received: from katie by auric.debian.org with local (Exim 3.12 #1 (Debian)) id 16visW-0005mu-00; Thu, 11 Apr 2002 14:01:00 -0400
From: Stephen Zander <[email protected]>
To: [email protected]
X-Katie: $Revision: 1.5 $
Subject: Bug#61149: fixed in sfio 2000-1
Message-Id: <[email protected]>
Sender: Archive Administrator <[email protected]>
Date: Thu, 11 Apr 2002 14:01:00 -0400
Delivered-To: [email protected]

We believe that the bug you reported is fixed in the latest version of sfio, which is due to be installed in the Debian FTP archive:

sfio-dev_2000-1_i386.deb to pool/main/s/sfio/sfio-dev_2000-1_i386.deb
sfio1999_2000-1_i386.deb to pool/main/s/sfio/sfio1999_2000-1_i386.deb
sfio2000_2000-1_i386.deb to pool/main/s/sfio/sfio2000_2000-1_i386.deb
sfio_2000-1.diff.gz to pool/main/s/sfio/sfio_2000-1.diff.gz
sfio_2000-1.dsc to pool/main/s/sfio/sfio_2000-1.dsc
sfio_2000.orig.tar.gz to pool/main/s/sfio/sfio_2000.orig.tar.gz

A summary of the changes between this version and the previous one is attached.

Thank you for reporting the bug, which will now be closed. If you have further comments please address them to [email protected], and the maintainer will reopen the bug report if appropriate.

Debian distribution maintenance software
pp. Stephen Zander <[email protected]> (supplier of updated sfio package)

(This message was generated automatically at their request; if you believe that there is a problem with it please contact the archive administrators by mailing [email protected])

-----BEGIN PGP SIGNED MESSAGE-----

Format: 1.7
Date: Sun, 31 Mar 2002 21:12:36 -0800
Source: sfio
Binary: sfio1999 sfio-dev sfio2000
Architecture: source i386
Version: 2000-1
Distribution: unstable
Urgency: low
Maintainer: Stephen Zander <[email protected]>
Changed-By: Stephen Zander <[email protected]>
Description:
 sfio-dev - Enhanced library for managing I/O streams (development).
 sfio1999 - Enhanced library for managing I/O streams.
 sfio2000 - Enhanced library for managing I/O streams.
Closes: 61149 123521
Changes:
 sfio (2000-1) unstable; urgency=low
 .
   * New upstream source.
   * Include bits/stdio_lim.h in sfio/stdio.h to prevent conflicts between
     various L_* macros, Closes: #61149
   * New maintainer, Closes: 123521
Files:
 9680b8e9d9c62ecc846c209c7eb04bdb 723 - optional sfio_2000-1.dsc
 947763fbba34cc2de16e226f6be0eda6 377458 - optional sfio_2000.orig.tar.gz
 95de261b78f94e25efd3887736b4beeb 21835 - optional sfio_2000-1.diff.gz
 06114e92f132a5d177ae5be6eaf95bdb 115528 libs optional sfio2000_2000-1_i386.deb
 e2e9e60fdc25847d63ae9622330bef79 172998 devel optional sfio-dev_2000-1_i386.deb
 d66bc6e6827e3f2ed14c57bd0fe226f1 56784 oldlibs optional sfio1999_2000-1_i386.deb

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.0.6 (GNU/Linux)
Comment: For info see

iQCVAwUBPLEOWWNl6w3rzdpRAQH4AwP/SiTzjSPMNYIYgOabLd+0zCq24/AKCWlN
eNCvxUk+/G6FGKNO1GaSzFxqQ2icB1Oxzsh4T7qt3yw14WrR3B+t4nmRhKsvBHXz
7tBDaI889MG99S8R71uhI1jfXe7Xxsh0gQpWHZjM/4HxEHt92dr08e6HpW2eNEEb
rQUfXmt1PMg=
=gcwI
-----END PGP SIGNATURE-----

--
To UNSUBSCRIBE, email to [email protected] with a subject of "unsubscribe". Trouble? Contact [email protected]
#12843 closed enhancement (fixed)

Opened 8 years ago. Closed 8 years ago.

Make zeromq and pyzmq optional packages

Description

The single cell server already requires zeromq+pyzmq, and the ordinary notebook server probably will soon. Since they are also useful for working with any kind of distributed-memory machine, we should just make them default spkgs. They are also pretty small, about 2MB altogether.

These spkgs are based on versions that William created in January 2011, but updated to the newest upstream version, and include an SPKG.txt:

Change History (10)

comment:1 Changed 8 years ago

- Keywords sd40.5 added
- Reviewers set to Benjamin Jones

comment:2 Changed 8 years ago

I updated the spkg files to remove the backup files and committed changes.

comment:3 Changed 8 years ago

Sorry, looks like we missed this in zeromq:

- .hgignore~

The pyzmq spkg looks good. What's the procedure after a positive review? A sage-devel vote on inclusion of the packages as standard?

comment:4 Changed 8 years ago

First it should be optional for some time, usually. So a vote to make it optional would usually be next.

comment:5 Changed 8 years ago

- Status changed from new to needs_review
- Summary changed from "Make zeromq and pyzmq standard packages" to "Make zeromq and pyzmq optional packages"

I updated the zeromq spkg to remove the .hgignore~ file. I'll change this ticket to say "make optional package" and we can take it to sage-devel from there. We don't need to vote on that, only to make it standard.

comment:6 Changed 8 years ago

As long as nothing else in Sage depends on zeromq, obviously zeromq should not be a standard package.

comment:7 Changed 8 years ago

- Status changed from needs_review to positive_review

Looks good. Positive review.

comment:8 Changed 8 years ago

- Component changed from packages to optional packages

comment:9 Changed 8 years ago

Both spkgs are in the server's list of optional ones and on their way around the world :)

comment:10 Changed 8 years ago

- Resolution set to fixed
- Status changed from positive_review to closed

I started to review the two spkgs. zeromq-2.2.0.p0.spkg installs successfully. I unpacked and re-packaged using sage-pkg without problems. I did see some tmp files lying around:

and pyzmq-2.1.11.p0.spkg has:

After install, import zmq works, and I experimented with some simple examples from the ZMQ guide. If anyone wants to do some simple testing, you can grab the client and server scripts, open two terminals and run:

1st terminal:
2nd terminal:
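The script links and the two commands did not survive here; as a stand-in, a minimal request/reply pair in the spirit of the ZMQ guide (file names mine) would be run with python server.py in the first terminal and python client.py in the second:

```python
# server.py - a minimal REP ("reply") socket
import zmq

context = zmq.Context()
socket = context.socket(zmq.REP)
socket.bind("tcp://127.0.0.1:5555")

while True:
    message = socket.recv()  # wait for a request
    socket.send(b"world")    # answer it
```

```python
# client.py - a minimal REQ ("request") socket
import zmq

context = zmq.Context()
socket = context.socket(zmq.REQ)
socket.connect("tcp://127.0.0.1:5555")

socket.send(b"hello")
print(socket.recv())  # prints: world
```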
https://trac.sagemath.org/ticket/12843
CC-MAIN-2019-51
refinedweb
441
61.06
RDF Calendar. This is a draft for discussion in the www-rdf-calendar mailing list, part of the Semantic Web Interest Group. It is subject to change without notice. See also the 29 September 2005 publication as an Interest Group Note. The Web did two things for sharing information with documents: first, HTML and TCP/IP provided a neutral answer to the questions about which word processor's format to use, which operating system, and which networking technology; second, the Web integrated individual documents into a whole information system so that if the information was already in the Web somewhere, you could just link to it. HTML is feature-poor when compared to other document formats, but the integration benefits of linking outweigh the costs. Fifteen years later, this works pretty well for documents. If you have a document and someone asks you to provide it to each of a dozen different people that each use different kinds of computers, you can just put it on the Web in HTML and be reasonably sure they can all read it. But the integration problem is still there for data. When a soccer coach distributes a schedule for the season, each of the players has to re-key the information for their calendar system if they want their computer to help them manage conflicts. When an airline sends itineraries, each passenger manually processes them. The problem is addressed at least in part by an Internet standard for calendar data, iCalendar [RFC2445]. But it's not clear that iCalendar provides sufficient integration benefits to outweigh the cost of migrating to open systems from more mature closed calendaring systems. At a Semantic Web calendaring workshop in October 2002, we explored the additional benefits of applying the Resource Description Framework (RDF) to iCalendar data, allowing us to link it with social networking data (FOAF), syndicated content (RSS), multimedia metadata (dublin core, musicbrainz), etc. The iCalendar specification is fairly large, with 142 sections and a number of complex interactions. The widely available software seems to cover much of the useful functionality, but not every aspect of the specification; for example, we have not seen tool support for exception rules. Meanwhile, at the workshop, we did have a number of actual iCalendar data files, representing real-world events, that had been converted to RDF either manually or with scripts. The resulting RDF/XML analogs served useful purposes to at least some of the participants and seemed to be correct, by inspection, to all present. This provided critical mass to begin maintaining a test suite. A particularly rewarding aspect of this collaboration is exploring language and culture boundaries. Even though there is a six hour time gap between Chicago and London, office hours overlap regularly, and while the difference in dialect and etiquette is often entertaining, it is rarely an obstacle to understanding. Our colleagues from Japan are much more able to converse in English than we are in Japanese. Even so, without the benefit of non-verbal clues, remote conversations are particularly challenging. Email offers the chance to read and compose at your own pace, but the timezone gaps between America, Europe, and Asia effectively impose a 24 hour round-trip time that is a real barrier to conversation.
Internet Relay Chat (IRC) allows near-real-time feedback and clarification as well as the clarity of written text and a chance to reflect at least a few minutes to digest one message and compose another, but only if all parties can devote their attention at the same time. We use an archived mailing list, [email protected], as the forum of record, with any significant work that happens by chance in IRC reported there after the fact. We also conduct a form of meeting over IRC, called with advance notice of a week or so, where some conscious effort is given to agenda management, due process for decision making, follow-up on actions, and the like. We have given the name ScheduledTopicChat to this collaboration pattern. RdfCalendarMeetings serves as an index of meeting records. At a glance, converting iCalendar data to RDF is quite straightforward; in iCalendar terms, an event is a component with various properties: BEGIN:VEVENT UID:20020630T230445Z-3895-69-1-7@jammer DTSTART;VALUE=DATE:20020703 DTEND;VALUE=DATE:20020706 SUMMARY:Scooby Conference LOCATION:San Francisco END:VEVENT and RDF/XML has analogous class and property constructs: <Vevent> <uid>20020630T230445Z-3895-69-1-7@jammer</uid> <dtstart>2002-07-03</dtstart> <dtend>2002-07-06</dtend> <summary>Scooby Conference</summary> <location>San Francisco</location> </Vevent> The terms Vevent, uid, etc. in the RDF/XML example above are actually abbreviations. The Vevent element is dominated by an element with namespace declarations: <rdf:RDF … <Vevent> <uid>20020630T230445Z-3895-69-1-7@jammer</uid> <dtstart>2002-07-03</dtstart> <dtend>2002-07-06</dtend> <summary>Scooby Conference</summary> <location>San Francisco</location> </Vevent> … </rdf:RDF> The result is that the element name Vevent is short for a full URI. iCalendar data typically consists of a CALENDAR component with VEVENT components and such inside it. An initial design identified the calendar object with the RDF/XML document ala <Vcalendar rdf:about="" … </Vcalendar> i.e. "this document is a Vcalendar with … ." But we ran into a case of iCalendar data with more than one calendar in a file. There was some discrepancy among implementations as to whether this was good data; mozilla did not seem to accept it, but this was reported as a bug#179985 and indeed, section 4.4 iCalendar Object says The Calendaring and Scheduling Core Object is a collection of calendaring and scheduling information. Typically, this information will consist of a single iCalendar object. However, multiple iCalendar objects can be sequentially grouped together. So we decided (2003-02-12) to drop rdf:about="" from our icalendar<->RDF mapping. We decided (2003-02-12) to use ical:component to relate calendars to events. We have explored using the iCalendar uid property to make URIs for event components (2003-08-19). It's not clear whether events in separate files bearing the same uid should be considered identical or merely different views of the same event. For example, if they are identical, they have the same alarms. One approach that seems to work well is to use the uid as a fragment identifier rather than as a full URI. While these examples suggest that the mapping is straightforward, they also demonstrate one of the early issues: capitalization. In iCalendar, component and property names are case insensitive and conventionally written in all caps. But due to internationalization and simplicity considerations, XML names and URIs are case sensitive, and RDF class and property names inherit constraints from XML and URIs.
In addition, the established convention is that RDF class names are capitalized and RDF property names begin with a lower case letter, and both use camelCase to join words. The first attempts to convert iCalendar data to RDF were perl scripts of a hundred lines or so that just manipulated the punctuation. But this approach breaks down when the punctuation of a property depends on the name of the property. Soon it became clear that there were details beyond capitalization that varied from property type to property type; the conversion process needed information from a schema. For example: DTSTART;VALUE=DATE:19960401. Capitalization is one of many issues that a number of efforts to relate iCalendar and RDF (A quick look at iCalendar by Tim Berners-Lee in 2001, hybrid.rdf by Miller and Arick in 2001, ical2rdf.pl by Connolly in 2002) had explored independently. At the workshop in 2002, we agreed to work together on a shared RDF Schema, that is: a shared document in the Web that provides definitions, of a sort, for a number of related terms. After consideration of preserving the investment in each of the existing iCalendar schemas, it seemed the data that referenced them might have been composed with an expectation that those schemas would not change. We chose a new namespace name. The issues around managing changes to an RDF schema are similar to managing changes in other documents: should you update the content in place, or should you keep the old version there and put the new version at a different place in the Web? We chose a process that is reasonably simple and has proven to be quite robust and scalable: the schema is subject to change, with notice and appeal; that is: all changes to the schema are announced to the www-rdf-calendar mailing list; if anyone objects within a week, the change is rolled back for further discussion. The regular structure of the iCalendar specification, with components and properties, suggests declaring corresponding RDF classes and properties in an RDF schema should be straightforward. But an attempt to do it manually (hybrid.rdf by Miller and Arick in 2001) proved unwieldy. Notation 3 is a compact and readable alternative to RDF's XML syntax and an extension to express logical rules. We explored using rules to generate an RDF schema from our example data. For example, rules such as if something is related to something else by ?P, then ?P is a Property can be expressed in Notation3 rule syntax: { [] ?P []. } => { ?P a r:Property }. The ?P syntax is for variables. And in the same way: if something is a ?C, then ?C is a Class can be expressed in Notation 3 syntax as: { [] a ?C } => { ?C a s:Class }. This approach worked to enumerate the classes and properties we were using in our test data, but it did not provide important schema information such as value types. The iCalendar specification has a very regular structure for value types and such: 4.8.2.4 Date/Time Start. We converted this structured plain text to XHTML with semantic markup for two reasons. A python program, slurpIcalSpec.py, produces XHTML including typed links from properties to value types. The markup uses semantic class names and link relationships: <h2 id="sec4.8.2.4">4.8.2.4 Date/Time Start</h2> <dl> <dt id="dtstart">Property Name</dt> <dd class="PropertyName"> <pre> DTSTART </pre> </dd> <dt>Purpose</dt> <dd class="Purpose"> <pre> This property specifies when the calendar component begins.
</pre> </dd> <dt>Value Type</dt> <dd class="ValueType">The default value type is <a rel="default-value-type" href="#Value_DATE-TIME">DATE-TIME</a> <pre> . The time value MUST be one of the forms defined for the <a rel="allowed-type" href="#Value_DATE">DATE</a>-TIME value type. The value type can be set to a <a rel="allowed-type" href="#Value_DATE">DATE</a> value type. </pre> </dd> </dl> An XSLT transformation, webize2445.xsl, turns this into RDF/OWL: <rdf:Description rdf: <rdfs:label>DTSTART</rdfs:label> <rdfs:comment>This property specifies when the calendar component begins.</rdfs:comment> <rdfs:comment> default value type: DATE-TIME</rdfs:comment> <spec:valueType>DATE-TIME</spec:valueType> <rdf:type rdf: <rdfs:range> <owl:Class> <owl:unionOf rdf: <owl:Class rdf: <owl:Class rdf: <owl:Class rdf: </owl:unionOf> </owl:Class> </rdfs:range> This approach does not produce schema information for the component property discussed above, nor for properties such as interval and byday used in recurrence rules. Those should be added in due course. The schema also currently lacks information about which properties are functional or inverse-functional, which are needed for certain diff/sync techniques (2004-03-23). Unfortunately, adding that information conflicts with certain OWL DL restrictions, and makes it harder to use OWL DL checking tools with this schema. This remains an open issue. Another python program, compDecls.py, reads the schema and prints it as a python data structure for use in our iCalendar to RDF conversion utility, fromIcal.py: ('Vevent', {"ATTACH": ('attach', 'URI', 0, None), "CATEGORIES": ('categories', "TEXT", 0, None), "SUMMARY": ('summary', "TEXT", 0, None), "DTEND": ('dtend', 'DATE-TIME', 0, None), "DTSTART": ('dtstart', 'DATE-TIME', 0, None), The iCalendar syntax allows extension tokens in a number of places. Ideally, we would like to ground these extension tokens in URI space as well, but none of the approaches we have tried is completely satisfactory. One approach was to specify a namespace for x- tokens on the command line, at conversion time; this has its drawbacks. iCalendar data is labelled with a product ID that serves a similar role to an XML namespace name, though it is not expressed as a URI. We decided (2003-02-26) to institute an ical product registry. When we found some extensions used by some product, we would publish a schema for those extensions using a URI starting with '' followed by a function of the product id. The drawback of this approach is that it does not seem to be worth the trouble. In practice, we seem to be more content to just disregard the extended properties. Handling extensions remains an outstanding issue in our test suite (2003-08-20). Now that we have explored the schema and extension tokens, let's look at the calendar data itself. Consider the case of a shop with regular hours. Contemporary directory services provide telephone numbers and street addresses, complete with automated driving directions. But you still have to pick up the phone and call them to find out when they're open. Typical shop hours can be expressed using recurring events in iCalendar. The bus-hrs.rdf test expresses "Open 11:30a-11:30p Wed-Sun; Open Tue 4-11p" in RDF: @prefix : <> . @prefix NY: <> . <#20030314T052745Z-25601-69-1-8@dirk> a :Vevent; :class "PUBLIC"; :dtend "2003-03-12T23:00:00"^^NY:tz; :dtstart "2003-03-12T11:30:00"^^NY:tz; :rrule [ :byday "SU,WE,TH,FR,SA"; :freq "WEEKLY"; :interval "1" ]; :summary "Open 11:30a-11:30p Wed-Sun".
<#20030314T052656Z-25601-69-1-0@dirk> a :Vevent; :class "PUBLIC"; :dtend "2003-03-11T23:00:00"^^NY:tz; :dtstart "2003-03-11T16:00:00"^^NY:tz; :rrule [ :byday "TU"; :freq "WEEKLY"; :interval "1" ]; :summary "Open Tue 4-11p". There is a question of whether timezone rules should be given by reference or by copy. Some data from early releases of Apple's iCal application lacked explicit VTIMEZONE components, but the specification is clear that they are required; this was acknowledged as a bug in Apple's iCal (2003-03-12) and has since been fixed. So the bus-hrs.rdf file includes timezone rules: NY:tz a :Vtimezone; :daylight [ :dtstart "1970-04-05T02:00:00"^^:dateTime; :rrule [ :byday "1SU"; :bymonth "4"; :freq "YEARLY"; :interval "1" ]; :tzname "EDT"; :tzoffsetfrom "-0500"; :tzoffsetto "-0400" ]; :standard [ :dtstart "1970-10-25T02:00:00"^^:dateTime; :rrule [ :byday "-1SU"; :bymonth "10"; :freq "YEARLY"; :interval "1" ]; :tzname "EST"; :tzoffsetfrom "-0400"; :tzoffsetto "-0500" ]; :tzid "/softwarestudio.org/Olson_20011030_5/America/New_York"; x-lic:location "America/New_York" . While those rules are an accurate model of the timezone in New York at least as far back as 1970, the Energy Policy Act of 2005 is intended to change them in 2006. While the Olson database is likely to reflect these changes in due course, the copies in all the iCalendar data out there will fail to accurately represent the timezone rules for New York. One approach, exemplified by the datetime design pattern in the microformats community, is to not use iCalendar timezones, but only UTC dates. Another approach is to put the timezone rules in the Web, establish change control policies with some minimum notice, and pass timezones around by reference. As a step in this direction, we have published each entry in a version of the Olson database in our RDF Calendar workspace; NY:tz above is an example. We are considering some way to connect this data to the relevant political decision-making processes. Ultimately, it would be best if the respective political organizations published the data themselves. There are a few other issues to note from the example above, some resolved and some not: The NY:tz timezone is used as a datatype. Earlier, we used separate properties for time and timezone, which is initially appealing but problematic for reasons that are detailed in the InterpretationProperties pattern. …2002/12/cal/ical#schema. This design is using a somewhat experimental (2005-03-30) namespace name, …2002/12/cal/icaltzd#. We hope to find a consensus with a number of parties on issues around timezones and recurring events. The iCalendar specification includes two features related to places. The location property "… defines the intended venue for the activity defined by a calendar component." That is, it gives a name of the place where the event occurs. For example: <#20020630T230445Z-3895-69-1-7@jammer> a :Vevent; :summary "X3 Conference"; :location "San Francisco"; :description "can't wait!\n"; :dtstart "2002-07-03"^^XML:date; :dtend "2002-07-06"^^XML:date; :uid "20020630T230445Z-3895-69-1-7@jammer" . The geo property takes a list of 2 floats: latitude and longitude. For example: geo:CDC474D4-1393-11D7-9A2C-000393914268 a :Vevent; :summary "meeting 23"; :geo ( 40.442673 -79.945815 ); :dtstart "2003-01-08T13:00:00"^^New:tz; :dtend "2003-01-08T14:00:00"^^New:tz .
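As an aside, here is one way this data can be consumed programmatically — a sketch using the Python rdflib package (rdflib itself is an assumption; the report's own converters are ical2rdf.pl and fromIcal.py). The file name is hypothetical, and the namespace URI is the 2002/12/cal/ical# one cited elsewhere in this report.

from rdflib import Graph, Namespace, RDF
from rdflib.collection import Collection

CAL = Namespace("http://www.w3.org/2002/12/cal/ical#")

g = Graph()
g.parse("meeting23.ttl", format="turtle")  # hypothetical local copy of the geo example above

for event in g.subjects(RDF.type, CAL.Vevent):
    geo = g.value(event, CAL.geo)          # the ( lat long ) list, if present
    if geo is not None:
        lat, lon = [float(x) for x in Collection(g, geo)]
        print(g.value(event, CAL.summary), lat, lon)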
The relationship of these properties to the Basic Geo (WGS84 lat/long) Vocabulary also in development in the Semantic Web Interest Group has been discussed, but not conclusively. Note that location relates an event to the name of a place, not the place itself; likewise, geo relates an event to a pair of coordinates, not a place. If we take :place to be a property that relates an event to a place where it occurs, it seems reasonable to relate them using rules such as: { ?E cal:geo (?LAT ?LONG) } <=> { ?E :place [ geo:lat ?LAT; geo:long ?LONG ] }. { ?E cal:location ?TXT } <=> { ?E :place [ rdfs:label ?TXT ] }. The vehicle for our exploration of schema and data issues is a test suite. At this point, we have a schema supported by a useful, if not complete, collection of tests and conversion tools: each test case has a $testcase.ics file that is judged to be a correct iCalendar file (i.e. it conforms to RFC 2445 and one or more popular iCalendar tools consume it correctly and/or allow the user to produce it) and a corresponding $testcase.rdf file that is judged to agree. For example, cal01.ics and cal01.rdf are one test case. Each test converts $testcase.ics to a temporary $testcase-actual.rdf file, then compares it to $testcase.rdf, the expected results, using an RDF graph comparison tool. The round trip is also tested: convert $testcase.rdf to $testcase-temp.ics, convert $testcase-temp.ics to $testcase-actual.rdf, and compare $testcase-actual.rdf to the expected results, $testcase.rdf, using an RDF graph comparison tool. Finally, each test checks that $testcase.rdf is logically consistent with the schema. The following table shows, for each component and property, the test case files that use that property on that type of component: There are a number of related calendar data format projects. xCalendar is a simple syntactic conversion of iCalendar to XML. For events with simple attribute-value properties it produces results very similar to the RDF case; the differences are syntactic (capitalization) or have to do with the model RDF imposes. An XSLT transformation from xCalendar to iCalendar is provided. We have considered a syntactic profile of RDF calendar that would meet the same requirements, but we have not managed to develop a tool to produce this profile given an arbitrary RDF calendar graph as input. RSS Events is a proposed module for RSS 1.0. It uses a simple vocabulary inspired by iCalendar, and it uses the homepage of an event as the url for the description of the event. While the RDF Calendar vocabulary is still a work-in-progress, it provides anyone with RDF or XML tools a useful alternative to dealing with the character-level syntax of iCalendar. Our test-driven approach to Semantic Web vocabulary development has allowed us to manage changes as we explored and resolved a variety of issues. The "subject to change with notice and appeal" change policy for our schema seems to work well. We have exploited the graph model of RDF in our round-trip testing work, but explorations into comparisons, especially for the purpose of synchronizing changes, are at an early stage. See Delta: an ontology for the distribution of differences between RDF graphs for one approach. We are still in relatively early stages of mixing calendar data with other Semantic Web data. As an illustrative example, consider using the FOAF vocabulary to describe an attendee of an event: [] a :Vevent ; :attendee [ a foaf:Person ; :calAddress <mailto:[email protected]> ; foaf:mbox <mailto:[email protected]> ; foaf:name "My name" ] ; :dtend "2002-06-30T10:30:00"^^NY:tz ; :dtstart "2002-06-30T09:00:00"^^NY:tz ; :summary "Tree Conference" .
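To make the payoff of such mixing concrete, a sketch (again using rdflib as an assumption, with a hypothetical file name) of a SPARQL query that joins event summaries with attendee names across the calendar and FOAF vocabularies:

from rdflib import Graph

g = Graph()
g.parse("tree-conference.ttl", format="turtle")  # hypothetical copy of the example above

q = """
PREFIX cal:  <http://www.w3.org/2002/12/cal/ical#>
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?summary ?name WHERE {
    ?event a cal:Vevent ;
           cal:summary ?summary ;
           cal:attendee [ foaf:name ?name ] .
}
"""
for row in g.query(q):
    print(row.summary, row.name)  # e.g. Tree Conference / My name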
The iCalendar specification has some prohibitions against publishing email addresses. This is one of many privacy considerations with calendar data. Queries and rules to relate photos to events via metadata such as date-taken are another promising area of work. hCalendar is an emerging microformat, i.e. a set of semantic XHTML markup conventions. It is based on iCalendar. The approach to human-readability is interesting; for example: <abbr class="dtstart" title="2005-10-05">October 5</abbr> We are working on glean-hcal.xsl, a transformation from hCalendar to RDF Calendar. Using hCalendar with GRDDL is particularly promising, though it goes beyond the scope of this report. Note that open source implementations of the following transformations are available, either from the RDF Calendar workspace or projects nearby. We are grateful to Masahide Kanzaki for his RDF calendar service, to Olivier Gutknecht and Paul Cowles for their commercial product development perspective, and to the collaborators in www-rdf-calendar, including Julian Reschke, Doug Royer, Tim Berners-Lee, Dave Thewlis, Karl Dubost, Charles McCathieNevile, Michael Arick, Norman Walsh, Tim Hare, and Dan Brickley.
http://www.w3.org/2002/12/cal/report1173
crawl-002
refinedweb
3,615
53
Archived | Access a local development Hyperledger Composer REST server on the internet Access a local Hyperledger Composer REST server using Secure Gateway Archive date: 2019-05-01. This content is no longer being updated or maintained. The content is provided "as is." Given the rapid evolution of technology, some content, steps, or illustrations may have changed. Note: IBM Blockchain Platform now utilizes Hyperledger Fabric for end-to-end development. Users may use Hyperledger Composer if they choose, but IBM will not provide support for it. Developers who want to get a fast start in developing a blockchain solution can quickly begin by using Hyperledger Composer on a local development system. After a business network is created and deployed locally, it's also possible to deploy a REST server that exposes the business network for easy access by a front-end application. But what happens when the target front-end application is for a mobile system or is running on a cloud runtime environment, such as a Cloud Foundry application or a Docker container, and the app needs to access the local blockchain business network? Or in general, you might be looking for a way to establish a connection to a network service that is running on a host that has outbound internet access, but doesn't support inbound access. The network service might be behind a firewall or on a dynamic IP address. You can solve these problems by creating an internet-based proxy that accepts network connections and then forwards them to the service of interest. In this tutorial, you learn how to create this proxy for a Hyperledger Composer REST server by using the IBM Secure Gateway service. Learning objectives Complete this tutorial to understand how to create a REST server for a Blockchain business network and how to make it available on the internet. The tutorial shows how to configure a simple business network using Hyperledger Composer running on a local virtual machine. Then, you use the IBM Secure Gateway service to provide an internet-reachable network service that proxies connections to the REST server on the virtual machine. Prerequisites To complete this tutorial, you need: - Vagrant - VirtualBox - An IBM Cloud pay-as-you-go, subscription or trial account. This tutorial does not cover the development of blockchain business networks using Hyperledger Composer. For more information about developing those blockchain business networks, see the Hyperledger Composer tutorials. Estimated time The steps in this tutorial take about 30-45 minutes to complete. Add time to create the desired business network, if you are creating one from scratch. Steps Complete the following steps to create a local virtual machine (VM) that is capable of serving a Composer Business Network as a REST API endpoint. First, you use Vagrant to configure a VM with Docker support. After the VM is configured, continue by following the Hyperledger Composer set-up steps for a local environment at Installing the development environment. Finally, after you have the Composer REST server running locally, configure a Secure Gateway instance to expose the API on the IBM Cloud. Configure a VM with Docker support Create a directory for the project: mkdir composer Copy the contents of the Vagrantfile into the directory.
Start the Vagrant image from the directory (this might take a little while): vagrant up After the VM is up, log in to start configuring Hyperledger Fabric: vagrant ssh Set up Hyperledger Composer Follow the pre-requisite setup steps for a local Hyperledger Composer environment for Ubuntu at Installing prerequisites. Complete these steps as an ordinary user and not a root user on the VM. Log out from vagrant with exit and reconnect with vagrant ssh when prompted. curl -O chmod u+x prereqs-ubuntu.sh ./prereqs-ubuntu.sh After you finish installing pre-requisites, set up the Hyperledger Fabric local development environment as described at Installing the development environment, starting with the CLI tools. npm install -g [email protected] npm install -g [email protected] npm install -g [email protected] npm install -g yo Install Composer Playground. npm install -g [email protected] Optional: Follow the steps to set up an IDE in Step 3 of Installing the development environment. Complete Step 4 from the set-up instructions to get the Hyperledger Fabric docker images installed. mkdir ~/fabric-dev-servers && cd ~/fabric-dev-servers curl -O tar -xvf fabric-dev-servers.tar.gz cd ~/fabric-dev-servers export FABRIC_VERSION=hlfv12 ./downloadFabric.sh Proceed with the steps under "Controlling your dev environment" to start the development fabric and create the PeerAdmin card: cd ~/fabric-dev-servers export FABRIC_VERSION=hlfv12 ./startFabric.sh ./createPeerAdminCard.sh Start the web app for Composer ("Playground"). Note: Starting the web app does not start up a browser session automatically as described in the documentation, because the command is running inside the VM instead of on the workstation. composer-playground After the service starts, navigate with a browser tab to (this local port is mapped by the Vagrantfile configuration to the VM). Develop a business network and test in the Composer Playground as usual. If you've never used Composer Playground, the Playground Tutorial is a good place to start. After you have completed testing the intended business network, deploy the Composer REST server, providing the card for the network owner (admin@marbles-network in this example). See Step 5 from Developer tutorial for creating a Hyperledger Composer solution for explanations on the responses to the input prompts. The Secure Gateway connectivity steps in this tutorial were tested with the following options. composer-rest-server ? Enter the name of the business network card to use: admin@marbles-network ? Specify if you want namespaces in the generated REST API: always use namespaces ? Specify if you want to use an API key to secure the REST API: No ? Specify if you want to enable authentication for the REST API using Passport: No ? Specify if you want to enable the explorer test interface: Yes ? Specify a key if you want to enable dynamic logging: ? Specify if you want to enable event publication over WebSockets: Yes ? Specify if you want to enable TLS security for the REST API: No To restart the REST server using the same options, issue the following command: composer-rest-server -c admin@marbles-network -n always -u true -w true Discovering types from business network definition ... Discovering the Returning Transactions.. Keep the REST server running in the terminal. When finished with the REST API server, you can use Ctrl-C in the terminal to terminate the server. Test the REST API server by opening a browser.
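If you prefer a scripted check to the browser, here is a sketch using Python's requests library against the locally mapped port. The /api/system/ping path is my assumption about the generated REST server's system endpoints, so confirm the exact paths in the explorer page.

import requests

# Port 3000 on the workstation is mapped by the Vagrantfile to the REST server in the VM.
base = "http://localhost:3000"

resp = requests.get(base + "/api/system/ping")  # endpoint path assumed; see the explorer UI
resp.raise_for_status()
print(resp.json())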
Configure a Secure Gateway instance to expose the API on the cloud Open the IBM Cloud catalog entry for Secure Gateway to create a Secure Gateway instance in your IBM Cloud account. You need either a paid account or a Trial promo code. The Essentials service plan is sufficient for implementing traffic forwarding for a development hyperledger fabric network with a capacity of 500 MB/month of data transfer. Verify that this plan is selected, and click on Create. Click Add Gateway in the Secure Gateway Service Details panel. Enter a name in the panel, for example: "Blockchain". Keep the other gateway default settings of Require security token to connect clients and Token expiration before 90 days. Click the Add Gateway button to create the gateway. Click the Connect Client button on the Secure Gateway Service Details panel to begin setting up the client that runs on the VM and connects to the Secure Gateway service. Choose Docker as the option to connect the client and copy the provided docker run command with the Gateway id and security token. Open a new local terminal window, change directory to the folder with the Vagrantfile and then connect to the VM using vagrant ssh. Paste the docker run command shown into this terminal to start the Secure Gateway client and leave a CLI running in the terminal. Do not close this terminal. After the container starts, you see messages like the following example, indicating a successful connection: [2018-10-20 18:34:01.451] [INFO] (Client ID 1) No password provided. The UI will not require a password for access [2018-10-20 18:34:01.462] [WARN] (Client ID 1) UI Server started. The UI is not currently password protected [2018-10-20 18:34:01.463] [INFO] (Client ID 1) Visit localhost:9003/dashboard to view the UI. [2018-10-20 18:34:01.760] [INFO] (Client ID 11) Setting log level to INFO [2018-10-20 18:34:02.153] [INFO] (Client ID 11) The Secure Gateway tunnel is connected [2018-10-20 18:34:02.304] [INFO] (Client ID HxzoYUW6z74_PZ9) Your Client ID is HxzoYUW6z74_PZ9 HxzoYUW6z74_PZ9> After the client has started, close the web UI panel to display the Secure Gateway service details. On another terminal on the vagrant VM, use the ip address show command to find the IP address of the VM. Many interfaces are listed. Select the one that begins with enp or eth. In the examples that follow, the VM IP address is 10.0.2.15. Return to the terminal for the Secure Gateway client docker container, and create an acl entry that allows traffic to the composer REST API server running on port 3000. acl allow 10.0.2.15:3000 1 Define a basic http connection through the Secure Gateway service to the Composer REST API server. For more advanced security settings, refer to the Secure Gateway documentation. Click on the Destinations tab in the Secure Gateway service details. Next, click on the "+" icon to open the Add Destination wizard. Select the Guided Setup option. For the "Where is your resource located?" item, select On-premises and then click on Next. For "What is the host and port of your destination?", put in the IP address from step 20 as the hostname and 3000 as the port. Then click on Next. For the connection protocol, select HTTP and then click on Next. For the destination authentication, select None and then click on Next. Skip entry of the IP address and ports for the "… make your destination private, add IP table rules below" step and click on Next.
Enter a name like Composer REST server for the name of the destination and click on Add Destination. Click on the gear icon for the tile of the destination that was just created to display the details. Copy the Cloud Host : Port – which looks something like: cap-sg-prd-2.integration.ibmcloud.com:17870. This host and port is the Cloud endpoint that can be accessed. Traffic is forwarded by the Secure Gateway service to the running Composer REST server. Append /explorer after the host and port and open this url in a web browser. For the example, the final URL would be http://cap-sg-prd-2.integration.ibmcloud.com:17870/explorer. Summary At this point you should be able to access the Composer REST server to perform actions in the deployed business network, using the host name and the port from the Secure Gateway destination. This server is reachable from any system with access to the internet and is best suited to development and testing, not production use. You can develop the application locally on the host (instead of within the vagrant VM) without going out to the cloud endpoint. The Vagrantfile maps the local port 3000 to the Composer REST server. This mapping allows you to use the endpoint when developing your application locally. When deploying to the cloud (as a Cloud Foundry application or Docker container), switch the endpoint to the cloud URL. The Hyperledger Composer can generate a basic Angular interface to the business network. This step is described in Writing Web Applications. To see how to deploy this Angular application to Cloud Foundry using DevOps, check out the Continuously deploy your Angular application tutorial. There are two changes to the tutorial for the generated Angular application. First, use the full project contents by leaving the Build Archive Directory empty in the Delivery Pipeline Build stage. Second, the application reads the REST API server endpoint from the environment; set this in the Delivery Pipeline Deploy stage by adding an environment property of REST_SERVER_URL with a value of the cloud URL.
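Any client can follow the same environment-variable pattern; a sketch, assuming REST_SERVER_URL is set as described and falling back to the local port mapping otherwise (the ping path is an assumption, as noted earlier):

import os
import requests

# REST_SERVER_URL is the property set in the Delivery Pipeline Deploy stage;
# the localhost fallback matches the Vagrantfile port mapping for local development.
base = os.environ.get("REST_SERVER_URL", "http://localhost:3000")
resp = requests.get(base + "/api/system/ping")  # endpoint path assumed
print(resp.status_code, resp.json())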
https://developer.ibm.com/tutorials/access-local-hyperledger-composer-rest-server-secure-gateway/
CC-MAIN-2020-29
refinedweb
2,033
55.13
IRC log of ua on 2009-08-06 Timestamps are in UTC. 16:53:09 [RRSAgent] RRSAgent has joined #ua 16:53:09 [RRSAgent] logging to 16:53:19 [AllanJ] rrsagent, set logs public 16:53:32 [KFord] KFord has joined #ua 16:54:03 [KFord] zakim list agenda 16:57:21 [Zakim] WAI_UAWG()1:00PM has now started 16:57:28 [Zakim] +[Microsoft] 16:57:38 [KFord] zakim, microsoft is kford 16:57:38 [Zakim] +kford; got it 16:59:14 [Jan] Jan has joined #ua 16:59:45 [Zakim] +[IPcaller] 16:59:49 [Zakim] +AllanJ 17:00:15 [Jan] zakim, [IPcaller] is really Jan 17:00:15 [Zakim] +Jan; got it 17:00:30 [Zakim] +??P1 17:00:50 [KFord] Agenda+ Logistics (Regrets, agenda requests, comments)? 17:00:50 [KFord] Agenda+ HTML 5 and UAAG review - 17:00:50 [KFord] Agenda+ Review any new proposals sent to list. 17:00:55 [Jan] zakim, is really Henny 17:00:55 [Zakim] I don't understand 'is really Henny', Jan 17:01:04 [Jan] zakim, ??P1 is really Henny 17:01:04 [Zakim] +Henny; got it 17:01:04 [KFord] rrsagent, make minutes 17:01:04 [RRSAgent] I have made the request to generate KFord 17:01:13 [KFord] rrsagent, make logs public 17:01:30 [KFord] Regrets:Greg 17:01:44 [sharper] sharper has joined #ua 17:02:00 [KFord] rrsagent, make minutes 17:02:00 [RRSAgent] I have made the request to generate KFord 17:02:35 [KFord] regrets+Jeanne 17:02:36 [sharper] zakim, code? 17:02:36 [Zakim] the conference code is 82941 (tel:+1.617.761.6200 tel:+33.4.89.06.34.99 tel:+44.117.370.6152), sharper 17:02:36 [AllanJ] regrets: +Greg 17:04:13 [KFord] Chair: Jim_Allan 17:04:35 [KFord] Present:Jan, Jim, Simon, Henny 17:04:37 [AllanJ] zakim, agneda? 17:04:37 [Zakim] I don't understand your question, AllanJ. 17:04:43 [KFord] Present+Kford 17:04:47 [AllanJ] zakim, agenda? 17:04:47 [Zakim] I see 3 items remaining on the agenda: 17:04:48 [Zakim] 1. Logistics (Regrets, agenda requests, comments)? [from KFord] 17:04:49 [Zakim] 2. HTML 5 and UAAG review - [from KFord] 17:04:53 [Zakim] 3. Review any new proposals sent to list. [from KFord] 17:04:55 [Zakim] +??P4 17:05:00 [sharper] zakim, ??P4 is sharper 17:05:07 [Zakim] +sharper; got it 17:05:58 [KFord] zakim, take up item 1 17:05:58 [Zakim] agendum 1. "Logistics (Regrets, agenda requests, comments)?" taken up [from KFord] 17:06:05 [KFord] zakim, close item 1 17:06:05 [Zakim] agendum 1, Logistics (Regrets, agenda requests, comments)?, closed 17:06:06 [Zakim] I see 2 items remaining on the agenda; the next one is 17:06:08 [Zakim] 2. HTML 5 and UAAG review - [from KFord] 17:06:30 [KFord] JALLAN: Overall opinions of HTMl5 draft. 17:06:54 [KFord] Simon: I had read it before, followed lots of discussion. 17:07:12 [KFord] JAN: Haven't had a chance to review yet. 17:07:37 [KFord] Jim: bringing up Simon's comments. 17:08:16 [KFord] Simon: I think what we are saying and what others are saying with respect to access keys might be a bit difficult. 17:08:30 [KFord] Scribe: KFord 17:09:00 [Zakim] +??P6 17:09:23 [mth] mth has joined #ua 17:10:32 [KFord] Simon's mail on concerns is at 17:10:48 [KFord] Present+Mark 17:11:40 [KFord] Mark: Looking at WAI ARIA. Within the user agent we can identify widgets. 17:13:16 [AllanJ] KF: ARIA role mapping to Accessibility API exits 17:13:48 [AllanJ] ...if authro doesnot define all behaviors in the script, there is nothing the UA can do. 17:14:10 [AllanJ] MH: how does UA repair, or preempt mapping 17:14:44 [AllanJ] KF: Keyboard behaviors are a problem with ARIA and developers 17:16:16 [KFord] Henny: Saw some comments on lists that the intent is there to do some ARIA in HTML5 but not much action yet. 
17:16:17 [AllanJ] HS: discussion on lists for ARIA in HTML5, intent is to include it 17:16:17 [KimPatch] KimPatch has joined #ua 17:16:27 [KFord] Present+Kim 17:16:44 [KFord] rrsagent, make minutes 17:16:44 [RRSAgent] I have made the request to generate KFord 17:16:55 [Zakim] +Kim 17:18:28 [KFord] Jim: Restating Mark's idea of user agent mapping keys to ARIA roles. 17:18:50 [KFord] Kim: How viable is that? 17:19:32 [KFord] Mark: Brings up various technical issues like whgat happens when elements of item are not defined. 17:19:50 [KFord] s/wghat/what 17:21:17 [AllanJ] KF: Question: intrigued by Mark's suggestion. Could HTML 5 require that a specific control have x semantics 17:22:36 [KFord] Kim: Are there ways to handle things if a developer leaves something out? 17:23:44 [KFord] Simon: Browsers do handle certain errors today for missing sections of tags and such. 17:26:47 [KFord] Kim: We try to get people to do things with speech in ways that don't use the mouse if possible. 17:27:20 [KFord] Kim: What works better with speech is not having to find the mouse but rather being able to say I want to put the mouse in location x. 17:28:32 [AllanJ] Discussion of Drag-and-drop and accessibility 17:28:59 [KFord] Kim: If you had an absolute pointing tablet this is easier. 17:29:25 [KFord] kford: Gave example of iPhone and touch being absolute. 17:29:58 [KFord] Jim: I was reading in HTML 5 on the drag stuff. 17:30:53 [KFord] Jim: They kind of talk about user agents without pointing devices and saying the user would need to be able to say what they want to drag. 17:31:21 [KFord] Kim: Talked about Dragon Naturally Speaking approach where you need to indicate drop target first. 17:34:55 [KFord] Jim: One thing we can take to WAI is concerns about keyboard, script, AJAX and such. Isn't necessarily specific to HTML5 but problem continues to grow. 17:36:10 [KFord] Simon: Expressed concern over HTML5 hidden data. 17:40:43 [AllanJ] KF: DOM should get updated when mutation event fires 17:42:25 [AllanJ] SH: Decision to fire is at UA discretion. Concern is how does UA decide appropriate firing of event, might be different for different device users or AT users 17:43:01 [KFord] Jim: A couple of items I noticed that said they were violations. 17:43:52 [AllanJ] These requirements are a willful violation of the XPath 1.0 specification, motivated by desire to have implementations be compatible with legacy content while still supporting the changes that this specification introduces to HTML regarding which namespace is used for HTML elements. [XPATH10] 17:43:54 [KFord] Jim: Talked about xpath. 17:44:28 [KFord] Jim: I looked at user agent behaviors. 17:45:02 [KFord] Jim: Whole thing is about how things should interact with the DOM. Most seemed pretty reasonable. 17:45:12 [KFord] Jim: A couple of issues. 17:45:45 [KFord] Jim: HTML5 definition of plugin is different from ours. They don't define a method of interacting. This is supposed to be to the user agent or platform. 17:46:53 [KFord] HTML5 definition for plugin. 17:46:54 [KFord] 2.1.4 Plugins 17:46:54 [KFord] The term plugin is used to mean any content handler for Web content types that are either not supported by the user agent natively or that do not expose a DOM, which supports rendering the content as part of the user agent's interface. 17:47:25 [KFord] Jim: Also looked at iframe element and attribute called sandbox. 17:47:49 [KFord] Jim: Sandbox sets behavior such as allowing the iframe to behave like a full browser. 
17:50:12 [KFord] Sandbox definition: 17:50:13 [KFord] 17:52:14 [KFord] Kim: User needs to be in control, they get confused when they set things and then they don't work. 17:52:38 [AllanJ] Action: JAllan to write SC for user override sandbox attribute in Iframe 17:52:38 [trackbot] Created ACTION-220 - Write SC for user override sandbox attribute in Iframe [on Jim Allan - due 2009-08-13]. 17:53:14 [KFord] Jim: My other concern is how many other attributes like this are there floating around? How do we generalize this? 17:54:28 [KFord] Kim: If you had a way to alert the user about things that the site wants to override this could help. 17:56:05 [AllanJ] KF: UA override, how to define list. 17:56:19 [AllanJ] ...lots of things could be included. 17:57:02 [AllanJ] ...pop-ups as an example. Authors want them, users block them. if user initiated then ok. 17:57:47 [AllanJ] ...User needs an intelligent way to set overrides, what it does, what can I effect, what will be the results. 18:03:20 [KFord] Jim: HTML 5 has concept of fallback content. Gives example from canvas. 18:04:04 [KFord] Jim: I think we have that covered from our cascade of alternatives. 18:05:17 [Zakim] -Henny 18:05:22 [KFord] Jim: Have concerns around datagrid and labels, images and such. 18:05:38 [KFord] Jim: Need to form these thoughts further. 18:07:31 [Zakim] +??P1 18:07:48 [Jan] zakim, ??P1 is really Henny 18:07:48 [Zakim] +Henny; got it 18:12:51 [KFord] zakim close item 2 18:13:46 [KFord] zakim, close item 2 18:13:46 [Zakim] agendum 2, HTML 5 and UAAG review -, closed 18:13:48 [Zakim] I see 1 item remaining on the agenda: 18:13:49 [Zakim] 3. Review any new proposals sent to list. [from KFord] 18:15:29 [KFord] Looking at mail from Simon. 18:17:54 [KFord] Now talking about 18:21:16 [Jan] q+ 18:23:25 [AllanJ] ack Jan 18:24:15 [mth] q+ 18:25:08 [AllanJ] ack mth 18:30:25 [AllanJ] SH: is there an API or bridge between javascript and platform Accessibility API 18:30:49 [AllanJ] KF: ARIA helps some (with roles) 18:31:16 [AllanJ] SH: is there a validity checker or something for accessible Javascript? 18:32:01 [AllanJ] KF: ARIA attempts to put semantics and mapping to accessibility API 18:32:13 [AllanJ] ...for javascript widgets 18:33:13 [AllanJ] JR: Java bridge, is a small set of java widgets that are passed to the platform AAPI. 18:33:56 [AllanJ] ...somebody declared a 'winner' for what the specific Java widget set would be. 18:34:23 [mth] 18:35:29 [Zakim] -Jan 18:35:53 [Zakim] -Henny 18:35:54 [Zakim] -Kim 18:35:54 [Zakim] -sharper 18:35:56 [Zakim] -??P6 18:36:12 [sharper] Marcos Cáceres, Opera Software ASA 18:41:52 [sharper] sharper has left #ua 18:44:24 [Zakim] -kford 18:44:25 [Zakim] -AllanJ 18:44:25 [Zakim] WAI_UAWG()1:00PM has ended 18:44:27 [Zakim] Attendees were kford, AllanJ, Jan, Henny, sharper, Kim 18:45:35 [AllanJ] rrsagent, make minutes 18:45:35 [RRSAgent] I have made the request to generate AllanJ 18:45:46 [AllanJ] zakim, please part 18:45:46 [Zakim] Zakim has left #ua 18:46:15 [AllanJ] rrsagent, draft minutes 18:46:15 [RRSAgent] I have made the request to generate AllanJ 18:46:37 [AllanJ] rrsagent, please part 18:46:37 [RRSAgent] I see 1 open action item saved in : 18:46:37 [RRSAgent] ACTION: JAllan to write SC for user override sandbox attribute in Iframe [1] 18:46:37 [RRSAgent] recorded in
http://www.w3.org/2009/08/06-ua-irc
CC-MAIN-2014-52
refinedweb
1,976
70.53
Unicode data¶ Django natively supports Unicode data everywhere. Providing your database can somehow store the data, you can safely pass around Unicode strings to templates, models and the database. MySQL users, refer to the MySQL manual (section 10.1.3.2 for MySQL 5.1) for details on how to set or alter the database character set encoding. PostgreSQL users, refer to the PostgreSQL manual (section 21.2.2 in PostgreSQL 8) for details on creating databases with the correct encoding. In your code you can use Unicode strings, or you can use normal strings (sometimes called "bytestrings") that are encoded using UTF-8.

>>> urlquote(u'Paris & Orléans')
u'Paris%20%26%20Orl%C3%A9ans'
>>> iri_to_uri(u'/favorites/François/%s' % urlquote(u'Paris & Orléans'))
'/favorites/Fran%C3%A7ois/Paris%20%26%20Orl%C3%A9ans'

If you look carefully, you can see that the portion that was generated by urlquote() in the second example was not double-quoted when passed to iri_to_uri(). Because all strings are returned from the database as Unicode strings, Django converts bytestrings to Unicode when it needs to. Choosing between __str__() and __unicode__()¶ Taking care in get_absolute_url()¶ If a URL can contain non-ASCII characters, you'll need to take care of the encoding yourself. In this case, use the iri_to_uri() and urlquote() functions that were documented above. For example:

from django.utils.encoding import iri_to_uri
from django.utils.http import urlquote

def get_absolute_url(self):
    url = u'/person/%s/?x=0&y=0' % urlquote(self.location)
    return iri_to_uri(url)

The database API¶ You can pass either Unicode strings or UTF-8 encoded bytestrings as arguments to filter() methods and the like. These two queries are identical:

People.objects.filter(name__contains=u'Å')
People.objects.filter(name__contains='\xc3\x85') # UTF-8 encoding of Å

Templates¶ You can use either Unicode or bytestrings when creating templates manually:

from django.template import Template
t1 = Template('This is a bytestring template.')
t2 = Template(u'This is a Unicode template.')

E-mail¶

msg = EmailMessage(subject, body, sender, recipients)
msg.attach(u"Une pièce jointe.pdf", "%PDF-1.4.%...", mimetype="application/pdf")
msg.send()
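A short sketch of the conversion helpers that back this behavior — smart_unicode() and smart_str() from django.utils.encoding in Django 1.4 — showing the round trip between UTF-8 bytestrings and Unicode strings; the example string is just an illustration.

from django.utils.encoding import smart_str, smart_unicode

s = '\xc3\xa9toile'        # UTF-8 bytestring for u'étoile'
u = smart_unicode(s)       # decodes as UTF-8 by default -> u'\xe9toile'
assert smart_str(u) == s   # smart_str() encodes back to a UTF-8 bytestring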
https://docs.djangoproject.com/en/1.4/ref/unicode/
CC-MAIN-2015-14
refinedweb
263
50.73
Oleg Nesterov wrote:> On 04/17, Ingo Molnar wrote:>> * O);>>>> ?> > Perhaps yes, I don't know...> > But please note that we heavily rely on the fact that nobody except idle> threads can have pid_nr == 0, and more importantly, each "struct pid" must> have the unique .nr within the same namespace (init_pid_ns in this case).> I'd suggest to just add a small comment.> > > But wait... What _is_ the task_pid() after fork_idle() ???It is NULL, but every code getting one can handle such case :)> fork_idle() doesn't really attach the new thread to the init_struct_pid,> so ->pids[PIDTYPE_PID].pid just points the parent's pid, no?> > As for x86, the parent is /sbin/init (kernel_init->smp_prepare_cpus),> not so bad, it can't exit.> > But what about HOTPLUG_CPU? Suppose we add CPU, use some non-idle> kernel thread (workqueue) to fork the idle thread. CPU goes down,> parent exits and frees the pid. Now, if this CPU goes up again, the> idle thread runs with its ->pid pointing to the freed memory, not> good.Nope - it will be NULL.> Not serious perhaps, afaics we only need this ->pid to ensure that> swapper can safely fork /sbin/init, but still.> > Pavel, Eric, Sukadev? Please say I missed something! ;)> > Otherwise, we can change init_idle() to do attach_pid(init_struct_pid),> afaics we can do this lockless. In that case we should also change> INIT_STRUCT_PID() and remove the initialization of .tasks.Well, there was some request to make tasks always have pid link
http://lkml.org/lkml/2008/4/17/335
CC-MAIN-2017-13
refinedweb
259
85.18
MARC::Errorchecks -- Collection of MARC 21/AACR2 error checks Module for storing MARC error checking subroutines, based on MARC. Returned warnings/errors are generated as follows: push @warningstoreturn, join '', ($field->tag(), ": [ERROR TEXT]\t"); return \@warningstoreturn;

use MARC::Batch;
use MARC::Errorchecks;

#See also MARC::Lintadditions for more checks
#use MARC::Lintadditions;

#change file names as desired
my $inputfile = 'marcfile.mrc';
my $errorfilename = 'errors.txt';
my $errorcount = 0;
open (OUT, ">$errorfilename");

#initialize $batch as new MARC::Batch object
my $batch = MARC::Batch->new('USMARC', "$inputfile");

#loop through batch file of records
while (my $record = $batch->next()) {

    #if $record->field('001') #add this if some records in file do not contain an '001' field
    my $controlno = $record->field('001')->as_string();

    #call MARC::Errorchecks subroutines
    my @errorstoreturn = ();

    # check everything
    push @errorstoreturn, (@{MARC::Errorchecks::check_all_subs($record)});

    # or only a few
    push @errorstoreturn, (@{MARC::Errorchecks::check_010($record)});
    push @errorstoreturn, (@{MARC::Errorchecks::check_bk008_vs_bibrefandindex($record)});

    # report results
    if (@errorstoreturn){
        #########################################
        print OUT join( "\t", "$controlno", @errorstoreturn, "\t\n");
        $errorcount++;
    }
} #while

Maintain check-all subroutine, a wrapper that calls all the subroutines in Errorchecks, to simplify calling code in .pl. Determine whether extra tabs are being added to warnings. Examine how warnings are returned and see if a better way is available. Add functionality. -Ending punctuation (in Lintadditions.pm, and 300 dealt with here, and now 5xx (some)). -Matching brackets and parentheses in fields? -Geographical headings miscoded as subjects. Possibly rewrite as object-oriented? If not, optimize this and the Lintadditions.pm checks. Example: reduce number of repeated breaking-out of fields into subfield parts. So, subroutines that look for double spaces and double punctuation might be combined. Remove local practice code or facilitate its modification/customization. Deal with other TO DO items found below. This includes fixing the problem of "bibliographical references" being required if 008 contents has 'b'. Calls each error-checking subroutine in Errorchecks. Gathers all errors and returns those errors in an array (reference). Make sure to update this subroutine as additional subroutines are added. Checks to see if record is coded as an RDA record or not (based on 040$e). Looks for more than one period within subfields after 010. Exception: Exactly 3 periods together are treated as ellipses. Looks for multiple commas. Find exceptions where double periods may be allowed. Find exceptions where more than 3 periods can be next to each other. Find exceptions where double commas are allowed (URI subfields, 856 field). Deal with the exceptions. Currently, skips 856 field completely. Needs to skip URI subfields. Looks for more than one space within subfields after 010. Ignores 035 field, since multiple spaces could be allowed. Accounts for extra spaces between angle brackets for open date in 260c. Current version allows extra spaces in any 260 subfield containing angle brackets. Account for non-numeric tags? Will likely complain for non-numeric tags in a record, since comparisons rely upon numeric tag checking. Looks for extra spaces at the end of fields greater than 010. Ignores 016 extra space at end.
Consider allowing trailing spaces in 035 field. Code for validating 006s in MARC records. Validates each byte of the 006, based on #MARC::Errorchecks::validate008($field008, $mattype, $biblvl) Use validate008 subroutine: -Break byte 18-34 checking into separate sub so it can be used for 006 validation as well. -Optimize efficiency. Code for validating 008s in MARC records. Validates each byte of the 008, based on MARC::Errorchecks::validate008($field008, $mattype, $biblvl) Improve validate008 subroutine (see that sub for more information): -Break byte 18-34 checking into separate sub so it can be used for 006 validation as well. -Optimize efficiency. Revised 12-2-2004 to use new validate008() sub. Verifies 010 subfield 'a' has proper spacing. Compare efficiency of getting current date vs. setting global current date. Determine best way to establish global date. Think about whether subfield 'z' needs proper spacing. Deal with non-digit characters in original 010a field. Currently these are simply reported and the space checking is skipped. Revise local treatment of LCCN checking (invalid 8-digits pre-1980) for more universal use. Maintain date ranges in checking validity of numbers. Modify date ranges according to local catalog needs. Determine whether this subroutine can be implemented in MARC::Lintadditions/Lint--I don't remember why it is here rather than there? #this section could be implemented to validate 8-digit LCCN being between a specific set of years (1900-1980, for example). #code has been commented/podded out for general practice my $year = substr($subfielda, 0, 2); #should be old lccn, so first 2 digits are 00 or > 80 #The 1980 limit is a local practice. #Change the date ranges according to local needs (e.g. if LC records back to 1900 exist in the catalog, do not implement this section of the error check) if (($year >= 1) && ($year < 80)) {push @warningstoreturn, ("010: First digits of LCCN are $year.");} check_end_punct_300($record) Reports an error if an ending period in 300 is missing if 4xx exists, or if 300 ends with closing parens-period if 4xx does not exist. check_bk008_vs_300($record) 300 subfield 'b' vs. presence of coding for illustrations in 008/18-21. Ignores CIP records completely. Ignores non-book records completely (for the purposes of this subroutine). If 300 'b' has wording, reports errors if matching 008/18-21 coding is not present. If 008/18-21 coding is present, but similar wording is not present in 300, reports errors. Note: plates are an exception, since they are noted in $a rather than $b of the 300. So, they need to be checked twice--once if 'f' is the only code in the 008/18-21, and again amongst other codes. Also checks for 'p.' or 'v.' in subfield 'a' Only accounts for a single 300 field (300 was recently made repeatable). Older/more specific code checking is limited due to lack of use (by our catalogers). For example, coats of arms, facsim., etc. are usually now given as just 'ill.' So the error check allows either the specific or just ill. for all except maps. Depends upon 008 being coded for book monographs. Subfield 'a' and 'c' wording checks ('p.' or 'v.'; 'cm.', 'in.', 'mm.') only look at first of each kind of subfield. Take care of case of 008 coded for serials/continuing resources. Find exceptions to $a having 'p.' or 'v.' (and leaves, columns) for books. Find exceptions to $c having 'cm.', 'mm.', or 'in.' preceded by digits. Deal with other LIMITATIONS. Account for upcoming rule change in which metric units have no punctuation. 
When that rule goes into effect, move 300$c checking to check_end_punct_300($record). Reverse checks to report missing 008 code if specific wording is present in 300. Reverse check for plates vs. 'f' parse008vs300b($illcodes, $field300subb) 008 illustration parse subroutine checks 008/18-21 code against 300 $b To simplify the check_bk008_vs_300($record) subroutine, which had many if-then statements. This moves the additional checking conditionals out of the way. It may be integrated back into the main subroutine once it works. This was written while constructing check_bk008_vs_300($record) as a separate script. parse008vs300b($illcodes, $field300subb) #$illcodes is bytes 18-21 of 008 #$subfieldb is subfield 'b' of record's 300 field Integrate code into check_bk008_vs_300($record)? Verify possibilities for 300 text Move 'm' next to 'f' since it is likely to be indicated in subfield 'e' not 'b' of the 300. Our catalogers do not generally code for sound recordings in this way in book records. If 490 with 1st indicator '1' exists, then 8xx (800, 810, 811, 830) should exist. If 1xx exists then 240 1st indicator should be '1'. If 1xx does not exist then 240 should not be present. However, exceptions to this rule are possible, so this should be considered an optional error. If 1xx exists then 245 1st indicator should be '1'. If 1xx does not exist then 245 1st indicator should be '0'. However, exceptions to this rule are possible, so this should be considered an optional error. Provide some way to easily turn off reporting of "245: Indicator is 0 but 1xx exists." errors. In some cases, catalogers may choose to code a 245 with 1st indicator 0 if they do not wish that 245 to be indexed. There is not likely a way to programmatically determine this choice by the cataloger, so in situations where catalogers are likely to choose not to index a 245, this error should be suppressed. Date matching 008, 050, 260 Attempts to match date of publication in 008 date1, 050 subfield 'b', and 260 subfield 'c'. Reports errors when one of the fields does not match. Reports errors if one of the dates cannot be found. Handles cases where 050 or 260 (or 260c) does not exist. -Currently if the subroutine is unable to get either the date1, any 050 with $b, or a 260 with $c, it returns (exits). -Future, or better, behavior might be to continue processing for the other fields. Handles cases where 050 is different due to conference dates. Conference exception handling is currently limited to presence of 111 field or 110$d. For RDA, checks 264 _1 $c as well as 1st 260$c. May not deal well with serial records (problem not even approached). Only examines 1st 260, does not account for more than one 260 (recent addition). Relies upon 260$c date being the first date in the last 260$c subfield. Has problem finding 050 date if it is not last set of digits in 050$b. Process of getting 008date1 duplicates similar check in validate008 subroutine. Improve Conference publication checking (limited to 111 field or 110$d being present for this version) This may include comparing 110$d or 111$d vs. 050, and then comparing 008date1 vs. 260$c. Fix parsing for 050$b date. For CIP, if 260 does not exist, compare only 050 and 008date1. Currently, CIP records without 260 are skipped. Account for undetermined dates, e.g. [19--?] in 260 and 008. Account for older 050s with no date present. Ignores non-book records (other than cartographic materials). For cartographic materials, checks only for index coding (not bib. refs.).
Examines 008 book-contents (bytes 24-27) and book-index (byte 31). Compares with 500 and 504 fields. Reports error if 008contents has 'b' but 504 does not have "bibliographical references." Reports error if 504 has "bibliographical references" but no 'b' in 008contents. Reports error if 008index has 1 but no 500 or 504 with "Includes .* index." Reports error if a 500 or 504 has "Includes .* index" but 008index is 0. Reports error if "bibliographical references" appears in 500. Allows "bibliographical reference." As with other subroutines, this one treats all 008 as being coded for monographs. Serials are ignored for the moment.
Account for records with "Bibliography" or other wording in place of "bibliographical references." Currently 'b' in 008 must match with "bibliographical reference" or "bibliographical references" in 504 (or 500--though that reports an error).
Reverse check for other wording (or subject headings) vs. 008 'b' in contents.
Check for other 008contents codes.
Check for misspelled "bibliographical references."
Check spacing if pagination is given in 504.

Compares first code in subfield 'a' of 041 vs. 008 bytes 35-37.

Validates punctuation in various 5xx fields. Currently checks 500, 501, 504, 505, 508, 511, 538, 546. For 586, see check_nonpunctendingfields($record).
Add checks for the other 5xx fields.
Verify rules for these checks (particularly 505).

Looks at various fields and reports fields with space-hyphen-space as errors.
Find exceptions.

Looks at each non-control tag and reports an error if a floating period, comma, or question mark are found. Example: 245 _aThis has a floating period . Ignores double dash-space when preceded by a non-space (example-- [where functioning as ellipsis replacement]).
-Add other undesirable floating punctuation.
-Look for exceptions where floating punctuation should be allowed.
-Merge functionality with findfloatinghyphens($record) (to reduce number of runs through the same record, especially).
-Improve reporting. Current version reports approximately 10 characters before and after the floating text for fields longer than 80 characters, or the full field otherwise, to provide context, particularly in the case of multiple instances.

Comparison of 007 coding vs. 300abc subfield data and vs. 538 data for video records (VHS and DVD). Focuses on videocassettes (VHS) and videodiscs (DVD and Video CD). Does not consider coding for motion pictures. If LDR/06 is 'g' for projected medium (skipping those that aren't) and 007 is present, at least 1 007 should start with 'v'. If 007/01 is 'd', 300a should have 'videodisc(s)', 300c should have 4 3/4 in., and 538 should have 'DVD'. If 007/01 is 'f', 300a should have 'videocassette(s)', 300c should have 1/2 in., and 538 should have 'VHS format' or 'VHS hi-fi format' (case insensitive on hi-fi), plus a playback mode.
Checks only videocassettes (1/2) and videodiscs (4 3/4). Current version reports problems with other forms of videorecordings. Accounts for existence of only 1 300 field. Looks at only 1st subfield 'a' and 'c' of 1st 300 field.
Account for motion pictures and videorecordings not on DVD (4 3/4 in.) or VHS cassettes.
Check proper plurality of 300a (1 videodiscs -> error; 5 videocassette -> error).
Monitor need for changes to sizes, particularly 4 3/4 in. DVDs.
Expand allowed terms for 538 as needed and revise current VHS allowed terms.
Update to allow SMDs of conventional terminology ('DVD') if such a rule passes.
Deal with multiple 300 fields.
Check GMD in 245$h.
Clean up redundant code.
Validates bytes 5, 6, 7, 17, and 18 of the leader against MARC code list valid characters. Checks bytes 5, 6, 7, 17, and 18. $ldrbytes{$key} has keys "\d\d", "\d\dvalid" for each of the bytes checked (05, 06, 07, 17, 18). "\d\dvalid" is a hash ref containing valid code linked to the meaning of that code.

print $ldrbytes{'05valid'}->{'a'}, "\n";

yields: 'Increase in encoding level'
Customize (comment or uncomment) bytes according to local needs. Perhaps allow %ldrbytes to be passed into ldrvalidate($record) so that that hash may be created by a calling program, rather than relying on the preset MARC 21 values. This would facilitate adding valid OCLC-MARC bytes such as byte 17--I, K, M, etc.
Examine other Lintadditions/Errorchecks subroutines using the leader to see if duplicate checks are being done. Move or remove such duplicate checks.
Consider whether %ldrbytes needs full text of meaning of each byte.

Reports absence of 043 if 651 or 6xx subfield z is present.
Update/maintain list of exceptions (in the hash, %geog043exceptions).

Looks for empty subfields. Skips 037 in CIP-level records and tags < 010.

Reports error if 040 is not present. Cannot use Lintadditions check_040 for this since that relies upon the field existing before the check is executed.

Checks for presence of punctuation in the fields listed below. These fields are not supposed to end in punctuation unless the data ends in abbreviation, ___, or punctuation. Ignores initialisms such as 'Q.E.D.' Certain abbreviations and initialisms are explicitly coded. Fields checked: 240, 246, 440, 490, 586.
Add exceptions--abbreviations--or deal with them. Currently all fields ending in period are reported.

Reports error if field is longer than 1870 bytes. (1879 is the actual limit, but I wanted to leave some extra room in case of miscalculation.) This check relates to certain system limitations. Also reports records with more than 50 fields.
Use directory information in raw MARC to get the field lengths.

Add new subs with code below.

sub {
    #get passed MARC::Record object
    my $record = shift;
    #declaration of return array
    my @warningstoreturn = ();
    push @warningstoreturn, ("");
    return \@warningstoreturn;
}

Internal sub that checks the validity of 006 bytes. Used by the check_006 method for 006 validation. Checks the validity of 006 bytes. Continuing resources/serials 006 may not work (not thoroughly tested, since 008 would usually be coded for serials, with 006 for other material types?). Current version implements material specific validation through internal subs for each material type. Those internal subs allow for checking either 006 or 008 material specific bytes.

parse008date($field008string)
Subroutine parse008date returns four-digit year, two-digit month, and two-digit day. It requires an 008 string at least 6 bytes long. Also checks the current year, month, day vs. 008 creation date, reporting an error if creation date appears to be later than local time. Assumes 008 dates of 00mmdd to 70mmdd represent post-2000 dates. Relies upon internal _get_current_date().

my ($earlyyear, $earlymonth, $earlyday);
print ("What is the earliest create date desired (008 date, in yymmdd)? ");
while (my $earlydate = <>) {
    chomp $earlydate;
    my $field008 = $earlydate;
    my $yyyymmdderr = MARC::Errorchecks::parse008date($field008);
    my @parsed008date = split "\t", $yyyymmdderr;
    $earlyyear = shift @parsed008date;
    $earlymonth = shift @parsed008date;
    $earlyday = shift @parsed008date;
    my $errors = join "\t", @parsed008date;
    if ($errors) {
        if ($errors =~ /is too short/) {
            print "Please enter a longer date, $errors\nEnter date (yymmdd): ";
        }
        else {print "$errors\nEnter valid date (yymmdd): ";}
    } #if errors
    else {last;}
}

Remove local practice or revise for easier updating/customization.

Reworking of the validate008 sub. Revised to work more like other Errorchecks and Lintadditions checks. Returns array ref of errors. Previous version returned hash ref of 008 byte key-value pairs, array ref of cleaned bytes, and scalar ref of errors. New version returns only an array ref of errors.

Checks the validity of 008 bytes. Used by the check_008 method for 008 validation. Depends upon 008 being based upon LDR/06, so continuing resources/serials records may not work. Checks LDR/07 for 's' for serials before checking material specific bytes. Character positions 00-17 and 35-39 are defined the same across all types of material, with special consideration for position 06. Current version implements material specific validation through internal subs for each material type. Those internal subs allow for checking either 006 or 008 material specific bytes.

use MARC::Record;
use MARC::Errorchecks;

#$mattype and $biblvl are from LDR/06 and LDR/07
#my $mattype = substr($leader, 6, 1);
#my $biblvl = substr($leader, 7, 1);
#my $field008 = $record->field('008')->as_string();
my $field008 = '000101s20002000nyu                 eng d';
my @warningsfrom008 = @{MARC::Errorchecks::validate008($field008, $mattype, $biblvl)};
print join "\t", @warningsfrom008, "\n";

Add requirement that 40 char string needs to be passed in.
Add error checking for less than 40 char string. --Partially done--Less than 40 characters leads to error.
Verify datetypes that allow multiple dates.
Verify continuing resource checking (not thoroughly tested).
Determine proper values for date type 'e'.

### This is not here for any particular reason,
### I just wanted to save it for future use if I needed it.
#stop checking if record is not coded 'm', monograph
unless ($biblvl eq 'm') {
    push @warningstoreturn, ("LDR: Record coded $biblvl, not monograph. Further parsing of 008 will not be done for this record.");
    return (\@warningstoreturn);
} #unless bib level is 'm'

#test code
use MARC::Errorchecks;
use MARC::Record;
my $leader = '00050nam';
my $field008 = '000101s20002000nyu                 eng d';
my $mattype = substr($leader, 6, 1);
my $biblvl = substr($leader, 7, 1);
print "$field008\n";
my @warningsfrom008 = @{validate008($field008, $mattype, $biblvl)};
print join "\t", @warningsfrom008, "\n";

Internal subs to check 008 bytes 18-34 or 006 bytes 01-17 for Continuing resources, Books, Electronic resources, Cartographic materials, Music and Sound Recordings, Visual materials, and Mixed materials. Each receives material type, bibliographic level, and a 17-byte string to be validated. The bytes should be bytes 18-34 of the 008, or bytes 01-17 of the 006.

Internal sub for use with validate008($field008, $mattype, $biblvl) (actually with parse008date($field008string)). Returns the current year-month-day, in the form yyyymmdd. Also used by check_010($record).

Version 1.18: Updated Oct. 8, 2012 to June 22, 2013. Released , 2013.
-Updated _check_music_bytes for MARC Update 16 (Sept. 2012), adding 'l' as valid for 008/20.
Version 1.17: Updated Oct. 8, 2012 to June 22, 2013. Released June 23, 2013.
-Updated check_490vs8xx($record) to look only for 800, 810, 811, 830 rather than any 8XX.
-Added functionality to deal with RDA records.
-Updated parse008vs300b($illcodes, $field300subb, $record_is_RDA) to pass 3rd variable, "$record_is_RDA".
-Updated _check_music_bytes for MARC Update 15 (Sept. 2012), adding 'k' as valid for 008/20.

Version 1.16: Updated May 16-Nov. 14, 2011. Released .
-Turned off check_fieldlength($record) in check_all_subs()
-Turned off checking of floating hyphens in 520 fields in findfloatinghyphens($record)
-Updated validate008 subs (and 006) related to 008/24-27 (Books and Continuing Resources) for MARC Update no. 10, Oct. 2009 and Update no. 11, 2010; no. 12, Oct. 2010; and no. 13, Sept. 2011.
-Updated %ldrbytes with leader/18 'c' and redefinition of 'i' per MARC Update no. 12, Oct. 2010.

Version 1.15: Updated June 24-August 16, 2009. Released , 2009.
-Updated checks related to 300 to better account for electronic resources.
-Revised wording in validate008($field008, $mattype, $biblvl) language code (008/35-37) for ' '/zxx.
-Updated validate008 subs (and 006) related to 008/24-27 (Books and Continuing Resources) for MARC Update no. 9, Oct. 2008.
-Updated validate008 sub (and 006) for Books byte 33, Literary form, invalidating code 'c' and referring it to 008/24-27 value 'c'.
-Updated video007vs300vs538($record) to allow Blu-ray in 538 and 's' in 07/04.

Version 1.14: Updated Oct. 21, 2007, Jan. 21, 2008, May 20, 2008. Released May 25, 2008.
-Updated %ldrbytes with leader/19 per Update no. 8, Oct. 2007. Check for validity of leader/19 not yet implemented.
-Updated _check_book_bytes with code '2' ('Offprints') for 008/24-27, per Update no. 8, Oct. 2007.
-Updated check_245ind1vs1xx($record) with TODO item and comments
-Updated check_bk008_vs_300($record) to allow "leaves of plates" (as opposed to "leaves", when no p. or v. is present), "leaf", and "column"(s).
version 1.18 of MARC::Lint::CodeData.

Version 1.12: Updated July 5-Nov. 17, 2006. Released Feb. 25, 2007.
-Updated check_bk008_vs_300($record) to look for extra p. or v. after parenthetical qualifier.
-Updated check_bk008_vs_300($record) to look for missing period after 'col' in subfield 'b'.
-Replaced $field-tag() with $tag in error message reporting in check_nonpunctendingfields($record).
-Turned off 50-field limit check in check_fieldlength($record).
-Updated parse008vs300b($illcodes, $field300subb) to look for /map[ \,s]/ rather than just 'map' when 008 is coded 'b'.
-Updated check_bk008_vs_bibrefandindex($record) to look for spacing on each side of parenthetical pagination.
-Updated check_internal_spaces($record) to report 10 characters on either side of each set of multiple internal spaces.
-Uncommented level-5 and level-7 leader values as acceptable. Level-3 is still commented out, but could be uncommented for libraries that allow it.
-Includes version 1.14 of MARC::Lint::CodeData.

Version 1.11: Updated June 5, 2006. Released June 6, 2006.
-Implemented check_006($record) to validate 006 (currently only does length check).
--Revised validate008($field008, $mattype, $biblvl) to use internal sub for material specific bytes (18-34)
-Revised validate008($field008, $mattype, $biblvl) language code (008/35-37) to report new 'zxx' code availability when ' ' is the code in the record.
-Added 'mgmt.' to %abbexceptions for check_nonpunctendingfields($record).

Version 1.10: Updated Sept. 5-Jan. 2, 2006. Released Jan. 2, 2006.
-Revised validate008($field008, $mattype, $biblvl) to use internal subs for material specific byte checking.
--Added:
---_check_cont_res_bytes($mattype, $biblvl, $bytes),
---_check_book_bytes($mattype, $biblvl, $bytes),
---_check_electronic_resources_bytes($mattype, $biblvl, $bytes),
---_check_cartographic_bytes($mattype, $biblvl, $bytes),
---_check_music_bytes($mattype, $biblvl, $bytes),
---_check_visual_material_bytes($mattype, $biblvl, $bytes),
---_check_mixed_material_bytes,
---_reword_008(@warnings), and
---_reword_006(@warnings).
--Updated Continuing resources byte 20 from ISSN center to Undefined per MARC 21 update of Oct. 2003.
-Updated wording in findfloatinghyphens($record) to report 10 chars on either side of floaters and check_floating_punctuation($record) to report some context if the field in question has more than 80 chars.
-check_bk008_vs_bibrefandindex($record) updated to check for 'p. ' following bibliographical references when pagination is present.
-check_5xxendingpunctuation($record) reports question mark or exclamation point followed by period as error.
-check_5xxendingpunctuation($record) now checks 505.
-Updated check_nonpunctendingfields($record) to account for initialisms with interspersed periods.
-Added check_floating_punctuation($record) looking for unwanted spaces before periods, commas, and other punctuation marks.
-Renamed findfloatinghyphens($record) to fix spelling.
-Revised check_bk008_vs_300($record) to account for textual materials on CD-ROM.
-Added abstract to name.

Version 1.09: Updated July 18, 2005. Released July 19, 2005 (Aug. 14, 2005 to CPAN).
-Added check_010.t (and check_010.t.pl) tests for check_010($record).
-check_010($record) revisions.
--Turned off validation of 8-digit LCCN years. Code commented-out.
--Modified parsing of numbers to check spacing for 010a with valid non-digits after valid numbers.
--Validation of 10-digit LCCN years is based on current year.
-Fixed bug of uninitialized values for matchpubdates($record) 050 and 260 dates.
-Corrected comparison for year entered < 1980.
-Removed AutoLoader (which was a remnant of the initial module creation process)

Version 1.08: Updated Feb. 15-July 11, 2005. Released July 16, 2005.
-Added 008errorchecks.t (and 008errorchecks.t.txt) tests for 008 validation
-Added check of current year, month, day vs. 008 creation date, reporting error if creation date appears to be later than local time. Assumes 008 dates of 00mmdd to 70mmdd represent post-2000 dates.
--This is a change from previous range, which gave dates as 00-06 as 200x, 80-99 as 19xx, and 07-79 as invalid.
-Added _get_current_date() internal sub to assist with check of creation date vs. current date.
-findemptysubfields($record) also reports error if period(s) and/or space(s) are the only data in a subfield.
-Revised wording of error messages for validate008($field008, $mattype, $biblvl)
-Revised parse008date($field008string) error message wording and bug fix.
-Bug fix in video007vs300vs538($record) for gathering multiple 538 fields.
-added check in check_5xxendingpunctuation($record) for space-semicolon-space-period at the end of 5xx fields.
-added field count check for more than 50 fields to check_fieldlength($record)
-added 'webliography' as acceptable 'bibliographical references' term in check_bk008_vs_bibrefandindex($record), even though it is discouraged. Consider adding an error message indicating that the term should be 'bibliographical references'?
-Code indenting changed from tabs to 4 spaces per tab.
-Misc. bug fixes including changing '==' to 'eq' for tag numbers, bytes in 008, and indicators.

Version 1.07: Updated Dec. 11-Feb. 2005. Released Feb. 13, 2005.
-check_double_periods() skips field 856, where multiple punctuation is possible for URIs.
-added code in check_internal_spaces() to account for spaces between angle brackets in open dates in field 260c.
-Updated various subs to verify that 008 exists (and quietly return if not. check_008 will report the error).
-Changed #! line, removed -w, replaced with use warnings.
-Added error message to check_bk008_vs_bibrefandindex($record) if 008 book index byte is not 0 or 1. This will result in duplicate errors if check_008 is also called on the record.

Version 1.05 and 1.06: Updated Dec. 6-7. Released Dec. 6-7, 2004.
-CPAN distribution fix.

Version 1.04: Updated Nov. 4-Dec. 4, 2004. Released Dec. 5, 2004.
-Updated validate008() to use MARC::Lint::CodeData.
-Removed DATA section, since this is now in MARC::Lint::CodeData.
-Updated check_008() to use the new validate008().
-Revised bib. refs. check to require 'reference' to be followed by optional 's', optional period, and word boundary (to catch things like 'referenced').

Version 1.03: Updated Aug. 30-Oct. 16, 2004. Released Oct. 17. First CPAN version.
-Moved subs to MARC::QBIerrorchecks
--check_003($record)
--check_CIP_for_stockno($record)
--check_082count($record)
-Fixed bug in check_5xxendingpunctuation for first 10 characters.
-Moved validate008() and parse008date() from MARC::BBMARC (to make MARC::Errorchecks more self-contained).
-Moved readcodedata() from BBMARC (used by validate008)
-Moved DATA from MARC::BBMARC for use in readcodedata()
-Remove dependency on MARC::BBMARC
-Added duplicate comma check in check_double_periods($record)
-Misc. bug fixes

Planned (future versions):
-Account for undetermined dates in matchpubdates($record).
-Cleanup of validate008
--Standardization of error reporting
--Material specific byte checking (bytes 18-34) abstracted to allow 006 validation.

Version 1.02: Updated Aug. 11-22, 2004. Released Aug. 22, 2004.
-Implemented VERSION (uncommented)
-Added check for presence of 040 (check_040present($record)).
-Added check for presence of 2 082s in full-level, 1 082 in CIP-level records (check_082count($record)).
-Added temporary (test) check for trailing punctuation in 240, 586, 440, 490, 246 (check_nonpunctendingfields($record)) --which should not end in punctuation except when the data ends in such.
-Added check_fieldlength($record) to report fields longer than 1870 bytes.
--This should be rewritten to use the length in the directory of the raw MARC.
-Fixed workaround in check_bk008_vs_bibrefandindex($record) (Thanks again to Rich Ackerman).

Version 1.01: Updated July 20-Aug. 7, 2004. Released Aug. 8, 2004.
-Temporary (or not) workaround for check_bk008_vs_bibrefandindex($record) and bibliographies.
-Removed variables from some error messages and cleanup of messages.
-Code readability cleanup.
-Added subroutines:
--check_240ind1vs1xx($record)
--check_041vs008lang($record)
--check_5xxendingpunctuation($record)
--findfloatinghypens($record)
--video007vs300vs538($record)
--ldrvalidate($record)
--geogsubjvs043($record)
---has list of exceptions (e.g. English-speaking countries)
--findemptysubfields($record)
-Changed subroutines:
--check_bk008_vs_300($record):
---added cross-checking for codes a, b, c, g (ill., map(s), port(s)., music)
---added checking for 'p. ' or 'v. ' or 'leaves ' in subfield 'a'
---added checking for 'cm.', 'mm.', 'in.' in subfield 'c'
--parse008vs300b
---revised check for 'm', phono. (which our catalogers don't currently use)
--Added check in check_bk008_vs_bibrefandindex($record) for 'Includes index.' (or indexes) in 504
---This has a workaround I would like to figure out how to fix

Version 1.00 (update to 0.95): First release July 18, 2004.
-Fixed bugs causing check_003 and check_010 subroutines to fail (Thanks to Rich Ackerman)
-Added to documentation
-Misc. cleanup
-Added skip of 787 fields to check_internal_spaces
-Added subroutines:
--check_end_punct_300($record)
--check_bk008_vs_300($record)
---parse008vs300b
--check_490vs8xx($record)
--check_245ind1vs1xx($record)
--matchpubdates($record)
--check_bk008_vs_bibrefandindex($record)

Version 1 (original version (actually version 0.95)): First release, June 22, 2004

MARC::Record -- Required for this module to work.
MARC::Lint -- In the MARC::Record distribution and basis for this module.
MARC::Lintadditions -- Extension of MARC::Lint for checks involving individual tags (vs. cross-field checking covered in this module). Available at (and may be merged into MARC::Lint).
MARC pages at the Library of Congress ()
Anglo-American Cataloging Rules, 2nd ed., 2002 revision, plus updates.
Library of Congress Rule Interpretations to AACR2.
MARC Report () -- More full-featured commercial program for validating MARC records.

This code may be distributed under the same terms as Perl itself. Please note that this module is not a product of or supported by the employers of the various contributors to the code.

Bryan Baldus [email protected]
Add NETISR_FLAG_NOTMPSAFE, which could be used as the last parameter to netisr_register(), more expressive and less error-prone than 0. Suggested-by: hsu@

Nuke unused function

Introduce experimental MPLS over ethernet support. Add 'options MPLS' to the kernel config file to enable it. This modification increases the footprint of each route in the FIB by 12 bytes, used to hold up to 3 label operations per route. Hints-from: Ayame, NiSTswitch implementations. Reviewed-by: dillon@, sephe@, hsu@, has.

do early copyin / delayed copyout for socket options.

Add MPSAFE version of netmsg_service_loop()

Kernel part of bluetooth stack ported by Dmitry Komissaroff. Very much work in progress. Obtained-from: NetBSD via OpenBSD

Remove weird license clause which has [email protected]>.

The default protocol threads also need the check for same thread synchronous execution. Reported by: YONETANI Tomokazu <[email protected]>

Make the declaration of notifymsglist visible outside #ifdef _KERNEL for struct selinfo. Reported by: Donghui Wen <[email protected]> Diagnosed by: YONETANI Tomokazu <[email protected]>

Add predicate message facility.

Push the lwkt_replymsg() up one level from netisr_service_loop() to the message handler so we can explicitly reply or not reply as appropriate.

Make tcp_drain() per-cpu.

NETISR_POLL cannot use isr 0. Use isr 1. Bug reported by: TC Lewis <[email protected]>.

__P() removal.

Add the DragonFly cvs id and perform general cleanups on cvs/rcs/sccs ids. Most ids have been removed from !lint sections and moved into comment sections.

import from FreeBSD RELENG_4 1.21.2.5
Use Azure portal to create a Service Bus namespace and a queue

This quickstart shows you how to create a Service Bus namespace and a queue using the Azure portal. It also shows you how to get authorization credentials that a client application can use to send/receive messages to/from the queue.

What are Service Bus queues?

Service Bus queues support a brokered messaging communication model. When using queues, components of a distributed application do not communicate directly with each other; instead they exchange messages via a queue, which acts as an intermediary (broker).

Prerequisites

To complete this quickstart, make sure you have an Azure subscription. If you don't have an Azure subscription, you can create a free account before you begin.

Create a queue in the Azure portal

On the Service Bus Namespace page, select Queues in the left navigational menu.
On the Queues page, select + Queue on the toolbar.
Enter a name for the queue, and leave the other values with their defaults.
Now, select Create.

Next steps

In this article, you created a Service Bus namespace and a queue in the namespace. To learn how to send/receive messages to/from the queue, see one of the following quickstarts in the Send and receive messages section.
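Although the quickstart defers the messaging code to those follow-up articles, a minimal sketch of what a client could look like is shown below, using the Python azure-servicebus package (v7-style API). The connection string placeholder and the queue name "myqueue" are assumptions for illustration; the connection string comes from the namespace's Shared access policies page in the portal.

# pip install azure-servicebus
from azure.servicebus import ServiceBusClient, ServiceBusMessage

# Assumed placeholders: paste your namespace connection string, and use
# the queue name you created in the portal steps above.
CONN_STR = "<your namespace connection string>"
QUEUE_NAME = "myqueue"

with ServiceBusClient.from_connection_string(CONN_STR) as client:
    # Send a single message to the queue.
    with client.get_queue_sender(queue_name=QUEUE_NAME) as sender:
        sender.send_messages(ServiceBusMessage("Hello, Service Bus!"))

    # Receive messages, waiting up to 5 seconds for new ones to arrive.
    with client.get_queue_receiver(queue_name=QUEUE_NAME, max_wait_time=5) as receiver:
        for message in receiver:
            print(str(message))
            receiver.complete_message(message)  # settle: remove it from the queue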
odd and even numbers

ptmuldoon 11-16-2007, 03:51 PM
does php have any built in functions to determine if a number is odd or even? I came across the below when doing some google searches. But is that the simpliest way?

function is_odd($number)
{
    return $number & 1; // 0 = even, 1 = odd
}

I'm looking to create an if statement, and if the number is odd, do X, or if even, do Y.

johnnyb 11-16-2007, 04:04 PM
I'm not aware of such a function, but you could try this:

function is_odd($n) {
    $test = $n/2; // divide the number you want to test by 2
    if(strpos($test,'.') === false) { // if there's no decimal found in the result then the number must have been even.
        return false; // so return false
    } else { // otherwise we can assume the number is odd
        return true; // so we return true
    }
}

This is untested so make sure it works first! Also, this assumes that a decimal point will be used after the whole numbers and before the part numbers, (like "2.5"). If your server is in a non-english language this may not be the case. For example a French PHP installation may return 5/2 = 2,5 in which case the function above would have to be modified.

aedrin 11-16-2007, 04:08 PM
Wow, that is a... weird solution.

function isOdd($n) {
    return $n % 2;
}

marek_mar 11-16-2007, 04:19 PM
Ptmuldoons function is the best. Bitwise and is faster than modulo.

aedrin 11-16-2007, 04:22 PM
You're telling me that a division and a string search are faster than just a modulus operator (which is essentially a division)? Not to mention that this is a common way of determining odd/even. I'm sure there's more inventive methods, but this is the easiest to read.

marek_mar 11-16-2007, 04:30 PM
I did say "ptmuldoon" not "johnnyb"...

rpgfan3233 11-16-2007, 04:50 PM
Bitwise is probably the fastest, but the compiler might simply optimize it to use bitwise given the situation:

function is_odd ($n) {
    return ($n & 1);
}

Why does it work? All it does is compare the bits in the ones place. If they are the same, the AND operation returns 1. Otherwise, it returns 0. Obviously if something is even, then the last bit would be 0 since binary is the base-2 system, which is even in itself. A quick table should help:
0 = 000 = even
1 = 001 = odd
2 = 010 = even
3 = 011 = odd
4 = 100 = even
5 = 101 = odd
6 = 110 = even
7 = 111 = odd
See? Every time the number is odd, the very last bit (the bit in the ones place) is 1. I won't get any more specific than this since it would go into the AND truth table and stuff.

An interesting thing to note, which may not be practiced much anymore: when you have $n % $m, where $m is a power of 2 (1, 2, 4, 8, 16, etc.), you can use $n & ($m - 1). Example:

$n = 40;
echo $n % 16, '<br />';
echo $n & 15; //16-1=15 obviously

See? Same result, right? The only thing to note is that the '&' operator has lower precedence than ==, !=, etc., so if you have ($n & 1 != 1), for example, the result will be equal to $n & 0, which is 0. The right way to do it is to use (($n & 1) != 1). That way, $n & 1 gets evaluated first, instead of the 1 != 1. The difference is in the data type. If you need an explanation, just know that the comparison operators have a higher precedence than the bitwise operators - PHP: Operators - Operator Precedence ()

johnnyb 11-16-2007, 05:13 PM
I never said that mine was fastest or best, just that it would probably work ;) Actually, I forgot about modulo - I never really use it. The bitwise solution looks pretty nice to me.
aedrin 11-16-2007, 06:14 PM
I understand how the binary one works (I'd seen it before but never used so it hadn't 'stuck')

I did say "ptmuldoon" not "johnnyb"...

My visual compiler abstracted away the code in the original post, so I hadn't seen that he had posted that (or perhaps it was added afterwards since there was an edit). Hence I thought there was only mine and johnnyb's solution so I didn't think to look back up who you were talking about. My bad.

vtjustinb 11-16-2007, 06:51

rpgfan3233 11-16-2007, 08:29
I'm not this old (thank goodness), but before there was a MOD instruction built into mainstream architectures, the bitwise AND existed as a short way of doing n MOD (2^m), but these days the speed difference is probably negligible, if there is a difference in the amount of time each takes. Honestly, I think there may be. If all that MOD does is use the DIV or IDIV instruction, which returns the quotient in the EAX register and the remainder (result of a MOD operation) in the EDX register, and moves the remainder from EDX to EAX and calls it a new instruction, I wouldn't be surprised. However, that would mean that AND is much faster. DIV takes about 14 clocks on an 80386 processor, while AND only takes 2 clocks. I sure we have really come far enough to have closed that gap to a more negligible speed. Back in those days, such a difference was probably a huge deal though. It is like creating a recursive function to calculate the 1024th Fibonacci number. It is much faster as an iterative function (loop) than as a recursive function (keep calling the function over and over and over until you get a stack overflow error like many people), not to mention how much less system intensive it is. I think that is why the bitwise AND trick became so widely adopted for a while, similar to how you can swap values with 3 bitwise XOR operations:

a ^= b;
b ^= a;
a ^= b;

Of course in C, you could just do a ^= b ^= a ^= b; :p Of course, creating a temporary variable is usually fast enough these days, not to mention that those operations might not get translated directly as one might expect. That's probably why inline Assembly code exists. :)

aedrin 11-16-2007, 08:41 PM
similar to how you can swap values with 3 bitwise XOR operations:
That's one of those tricks that shouldn't really be used. Simply because it creates unreadable code. I'm sure back then the difference in resources was worth it though.

vtjustinb 11-16-2007, 09:35 PM
That's one of those tricks that shouldn't really be used. Simply because it creates unreadable code. I'm sure back then the difference in resources was worth it though.
Psht.. Code obfuscation is job security ;)

rpgfan3233 11-16-2007, 09:50 PM
LOL I've heard that one before, and I love hearing it. :thumbsup:

moos3 11-16-2007, 10:45 PM
mod is the best way for that, I agree.
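The identities discussed in this thread are easy to sanity-check by brute force. Here is a quick check, written in Python purely for brevity -- the same relations hold for PHP's integer operators (though note that Python's & binds tighter than ==, unlike PHP's, so the parenthesization pitfall rpgfan3233 mentions does not arise here):

# Brute-force check of the thread's claims for non-negative integers:
# n % 2 == n & 1, and more generally n % (2**m) == n & (2**m - 1).
for n in range(10000):
    assert n % 2 == n & 1          # odd/even via bitwise AND
    for power_of_two in (1, 2, 4, 8, 16, 32, 64, 128):
        assert n % power_of_two == n & (power_of_two - 1)

# The XOR swap mentioned above, for completeness:
a, b = 5, 9
a ^= b
b ^= a
a ^= b
assert (a, b) == (9, 5)

print("all identities hold")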
Now that I’ve got a new project to work against, we can generate the Pex test project and ask Pex to get busy. First we’ll ask Pex to create a new xUnit test project. Right-clicking the Facade project exposes the context menu, we’ll choose “Create Parameterized Unit Test Stubs”, aka PUTs. This gives us the following dialogs to set our project properties. I’ll leave the blanks blank, those exist to filter the namespace, type name, and method names you want to filter for the project under test. We want all of them, so they’ll be left blank. A variation on this is if you open the context menu inside the class file. The filter fields are filled in for you to limit Pex’ interaction. Here are the settings I’ve chosen, I’m going with xUnit this time around. The “Settings…” dialog is where I tell Pex that for all unit tests created, each will have the PexTest suffix. "Mark all test results Inconclusive by default" - This setting will add the Assert.Inconclusive() assertion at the bottom of the test method which is the default for all Visual Studio unit tests when they are generated in the IDE. "Globally qualify all types" - This setting will prefix all fields in the test class with global qualifier. "Use Code Patterns" - Pex utilizes many different code patterns which are beyond the scope of this document. You can find them in this document's appendix. "Generate Stubs file for project under test" - The Stubs Framework utilized by Pex is explained by one of the authors here. We click OK on the settings dialog. We click OK on the main dialog, and Pex shows us the dialog pictured below. We’re getting a unit test project called …\ProjectName.Tests (plural suffix); now is your only chance to move this project around. If you like to keep your unit test libraries somewhere else besides the project it’s testing move it to that place. The MSTest dialog gives us a singular suffix. So if you have both, this might help you keep them apart. Hopefully they will allow us to (re)label them in future versions of Pex. Click OK. If Pex can’t find your test runner, you’ll be prompted for the location. Here are the new pieces Pex has added. A few things to notice here. We get one Pex class for each of our classes so the test methods are segregated neatly. The point to make here is to get everything neat before you generate the test project. Also, notice the suffix, "PexTest". We can't rename the class library suffix, so I chose to rename the class files so I know by looking at the project members which classes are mine, and which ones were generated by Pex. Also, the stub file mentioned previously is generated for us. Here’s what Reflector can show us about what was created as well, the crunchy layer looks pretty much identical to the Crunchy layer. 
namespace PeanutButter.Business.Facade.Creamy
{
    /// <summary>This class contains parameterized unit tests for CreamyLayerManager</summary>
    [PexClass(typeof(CreamyLayerManager))]
    [PexAllowedExceptionFromTypeUnderTest(typeof(InvalidOperationException))]
    [PexAllowedExceptionFromTypeUnderTest(typeof(ArgumentException), AcceptExceptionSubtypes = true)]
    public partial class CreamyLayerManagerPexTest
    {
        /// <summary>Test stub for Add(!!0)</summary>
        [PexGenericArguments(typeof(int))]
        [PexMethod]
        public void Add<T>([PexAssumeUnderTest]CreamyLayerManager target, T entityToAdd)
        {
            // TODO: add assertions to method CreamyLayerManagerPexTest.Add(CreamyLayerManager, !!0)
            target.Add<T>(entityToAdd);
        }

        /// <summary>Test stub for Remove(!!0)</summary>
        [PexGenericArguments(typeof(int))]
        [PexMethod]
        public void Remove<T>(
            [PexAssumeUnderTest]CreamyLayerManager target,
            T entityToRemove
        )
        {
            // TODO: add assertions to method CreamyLayerManagerPexTest.Remove(CreamyLayerManager, !!0)
            target.Remove<T>(entityToRemove);
        }

        /// <summary>Test stub for Update(!!0)</summary>
        [PexGenericArguments(typeof(int))]
        [PexMethod]
        public void Update<T>(
            [PexAssumeUnderTest]CreamyLayerManager target,
            T entityToUpdate
        )
        {
            // TODO: add assertions to method CreamyLayerManagerPexTest.Update(CreamyLayerManager, !!0)
            target.Update<T>(entityToUpdate);
        }
    }
}

The only Source Analysis violations are related to moving the using statements inside the namespace declaration and adding the method arguments to the summary block. Source and Code Analysis compliance was one of the latest features added to the Pex release in use.

Notice the method decoration "[PexMethod]". This method won't be visible to the xUnit test outline since unit tests are referenced by the FactAttribute. If it had been decorated with "[Fact]" it would. Also notice Pex methods are parameterized (PUTs) tests. The xUnit methods will be generated from, and by, the PUTs. The PexMethod(s) will be used during the Pex Exploration we will step through now.

Right-clicking the test project exposes the Pex Exploration menu item which starts the exploration process. Pex Exploration finishes and the Pex Explorer tells us we have some problems - all of my methods are throwing "NotImplementedExceptions" - nice.

I'll stop here, fix my code and pick up with an Exploration exercise to allow Pex to do something meaningful with my test project. Until then.
Turn Based Action Selection and UI design
00Kevin posted a topic in Game Design and Theory

Can anyone knowledgeable in turn-based combat games share their insight into how targeted actions are best implemented? Specifically, what is the best sequence of events for the selection of actions and targeting?

a. Select target -> pick an action -> action executes (no confirmation box)
b. Select target -> pick an action -> action executes upon confirmation from user (via dialog, double click, or right click)
c. Select action -> cursor changes to attack icon -> click on target -> action executes
d. Select action -> cursor changes to targeting icon -> click on target -> action executes upon confirmation from user (via dialog, double click, or right click)

Selecting a target first has advantages in that you can filter out invalid action icons based on the target selected. The user doesn't need to hover his mouse over the enemy to determine if it's attack-able. The player only needs to cycle through each target in range (via next target or prev target buttons) once, and since there are generally fewer targets than actions (attack types, targeted spells, talents, etc) it's a far more efficient process.

Selecting an action first is more akin to how many strategy games like Panzer General work. Once the action is selected the player can move the mouse over a target to evaluate the details of the action.

Any thoughts? Are confirmations really all that important?

Rats and Holy Hand Grenades
00Kevin posted a blog entry in Project XSYS - WIP

Flying High Again
00Kevin posted a blog entry in Project XSYS - WIP

Looks like I've been neglecting this blog a bit too much lately so here is an update. I've been working on the Combat system a little more.

Flying
After cleaning up the unit action system, I've finally had time to work on the flying feature. I really need to find the right type of camera for this game or it might be a bit too confusing for the player. In addition, it's clear to me that I need to add height/position indicators so that the player knows what tile the movement selector, targeted unit, and selected unit are over. Perhaps a translucent pillar of some sort will suffice.

Guarding
The game will now allow you to spend action points to guard with one or more weapons attacks. The number of readied attacks is only limited by your action point total. When an enemy enters your weapon reach, you attack first (gold box game style). It's setup right now so that all your guarding attacks fire off at the triggering enemy. I might change that by giving the player an option or only allow one attack per enemy. Adding over-watch ability (guarding with ranged weapons) is next on my hit list. Hopefully both systems will play nicely together. Coding this was really complicated too, especially when several units are guarding the same square. Without the internal Event Messaging System I created the task would be near impossible.

(Don't mind the Animations or the UI as I haven't focused on it.
The artifacts in this Gif are from GifCam, which I'm not using anymore)

Project XSYS - WIP
00Kevin posted a blog entry in Project XSYS - WIP

THE GAME: Project XSYS is an untitled indie game that features world map HEX crawling, turn based combat, and multi-character party RPG adventuring.

Features thus far:

Hex Crawling
World map hex crawling is a unique feature. It's up to the player to ensure his/her party is well prepared and provisioned for each journey. Like many 4X games, time controls allow the player to speed up, slow down, and pause the passage of time. As time passes, various events will occur such as combat encounters, interaction scenarios, inter-party personality conflicts, discoveries, etc. The screenshot below is the most current view of the hex map. It is auto-generated by stitching smaller hand crafted Terrains together. But, yes, it's pure programmer art.

Tactical Turn Based Combat
The combat system was built to satisfy the itch of tactical turn based enthusiasts. In addition, I've worked hard to ensure that the combat grid is truly 3D. This means that characters can climb, fly, crawl, and jump over and under obstacles. The action point based combat system is designed around the concept that your character can try to do anything. For example, if you want to fight with two weapons, trip, guard, parry, or disarm you don't need a special skill to try it. There are no talent trees, just proficiency levels.

Character Party
The character creation system is extensive and is a multi-step process. Steps include selecting Class, Race, Ability Scores, Interaction Skills, Combat Skills, Exploration Skills, Spells, Appearance, Personality, Backgrounds, etc. At the moment, the system allows you to create up to six characters. At the moment, the graphics are nothing more than prototypes / placeholders. I don't plan on hiring or partnering with an artist until next year. One of the first things I will have commissioned is a base character model and various pieces of equipment for each race. For the moment, Adam and a pair of Mixamo thugs will do just nicely.

Development
Thus far, I've spent many long nights coding and refactoring the game's numerous subsystems (true 3d grid pathfinding, sqlite database, event message system, animations, personality system, combat mechanics, inventory, character sheets, items, vendors, character creation, terrain based hex map, encounter system, survival mechanics, campaign events, etc). The game has truly become a labor of love and I'm quite happy with the code thus far, but of course it's not perfect... yet. My current focus is developing the content pipeline and assessing what Unity assets (if any) I will be integrating. In fact, I can't wait to start adding more creative elements to my game. The game is now moving from being a game framework to an actual game, but there are still many detailed decisions that have not been made. Anyway, I hope you enjoy watching my game take shape. I really need all the feedback I can get as I have much to learn.

Do you usually prefix your classes with the letter 'C' or something else?
00Kevin replied to lucky6969b's topic in General and Gameplay Programming

btw, you gotta love coders who, before they even begin to make modifications, waste half the day or more changing the coding style. One guy I worked with changed all the database commands to upper case. I was like really? As for camelCase or PascalStyle, my pinky has been brutalized on this shift key because of you!
Do you usually prefix your classes with the letter 'C' or something else?
00Kevin replied to lucky6969b's topic in General and Gameplay Programming

This is a subjective opinion presented as an objective fact. ...and it's not even a popular opinion. Lowercase with underscores is actually the most 'official' style that C++ has, as it's used by the standard library (and many other projects). The people who choose it would say the opposite opinion: that CamelCase is bad style and makes code less readable.

I work with one programmer who hates underscores. So we all try to add a few extra once and a while just to hear him scream "I hate f**king underscores!".
Do you usually prefix your classes with the letter 'C' or something else?
00Kevin replied to lucky6969b's topic in General and Gameplay Programming

The problem is that this kind of convention is brittle. It encodes information about scope into the variable name, but if you refactor to change scope the encoding is wrong and the variable must be renamed. Forget or neglect to rename the variable and now you've got misleading information. A far more robust convention is to require the use of this-> for members and to require full namespace naming for globals. The compiler can then help you by catching incorrect usage. In an ideal world C++ would have had these requirements from the outset.

this, a thousand times this (pun absolutely intended). I sometimes wish c++ had gone the same route as python or rust with an explicit self/this parameter. which could also enable things like templates based on the reference qualifiers of the object. shortly after the wheel we invented find/replace and yet things still get missed.
I think that if the above waterfall method is used per feature it won't have such a flaw. I agree you can't anticipate everything.
Issue reproducing sample.

Hi Support,
We have drawn the Rectangle in the layer of the UIImageView. If we try to open a UIActivityViewController in a button handle, contents drawn in the layer gets cleared. We have attached with this thread for reproducing the issue in your end.
iOS version: 11 or more
Xamarin.iOS SDK: 11 or more
Issue with controllers: UIActivityViewController, UIMailComposeViewController
Can you please look into this issue and provide the details ASAP.
Note: We cannot reproduce this issue in iOS 10 or below.
Thanks, Balasubramanian S

Hello, I do not quite understand what is trying to be achieved in the sample code. As far as I understand, you want to draw a red rectangle in a view that will be correctly drawn and will be kept after the button has been clicked. The implementation is something I quite do not understand. In order to achieve what you are attempting, I would modify the ImageEx as follows:

[Register("ImageViewEx")]
public class ImageViewEx : UIView
{
    internal nint PageNumber = 0;

    public ImageViewEx(IntPtr p) : base(p)
    {
        //Your initialization code.
        Initialize();
    }

    public ImageViewEx()
    {
        Initialize();
    }

    public ImageViewEx(CGRect bounds) : base(bounds)
    {
    }

    public override void Draw(CGRect rect)
    {
        using (var context = UIGraphics.GetCurrentContext())
        {
            var myRectangleButtonPath = new CGPath();
            myRectangleButtonPath.AddRect(new RectangleF(new PointF(100, 10), new SizeF(200, 400)));
            context.SetFillColor((UIColor.Red).CGColor);
            context.AddPath(myRectangleButtonPath);
            context.DrawPath(CGPathDrawingMode.FillStroke);
        }
    }
}

Of course, this is a very dull example, and I'm assuming you are attempting to use a CATiledLayer to do more interesting things (maybe show a huge large image). I recommend you take a look at the implementation in the following sample => The sample interests you in the following path => This path shows you a UIView subclass that uses a CATiledLayer which I think is what you are trying to attempt in the sample you provided.

Hi Manuel,
Thank you for the update. Like you said, We draw the images in the CATiledLayer. In this case, the drawn images are cleared once the share activity controller disappeared. In order to reproduce the issue, we have drawn the rectangle to the layer of the UIImageView and we could see a similar behavior (Rectangle contents are cleared when the share activity controller once disappeared from the UIViewController) in the attached sample too. We could not reproduce this issue in iOS 10 or below. Can you please look at the sample and let me know why it is happening in iOS 11 alone?
Thanks, Bala

Hi Manuel,
We cannot reproduce this issue, If we draw the rectangle straight away to the graphics of the UIView. We could reproduce the issue only if we draw the rectangle on the Layer of the UIView.
Thanks, Bala

Created attachment 26003 [details]
Swift sample following the C# sample implementation.

Hello, I did take a look at your sample project, and as I mentioned, the implementation you are using is not quite right and the behaviour you are experiencing is the expected one with the code you provided. Please, take a look at the sample I pointed that shows how to correctly use a CATiledLayer for the drawing of large images. In order to show you that the issue is not related with Xamarin, please take a look at the sample I have written in Swift that follows the design you did.
As you can see, when you run it on iOS 11 the behaviour is the same as the one shown in your sample; if you run the sample in iOS 10 you will have the old behaviour.

PS: You will notice that the Swift sample does not use an event handler but a reference to the view; that is a small detail that had to be changed due to Swift.
Doctests aren't confined to simple text files. You can put doctests into Python's docstrings. Why would you want to do that? There are a couple of reasons. First of all, docstrings are an important part of the usability of Python code (but only if they tell the truth). If the behavior of a function, method, or module changes and the docstring doesn't get updated, then the docstring becomes misinformation, and a hindrance rather than a help. If the docstring contains a couple of doctest examples, then the out-of-date docstrings can be located automatically. Another reason for placing doctest examples into docstrings is simply that it can be very convenient. This practice keeps the tests, documentation and code all in the same place, where it can all be located easily. If the docstring becomes home to too many tests, this can destroy its utility as documentation. This should be avoided; if you find yourself with so many tests in the docstrings that they aren't useful as a quick reference, move most of them to a separate file.

Time for action – embedding a doctest in a docstring

We'll embed a test right inside the Python source file that it tests, by placing it inside a docstring.

Create a file called test.py with the following contents:

def testable(x):
    r"""
    The `testable` function returns the square root of its
    parameter, or 3, whichever is larger.

    >>> testable(7)
    3.0
    >>> testable(16)
    4.0
    >>> testable(9)
    3.0
    >>> testable(10) == 10 ** 0.5
    True
    """
    if x < 9:
        return 3.0
    return x ** 0.5

At the command prompt, change to the directory where you saved test.py and then run the tests by typing:

$ python -m doctest test.py

If everything worked, you shouldn't see anything at all. If you want some confirmation that doctest is doing something, turn on verbose reporting by changing the command to:

python -m doctest -v test.py

As mentioned before, if you have an older version of Python, this isn't going to work for you. Instead, you need to type

python -c "__import__('doctest').testmod(__import__('test'))"

For older versions of Python, instead use

python -c "__import__('doctest').testmod(__import__('test'), verbose=True)"

What just happened

You put the doctest right inside the docstring of the function it was testing. This is a good place for tests that also show a user how to do something. It's not a good place for detailed, low-level tests (the above example, which was quite detailed for illustrative purposes, is skirting the edge of being too detailed), because docstrings need to serve as API documentation. You can see the reason for this just by looking back at the example, where the doctests take up most of the room in the docstring, without telling the readers any more than they would have learned from a single test. Any test that will serve as good API documentation is a good candidate for including in the docstrings.

Notice the use of a raw string for the docstring (denoted by the r character before the first triple-quote). Using raw strings for your docstrings is a good habit to get into, because you usually don't want escape sequences—e.g. \n for newline—to be interpreted by the Python interpreter. You want them to be treated as text, so that they are correctly passed on to doctest.

Doctest directives

Embedded doctests can accept exactly the same directives as doctests in text files can, using exactly the same syntax. Because of this, all of the doctest directives that we discussed before can also be used to affect the way embedded doctests are evaluated.
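For instance, a directive goes on the example line inside the docstring, just as it would in a text file. Here is a small sketch of that (the find_primes function is invented for this illustration):

def find_primes(limit):
    r"""
    Return a list of the prime numbers below `limit`.

    The ELLIPSIS directive lets this example elide the middle of a
    long result, exactly as it would in a doctest text file:

    >>> find_primes(50)  # doctest: +ELLIPSIS
    [2, 3, 5, ..., 47]
    """
    primes = []
    for candidate in range(2, limit):
        # trial division by the primes found so far
        if all(candidate % p for p in primes):
            primes.append(candidate)
    return primes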
Execution scope

Doctests embedded in docstrings have a somewhat different execution scope than doctests in text files do. Instead of having a single scope for all of the tests in the file, doctest creates a single scope for each docstring. All of the tests that share a docstring also share an execution scope, but they're isolated from tests in other docstrings.

The separation of each docstring into its own execution scope often means that we don't need to put much thought into isolating doctests when they're embedded in docstrings. That is fortunate, since docstrings are primarily intended for documentation, and the tricks needed to isolate the tests might obscure their meaning.

Putting it in practice: an AVL tree

We'll walk step by step through the process of using doctest to create a testable specification for a data structure called an AVL tree.

An AVL tree is a way to organize key-value pairs, so that they can be quickly located by key. In other words, it's a lot like Python's built-in dictionary type. The name AVL references the initials of the people who invented this data structure.

As its name suggests, an AVL tree organizes the keys that are stored in it into a tree structure, with each key having up to two child keys: one child key that is less than the parent key by comparison, and one that is more. In the following picture, the key Elephant has two child keys, Goose has one, and Aardvark and Frog both have none.

The AVL tree is special, because it keeps one side of the tree from getting much taller than the other, which means that users can expect it to perform reliably and efficiently no matter what. In the previous image, an AVL tree would reorganize to stay balanced if Frog gained a child.

We'll write tests for an AVL tree implementation here, rather than writing the implementation itself. Therefore, we'll gloss over the details of how an AVL tree works, in favor of looking at what it should do when it works right. If you want to know more about AVL trees, you will find many good references on the Internet. Wikipedia's entry on the subject is a good place to start.

We'll start with a plain language specification, and then interject tests between the paragraphs. You don't have to actually type all of this into a text file; it is here for you to read and to think about.

English specification

The first step is to describe what the desired result should be in normal language. This might be something that you do for yourself, or something that somebody else does for you. If you're working for somebody, hopefully you and your employer can sit down together and work this part out.

In this case, there's not much to work out, because AVL trees have been fully described for decades. Even so, the description here isn't quite like one you'd find anywhere else. This capacity for ambiguity is exactly the reason why a plain language specification isn't good enough. We need an unambiguous specification, and that's exactly what the tests in a doctest file can give us.

The following text goes in a file called AVL.txt (which you can find in its final form in the accompanying code archive; at this stage of the process, the file contains only the normal language specification):

An AVL Tree consists of a collection of nodes organized in a binary tree structure. Each node has left and right children, each of which may be either None or another tree node. Each node has a key, which must be comparable via the less-than operator. Each node has a value.
Each node also has a height number, measuring how far the node is from being a leaf of the tree -- a node with height 0 is a leaf.

The binary tree structure is maintained in ordered form, meaning that of a node's two children, the left child has a key that compares less than the node's key, and the right child has a key that compares greater than the node's key.

The binary tree structure is maintained in a balanced form, meaning that for any given node, the heights of its children are either the same or only differ by 1.

The node constructor takes either a pair of parameters representing a key and a value, or a dict object representing the key-value pairs with which to initialize a new tree.

The following methods target the node on which they are called, and can be considered part of the internal mechanism of the tree:

Each node has a recalculate_height method, which correctly sets the height number.

Each node has a make_deletable method, which exchanges the positions of the node and one of its leaf descendants, such that the tree ordering of the nodes remains correct.

Each node has rotate_clockwise and rotate_counterclockwise methods. Rotate_clockwise takes the node's right child and places it where the node was, making the node into the left child of its own former child. Other nodes in the vicinity are moved so as to maintain the tree ordering. The opposite operation is performed by rotate_counterclockwise.

Each node has a locate method, taking a key as a parameter, which searches the node and its descendants for a node with the specified key, and either returns that node or raises a KeyError.

The following methods target the whole tree rooted at the current node. The intent is that they will be called on the root node:

Each node has a get method taking a key as a parameter, which locates the value associated with the specified key and returns it, or raises KeyError if the key is not associated with any value in the tree.

Each node has a set method taking a key and a value as parameters, and associating the key and value within the tree.

Each node has a remove method taking a key as a parameter, and removing the key and its associated value from the tree. It raises KeyError if no value was associated with that key.

Node data

The first three paragraphs of the specification describe the member variables of an AVL tree node, and tell us what the valid values for the variables are. They also tell us how tree height should be measured and define what a balanced tree means. It's our job now to take up those ideas and encode them into tests that the computer can eventually use to check our code.

We could check these specifications by creating a node and then testing the values, but that would really just be a test of the constructor. It's important to test the constructor, but what we really want to do is to incorporate checks that the node variables are left in a valid state into our tests of each member function.

To that end, we'll define a function that our tests can call to check that the state of a node is valid. We'll define that function just after the third paragraph:
>>> from avl_tree import AVL

>>> def valid_state(node):
...     if node is None:
...         return
...     if node.left is not None:
...         assert isinstance(node.left, AVL)
...         assert node.left.key < node.key
...         left_height = node.left.height + 1
...     else:
...         left_height = 0
...     if node.right is not None:
...         assert isinstance(node.right, AVL)
...         assert node.right.key > node.key
...         right_height = node.right.height + 1
...     else:
...         right_height = 0
...     assert abs(left_height - right_height) < 2
...     node.key < node.key    # the key must support the less-than operator
...     node.value             # the value must exist

>>> def valid_tree(node):
...     if node is None:
...         return
...     valid_state(node)
...     valid_tree(node.left)
...     valid_tree(node.right)

Notice that this test is written as if the AVL tree implementation already existed. It tries to import an avl_tree module containing an AVL class, and it tries to use the AVL class in specific ways. Of course, at the moment there is no avl_tree module, so the test will fail. That's as it should be. All that the failure means is that, when the time comes to implement the tree, we should do so in a module called avl_tree, with contents that function as our test assumes. Part of the benefit of testing like this is being able to test-drive your code before you even write it.

Notice also that we didn't actually call those functions yet. They aren't tests, per se, but tools that we'll use to simplify writing tests. We define them here, rather than in the Python module that we're going to test, because they aren't conceptually part of the tested code, and because anyone who reads the tests will need to be able to see what the helper functions do.

Constructor

The fourth paragraph describes the constructor for an AVL node:

The node constructor takes either a pair of parameters representing a key and a value, or a dict object representing the key-value pairs with which to initialize a new tree.

The constructor has two possible modes of operation:

- it can either create a single initialized node
- or it can create and initialize a whole tree of nodes

The test for the single node mode is easy:

>>> valid_state(AVL(2, 'Testing is fun'))

The other mode of the constructor is a problem, because it is almost certain that it will be implemented by creating an initial tree node and then calling its set method to add the rest of the nodes. Why is that a problem? Because we don't want to test the set method here: this test should be focused entirely on whether the constructor works correctly when everything it depends on works. In other words, the tests should be able to assume that everything outside of the specific chunk of code being tested works correctly. However, that's not always a valid assumption. So, how can we write tests for things that call on code outside of what's being tested? There is a solution for this problem. For now, we'll just leave the second mode of operation of the constructor untested.

Recalculate height

The recalculate_height method is described in the fifth paragraph. To test it, we'll need a tree for it to operate on, and we don't want to use the second mode of the constructor to create it. After all, that mode isn't tested at all yet, and even if it were, we want this test to be independent of it. We would prefer to make the test entirely independent of the constructor, but in this case we need to make a small exception to the rule (since it's difficult to create an object without calling its constructor in some way).

What we'll do is define a function that builds a specific tree and returns it. This function will be useful in several of our later tests as well. Using this function, testing recalculate_height will be easy.

>>> def make_test_tree():
...     root = AVL(7, 'seven')
...     root.height = 2
...     root.left = AVL(3, 'three')
...     root.left.height = 1
...     root.left.right = AVL(4, 'four')
...     root.right = AVL(10, 'ten')
...     return root

>>> tree = make_test_tree()
>>> tree.height = 0
>>> tree.recalculate_height()
>>> tree.height
2

The make_test_tree function builds a tree by manually constructing each part of it and hooking the parts together into a known structure: the key 7 at the root, 3 and 10 as its children, and 4 as the right child of 3.

Make deletable

You can't delete a node that has children, because that would leave the node's children disconnected from the rest of the tree. If we delete the Elephant node from the bottom of the tree, what do we do about Aardvark, Goose, and Frog? If we delete Goose, how do we find Frog afterwards?

The way around that is to have the node swap places with its largest leaf descendant on the left side (or its smallest leaf descendant on the right side, but we'll not do it that way). We'll test this by using the same make_test_tree function that we defined before to create a new tree to work on, and then checking that make_deletable swaps correctly:

Each node has a make_deletable method, which exchanges the positions of the node and one of its leaf descendants, such that the tree ordering of the nodes remains correct.

>>> tree = make_test_tree()
>>> target = tree.make_deletable()
>>> (tree.value, tree.height)
('four', 2)
>>> (target.value, target.height)
('seven', 0)

Something to notice here is that the make_deletable function isn't supposed to delete the node that it's called on. It's supposed to move that node into a position where it could be safely deleted. It must do this reorganization of the tree without violating any of the constraints that define an AVL tree structure.

Rotation

The first part of the test code for rotation just creates a tree and verifies that it looks like we expect it to:

>>> tree = make_test_tree()
>>> tree.value
'seven'
>>> tree.left.value
'three'

Once we have a tree to work with, we try a rotation operation and check that the result still looks like it should:

>>> tree.rotate_counterclockwise()
>>> tree.value
'three'
>>> tree.left is None
True
>>> tree.right.value
'seven'
>>> tree.right.left.value
'four'
>>> tree.right.right.value
'ten'

Finally, we rotate back in the other direction, and check that the final result is the same as the original tree, as we expect it to be:

>>> tree.rotate_clockwise()
>>> tree.value
'seven'
>>> tree.left.value
'three'
>>> tree.left.right.value
'four'
>>> tree.right.value
'ten'
>>> tree.right.left is None
True
>>> tree.left.left is None
True

Locating a node

The locate method is expected to return a node, or raise a KeyError exception, depending on whether the key exists in the tree or not. We'll use our specially built tree again, so that we know exactly what the tree's structure looks like.

>>> tree = make_test_tree()
>>> tree.locate(4).value
'four'
>>> tree.locate(17) # doctest: +ELLIPSIS
Traceback (most recent call last):
KeyError: ...

The locate method is intended to facilitate insertion, deletion, and lookup of values based on their keys, but it's not a high-level interface. It returns a node object, because it's easy to implement the higher-level operations if you have a function that finds the right node for you.

Testing the rest of the specification

Like the second mode of the constructor, testing the rest of the specification involves testing code that depends on things outside of itself.

Summary

Here, we took a real-world specification for the AVL tree and examined how to formalize it as a set of doctests, so that we could use it to automatically check the correctness of an implementation.
Specifically, we covered how to write doctests in Python docstrings, and what it feels like to use doctest to turn a specification into tests.
User:HagermanBot

Tasks

- Place the unsigned template on a talk page and requested pages when a user adds a comment and forgets to sign.
- Place the tilde template on the user's talk page when the user leaves two unsigned comments in a rolling 24-hour period.

General Information

Sandbox

If you'd like to test the behavior of the bot, you may use the HagermanBot sandbox.

Problems

If the bot leaves an unsigned template on an edit you made when it shouldn't have done so, please notify Hagerman, making sure to include the page. Feel free to remove the unsigned template; the bot won't put it back in the same spot twice.

Turning it On

The bot is enabled by default for all pages under the talk namespaces. If you want the bot to monitor a talk page, you don't need to do anything. However, if you want it to monitor a non-talk page, place the page in the appropriate category and the bot will begin marking unsigned comments. It may take up to 5 minutes for the bot to begin signing comments after the category is applied.

- [[Category:Non-talk pages automatically signed by HagermanBot]] This will mark any unsigned comments on the page the category was applied to.
- [[Category:Non-talk pages with subpages automatically signed by HagermanBot]] This will mark any unsigned comments on the subpages of the page the category was applied to. It will not mark comments on the parent unless the page includes both categories.

Turning it Off

If you want to turn it off, this bot supports three functions for disabling the engine:

- One-Time: If you think an edit you are making to a talk page might be interpreted as a comment when it shouldn't be, putting !NOSIGN! somewhere in your edit summary will cause the bot to ignore your edit.
- Article Permanent: If you do not wish to have the bot monitor a specific talk page, putting <!--Disable HagermanBot--> somewhere on the page will cause the bot to stop watching the page until the flag is removed. Note that it must be placed directly on your talk page and not on a template that is transcluded.
- User Permanent: If you do not wish to have the bot mark unsigned comments left by you, you may follow the instructions at opting out.

Technical Information

Architecture

- Interface: Windows Service
- Programming Language: Visual C# .NET
- Libraries Used: WikiFunctions.dll from the AWB project.

Conditions

In order for the bot to classify an edit as a new unsigned comment, the following conditions must be met:

- The edit must fall under the Talk or User Talk namespace, or an article with a special category.
- The edit must only contain the addition of new lines, and those lines must all be adjacent.
- The edit must not already contain a signature in the added lines. A signature is detected by either the presence of a link to the User namespace, a link to the User talk namespace, or the string "(UTC)".
- The edit must either create a new heading or exist as an indent under an existing heading.
- The edit must not contain a template.

Monitoring

All recent changes to watched pages are immediately placed in a queue for the signing engine. When the page reaches the end of the queue, the signing engine retrieves the page diff from Wikipedia and analyzes the changes to determine whether it needs to mark it as an unsigned comment. During most times of the day, there are only one or two pages in front of it, and an unsigned template is added a few seconds after the page is initially saved.
However, during extremely busy time periods, the queue can have several hundred pages in front of it, and as a result it might take up to 10 minutes for a comment to get signed. The queue operates on the FIFO principle; no prioritization is applied.

Clones

The following Wikimedia projects run a clone of HagermanBot:
Michael Niedermayer <michaelni at gmx.at> writes:

> Are you sure its not worthwhile to fix the linker? This dependancy & conflict
> dance has to be done each time a project wants to add versioning.
> The linker should do the following (and it does not)
> 1. Prefer a symbol with matching version over an unversioned symbol.
> Not just pick the first that does not have a mismatching version.
> 2. do a breadth first search from the object where a symbol is referenced
> Not a breadth first search from the application.
> 3. Not fail assertions.

As indicated above, I've now done more research on your idea. I've talked to a colleague at my department, who has programmed a linker for embedded systems, and fetched Levine's excellent book "Linkers and Loaders" from the library.

Let's recapitulate roughly what the runtime linker does at load time: conceptually, the runtime linker builds, during the bootstrap phase of a program, a global symbol table containing all symbols of all DSOs the application is linked to. This symbol table is only conceptually global. In practice, there are actually two (one for code, one for data), and there are actually tables for each and every library. At bootstrap time, these tables are combined in a linked list. (There are a few more optimizations going on here, like lazy lookups for functions, but that doesn't matter for this discussion; for details please see Levine's book, chapter 10.3, "Loading a Dynamically Linked Program" and chapter 10.4, "Lazy Procedure Linking with the PLT".)

What you now propose is to introduce "hierarchic symbol tables", as my colleague called it. I would rather call that "introducing namespaces" that are created at load time of a library. This way the runtime linker would have to obey the library in which the search for a symbol was started. I think this is not a good idea for a few reasons:

- you could of course change the list in which the symbol tables are linked into a graph to implement the "hierarchic symbol table" concept, but this increases the complexity of the runtime linker considerably!
- this would probably only work for functions, not for data (I would have to think more about this)
- it would totally break the existing behavior of LD_PRELOAD and dlopen() tricks that overshadow existing symbols. A lot of really useful applications rely on this.

So all in all, I'm now pretty sure that changing the existing linker is not an option for solving the problem at hand.

In parallel, I asked the Debian glibc maintainers for their opinion. I was pointed to the -Bsymbolic ld flag, which alters the symbol lookup behavior of the runtime linker as well. At first, I was sceptical about that idea, and frankly, I still have to actually experiment with this approach and think more about whether it would really help to fix the problem. On second thought, it *might* indeed be an alternative to symbol versioning.

However, it would require creating "static shared libraries", e.g. linking libavutil statically into the dynamically linked libavcodec and libavformat, and libavcodec statically into the dynamically linked libavformat. It would effectively prevent LD_PRELOADing existing symbols, e.g. in libavutil. I guess (and hope) that nobody does that!

Implementing this might be more challenging for me though, because it would require more invasive changes and increased complexity in the Makefiles and the configure script. In detail, the Makefiles would need to know exactly which libraries to link statically and which dynamically, and in what way.
So, how do the makefile maintainers feel about this approach? Still, this approach wouldn't get us the other benefits of symbol versions. I guess both approaches can be combined, though. -- Gruesse/greetings, Reinhard Tartler, KeyID 945348A4
I am trying to read a date column from a csv file. This column contains dates in just one format. Please see the data below:

The problem arises when I try to read it using a date parser:

dateparse = lambda x: datetime.strptime(x, '%m/%d/%Y').date()
df = pd.read_csv('products.csv', parse_dates=['DateOfRun'], date_parser=dateparse)

The above logic works fine in most cases, but sometimes I randomly get an error that the format doesn't match, for example:

ValueError: time data '2020-02-23' does not match format '%m/%d/%Y'

Does anyone know how this is possible? Because that yyyy-mm-dd format is not in my data. Any tips will be useful. Thanks.

The problem happens when you open the csv file in Excel. Excel by default (and based on your OS settings) automatically changes the date format. For instance, in the USA the default format is MM/DD/YYYY, so if you have a date in a csv file such as YYYY-MM-DD it will automatically change it to MM/DD/YYYY. The solution is to NOT open the csv file in Excel before manipulating it in Python. If you must open it to inspect it, either look at it in Python or in Notepad or some other text editor.

I always assume that dates are going to be screwed up, because someone might have opened the file in Excel, so I test for the proper format and then change it if I get an AssertionError. As an example, if you want to normalize dates that may have been rewritten to YYYY-MM-DD, try this:

from datetime import datetime

def change_dates(date_string):
    try:
        # strptime raises ValueError if the string is not MM/DD/YYYY
        assert datetime.strptime(date_string, '%m/%d/%Y'), 'format error'
        return date_string
    except (AssertionError, ValueError):
        # fall back to the ISO format and convert it back to MM/DD/YYYY
        dt = datetime.strptime(date_string, '%Y-%m-%d')
        return dt.strftime('%m/%d/%Y')
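If you would rather make the read itself tolerant instead of repairing strings row by row, a sketch along these lines also works (only the DateOfRun column name comes from the question; the helper name and file path are our own assumptions). It lets pandas try the expected format first, and retries only the failed rows with the ISO format:

import pandas as pd

def parse_mixed_dates(series):
    """Parse a column that may contain both MM/DD/YYYY and YYYY-MM-DD."""
    # First pass: the expected format; unparseable rows become NaT
    parsed = pd.to_datetime(series, format='%m/%d/%Y', errors='coerce')
    # Second pass: retry only the failed rows with the ISO format
    fallback = pd.to_datetime(series[parsed.isna()], format='%Y-%m-%d', errors='coerce')
    parsed.loc[parsed.isna()] = fallback
    return parsed

df = pd.read_csv('products.csv')
df['DateOfRun'] = parse_mixed_dates(df['DateOfRun'])

This also keeps the column as proper datetimes rather than re-serialized strings.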
/*
 * DocumentReader.java
 */
package org.simpleframework.xml.stream;

import static org.w3c.dom.Node.ELEMENT_NODE;

import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NamedNodeMap;
import org.w3c.dom.Node;

/**
 * The <code>DocumentReader</code> object provides an implementation
 * for reading XML events using DOM. This reader flattens a document
 * in to a series of nodes, and provides these nodes as events as
 * they are encountered. Essentially what this does is adapt the
 * document approach to navigating the XML and provides a streaming
 * approach. Having an implementation based on DOM ensures that the
 * library can be used on a wider variety of platforms.
 *
 * @author Niall Gallagher
 * @see org.simpleframework.xml.stream.DocumentProvider
 */
class DocumentReader implements EventReader {

   /**
    * Any attribute beginning with this string has been reserved.
    */
   private static final String RESERVED = "xml";

   /**
    * This is used to extract the nodes from the provided document.
    */
   private NodeExtractor queue;

   /**
    * This is used to keep track of which elements are in context.
    */
   private NodeStack stack;

   /**
    * This is used to keep track of any events that were peeked.
    */
   private EventNode peek;

   /**
    * Constructor for the <code>DocumentReader</code> object. This
    * makes use of a DOM document to extract events and provide them
    * to the core framework. All nodes will be extracted from the
    * document and queued for extraction as they are requested. This
    * will ignore any comment nodes as they should not be considered.
    *
    * @param document this is the document that is to be read
    */
   public DocumentReader(Document document) {
      this.queue = new NodeExtractor(document);
      this.stack = new NodeStack();
      this.stack.push(document);
   }

   /**
    * This reads the next node from the extracted queue, returning
    * an end event when the queue of nodes has been exhausted.
    */
   private EventNode read() throws Exception {
      Node node = queue.peek();

      if(node == null) {
         return end();
      }
      return read(node);
   }

   /**
    * This reads the given node, popping the context stack when the
    * node's parent is no longer the element currently in context.
    *
    * @param node this is the XML node that has been read
    */
   private EventNode read(Node node) throws Exception {
      Node parent = node.getParentNode();
      Node top = stack.top();

      if(parent != top) {
         if(top != null) {
            stack.pop();
         }
         return end();
      }
      if(node != null) {
         queue.poll();
      }
      return convert(node);
   }

   /**
    * This is used to convert the provided node in to an event. The
    * conversion process ensures the node can be digested by the core
    * reader and used to provide an <code>InputNode</code> that can
    * be used to represent the XML elements or attributes. If the
    * provided node is not an element then it is considered text.
    *
    * @param node the node that is to be converted to an event
    *
    * @return this returns an event created from the given node
    */
   private EventNode convert(Node node) throws Exception {
      short type = node.getNodeType();

      if(type == ELEMENT_NODE) {
         if(node != null) {
            stack.push(node);
         }
         return start(node);
      }
      return text(node);
   }

   /**
    * This is used to convert the provided node to a start event,
    * which can be used to represent an XML element within the
    * source document, populated with its non-reserved attributes.
    *
    * @param node the node that is to be converted to a start event
    *
    * @return this returns a start event created from the given node
    */
   private Start start(Node node) {
      Start event = new Start(node);
      NamedNodeMap list = event.getAttributes();
      int length = list.getLength();

      for(int i = 0; i < length; i++) {
         Node entry = list.item(i);
         Attribute value = attribute(entry);

         if(!value.isReserved()) {
            event.add(value);
         }
      }
      return event;
   }

   /**
    * This is used to convert the provided node to an attribute that
    * can be used to represent an XML attribute within the document.
    *
    * @param node the node that is to be converted to an attribute
    *
    * @return this returns an attribute created from the given node
    */
   private Entry attribute(Node node) {
      return new Entry(node);
   }

   /**
    * This is used to convert the provided node to a text event.
    *
    * @param node the node that is to be converted to a text event
    *
    * @return this returns the text event created from the given node
    */
   private Text text(Node node) {
      return new Text(node);
   }

   /**
    * This is used to create an event that signifies the end of the
    * element that is currently in context.
    */
   private End end() {
      return new End();
   }

   /**
    * The <code>Entry</code> object is used to create a node that
    * is to be represented as an attribute.
    */
   private static class Entry extends EventAttribute {

      /**
       * This is the node that is to be represented as an attribute.
       */
      private final Node node;

      /**
       * Constructor for the <code>Entry</code> object. This creates
       * an attribute object that is used to extract the name, value,
       * namespace prefix, and namespace reference from the provided
       * node. This is used to populate any start events created.
       *
       * @param node this is the node that represents the attribute
       */
      public Entry(Node node) {
         this.node = node;
      }

      /**
       * This returns the name of the attribute without any prefix.
       */
      public String getName() {
         return node.getLocalName();
      }

      /**
       * This returns the value of the event. This will be the value
       * that the attribute contains. If the attribute does not have
       * a value then this returns null or an empty string.
       *
       * @return this returns the value represented by this object
       */
      public String getValue() {
         return node.getNodeValue();
      }

      /**
       * This returns the namespace prefix used to qualify the node.
       */
      public String getPrefix() {
         return node.getPrefix();
      }

      /**
       * This returns the namespace reference the node belongs to.
       */
      public String getReference() {
         return node.getNamespaceURI();
      }

      /**
       * This is true for any attribute using the reserved "xml"
       * prefix or name, which should not be reported as an event.
       */
      public boolean isReserved() {
         String prefix = getPrefix();
         String name = getName();

         if(prefix != null) {
            return prefix.startsWith(RESERVED);
         }
         return name.startsWith(RESERVED);
      }

      /**
       * This is used to return the node for the attribute. Because
       * this represents a DOM attribute the DOM node is returned.
       * Returning the node helps with certain debugging issues.
       *
       * @return this will return the source object for this
       */
      public Object getSource() {
         return node;
      }
   }

   /**
    * The <code>Start</code> object is used to represent the start
    * of an XML element extracted from the source document.
    */
   private static class Start extends EventElement {

      /**
       * This is the element that is represented by this start event.
       */
      private final Element element;

      /**
       * Constructor for the <code>Start</code> object. This will
       * wrap the provided node and expose the required details such
       * as the name, namespace prefix and namespace reference. The
       * provided element node can be acquired for debugging purposes.
       *
       * @param element this is the element being wrapped by this
       */
      public Start(Node element) {
         this.element = (Element)element;
      }

      /**
       * This provides the name of the event. This will be the name
       * of an XML element the event represents. If there is a prefix
       * associated with the element, this extracts that prefix.
       *
       * @return this returns the name without the namespace prefix
       */
      public String getName() {
         return element.getLocalName();
      }

      /**
       * This provides the namespace prefix used to qualify this
       * node. A prefix is used to qualify an XML element or
       * attribute within a namespace. So, if this represents a text
       * event then a namespace prefix is not required.
       *
       * @return this returns the namespace prefix for this event
       */
      public String getPrefix() {
         return element.getPrefix();
      }

      /**
       * This provides the namespace reference that this node is in.
       * A namespace is normally associated with an XML element or
       * attribute, so text events and element close events are not
       * required to contain any namespace references.
       *
       * @return this will provide the associated namespace reference
       */
      public String getReference() {
         return element.getNamespaceURI();
      }

      /**
       * This is used to acquire the attributes associated with the
       * element. Providing the attributes in this format allows
       * the reader to build a list of attributes for the event.
       *
       * @return this returns the attributes associated with this
       */
      public NamedNodeMap getAttributes() {
         return element.getAttributes();
      }

      /**
       * This is used to return the node for the event. Because this
       * represents a DOM element node the DOM node will be returned.
       *
       * @return this will return the source object for this event
       */
      public Object getSource() {
         return element;
      }
   }

   /**
    * The <code>Text</code> object is used to represent a node that
    * is used to represent a text value, either as character data or
    * as a CDATA section.
    */
   private static class Text extends EventToken {

      /**
       * This is the node that is used to represent the text value.
       */
      private final Node node;

      /**
       * Constructor for the <code>Text</code> object. This creates
       * an event that provides text to the core reader. Text can be
       * in the form of a CDATA section or a normal text entry.
       *
       * @param node this is the node that represents the text value
       */
      public Text(Node node) {
         this.node = node;
      }

      /**
       * This returns the text value that this event represents.
       */
      public String getValue() {
         return node.getNodeValue();
      }

      /**
       * This is used to return the node for the event. Because this
       * represents a DOM text value the DOM node will be returned.
       *
       * @return this will return the source object for this event
       */
      public Object getSource() {
         return node;
      }
   }

   /**
    * The <code>End</code> object is used to represent the end of
    * the element that is currently in context.
    */
   private static class End extends EventToken {

      /**
       * This is true as this event token marks an element close.
       */
      public boolean isEnd() {
         return true;
      }
   }
}
Hola! Lazy dev here.

React testing is hard. Especially react testing outside the browser environment, like with Jest and JSDOM. Let's try to reverse engineer react's act(), understand why we need it, and think about UI testing overall.

History

Today I met this tweet by @floydophone and was inspired to write about how your tests work inside your terminal when you are testing in node.js. Let's start from the question – why do we need this "magic" act() function?

What is act()

Here is a quote from react.js docs:

To prepare a component for assertions, wrap the code rendering it and performing updates inside an act() call. This makes your test run closer to how React works in the browser.

So the problem that act() is solving: it delays your tests until all of your updates have been applied, before proceeding to the next steps. When you are doing any kind of user interaction, like this

act(() => {
  button.dispatchEvent(new MouseEvent('click', { bubbles: true }));
});

React is not updating the UI immediately, thanks to the Fiber architecture. It will update it asynchronously some time after the click, so we need to wait for the UI to be updated.

And here is a problem

The main problem here – act() is actually a crutch, and you will probably agree that it is not a perfect solution. The tests that you (probably) are writing are synchronous. It means that the commands and assertions the tests execute run one by one, without any waiting. UI works differently – UI is async by nature.

Reverse engineer it

Let's look more closely at the implementation of this function, right from the react sources. We only need 2 files: ReactTestUtilsPublicAct and ReactFiberWorkLoop. I will skip the uninteresting parts, but the code is not so big, so you can read it yourself 🙃

Let's start from the main point of the act function:

let result;
try {
  result = batchedUpdates(callback);
} catch (error) {
  // on sync errors, we still want to 'cleanup' and decrement actingUpdatesScopeDepth
  onDone();
  throw error;
}

And this magic batchedUpdates function has a pretty simple yet powerful implementation.

export function batchedUpdates<A, R>(fn: A => R, a: A): R {
  const prevExecutionContext = executionContext;
  executionContext |= BatchedContext;
  try {
    return fn(a);
  } finally {
    executionContext = prevExecutionContext;
    if (executionContext === NoContext) {
      // Flush the immediate callbacks that were scheduled during this batch
      resetRenderTimer();
      flushSyncCallbackQueue();
    }
  }
}

This particular function is called inside react when, during the render phase, react knows exactly that all updates are done and the dom can be rendered. It starts the reconciliation and synchronous dom updating after that.

After batchedUpdates our code has 2 branches, depending on how you used it. If you passed a synchronous function inside the act, like

act(() => {
  ReactDOM.render(<Counter />, container);
});

it will call the function flushWork, which is nothing more than a sync while loop:

const flushWork =
  Scheduler.unstable_flushAllWithoutAsserting ||
  function() {
    let didFlushWork = false;
    while (flushPassiveEffects()) {
      didFlushWork = true;
    }
    return didFlushWork;
  };

It looks like for concurrent mode a new specific hook was implemented to stop all the effects together (unstable_flushAllWithoutAsserting). But for now, it is just a sync while loop that stops the synchronous execution until all the DOM updating work is done. Pretty clumsy solution, don't you think?

Async execution

More interesting things happen when you pass an async function as a callback.
Let's go to another code branch:

if (
  result !== null &&
  typeof result === 'object' &&
  typeof result.then === 'function'
) {
  // ... not interesting
  result.then(
    () => {
      if (
        actingUpdatesScopeDepth > 1 ||
        (isSchedulerMocked === true && previousIsSomeRendererActing === true)
      ) {
        onDone();
        resolve();
        return;
      }
      // we're about to exit the act() scope,
      // now's the time to flush tasks/effects
      flushWorkAndMicroTasks((err: ?Error) => {
        onDone();
        if (err) {
          reject(err);
        } else {
          resolve();
        }
      });
    },
    err => {
      onDone();
      reject(err);
    },
  );
}

Here we are waiting for our passed callback (the result returned by the batchedUpdates function), and after that we are going to a more interesting function: flushWorkAndMicroTasks. Probably the most interesting function here :)

function flushWorkAndMicroTasks(onDone: (err: ?Error) => void) {
  try {
    flushWork();
    enqueueTask(() => {
      if (flushWork()) {
        flushWorkAndMicroTasks(onDone);
      } else {
        onDone();
      }
    });
  } catch (err) {
    onDone(err);
  }
}

It does the same as the sync version (which only calls flushWork()), but it wraps the call in enqueueTask, which is a hack only to avoid setTimeout(fn, 0).

The enqueueTask function:

export default function enqueueTask(task: () => void) {
  if (enqueueTaskImpl === null) {
    try {
      // read require off the module object to get around the bundlers.
      // we don't want them to detect a require and bundle a Node polyfill.
      const requireString = ('require' + Math.random()).slice(0, 7);
      const nodeRequire = module && module[requireString];
      // assuming we're in node, let's try to get node's
      // version of setImmediate, bypassing fake timers if any.
      enqueueTaskImpl = nodeRequire.call(module, 'timers').setImmediate;
    } catch (_err) {
      // we're in a browser
      // we can't use regular timers because they may still be faked
      // so we try MessageChannel+postMessage instead
      enqueueTaskImpl = function(callback: () => void) {
        if (__DEV__) {
          if (didWarnAboutMessageChannel === false) {
            didWarnAboutMessageChannel = true;
            if (typeof MessageChannel === 'undefined') {
              console.error(
                'This browser does not have a MessageChannel implementation, ' +
                  'so enqueuing tasks via await act(async () => ...) will fail. ' +
                  'Please file an issue at ' +
                  'if you encounter this warning.',
              );
            }
          }
        }
        const channel = new MessageChannel();
        channel.port1.onmessage = callback;
        channel.port2.postMessage(undefined);
      };
    }
  }
  return enqueueTaskImpl(task);
}

The main goal of this function is only to execute a callback in the next tick of the event loop. That's probably why react is not the best in terms of bundle size :)

Why async?

It is a pretty new feature, probably needed more for concurrent mode, but it allows you to immediately run stuff like Promise.resolve, aka microtasks – for example, when mocking an API and replacing a real promise with Promise.resolve and fake data.

import * as React from "react";
import * as ReactDOM from "react-dom";
import { act } from "react-dom/test-utils";

const AsyncApp = () => {
  const [data, setData] = React.useState("idle value");

  const simulatedFetch = async () => {
    const fetchedValue = await Promise.resolve("fetched value");
    setData(fetchedValue);
  };

  React.useEffect(() => {
    simulatedFetch();
  }, []);

  return <h1>{data}</h1>;
};

let el = null;

beforeEach(() => {
  // setup a DOM element as a render target
  el = document.createElement("div");
  // container *must* be attached to document so events work correctly.
  document.body.appendChild(el);
});

it("should render with the correct text with sync act", async () => {
  act(() => {
    ReactDOM.render(<AsyncApp />, el);
  });

  expect(el.innerHTML).toContain("idle value");
});

it("should render with the correct text with async act", async () => {
  await act(async () => {
    ReactDOM.render(<AsyncApp />, el);
  });

  expect(el.innerHTML).toContain("fetched value");
});

Both tests pass 😌. Here is a live example (you can open the sandbox and run the tests inside using the "Tests" tab).

It is fun that it works, but if you change Promise.resolve to literally anything else, like this:

const fetchedValue = await new Promise((res, rej) => {
  setTimeout(() => res("fetched value"), 0);
});

// it doesn't work ¯\_(ツ)_/¯

Replace

It is pretty easy to replace any act() call by using a simple setTimeout(fn, 0):

button.dispatchEvent(new MouseEvent('click', { bubbles: true }));

await new Promise((res, rej) => {
  setTimeout(res, 0);
});

will work in most cases :)

But why

The main question – why do we need it? So much ~not good~ code that confuses everybody? The answer: our tests run inside node.js and try to be "sync", while the UI is async. And that's why you will never need any kind of act() if you are rendering React components in a real browser and using an async test runner, like Cypress for component testing.

Thank you

Thank you for reading. I hope it is now clearer why we need act() for most plain react unit testing. And no act() was harmed in the making of this article :D

Top comments (2)

I think the act is weird too, your blog helped me a lot, thx~

In Replace, the source code may look like this: codesandbox.io/s/determined-pine-x...
webassets-jinja2js 1.0.0

Integration of the pwt.jinja2js compiler with the webassets package.

Usage

Easiest way to install:

pip install pwt.jinja2js
pip install webassets-jinja2js

In your assets.py file:

from webassets_ext import JinjaToJSFilter
from webassets.filter import register_filter

register_filter(JinjaToJSFilter)

Then use filter="jinja2js" wherever you want to use it. If you want to chain it with a JS minifier, make sure that jinja2js comes first in the list of filters.

Bugs

If you have any issues, please open a ticket at the Github page.

Author: Michael Su
License: BSD
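For illustration, a minimal sketch of a bundle using this filter (the bundle name, file paths, and the choice of rjsmin as the minifier are our own assumptions, not part of this package's documentation):

from webassets import Bundle, Environment

env = Environment('static', '/static')

# jinja2js is listed before the minifier, per the ordering note above:
# templates are compiled to JavaScript first, then minified.
templates_js = Bundle(
    'templates/widgets.jinja2',
    filters='jinja2js,rjsmin',
    output='gen/templates.js',
)
env.register('templates_js', templates_js)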
Here is the project: building an extensive weather station using the Mega. I hope to use the Stalker so that I can poll it or request clock data with the Mega over the LLC connection between the two boards, and use the serial port on the Mega to send data to a logging program. I am also sending the data to an LCD, and eventually wirelessly to the PC over quite a bit of distance.

Here is the current issue: I really have no problems with the sensors, serial data, or LCD data, but I am new to most of this, and the Wire library and the RTC are giving me lots of problems. Most important is the fact that no matter what code approach I take to request date, year, and time, what I get are odd characters, zeros, or nothing. Could this be a problem with control bits needing to be masked, or maybe I am not actually getting the RTC data at all? I know the Stalker works correctly because I can set the clock and serial print the data.

So just to clarify: Seeeduino Mega (master writer) with analog and digital sensors, Seeeduino Stalker (slave sender), LLC connection between boards; the master writer requests date, year, and time from the slave sender (Stalker), and upon receipt of the data sends a time stamp and sensor readings to the serial port. Anyone know what method or code would work for this? Just not having any luck getting the RTC data to the Mega.

> Here is the project: building an extensive weather station using the mega, and hope to …

Hi,

> so just to clarify: seeeduino Mega (master writer) with analog and digital sensors, seeeduino stalker (slave sender)…

You should use the Seeeduino Mega as master reader; the example code is in the Arduino IDE -> File -> Examples -> Wire -> master_reader. Use the Seeeduino Stalker as slave sender; the example code is in the Arduino IDE -> File -> Examples -> Wire -> slave_sender. That way the Seeeduino Mega can read the data from the Seeeduino Stalker.

Regards.

Thanks for the reply SQX. I actually made an error when I said master writer; yes, I am using the Stalker as slave sender and the Seeeduino Mega as master reader. I have tried the Arduino examples you mentioned; they did work, and I was able to send "Hello" as per the examples. The problem is that I have not been able to successfully request or get the RTC data from the Stalker through the I2C/TWI connection. A second problem I am having is sending characters and numbers: I can send one or the other, but not both. I am sure it's just because I don't have very much code-writing experience; it seems difficult to me to manipulate the Wire library compared to sending through the serial port. I have also tried retrieving the RTC data by calling specific registers, but was unsuccessful. Any advice or code help would be appreciated, and thanks again for the response!

Hi,

I tried using the Seeeduino Mega as slave receiver and the Stalker as master sender. The demo code is as follows; you can refer to it.
the seeeduino mega as slave receiver:

[code]#include <Wire.h>

void setup()
{
  Wire.begin(4);                // join I2C bus with address #4
  Wire.onReceive(receiveEvent); // register event
  Serial.begin(9600);           // start serial for output
}

void loop()
{
  delay(100);
}

void receiveEvent(int howMany)
{
  while (Wire.available() > 1)  // loop through all but the last byte
  {
    byte c = Wire.receive();    // receive byte as a character
    Serial.print(c);            // print the character
  }
  byte c = Wire.receive();      // receive byte as an integer
  Serial.println(c);            // print the integer
}[/code]

the stalker as master sender:

[code]#include "FileLogger.h"
#include "DS1307.h"
#include <WProgram.h>
#include <Wire.h>

#define Timing 0
#define Accept 1
#define Record 2

byte start[7] = { 'B', 'e', 'g', 'i', 'n', 0x0D, 0x0A };
byte buffer[20];
int temp;
byte ASCII[10] = { '0', '1', '2', '3', '4', '5', '6', '7', '8', '9' };
unsigned char result;
unsigned char state;
int time = 0;
int oldtime = 0;
byte x = 0;
int i;

void setup()
{
  Serial.begin(9600); // start serial for output
  Wire.begin();       // join I2C bus (address optional for master)
  // (the DS1307 clock setup that was here in the original post was lost)
}

void loop()
{
  Wire.beginTransmission(4); // transmit to device #4
  Wire.send("data:");        // send the label
  for (i = 0; i < 15; i++)
  {
    Serial.print(buffer[i]);
    Wire.send(buffer[i]);    // send one byte
  }
  Wire.endTransmission();    // stop transmitting
  Serial.println(' ');

  switch (state)
  {
    // (the Timing/Accept/Record states were lost from the listing;
    //  only the default transition survived)
    default:
      state = Timing;
      break;
  }
}[/code]

Regards.

Hi SQX, if you're still out there: wow, thanks for the example, it looks interesting. I copied and pasted the code into a new Arduino file and imported the libraries. The Seeeduino Mega slave receiver sketch compiled just fine; the Stalker master sender, however, won't compile. I get the following errors: ds1307 has no member named "stop" or "start", and also MIN, HR, Date, MTH, and YR were not declared in this scope (void setup). Do you think this is a problem with the ds1307 library, or maybe just a syntax issue? Well, anyway, thanks for your help. I am looking forward to getting it running so I can possibly adapt it or modify it to fit my project. I'll keep working on it until I can get it to compile. Happy Holidays!

Hi,

This is a problem with the ds1307 library. You can get my DS1307 library and put it in your Arduino libraries folder. Please feel free to ask any questions.

Regards.

DS1307.zip (93.8 KB)

Hey SQX, thanks for the library download. I installed the library, and I am happy to say that it seems to be working very well; it does appear to be a much more inclusive library. I have been playing around with it a bit, and I think it will work out well for the project I am working on. I did notice that there is an issue with getting the year, or maybe it requires a different code statement than the other parameters (date, hour, min, etc…). I looked in the library source code and Year is defined (DS1307_YR), same form and syntax as the other parameters. Any ideas? Well, it's great to be moving forward again. Thanks for all your help!!!

oogs
How does IntelliSense "interfere"? It doesn't restrict what one can type; you can still type things not on the list. How about if, for instance, it filtered by default, but if you typed something in-scope but filtered, it broadened the filter? So if I typed Colors.Get the filter would broaden to show more than just the enumerated values?

Dr Pizza: Very interesting idea. Kind of the reverse of what we were considering as a model as well. Namely that we would start with the most inclusive filter, but as you typed we would trim down the list. That gets very interesting, because imagine if you had something like (making this up on the spot):

class Foo {
    int WindowAccessibility;
    int MenuAccessibility;
    // _tons_ of other stuff
}

...

Foo foo = new Foo();
foo<dot>

Now you have a list with a ton of stuff in it. Say you type "accessibility". That no longer means "find me the thing in the list that starts with accessibility"; it instead means "show me all the things related to what I've typed. Preferably prefix matches first, then substring matches, then close spelling matches, etc."

That might be very interesting, especially in a model where you're searching for something but don't quite remember the name. "Was it WindowAccessibility, or AccessibilityWindow?" etc.

I haven't run into this problem, but I've heard that people who do a lot of forms work do. That's why I'd like to hear if there are specific areas where the current completion sets get very unwieldy and difficult to deal with.

IMHO, something like "pressing the dot will display FilteredSense ™, pressing and holding the dot (even something like Shift+Ctrl+.) will display free form" would be better than writing "Get". You have to have two different types of completion, as the IDE cannot possibly understand what you want to do. If you look at e.g. the ReSharper plugin, it has both normal intellisense (ctrl+space – that would show Format and Parse) and correct intellisense (ctrl+shift+space – that shows Blue, Green, Red).

Jens: This was a model I've been thinking about. However, I'm not (totally) convinced that it's the only way to solve the problem 🙂 Rather than the two invoke methods, I was thinking about doing what you said by having a completion list with two tabs on it: "all" and "filtered". You could get between them by hitting left/right (or maybe ctrl-left/ctrl-right etc.) The benefits being that it fits both models of users _and_ it's extremely discoverable.

With the caveat up front that you should be able to configure the default behaviour, I think it would be useful if the initial popup showed the options that you'll be after in 95% of cases, but with the ability to show the lesser-used options on request (like personalised menus). So Colours. would initially show

Blue
Green
Red
(v)

but would give you the other possibilities if you clicked on the arrow at the bottom, hit Alt + '+', or typed anything that took you out of the scope of the current completion list (e.g. "F" for Format). The idea of matching on substrings is interesting, but I think it might make more sense to have it in the Object Browser – it feels to me like it's straying outside what Intellisense should be trying to do for you (though if you can think of a logical way of doing it then you might want to make it a configurable behaviour).

Mark: "it's straying outside what Intellisense should be trying to do for you"

Man. I wish we could define what it is that intellisense is trying to do for you.
If I had to try, I'd say that our mandate is that it do whatever possible to improve developer productivity. 🙂

Quote: "We often times get a suggestion that when a user types "throw new<space>" we should automatically pop up the list prefiltered to only things that are exceptions."

Eh, what's wrong with that? The counterexample you give would not involve the 'new' keyword (it would be 'throw GetMyPreformattedException()' or something like that). Your argument is correct after 'throw<space>', but not after 'throw new<space>' …

Luc: After "throw new" you are not required to write a type that extends from Exception. You could (and people do) do something like:

throw new MyExceptionAggregator().CurrentApplicationException;

where MyExceptionAggregator doesn't derive from Exception.

Ok – realized that too, about one minute after I made my message… (and comments are not editable). But does that case occur often enough to warrant not filtering the exception list? As was mentioned before, having the completion list is not forcing you to use it, is it? I have difficulty thinking of scenarios where one actually would want to use the example you give. I can see someone using a field, a singleton, or some static methods to do some more complex exception initialization, for instance for localization purposes. But creating a *new* MyExceptionAggregator to generate a new exception (or retrieve an existing one) looks a bit strange…

A slight tangent which your post reminded me of. You're probably not the person who could do anything about this, but maybe you know someone in the C# language group who would be able to. You pointed out that technically it's possible to type Colors.Parse(…) and intellisense filters this out. My first reaction was "wow, Colors.Parse actually works? I always wanted that! How dare intellisense have stopped me from realizing that?". After I started to compose a post explaining that I really wanted intellisense to allow me to see the Parse method on enum types, I realized what you really meant. Colors.Parse isn't the useful method I would want it to be, but simply a (useless) alias for Enum.Parse, which means that to get the behavior I want, I still have to type (Colors)Colors.Parse(typeof(Colors), str) (or something, I forget the exact parameter order). Why oh why doesn't the enum keyword automatically generate a static Parse method on every enum type so I could just type Colors.Parse(str)?

Oh, and back on the subject you asked about – perhaps intellisense could keep track of the other things that are valid, and if it notices you starting to type something that's on the full list but not the displayed list, show the other items (in a different color, probably). The only case that's problematic if you do that is if you have an enum containing, say, a ParseFoo value. In that case, perhaps if you actually type out MyEnumType.Parse in full, intellisense shouldn't autocomplete it to ParseFoo, even though it didn't actually display Parse in the list, because the thing you were typing was possibly in the list. I'm sure you can think of lots of variations on this behavior to experiment with, anyway. Another possibility would be to display the filtered list as soon as you hit ".", but after that show the full list (with the probably-useless ones in a different color) as long as what you're typing matches any of the otherwise-hidden entries.
Stuart: "Why oh why doesn't the enum keyword automatically generate a static Parse method on every enum type so I could just type Colors.Parse(str)?"

Excellent question. Excellent, excellent question. Wow. Hrmm. Well, I'll try to figure that out ASAP 🙂

Luc: Yup. I agree. It's a very corner case scenario, but it does occur. That's one of the fundamental issues here. Should we be "right" or "correct" 😉

Stuart: Interesting ideas, and we will definitely be playing with a lot of them in the future. I'm looking for ones that definitely help with discoverability though. i.e. showing additional members if you type some correct substring could potentially confuse the hell out of people that don't know this type 100% backwards and forwards.

I think you should display a full list, with the possibility of filtering coming later. You could use ctrl-m to filter in methods, ctrl-p for properties, ctrl-e for events, and ctrl-. to initiate free-form search (looking for the search string in the name, parameters, documentation, etc. etc.)

More thoughts: it actually occurs to me that this problem isn't specific to enums. I always wondered why any valid typename always gave me "Equals" and "ReferenceEquals" in the completion list, and now I understand why. On the one hand this allows the useful trick of checking whether you typed in a valid typename by hitting <dot> and seeing whether "Equals" and "ReferenceEquals" show up or not, but on the other hand, I've never encountered a case where you might EVER want to call Equals or ReferenceEquals through a type other than Object. (Actually, I just realized that you never even need a typename at all for these, because they're always directly in scope by inheritance.)

I'd actually like to see a warning if you call a static method explicitly through a type other than the one that declares it. This seems similar to the "object == string" case which the compiler already warns about – it's legal C# but unlikely to do what you expect.

public class Foo {
    public static void DoSomething() { ... }
}

public class Bar : Foo {
    public static void ARandomMethod() {
        DoSomething();     // No warning here, it's in scope by inheritance
        Foo.DoSomething(); // No warning here, it's actually declared in Foo
        Bar.DoSomething(); // Warning here, it's not declared in Bar
    }
}

Needless to say, if you're going to give a compiler warning on it, you shouldn't show it in intellisense. If you do decide to do this, please add *something* to show up on a valid typename with no static members, e.g. a list that just contains an unselectable "<no static members>" entry or something. That's probably less confusing than showing the useless "Equals" and "ReferenceEquals" entries anyway, and it still allows you to tell that the type is valid.

(Damn, it's hard not to rant about all my intellisense gripes when I have someone on the team writing it… I can't resist adding this one other intellisense complaint: If I have a type Foo and a variable or property Foo both in scope at the same time, the C# compiler is perfectly capable of disambiguating in almost any case, but intellisense completely gives up the ghost and won't give me any help for any variable of type Foo at all. Haven't tested this in 2005 yet, so maybe you already fixed it, but if not, that would be my #1 intellisense request. This pattern shows up pretty frequently in my code – I often have things like:
This pattern shows up pretty frequently in my code – I often have things like: private Customer customer; public Customer Customer {get {return customer;}} In a class with such a declaration, I get no intellisense on Customer, customer, or any other variables of type Customer. That sucks. End of rant. ) Stuart: The whole rule "don’t show me static members unless I’m expliciting accessing the type which they are declared on" seems like a great heuristic to add to the list of things that generate the filtered list! Stuart: Fixed for 2005. If you have problems then *please* file bugs on them at I cannot express how important this is. We do not catch all poassible variations in code that can arise and I do not want you griping for the next few years because of some glitch in intellisense that bugs you 🙁 (but also rant. Rants are good. they make us understand the pain we’re causing people and how badly those issues need to get fixed!!) While we’re on the IDE behaviour, here’s another one for ASP.NET with VB.NET on 2003 IDE(didn’t tested on 2005, though): CType(ViewState("ActiveVersion"), Long) CType(viewstate("ActiveVersion"), Long) CType(viEWStAte("ActiveVersion"), Long) All three are valid (by design of the language) but IDE doesn’t automaticly convert the last two into "ViewState". When you type "viewstate(", it pops up the usual yellow box with ViewState() As System.Web… so IntelliSense discovers it very well. And this behaviour is specific only to ViewState, AFAIK. I don’t think that it’s a big deal, but FxCop do. Sorry Gokhan: C# guy over here. Not even sure what to do about that. I’d file a bug if I were you so it will get fixed 🙂 I think it is better to start with the constrained list and have an easy way to expand it. This goes for both Enum’s and Exceptions. Michael: What would that easy way be? How would you make it discoverable to the user? I would like an option to filter on what makes sense in my current context, including local variables and parameters. If I am about to assign something to a string, please show me things which evaluates to string. There should be a way to switch between modes, like ctrl + shift and ctrl + alt + shift. What if intellisense could learn? So if I type throw new MyExceptionFactory().GetMostLikelyException(); then intellisense adds MyExceptionFactory to the list as long as it’s available. I would like intellisense or autocompletion for keywords as well: pub vir void Execute() <ctrl + enter> gives public virtual void Execute() { // <– cursor here! } Autocompletion would generally be nice. Type: string s = "abc <enter> and the editor should complete it to: string s = "abc"; // <– cursor here! Thomas: I like these ideas 🙂 If it helps, we’ve added keywords to intellisense and autocomplete. Doing a lot better on finishing things up for you is something I’d really like to work on as well. Learning is a very interesting proposition. i’m not sure if it would do well enough, but it’s definitely worth exploring. Additional suggested method for Enum<T> public static bool TryParse<T>(string stringValue, out T enumValue); It was reported as 320417672 at 6/19/2004 and "has been forwarded to the appropriate group" ;o) I agree with most of people – "throw new" must list only exception constructors, but "throw " must list keyword "new" and others methods/variables suitable as exceptions. Both lists can behaive as Most Recently Used, after first failure to predit intellisense text – values must be added to list. 
As a second step, it must be possible to press Delete or Ctrl+Delete to remove an incorrectly added item from the IntelliSense list.

AT: Unfortunately, you cannot constrain a type variable down to the Enum type. I’m going to see what Anders has to say about that restriction.

Cyrus: Yea. It will be nice to hear why "where T : struct" works but "where T : enum" does not. But even using the current "where T : struct" we can restrict the type to a value type. The biggest benefit of generics will be compile-time prevention of errors like

    Y y = (Y) Enum.Parse(typeof(X), "Value"); // X vs. Y type

Currently this kind of error will be undetected even at runtime!!! Take a look at my helper class for type-safe Enums 😉

    public static class Enums<T> where T : struct
    {
        public static bool TryParse(string stringVal, out T enumValue)
        {
            // TODO: Provide system support for TryParse
            if (Enum.IsDefined(typeof(T), stringVal))
            {
                enumValue = (T)Enum.Parse(typeof(T), stringVal);
                return true;
            }
            enumValue = default(T);
            return false;
        }

        public static T ToObject(long value)
        {
            return (T)Enum.ToObject(typeof(T), value);
        }

        public static string[] GetNames()
        {
            return Enum.GetNames(typeof(T));
        }

        public static T[] GetValues()
        {
            return (T[])Enum.GetValues(typeof(T));
        }
    }

Cyrus: my answer would be a bit "what you want" 🙂 As you pointed out above – intellisense is there to aid productivity. I think I personally would be a lot more productive if I was looking at a very restricted list. I’d prefer something aimed at helping me 90% of the time, rather than something aimed at helping me 100% of the time. I’m quite happy to type something that is not in the list, rather than have too many things in the list.

For instance: Equals and GetType. I do use them. Occasionally. GetHashCode I’ve never explicitly called in my life. All three of them, however, I don’t want in my list. The few times I will call them, I’m quite happy to type them in, but having three more items in every single list I ever see I don’t want! In some ways I’m almost tempted to suggest not showing ANY inherited members – it would lead to FAR better code 🙂 – but I’ll acknowledge that most people would disagree with me 🙂

With enums, I’m very unlikely to call enum.anything – only in a small percentage of cases – and so I’d rather only see Color.[Red|Green|Blue] – and have to go and hunt for the rest.

I like the idea of two lists, filtered and unfiltered – but don’t know about the two tabs – I’d prefer my eyes to see as little as possible when I’m looking for what I’m expecting. What about the simple option of this: on the first control-space or other intellisense-causing event, show the superfiltered list. Control-space again expands that? It would be very easy to keep typing without losing context, or your fingers having to come off the home keys, and even though it’s less discoverable, it only has to be learnt once and used many times 🙂

As for the list filtering – add one vote for the exception filtering! I can’t think of anything else that can come after throw new … Similarly for attributes – only showing attributes after [… things that take delegates only showing functions, things that take types only showing types, and so on.

Also, a weird request – but we have basically chucked enums. Enums would be FANTASTIC if you could add functionality to them. … but you can’t …. and I HATE

    > switch (x)
    > case Color.Red: DoForRed(); break;
    > case Color.Blue: DoForBlue(); break;

– I think it’s horrible.
I much prefer

    > x.DoSomething();

So, in reality here I don’t allow anyone to use enum (or switch :))… ALL our "enums" are actually defined like:

    > public abstract class Color
    > … [any functionality]
    > private class RedColor : Color {…}
    > private class BlueColor : Color {…}
    >
    > public static readonly Color Red = new RedColor();
    > public static readonly Color Blue = new BlueColor();

So.. ideally those static functions would come up the same way enums do… 🙂

(or, much better – allow an enum to implement an interface! (which isn’t as dumb as it sounds – because even though the enum is a number – it can still look up a static instance of an object, and if that plumbing was all transparent, we could make some much more elegant programs) e.g.:

    > enum ConfiguredDataSourceType : ProvidesDataSource
    > {
    >     Database
    >     {
    >         DataSource ProvidesDataSource.Source() { return new DatabaseSource(); }
    >     }
    >     XMLFile
    >     {
    >         DataSource ProvidesDataSource.Source() { return new XMLFileSource(); }
    >     }
    > }

of course that’s just an example I quickly thought of – please don’t come back and say "why wouldn’t you just store the class name instead of some arbitrary ‘type code’ and dynamically create it" – because of course I would.

PS – speaking of intellisense… it would be really nice to have the facility to put in a config file "Never show me". There’s exactly one class in the system that REALLY gets to me. We use the _ scoping prefix for class-level variables – so we hit _ control space to get up our local scope. Which works great, except for the one @$!#$ class "_appdomain" which feels the need to jump into our lists 🙁

I think I read through everything, but forgive me if this has already been settled. I am way tired… Anyway, isn’t this a similar problem to fonts in MS Word? Many users have dozens or hundreds of fonts. But they are most likely to use their most recent. On your Enum example, what about a single popup that looks something like this:

    Blue
    Green
    Red
    —(Horizontal Rule)—
    Whatever
    Other
    Possibilities
    Up
    The
    Tree

Given that the first group is going to be used most of the time, it should text match only among that group until a match cannot be found, then match among the full list. Anyone who is using the less common case can probably get two (or three, etc.) letters on their own, or bother to scroll through the list with their mouse. It seems a lot simpler than hotkeys, tabs, etc.

Okey-okey … How about another trade-off? Make it possible to select how IntelliSense will beehive in the IDE settings? BTW, does VSIP allow adding your own IntelliSense helpers? Options can be:

a) Show everything possible sorted alphabetically (current, default)
b) Show most commonly used and all others using a horizontal splitter. Some keyboard shortcuts must be provided to move items between those two lists.
c) Show tabbed interfaces. Again – shortcuts can be used to move items from one panel to another. An option must be given to save IntelliSense lists on a Project/User/Machine basis.
d) Show a tabbed interface with powerful search and a documentation window shown for each member (a la Dynamic Help – but inside the IntelliSense list). Make it possible to search inside help text.
e) Most crazy idea. Use the Google search API to fetch results for the IntelliSense drop-down list. The user will be prompted to enter a few keywords and VS.IDE will perform a Google search on code snippets to complete the user’s idea.
f) In addition to simply the next member – IntelliSense can provide a way to complete an entire block. Like code snippets – but with automatic discovery.
It must be possible to teach the IDE common techniques. Using clustering, the IDE must detect common patterns, and you can give them some names/descriptions. As a result the IDE must automatically recognize the start of source code for common patterns and provide suggestions on code blocks. Implement a Naive Bayes learning algorithm to classify source text based on name/type/visibility and the initial source code part. For example, "public MyObject GetInstance() {<BANG>" must provide a singleton patterns list in IntelliSense suggestions. "public int CastMagicSpell(<BANG>" must provide a list of possible overrides to specify default values for parameters. Selecting one of them will create source code for the method and documentation for this override. No need to study all the crazy names of expansions.

P.S. There are an infinite number of ideas waiting for somebody to implement them.

Jeff: That’s a great suggestion, and it’s definitely one we’ll be trying, but there are several issues with it that we’re aware of already. The font menu has very different properties than what we’re looking for. For example, it’s intended to be mostly mouse driven. It’s difficult to navigate with the keyboard because it’s not clear how matching works. If you have:

    FooBar
    …
    ——–
    …
    Foo

And you type "Fo", where should you be? etc. etc.

However, a grouped list that filtered as you typed would get around these issues and could be very cool! Using a MostRecentlyUsed list of elements along with a heuristic for smart ways to group, we could potentially help you out a lot.

AT: There are no beehives in VS! We got rid of all of them. I like the way you think 🙂 Looking for a job? 😀

In the example…

    FooBar
    …
    ——–
    …
    Foo

If you type Fo, you should remain in the top list. I really think the lower list would be OK to need a little mouse interaction if you need one of the more obscure choices but can’t manage to type it uniquely. The text should match against the top section until it can’t find a match. Also, if the non-matching substring is edited (if you have a typo that bounces you to the full list, then delete it back to a mini-list-valid substring) it should lock back into the mini-list section. I think.

Jeff: That interaction model really worries me, as it ends up punishing someone typing something that is very reasonable. It forces them to type out fully something they used to be able to type just a substring of. That said, I think I need to do a better job realizing that there might not be "one right way to do this" and having options to change behaviors might work out best here. Those that find this beneficial can have this behavior, and those that like the current behavior can stick with it 🙂

I agree with AT! Why don’t we have an IntelliSense configuration dialog, like the C# formatting one (i.e., a million options)? 🙂 If that’s possible on your end (resources), it’d be great. Most people are somewhat professional developers and could figure out a few IntelliSense options to make it work just like they want to.
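Several of the comments above circle around the same missing piece: a generic, type-safe TryParse for enums. As a rough sketch of how AT's Enums<T> helper (shown earlier in this thread) could be used in practice – the Color enum and the string values here are made up for illustration, and "where T : struct" is as close as the C# 2.0 generics being discussed can get to a true "where T : enum" constraint:

    using System;

    public enum Color { Red, Green, Blue }

    public static class EnumDemo
    {
        public static void Main()
        {
            Color c;

            // Succeeds: "Blue" is a defined name on Color.
            if (Enums<Color>.TryParse("Blue", out c))
                Console.WriteLine("Parsed: " + c);   // Parsed: Blue

            // Fails without throwing, unlike (Color)Enum.Parse(typeof(Color), ...).
            if (!Enums<Color>.TryParse("Purple", out c))
                Console.WriteLine("Not a Color: Purple");

            // The X-vs-Y mixup shown above can no longer compile:
            // Enums<Color>.TryParse can only ever produce a Color.
        }
    }

(Years after this thread, .NET 4 shipped Enum.TryParse<TEnum> in the framework itself, which makes helpers like this one largely unnecessary.)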
REST APIs are extremely useful for accessing and modifying data within XSA Multi-Target Applications. When it comes to checking whether the API you’ve designed is working properly or not, things become a little difficult if you are not familiar with web security protocols. In this blog, I am going to briefly describe OAuth 2.0, the security protocol used by XSA for user authentication and authorization (UAA), and cover details on how you can make HTTP requests to different modules within an XSA application.

The steps described in this blog are needed only if the XSA application has user authentication configured in the application router. If not, you can simply send an HTTP request from any source and you should be allowed access without any authorization checks. Also, if you are only dealing with HTTP GET requests, you can use any browser to test the API. When you open the URL for the API, you should be redirected to the UAA service, which logs you in and takes care of authorization on its own afterwards.

OAuth 2.0

OAuth2 is the security protocol used by XSA. It allows scope-based access to resources, as opposed to basic authentication, which allows access to all resources for all users. In scope-based authorization, you must have certain scopes granted to you before you can access certain resources. OAuth2 is an industry standard for authorization. For more detailed information, you can visit this link.

Let’s start with understanding the three major roles involved in OAuth2: the user, the external client, and the (XSA) application, which contains the authentication server and protected resources. A user is a person trying to access resources relevant to them from the application using an external client. The external client is registered on the application’s authentication server so that it can request access to those resources. The authentication server handles incoming requests from external clients and responds with appropriate authorization grants and access tokens. These tokens are then used by the external clients to access the protected resources.

Four major grant types are used to request an access token, namely Authorization Code, Implicit, Password, and Client Credentials. The one most relevant to this blog is Client Credentials, which only involves the external client and the application. How it works is that you send a request to the authentication server with client credentials (client ID and client secret), which you receive when you register your client with the server, and requested scopes. The server responds with an access token (in the form of a JSON Web Token) which includes the requested scopes. The client can then use this access token to request resources from the application. The diagram below outlines the Client Credentials workflow. If you want details on the other three grant types, please visit this page.

Postman

Postman is a tool that you can use to test HTTP communication. It allows you to store session variables, cookies, and authorization information from your HTTP requests and use them in subsequent requests – pretty similar to how a web browser works. I am going to demonstrate how to use Postman to connect to an XSA application and send HTTP requests to REST APIs contained in that application.

First, you need to obtain a valid access token from the UAA service in the XSA application. Open the Postman desktop app and create a new basic request. You should see an empty request page like the one in the image below. Under Authorization, change TYPE to OAuth 2.0.
In the panel that shows up to the right, click on Get New Access Token. This should open the GET NEW ACCESS TOKEN form. Change Grant Type to Client Credentials. The updated form should look like the following. You need to fill out all the information required in this form to obtain a valid access token. For the token name, you can put anything you wish.

To get the rest of the information (client ID, etc.), open the command prompt in Windows and log in to XSA using xs login. Make sure you are in the development space. Execute xs env <app>, where <app> is the name of the application router module. This should output all environment variables for services and applications bound to the application router. The one we are interested in is the instance of the xsuaa service, which is responsible for user authorization. The output for this should look similar to the image shown below.

The Client ID and Client Secret should be included in the environment variables. Copy and paste these into the Postman form. For Access Token URL, use the URL included in the environment variables and append "/oauth/token" to the end of it. Scope is optional; if you provide none, you will be assigned uaa.resource by default. Click Request Token. Postman should then show you the access token it receives (as a base64 string) along with the scopes granted, as shown below. Click Use Token before closing the dialog.

Congratulations! You have successfully obtained the access token, which you can now use to access resources from the XSA application!! Keep in mind that when you are accessing resources using this method, you should use the direct URLs for those resources, as opposed to the URL of the application router, which redirects you to the requested resource. Also, the access token expires after a certain time period, so you will need to request a new token from time to time.

To demonstrate how you would use the access token in a subsequent request, I am going to send a POST request to a Python module within my XSA application. You can find the code for this application at this link. In Postman, change the request type to POST. Enter the request URL for your API endpoint. Make sure the Access Token is filled in. The request should look as follows at this point.

Under the Body tab, choose raw from the radio buttons. Select JSON (application/json) from the dropdown list that appears. Copy and paste the following text.

    {
        "productID": "22335142",
        "category": "Notebooks",
        "price": "45.22"
    }

You are now ready to send the request. Click the blue Send button in the top right. The response should be as follows.

Python

You can also obtain an access token programmatically and send HTTP requests accordingly. I am going to demonstrate this using Python 3. The following code snippet shows how to obtain an access token using the OAuth2 Client Credentials workflow. First, you create a client using your Client ID, followed by a client session using the OAuth2Session object. Eventually, you can call the fetch_token function on your session object to obtain the access token. This token contains the default scope for your application. Just a note that I have turned SSL verification off (verify=False) since I am using local self-signed certificates for my XSA application.
    from oauthlib.oauth2 import BackendApplicationClient
    from requests_oauthlib import OAuth2Session

    CLIENT_ID = ''      # your client id goes here
    CLIENT_SECRET = ''  # your client secret goes here
    TOKEN_URL = ''      # this would be different for you

    client = BackendApplicationClient(client_id=CLIENT_ID)
    oauth = OAuth2Session(client=client)
    token = oauth.fetch_token(token_url=TOKEN_URL,
                              client_id=CLIENT_ID,
                              client_secret=CLIENT_SECRET,
                              verify=False)

That’s it! It’s really that simple!! For an example of how you can use the access token in subsequent requests, I am going to send the same POST request from the Postman section above, but using Python this time. The code would be as follows.

    import requests
    import json

    access_token = ''  # your access token goes here
    url = ''           # the direct URL of the resource goes here

    headers = {'Authorization': 'Bearer %s' % access_token,
               'Content-Type': 'application/json'}
    body = json.dumps({'productID': '88997785',
                       'category': 'Notebooks',
                       'price': '88.98'})
    r = requests.post(url, headers=headers, data=body, verify=False)

The response should be (‘88997785’, ‘Notebooks’, 88.98, ‘Product 88997785 inserted successfully’)!

CURL

The last tool I am going to talk about is CURL, which you can use from a terminal or command prompt to send HTTP requests. If you are using CURL just for testing APIs, you do not need to worry about obtaining the OAuth2 access token yourself. You can request that token directly from the XS command line interface using the xs oauth-token command. This should return a valid access token with all the scopes you need. You can use this token in the header of your subsequent CURL requests to access resources in your XSA application. The image below demonstrates how this works within the command prompt. The body.txt and headers.txt files should look as follows.

Closing

Hopefully this blog has helped you better understand how to work with user authorization in XSA applications. If you still have any questions with regards to obtaining an access token and using it, feel free to comment below!

Hi, great post – this was useful for me to understand the internal logic for authorization at the XS UAA. In addition to this, I want to add the possibility of using the grant type Password Credentials in Postman. As username and password you can use the credentials of an XSA user. With this method you get an OAuth token which has the scopes of this user, so you can test the defined scopes and roles for each user, which were defined in the xs-security.json or directly in the mta.yaml. You should add in your post that you are checking for the default scopes in your Python code, instead of defining a scope/role model for your application.

Hi Altaf, thank you for the great post. Currently I am looking to post a value to an external REST API from HANA using XSA or any other service. Could you let me know the best way to POST some hard-coded values to an external REST API? Currently I am passing the values as follows from Postman.

Basic Auth: User – UserNAme; Password – Password123

JSON body:

    {
        "command": "Import",
        "calendarSeq": "2254353535349",
        "stageTypeSeq": "3543546464358697",
        "traceLevel": "status",
        "userId": "Administrator",
        "runMode": "all",
        "batchName": "BOne1",
        "module": "TransactionalData",
        "stageTables": ["TransactionAndCredit"],
        "runStats": "false"
    }
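The first comment above suggests the Password Credentials grant as a way to obtain a token that carries a specific user's scopes rather than the client's defaults. A minimal sketch of how that could look with the same requests_oauthlib library used in this post; the credential and URL values are placeholders, and verify=False again assumes local self-signed certificates:

    from oauthlib.oauth2 import LegacyApplicationClient
    from requests_oauthlib import OAuth2Session

    CLIENT_ID = ''      # same xsuaa client id as above
    CLIENT_SECRET = ''  # same xsuaa client secret as above
    TOKEN_URL = ''      # xsuaa URL with '/oauth/token' appended
    USERNAME = ''       # the XSA user whose roles you want to test
    PASSWORD = ''       # that user's password

    # LegacyApplicationClient implements the password (resource owner) grant.
    oauth = OAuth2Session(client=LegacyApplicationClient(client_id=CLIENT_ID))
    token = oauth.fetch_token(token_url=TOKEN_URL,
                              username=USERNAME,
                              password=PASSWORD,
                              client_id=CLIENT_ID,
                              client_secret=CLIENT_SECRET,
                              verify=False)

    # The granted scopes should now reflect the user's roles rather than
    # the default uaa.resource scope from the Client Credentials flow.
    print(token.get('scope'))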
pairs (function, stable)

Convert an object into an Observable of [key, value] pairs.

    pairs<T>(obj: Object, scheduler?: SchedulerLike): Observable<[string, T]>

Turn entries of an object into a stream.

pairs takes an arbitrary object and returns an Observable that emits arrays. Each emitted array has exactly two elements: the first is a key from the object and the second is a value corresponding to that key. Keys are extracted from an object via the Object.keys function, which means that they will be only enumerable keys that are present on the object directly, not ones inherited via the prototype chain.

By default these arrays are emitted synchronously. To change that, you can pass a SchedulerLike as a second argument to pairs.

Example: convert a JavaScript object to an Observable

    import { pairs } from 'rxjs';

    const obj = { foo: 42, bar: 56, baz: 78 };

    pairs(obj)
      .subscribe(
        value => console.log(value),
        err => {},
        () => console.log('the end!')
      );

    // Logs:
    // ["foo", 42],
    // ["bar", 56],
    // ["baz", 78],
    // "the end!"

Parameters:
obj (Object): The object to inspect and turn into an Observable sequence.
scheduler (Scheduler, optional): An optional IScheduler to schedule when the resulting Observable will emit values.

Returns: Observable<Array<string | T>>, an observable sequence of [key, value] pairs from the object.

© 2015–2018 Google, Inc., Netflix, Inc., Microsoft Corp. and contributors. Code licensed under an Apache-2.0 License. Documentation licensed under CC BY 4.0.
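The signature above accepts an optional SchedulerLike, but the example leaves it out. A small sketch of the asynchronous variant, assuming RxJS 6, where asyncScheduler is exported from 'rxjs' alongside pairs:

    import { pairs, asyncScheduler } from 'rxjs';

    const obj = { foo: 42, bar: 56 };

    // With a scheduler the pairs are no longer emitted synchronously,
    // so 'after subscribe' logs before any [key, value] pair.
    pairs(obj, asyncScheduler).subscribe(value => console.log(value));

    console.log('after subscribe');

    // Logs:
    // "after subscribe"
    // ["foo", 42]
    // ["bar", 56]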
GCD is the abbreviation for Greatest Common Divisor, the largest number that divides both of the numbers given by the user. It is sometimes also referred to as the greatest common factor. For example, the greatest common factor of 20 and 15 is 5, since both of these numbers can be divided by 5. The concept extends to a set of more than 2 numbers as well, where the GCD is the largest number that divides all the numbers given by the user.

Applications of GCD

GCD has a wide range of applications in:
- Number theory
- Encryption technologies like RSA
- Modular arithmetic
- Simplifying fractions that are present in an equation

Different ways to implement a GCD Program in Python

There are different ways to implement a GCD program in Python. Here I am presenting some of the most popular and widely used implementations.

1. Using standard library functions to find GCD in Python

In Python, the math module contains a number of mathematical operations which can be performed with ease using the module. The math.gcd() function computes the greatest common divisor of the 2 numbers mentioned in its arguments.

Syntax: math.gcd(x, y)

In the above syntax, the parameters x and y are the two integers whose greatest common divisor is to be computed.

The following example shows how to use math.gcd():

    # importing "math" for mathematical operations
    import math

    print("The gcd of 24 and 36 is : ")
    print(math.gcd(24, 36))

The following is the output of the above program:

    The gcd of 24 and 36 is :
    12

2. Using an iterative process (loops)

    def computeGCD(x, y):
        if x > y:
            small = y
        else:
            small = x
        for i in range(1, small + 1):
            if (x % i == 0) and (y % i == 0):
                gcd = i
        return gcd

    a = 6
    b = 4
    print("The gcd of 6 and 4 is : ", end="")
    print(computeGCD(a, b))

Output:

    The gcd of 6 and 4 is : 2

3. Using recursion

    def GCD(a, b):
        if b == 0:
            return a              # base case
        else:
            return GCD(b, a % b)  # general case

    a = 60
    b = 48
    # prints 12
    print("The gcd of 60 and 48 is :")
    print(GCD(a, b))

Output:

    The gcd of 60 and 48 is :
    12
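The introduction notes that GCD extends to a set of more than 2 numbers. A short sketch of that extension, assuming Python 3.5+ for math.gcd (the function name gcd_of_list is just illustrative):

    import math
    from functools import reduce

    def gcd_of_list(numbers):
        # Fold math.gcd across the list: gcd(a, b, c) == gcd(gcd(a, b), c)
        return reduce(math.gcd, numbers)

    print(gcd_of_list([20, 15, 35]))   # 5
    print(gcd_of_list([24, 36, 60]))   # 12

On Python 3.9 and later, math.gcd itself accepts any number of arguments, so math.gcd(20, 15, 35) works directly.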
/* this is a hacked version of if.h from unix to contain the stuff
   we need only to build named (bind) with the minimal amount of
   changes... by l. kahn */

/*
 * Copyright (c) 1982, 1986 Regents of the University of California.
 * All rights reserved.  The Berkeley software License Agreement
 * specifies the terms and conditions for redistribution.
 */

#ifndef _NET_IF_H
#define _NET_IF_H

/* #pragma ident "@(#)if.h 1.3 93/06/30 SMI" */
/* if.h 1.26 90/05/29 SMI; from UCB 7.1 6/4/86 */

#ifdef __cplusplus
extern "C" {
#endif

/*
 * Structures defining a network interface, providing a packet
 * transport mechanism (ala level 0 of the PUP protocols).
 *
 * Each interface accepts output datagrams of a specified maximum
 * length, and provides higher level routines with input datagrams
 * received from its medium.
 *
 * Output occurs when the routine if_output is called, with three parameters:
 *     (*ifp->if_output)(ifp, m, dst)
 * Here m is the mbuf chain to be sent and dst is the destination address.
 * The output routine encapsulates the supplied datagram if necessary,
 * and then transmits it on its medium.
 *
 * On input, each interface unwraps the data received by it, and either
 * places it on the input queue of a internetwork datagram routine
 * and posts the associated software interrupt, or passes the datagram to a raw
 * packet input routine.
 *
 * Routines exist for locating interfaces by their addresses
 * or for locating a interface on a certain network, as well as more general
 * routing and gateway routines maintaining information used to locate
 * interfaces.  These routines live in the files if.c and route.c
 */

/*
 * Structure defining a queue for a network interface.
 *
 * (Would like to call this struct ``if'', but C isn't PL/1.)
 */

/*
 * Interface request structure used for socket
 * ioctl's.  All interface ioctl's must have parameter
 * definitions which begin with ifr_name.  The
 * remainder may be interface specific.
 */

#ifdef FD_SETSIZE
#undef FD_SETSIZE
#endif
#define FD_SETSIZE 512

#include <winsock.h>

typedef char *caddr_t;

int get_winnt_interfaces();

struct ifreq {
#define IFNAMSIZ 16
    char ifr_name[IFNAMSIZ];    /* if name, e.g. "en0" */
    struct sockaddr ifru_addr;
    char nt_mask[IFNAMSIZ];     /* new field to store mask returned
                                   from nt lookup  l. kahn */
#define ifr_addr ifru_addr      /* address */
#define ifr_mask nt_mask        /* nt mask in character */
};

#ifdef __cplusplus
}
#endif

#endif /* _NET_IF_H */
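A small sketch of how calling code might use the struct ifreq declared above, along with its ifr_addr and ifr_mask convenience macros. This is purely illustrative: the interface name, the mask value and the use of AF_INET are assumptions, and get_winnt_interfaces() is left out because the header does not document its behavior.

    #include <stdio.h>
    #include <string.h>
    #include "ntif.h"   /* the hacked if.h above; header name assumed from this file's path */

    int main(void)
    {
        struct ifreq ifr;
        memset(&ifr, 0, sizeof(ifr));

        /* ifr_name is a fixed IFNAMSIZ buffer, so bound the copy. */
        strncpy(ifr.ifr_name, "en0", IFNAMSIZ - 1);

        /* ifr_addr and ifr_mask are macros over ifru_addr and nt_mask. */
        ifr.ifr_addr.sa_family = AF_INET;
        strncpy(ifr.ifr_mask, "255.255.255.0", IFNAMSIZ - 1);

        printf("interface %s, mask %s\n", ifr.ifr_name, ifr.ifr_mask);
        return 0;
    }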
Lake City Reporter
SUNDAY, SEPTEMBER 22, 2013 | YOUR COMMUNITY NEWSPAPER SINCE 1874 | $1.00 | LAKECITYREPORTER.COM | Vol. 139, No. 168

Inside: Homecoming Queen crowned in Fort White. 6-year-old genius finds tests "easy peasy as pie." TODAY IN PEOPLE: Nutcracker auditions. COMING TUESDAY: Local news roundup.

MIXED MESSAGES
County: Lake City officials say one thing about unified dispatch, staffers another.

By STEVEN RICHMOND
[email protected]

Safety Manager David Kraus told county commissioners that the Combined Communications Center will still be able to handle city fire department dispatch services Oct. 1 despite mixed messages from city officials. However, City Manager Wendell Johnson says that the city's intentions were made clear through documentation and correspondence throughout the year.

As of today, the county and city are participants in an interlocal agreement whereby the county will dispatch Lake City fire protection services through their Combined Communications Center and respond automatically to city fire emergencies with county resources.

On June 28, County Manager Dale Williams sent a letter to Johnson saying the county plans to terminate the interlocal agreement effective Oct. 1 in response to the requests of Johnson and LCFD Fire Chief Frank Armijo. The city then set in motion plans to dispatch city fire services from their own Public Safety Building, the same location from which the city currently dispatches LCPD officers.

The county's understanding was that the city would handle its own fire dispatch services beginning Oct. 1. However, county officials are unclear if that will in fact happen. (DISPATCH continued on 3A)

A smokin' good time
5th annual Smokin' Pig draws hungry crowd; good eatin', good times for folks at the fairgrounds.

By AMANDA WILLIAMSON
[email protected]

Before she could bite into a slab of ribs at the Smokin' Pig BBQ Fest, Sarah Ripple had to wait for her husband, the barbecue connoisseur, to arrive at the Columbia County Fairgrounds. The Smokin' Pig lasted Friday and Saturday, featuring a wide array of competition barbecue teams from across the Southeast. The event was a World Qualifier and a Jack Daniel's championship qualifier. Guests tasted samples of ribs, chicken and pork from local favorites like the Budmeisters cook team and Fenced-In BBQ.

"Anything barbecue, my husband will tear it up," Ripple said, speaking of her husband David. "Last year when we came, we looked at every barbecue menu and the prices. We even smelled it. The husband takes his barbecue very seriously."

While waiting for David ... (PIG continued on 6A)

Jobless rate on decline
Unemployment in county falls to 6.6%; state rate drops too.

By TONY
[email protected]

Unemployment in Columbia County fell for the first time in three months in August, showing a four-tenths decrease in the local jobless rate. According to information released Friday by the Florida Department of Economic Opportunity, Columbia County's unemployment rate for August was 6.6 percent. In July the figure was 7.0 percent.
Florida's unemployment rate in August fell a tenth to 7.0 percent, while the nation's jobless rate was 7.3 percent. Local officials said the decrease in local jobless numbers is based on people returning to work at school and other seasonal factors.

"Our seasonally adjusted employment (due to the school employment, tourism, and agriculture) has leveled out as all of our schools were back in session during the month of August and the tourist season saw its close (for the most part) on Labor Day, bringing our current unemployment rate down to 6.6 percent this month from 7.0 percent in July of 2013," said Denise Wynne, Florida Crown Workforce Board Lead Employer Services Representative.

Wynne said there is an excellent chance the local unemployment rate will continue to decrease as new job opportunities become available in Columbia County.

"With several new employers opening their doors in Columbia County during the month of September, including Michaels and CiCi's Pizza, we do expect to see a decrease in our local unemployment numbers," she said. "We certainly hope that trend continues, and as we are heading into the holiday shopping season when merchants traditionally hire their seasonal staff, we at Florida Crown Workforce Board expect to see this trend continue into December."

In August there were 30,967 ... (UNEMPLOYMENT continued on 7A)

Photos by AMANDA WILLIAMSON/Lake City Reporter: Kristah Couey and Ethan O'Hearn dig into ribs cooked by Wellborn-based Fenced-In BBQ during a break from the work day Saturday. The two helped owner Lawrence Rentz dish out his barbecued fare at the Smokin' Pig BBQ Fest. The two-day event ran Friday and Saturday at the Columbia County Fairgrounds. Firefighter Austin Thomas checks the temperature inside the grill for the Black Helmet BBQ competition team, while teammate Greg Sund holds open the lid. The four men of Black Helmet BBQ represent the Lake City Fire Department.

Embry-Riddle offers master's degree in drones

DAYTONA BEACH - Secured inside a room you need a U.S. passport to enter is a modern arcade of war machines. It looks like a gamer's paradise: a comfortable tan leather captain's chair sits behind four computer monitors, an airplane joystick with a red fire button, a keyboard and throttle control.

The games here have great implications. Across the world, a $20 million Gray Eagle drone armed with four Hellfire missiles, ready to make a sortie into hostile territory, is taking commands from a workstation like this one. A graduate from this room on the campus of Embry-Riddle Aeronautical University in Daytona Beach could be in that other room in as little as six months with a master's degree in piloting drones, his hand on the joystick, making $150,000 a year.

Welcome to the new basic training, where the skills to fight the War of Tomorrow are taught in private classrooms today. Embry-Riddle this fall became the first in the country to offer postgraduate education in this field.

"We're trying to prepare our students so they're ready to operate at the highest levels," said Dan Macchiarella, department chair of aeronautical sciences at Embry-Riddle.

But as with so many things that begin with a military purpose, these unmanned vehicles are coming in all shapes and sizes, from full-sized planes to mini helicopters less than 2 feet across, to play a role in the civilian world.
AROUND FLORIDA

School sanitized as dozens fall ill
WESTON - Cleaning crews are at work at a South Florida elementary school where dozens of students have fallen ill. WFOR-TV reports cleanup crews are working with the health department to sanitize Manatee Bay Elementary. Students at the school in Weston have reported symptoms including vomiting, diarrhea and fever. The principal sent a letter to parents urging them to keep any sick kids home and encourage their children to wash their hands frequently. Parents told the station between 200 and 300 kids had gotten sick and some were going to the school to pick their children up.

Amnesty day for exotic pets
CORAL SPRINGS - Instead of releasing exotic pets into the wild, owners can surrender their nonnative animals at an event in Broward County. The Florida Fish and Wildlife Conservation Commission is holding the Exotic Pet Amnesty Day on Saturday in Coral Springs. Pet owners can drop off their exotic reptiles, amphibians, birds, fish, mammals and invertebrates at the event free of charge. Domestic pets such as cats and dogs are not accepted. Penalties for not having the proper license to have the exotic animal will also be waived. It is illegal to release any animal not native to Florida.

Helping owners rent their vessels
MIAMI - Getting out on the open sea, wind in your hair, enjoying the ride with your family and friends. Then there's the boat payments, storage fees, fuel, maintenance and repair; these costs can quickly sink the dream of boat ownership. Ahoy, mates: a new breed of boat-sharing services is entering the hot South Florida boating market. San Francisco-based Boatbound.co set up its East Coast headquarters [...] for owners to rent out their boats when they aren't using them, and for many people that's a considerable chunk of time. There are 12.2 million boats registered in the United States, yet the average boat gets used just 26 days a year, according to boating industry statistics.

PEOPLE IN THE NEWS: Celebrity Birthdays
Actress Laura Vandervoort (Ted) is 28. Actress Catherine Oxenberg (Dynasty) is 51. St. Louis Cardinals switch-hitter Vince Coleman is 51. Scott Baio (Happy Days, Joanie Loves Chachi) is 52. Rocker Joan Jett of the Blackhearts is 54. Operatic tenor Andrea Bocelli is 54. Singer Debby Boone ("You Light Up My Life") is 56. Rocker David Coverdale of Deep Purple is 61. Actor Paul Le Mat (American Graffiti) is 67. Baseball manager Tommy Lasorda is 85.

Esquire Network seeks niche for men

LOS ANGELES - It's standing room only inside The Gorbals. The hip downtown Los Angeles eatery is filled to the brim with loud lookie-loos who've gathered to sip free-flowing beer and wine while watching a pair of professional chefs sizzle their way through a new televised cooking competition called "Knife Fight," the first series debuting on the new Esquire Network.

The boisterous room is momentarily interrupted by Drew Barrymore. Yes, that Drew Barrymore, the Drew Barrymore from the films "E.T." and "Never Been Kissed." She's one of the show's executive producers and is serving as a guest judge for tonight's battle.
Without any provocation, Barrymore suddenly hoists herself atop a table and screams at the top of her lungs: "I'VE ALWAYS WANTED TO JUDGE A (EXPLETIVE) COOKING SHOW!" The crowd roars. The chefs keep working on their improvised dishes.

Yep, this is not one of those by-the-books cook-offs like "Chopped" or "Top Chef," and it certainly doesn't feel like the sort of series that would launch a channel inspired by and named after the slicker-than-slick Hearst men's magazine. That's the point, programming director Matt Hanna notes in a nearby ballroom serving as a makeshift control room.

"There's an integrity that you're gonna see with this show that will hopefully reflect what we want the network to be," Hanna said over the clamor from the restaurant next door. "Whether I'm talking about a great comedian, restaurant or TV show, it all comes down to honesty, and there's something really honest here. We're hoping to defy expectations."

The network kicks off Monday with a two-hour 80th anniversary retrospective about the network's namesake narrated by "Mad Men" star John Slattery. "Knife Fight," which is hosted by The Gorbals owner and second-season "Top Chef" champ Ilan Hall, and a docu-series about Scottish beer aficionados James Watt and Martin Dickie titled "Brew Dogs" debut Tuesday.

Urban luxury for next summer from Versace
MILAN - Donatella Versace mixes street wear with sophisticated styles for her new summer collection; it's also about basic items like a pair of jeans, a shirt or a T-shirt, the designer said ahead of Friday night's [...] label's trademarks.

Bono joins world leaders at Global Citizen Fest
NEW YORK - U2 frontman Bono and a long list of world leaders will attend next week's Global Citizen Festival to help fight poverty. United Nations Secretary-General Ban Ki-moon, leaders from several countries and congressional members will join Stevie Wonder, Kings of Leon, Alicia Keys and John Mayer at the free concert Sept. 28 in New York's Central Park. The concert coincides with the U.N. General Assembly. Fans earn free tickets for volunteering to help end poverty.

- Associated Press

Daily Scripture
"Master, which is the great commandment in the law? Jesus said unto him, Thou shalt love the Lord thy God with all thy heart, and with all thy soul, and with all thy mind. This is the first and great commandment." Matthew 22:36-38

Nutcracker auditions (photo, AMANDA WILLIAMSON/Lake City Reporter): Local ballerinas audition at Florida Gateway College's Levy Performing Arts Center on Saturday for the December performance of "The Nutcracker" by Dance Alive National Ballet of Gainesville. The performance required 30-40 local dancers. From right to left: Eliana Duarte, Abigail Schuler and Natalie Duarte. Back row (from right): Kaylin Tate, Erinn White and Sarah Simpson.

Truck fire (photo, ROBERT BRIDGES/Lake City Reporter): This semi tractor-trailer caught fire on East Duval Street just west of Main Boulevard Thursday. No one was hurt. "We smelled [smoke] in the building," said Joyce Williams, who works at N&W Cleaners nearby. "That's when I went in and called 911." Williams said the truck driver attempted to control the blaze with a small fire extinguisher and, when that didn't work, tried using bottled water. Lake City Fire Department first responders were on scene quickly and put the fire out. The cause of the fire wasn't clear.
Thats why we made it clear to the board were going to be prepared either way. The last correspon-dance we had is that [Sept. 4] letter from the mayor to [county commission chair-man] Steven Bailey saying they want to leave things the way they are. On Sept. 4, Mayor Stephen Witt sent a letter on behalf of the city coun-cil detailing what course of action they plan to take Oct.1. It is the councils position that there should be no imminent changes to city/county public safety dispatch or fire protection service relationships at the present time, Witt said in his letter. This viewpoint is not only logical, but cer-tainly the most reliable path to the short term safety interests of both Columbia County and Lake City resi-dents. Included with the letter was a six-page posi-tion statement explaining the citys existing areas of contention with the county in terms of combined dis-patch efforts.CITY GRIEVANCESCommon motifs in the position statement included concerns over fair repre-sentation of all public safe-ty participants in combined dispatch efforts, business management practices, financial details and the centers goals and objec-tives. Such sentiment dates back to a March 19 let-ter Johnson sent to Kraus explaining that, It does not appear favorable for the city to plan for continued partic-ipation with the Columbia County CCC [combined communications center]. In that letter Johnson stated, Stakeholder col-laboration, the governance model and SOPs [standard operating procedures] were among our principal con-cerns concerns which simply fell on deaf ears. LCFD Fire Chief Frank Armijo sent another letter to Columbia County Fire Rescue Chief David Boozer in which he accused county fire stations of, [Picking and choosing] the single alarm calls that they feel that they want to respond to. Prior to all of this, the city and the county had an automatic aid agreement, Director of the Combined Communication Center Tom Brazil said during an interview Friday. The county covered certain city addresses on the west side of I-75. Now, on Oct. 1, it will be city fire. The city also covered parts of Five Points for the county. That automatic aid is ending Oct. 1.MIXED MESSAGESBeyond that, little is clear, says Kraus. This is where we get the mixed messages, Kraus said. Their words say one thing, but what their staff and people are telling us is something different. An email sent Tuesday from Lake Citys IT and Communications Director Zack Moss contained a response to sheriffs deputy Billy Hall, who asked when city fire would be switching to their own CAD [com-puter assisted dispatch] system. This is in preparation for October 1st, Moss said. We should be ready by the end of the week next week to run this and move over. We will be in contact when we are ready to do this. Kraus also said Chief Armijo stopped by earlier last week to give the dis-patch center a new ring down phone number so the county could bounce calls to the citys dispatch center. [Armijo and Moss] said theyre ready, Kraus said. They came over last week and gave us a new number to roll over calls to. But Mayor Witts call for no imminent changes to the City/County pub-lic safety dispatch or fire protection service relation-ships at the present time left Kraus in bureaucratic limbo. We dont know what theyre going to do come Oct. 1, Kraus said. I have no idea. 
According to Witt's letter to county commissioners, the city felt there should be no imminent changes due to several projects in the works, including the county's ISO-inspired upgrades and the replacement of the city's Florida Highway Patrol tower on US Highway 90.

"We're in the middle of a transition process for the Lake City Fire Department to handle their own dispatch with an Oct. 1 goal," Johnson said Friday in an attempt to clarify the situation.

Kraus and Brazil said the Combined Communications Center will be prepared for either outcome.

"We want to make it real clear so everyone understands," Kraus said. "We, the county, will never do anything to endanger the lives of any citizen within Columbia County, which is why we came up with a contingency plan. If for some reason they're not ready, you can't just ignore the call. If we can't roll it to the city, we'll continue to simultaneously dispatch city and county [fire services]."

ANI/ALI, PSAP
According to the minutes of an Aug. 8 meeting between city/county public safety and 911 staff, the city offered to not make any changes to fire dispatch on two conditions: 1) the backup PSAP [Public Safety Answering Point] needs to be moved to the City Public Safety Building; 2) ANI/ALI [Automated Number Identifier/Automated Location Identifier] is provided to the city.

ANI/ALI is a computer system that allows public safety personnel in the field to see the caller identity, location and details pertaining to any emergency call.

"We believed they wanted [the backup PSAP] so they could get ANI/ALI," Kraus said. "When the call comes up on a screen, it gives you all the data on the call. Right now their police department does not receive that information."

Currently, only county officials have access to the system because they operate the primary PSAP, as mandated by the Florida Legislature. The city referred to that legislation in a letter to Dale Williams, saying that "State of Florida 9-1-1 guidelines mandate the county 9-1-1 coordinator [the county] ensure all stakeholders are involved in the development of countywide 9-1-1 plans. In this instance, that does not appear to have occurred."

The city originally wanted the backup PSAP to be located in their Public Safety Building, where LCPD is dispatched from currently. However, because it was "cheaper, faster and easier," according to Kraus, the backup PSAP was installed in the Columbia County Detention Facility, where a communications tower capable of effectively communicating between the CCC and the backup PSAP was already in place.

Johnson said that before the backup PSAP was moved to CCFD, "the county manager communicated in an [Oct. 2010] email to me ... that when the city/county have resolved their differences, all still maintain the backup PSAP should/will be placed at the city's Public Safety Building." He also decried the backup PSAP for being a "cold operating site," meaning there was no one staffing it on a regular basis.

In response to Johnson's concerns, Williams wrote on June 4, "I found your letter to be insightful; however, I do not agree with many of your comments. Regardless, I regret that a true combined communication center was not achieved."

Brazil explained that 9-1-1 staff was capable of operating the backup PSAP station through the use of microwave transmissions, meaning they wouldn't necessarily have to staff the backup PSAP at all.
"We explained to them that because of the way the system is set up," Brazil said, "unless we physically lost this building, we can bring all the information and run all the equipment at the backup using microwaves and never have to leave this building. We can lose the console in this room to run our CAD, ANI/ALI, 9-1-1s; everything can be run from here running on the equipment at the backup. We don't physically have to be out there and physically put bodies in seats unless we physically lost this building."

Kraus said Columbia County's setup is not unusual.

"This is the type of backup 99 percent of Florida has. Alachua, Marion, everybody has one," Kraus said. "We worked with Alachua to assist us in designing this."

Kraus and Brazil both claim that problems holding back the implementation of a truly combined dispatch center were political, not technical.

"The county has the ability and resources to provide combined communication. Period," Kraus said. "We're always open if the city would like to come to the table and negotiate."

Boozer also commented that the 9-1-1 staff has a communications meeting every Tuesday. However, according to him, city officials never show up.

Regardless of what direction county and city officials take, and despite termination of the automatic aid agreement, Boozer said public safety will continue to be his primary concern.

"I emphasize to my guys that if there's a need, we'll respond," Boozer said. "County and city lines don't matter. We're tired of games. It's time for business."

Teen gives to others for his birthday

By AMANDA WILLIAMSON
[email protected]

For Skyler Colley's fourteenth birthday, he didn't ask for video games, electronics or any other normal present for a boy his age. Instead, he asked for slippers, robes and shirts to donate to the local nursing homes. He wanted his friends to bring toy trucks, baby dolls, princess tiaras and action figures to give to the children at Shands Lake Shore Regional Medical Center.

Last year, Skyler raised enough presents through his own birthday party to give to 90 people at the Avalon Healthcare and Rehab Nursing Home and the children's division at the medical center.

"As soon as I got outside, I told my parents I wanted to do it again next year," Skyler said. "I want to get more people to say, 'Hey, I want to do this too.'"

Already Skyler has recruited his church, Parkview Baptist, and several of his friends to either contribute time or money to the cause. At Valentine's Day, they distributed candy to two local nursing homes. He wants to raise enough gifts to be able to donate to both for his birthday this year.

Though school has kept Skyler busy, he tries to visit the nursing home when he can. He said many of the residents at Avalon do not have anyone visit them. He recently met a man at the nursing home who has eight children, but none of them come. These are the men and women Skyler wants to help. Many of the people Skyler visits struggle to get basic items, such as shirts and socks.

"I'm just proud of him," said Chris Colley, Skyler's dad. "He's a good kid. What my conscience doesn't cover, he helps me out."

Recently, he lost his grandma Mary Arnold and grandpa Al Arnold.
The 14-year-old said he took it hard when they [...] (GIFTS continued on 7A)

Photo, AMANDA WILLIAMSON/Lake City Reporter: Skyler Colley shows his friend Marcus Blalock a shirt he plans to donate to a local nursing home during his birthday party Saturday at the Lake City Country Club. For two years, Skyler has used his birthday as a way to raise presents for people at local nursing homes and children at Shands Lake Shore Regional Medical Center.

OPINION

Memories live on thanks to my dusted-off LP collection

My daughter, a high-school senior, just told me to turn down the music because she is studying. Really.

The music is from the 1984 Go-Go's album, "Talk Show." The track? "Head Over Heels." You see, I just dug into my vinyl collection, carefully bought with baby-sitting and birthday money. A prized collection, that.

There's the soundtrack to the remake of the movie "A Star Is Born." I was 13 when this came out, and I remember playing the soundtrack countless, countless times and having to convince my father that half-naked people on the cover (Barbra Streisand and Kris Kristofferson) were not reflective of moral decay. Of course, that case was harder to make with Rod Stewart and "Tonight's the Night (Gonna Be Alright)," which came out about the same time. That one my dad just shut off when he heard it.

Then there are the albums "Help!" and "A Hard Day's Night." When I was in junior high and my parents were heading to England, I asked them to bring me real Beatles albums from real England. They did. Priceless.

Ahh, I remember REO Speedwagon and "You Can Tune a Piano, but You Can't Tuna Fish." Before Google, I needed the jacket for the words to "Time for Me to Fly" so I could write them out, in longhand, in a breakup letter to my high-school boyfriend.

Then (there's a pattern) I played "New Orleans Ladies" by the little-known Louisiana's LeRoux, a group he had introduced me to, so many times that the girls on my dorm floor literally staged an intervention to make me stop. Yep, I still have the album, hard to find even at the time.

Fleetwood Mac, The Police, The Doobie Brothers, Earth Wind & Fire, James Taylor, Gordon Lightfoot (featured with a lit cigarette) [...].

So thanks, Facebook friends, for suggesting that my LP collection be allowed to live again. And yes, darling daughter. Each album plays far more than just one track. In fact, I'll tell you something I didn't know when I was your age: they also play memories, and pictures, and histories. You and your friends may be proud of your iTunes collections, and, in a sense, those iTunes libraries are more valuable. But I'm convinced that downloading something with a few keystrokes can't match what I, and so many people of my generation, built with our prized album collections: the stories of a lifetime.

Betsy Hart hosts the "It Takes a Parent" radio show on WYLL-AM 1160 in Chicago. betsysblog.com

CSC erred in refusing USDA food

Nearly a year ago Christian Service Center, a local faith-based charity, cut ties with the United States Department of Agriculture over what it saw as a matter of religious principle. Based on information revealed in two recent stories in this newspaper, however, it is clear this conflict could have been easily resolved without compromise of Christian beliefs. Religious principles were not at stake here. This was a conflict borne of poor communication, as well as custom and habit.

Two issues drove the conflict. First, CSC says they were told that in order to keep receiving government food, they would have to remove from their premises any reference to their faith. USDA flatly denies this.
Whatever CSC may or may not have been told, there is no requirement that faith-based charities remove references to Jesus, God or the power of prayer from their premises. And no other faith-based agency in the area reports having been so instructed. This is a conflict that stemmed from poor communication, not differences in political or religious views. Were CSC forced to choose between government handouts and profession of their faith, we would certainly join their voices in strong protest.

Second, there was conflict between USDA and CSC over the order in which events occur once a client enters the premises seeking help. While USDA rules are, in the words of a spokeswoman, "fairly vague," our research revealed that, in order to qualify for federal help, faith-based agencies must offer aid before asking clients to engage in religious activities such as prayer. That's not how CSC does it.

If praying first were a matter of conscience for CSC, we would not question it. It is not the place of any newspaper to offer comment on anyone's principles of faith. However, that just isn't the case here. When asked if they would be willing to provide aid, then offer religious counseling afterward (which is also the example Jesus set in the New Testament), CSC Executive Director Kay Daly seemed to suggest the current routine was a matter of custom and habit, not faith. "After clients are processed and given aid, there's never been a time to be quiet to pray with them," she said.

We find that a remarkable admission. Surely CSC could have altered its routine if it meant filling more empty bellies. The food that didn't go to CSC wasn't lost. According to USDA, every bit of it went to other agencies in Florida Gateway Food Bank's four-county coverage area. In addition, CSC says local churches and others have multiplied their efforts to make up the deficit at that agency. All well and good. Still, the need here remains far greater than available resources. The CSC board of directors should have ended this costly impasse long ago, and been willing to change the agency's timeworn routine, if need be, so that more of our neighbors could eat.

France bans child beauty pageants

Fanned by fears that their sex-obsessed society is producing hypersexual young girls, the French Senate has voted to ban adults from entering a child under age 16 into a beauty pageant. The measure, approved in Paris Wednesday on a 197-146 vote, would set criminal penalties of two years in prison and 30,000 euros (roughly $40,500) in fines in an effort to protect girls from becoming sexualized too early. The measure next goes to the National Assembly for further debate.

The bill was written by conservative French lawmaker Chantal Jouanno, who said the hypersexualization that touches children between 6 and 12 years old strikes at the foundation of French equal rights laws. The ban was an amendment to broader legislation intended to increase gender equality. Jouanno was commissioned by the French health ministry in 2011 to report on hypersexualization of children following outrage at a Paris Vogue magazine photo display that showed 10-year-old model Thylane Léna [...] immediately after her 1996 murder in Boulder, Colo.

But crossing the line on child sexuality has long been a controversy for both countries. French film director Louis Malle produced the American-underwritten 1978 film "Pretty Baby," which detailed the life of a New Orleans child prostitute played by 12-year-old actress Brooke Shields, who appeared nude in the film.
The most common reaction this week by Americans who've posted comments on Twitter and on blogs about the proposed ban on child beauty pageants has been to tell French lawmakers: Merci.

Dale McFeatters is editorial writer for Scripps Howard News Service. [email protected]
Betsy Hart hosts the "It Takes a Parent" radio show on WYLL-AM 1160 in Chicago. betsysblog.com

Homecoming at Fort White High
Photos by JASON MATTHEW WALKER / Lake City Reporter
ABOVE LEFT: Fort White High School senior Rebecca Bailey (left), 18, receives her crown from 2012-13 Fort White Homecoming Queen Taylor Haddox after being named 2013-14 Fort White Homecoming Queen at halftime during Friday night's football game against Chiles. "I'm honored, very honored," Bailey said. "I'm surprised they picked me. I'm already the student body president." ABOVE: Homecoming Queen Rebecca Bailey and King Braden King pose for a photograph. LEFT: Rebecca Bailey (from left) poses for a photograph with Homecoming Queen nominees Amanda Kasaed, Khadijah Ingram, Kaemeli Gutierrez and KaShanique Cook.

COMMUNITY CALENDAR
Sept. 23: SVTA board meeting. The Board of Directors of the Suwannee Valley Transit Authority meets Monday, Sept. 23 at 6 p.m. at the Suwannee Valley Transit Authority HQ Building, 1097 Voyles St. SW, Live Oak. The meeting is open to the public.
Lake City SAR meeting: The beverages are a separate, individual cost.

OBITUARIES
Jessie Lee Feagle
Jessie Lee Feagle, 29, of Lake City passed away September 15, 2013 at Bradford Terrace Nursing Home following a lengthy illness. Jessie was a lifelong resident of Lake City. In his spare time he loved drawing, racing and hanging out with friends. Jessie never met a stranger and was of the Baptist faith. Jessie was preceded in death by his mother, Mary Lou Feagle. He is survived by his twin sister Mekayla Morrison Ford (Robert); Sherry Harrington (David), Lorraine Feagle, Brandi Miller (Jimmy), Kaitlin Morrison and Catelin Moore; brothers Howie and Harvey Feagle, Dustin Morrison (Jessica), Michael Morrison, P.J., Justin and BoBo Moore. He is also survived by his custodial parents, Robert and Tina Morrison and Paul and Carol Moore, whom he also considered Mom and Dad; nephews Wendell, Ethan and Blake; nieces Kayleigh, Brynsleigh and Taylor. Numerous aunts, uncles and other relatives and friends also survive. A memorial service celebrating Jessie's life will be Sunday, September 22, 2013 at 2 pm at the Hopeful Baptist Church, East Campus.

John W. McCarthy, Sr.
John W. (Jack) McCarthy, Sr. passed away suddenly on August 5th in Tallahassee, FL. Born January 4, 1930 in Jersey City, NJ to Eugene and Margaret McCarthy. In 1946 he joined the Merchant Marines until the end of World War II in 1947. In 1951 he joined the Army and spent 2 years in Korea until 1953. He is predeceased by his parents; 2 brothers, Eugene and Thomas; and 3 sisters, Teresa, Margaret and Nancy. He was a wonderful husband, father, grandfather and great grandfather. He so loved his grandchildren. He is survived by his wife of 57 years, Kathleen (Lynch) McCarthy. They had 4 children: John (Jack) McCarthy (Kim) of Fort White, Michael McCarthy (Barbara) of Oakridge, NJ, and Diane McCarthy Lee; thirteen grandchildren and four great grandchildren.
John's daughter Kathleen Bishop passed away seven days after, in Tallahassee, FL. Services will be held on Saturday, September 28th at 11:00 am at the Epiphany Catholic Church with Father Michael Pender. Burial will follow on Monday, September 30th at 12:30 pm at the Jacksonville Veterans Memorial.

Scott Anderson Gillen
Scott Anderson Gillen, 27, of Gainesville died suddenly from a motorcycle accident Friday morning on S.R. 121 near LaCrosse. He was born in Jacksonville, but lived most of his life in Gainesville. He graduated from Santa Fe High School and was a student at Santa Fe Junior College in the welding program. He attended the Advent Christian Church of Lulu. He was preceded in death by his maternal grandmother, Jan Thrower. He is survived by his parents, Geoffrey and Cindy Thrower Gillen of Gainesville; his sister, Kristen Smith and her husband John; nephew Isaac Smith of Gainesville; his paternal grandparents, Roland and Betty Gillen of Lulu; and his maternal grandfather, Al Thrower of Keystone Heights. Funeral services will be held Friday, Sept. 27, 2013 at 11 a.m. in the Advent Christian Church of Lulu with Rev. Butch Nelson. Burial will be at another date. Visitation will be held at the Archer Funeral Home in Lake Butler Thursday evening from 6 to 8 p.m. Archer Funeral Home, 386-496-2008, is in charge of arrangements. Please sign the guestbook at archerfuneralhome.com.
Obituaries are paid advertisements. For details, call the Lake City Reporter's classified department at 752-1293.

For young genius, tests are "easy peasy as pie"
By STEVEN RICHMOND, [email protected]
A walk inside six-year-old Katelyn Goff's bedroom reveals the typical trappings of a first-grade girl: Disney princess memorabilia, stuffed animals, Legos, a tiara and other pink personal effects. But little Katie Goff is a card-carrying member of Mensa, a high-IQ society that requires its members pass an intelligence test as part of the membership process. "We knew she was very bright for a while," her mother Jamie Simpson said. "She's always been read to and just picked it up on her own. She didn't need to sound it out or anything, she just did it." Katelyn was given the Kaufman Brief Intelligence Test and Reynolds Intellectual Assessment Scales when she was a five-year-old kindergarten student at Summers Elementary. Descriptors such as "very superior," "gifted" and "exceptional" were used by the school's psychologist Lance O. Hastings to describe her cognitive abilities. "Katelyn presents the profile of a very friendly and mature child for her age," Hastings said in his psychological report, just inches below her 99th percentile rankings in Verbal Intelligence and Composite Intelligence scores. "Every parent likes to think their child is special and bright," Katelyn's father Don Goff said. "But now we can say ha, told you so. They said her verbal IQ was four points beyond the chart." Katelyn's mental aptitude is evident in how she communicates with others and the way she responds to questions. She chooses her words thoughtfully, carefully pronouncing each syllable as if she were building a house of cards. When asked how challenging her aptitude test was, she cheerfully replied, "Easy peasy as pie." Her favorite book is Goodnight Moon, a bedtime story she enjoys so much that she memorized the entire story as a one-year-old, according to her mother. "It makes me want to go to sleep," Katelyn said. Her palate is as mature as her mind.
She's a fan of Brussels sprouts and broccoli covered in lemon juice. During her interview with the Reporter, Katelyn and her mom ran through a series of small mental tests, such as naming the chemical symbols for various elements (including beryllium and potassium) and performing simple single-variable algebra equations. "I would've never believed I'd have a child doing algebra at six," Goff said. Katelyn's parents say they're worried about boredom affecting their daughter's performance in school. They said they plan to meet with her curriculum counselor on Tuesday to give her more challenging work. "I had a dream the day before yesterday," Katelyn said, "that I took FCAT in the first grade." "How did you do?" her mother asked. "I scored 16 hundred thousand points," she replied. Given her track record thus far, a score like that doesn't seem unlikely.

PIG: Good eatin' at the Columbia County Fairgrounds for 5th annual event (Continued From Page 1A)
While waiting for her husband to return to Lake City from his out-of-town job, Sarah Ripple let her son Charlie play on the bounce houses and earn prizes at miniature golf. Charlie, 3, putted the golf balls into the hole several times, winning gum and a Slinky. "You can't go wrong with bouncy houses and barbecue," Sarah Ripple said. Lake City local Thomas Henry leads the Budmeisters, who competed again this year in all four categories. At approximately 11:45 a.m. on Saturday, Henry was prepping his ribs box for the judges. Though he was expected to turn in at least 8 ribs, he plated 10 large ribs and then cleaned the edges of the Styrofoam box with Q-Tips and napkins. "It has to be spotless, uniform and look good," Henry said. "It's very serious ... but we all have a good time. Most of the competition teams are just like a big family. Ninety percent of us know each other." Lake City residents and visitors from out of town milled about the spacious festival. Kids ducked into the bounce houses, disappearing from view until they plummeted down the exit slide. Adults examined jewelry, bird houses, clothes, purses and more. Though local Sylvia Pepper hadn't sampled any of the food by about 2 p.m., she planned to taste a couple of nearby competing teams, especially a smaller grill set-up from Alachua. "The smaller ones are probably better," she said. "I haven't tried it yet, but it smells good." Wellborn-based Fenced-In BBQ had a long line in front of its booth, as staff handed out Boston butt sandwiches and rib slabs. Owner Lawrence Rentz said the ribs were his bestseller. Even though he cooked about 320 pounds of food Saturday, he expected to run out by 3 p.m. He started cooking the food at 1 a.m., so it had time to sit for six hours. Rentz likes to cook it "low and slow," he said. "It's a great crowd," Rentz said. "I can't wait for next year." Budmeisters and Fenced-In BBQ had some local competition from the Lake City Fire Department cook team, Black Helmet BBQ. Four firefighters joined together to cook at the Smokin' Pig for the first time. Black Helmet BBQ competed in all four categories. The group plans to come back next year. "We're excited to compete at such a high level," Austin Thomas said. "We barbecue at the fire department all the time. The others always tell us how good it is." For Columbia County resident Martin Munoz, Saturday was his first time at the Smokin' Pig. He had just arrived at the festival and was excited to try the food. Already Munoz was standing in a long line extending from one of the barbecue booths. "I've been saving myself all day for these ribs," he said.
CPA graduation set for Monday
By AMANDA WILLIAMSON, [email protected]
The Lake City Police Department will graduate students from its second session of the Citizens Police Academy on Monday at the First Baptist Church of Lake City. The Citizens Police Academy is a 12-week course held on Thursday evenings from 6 p.m. to 9 p.m. The program is offered twice a year to citizens who wish to participate and can pass a background screening. LCPD hopes to educate citizens on the role of the police department by using the free, hands-on, interactive program. "I feel that it is important to have citizen-police interaction and cooperation," said Chief Argatha Gilmore in the press release. "One way to achieve this is through an exchange of ideas and education. The Citizens Police Academy is an excellent tool in achieving this. Congratulations graduates!" The men and women graduating will have completed classes in criminal investigations, criminal law, firearm safety, community relations, crime prevention, internal affairs and more. The academy includes speakers, case studies, a mock trial and field trips. For more information on the course or to enroll for the next session in January 2014, call the LCPD Community Relations Unit at 386-719-5742.

JASON MATTHEW WALKER / Lake City Reporter: Six-year-old Mensa member Katelyn Goff finds the value of x while completing algebra problems as parents Don Goff and Jamie Simpson look on. "Every parent likes to think their child is special and bright," Don Goff said. "But now we can say, Ha, told you so."

2 face fraud charges after traffic stop
By AMANDA WILLIAMSON, [email protected]
Florida Highway Patrol troopers allegedly discovered six driver's licenses, five Social Security cards, seven credit cards, six checkbooks and one passport in a car rented by a Fort Lauderdale man. According to an FHP arrest report, Ahmad Xavier Hall, 23, of Fort Lauderdale, and Clifton Oneal Robinson, 23, also of Fort Lauderdale, face charges of larceny, fraud for illegal use of credit cards, fraud for impersonation, and a moving traffic violation. Hall was also arrested on charges of marijuana possession and drug equipment possession. Trooper J.C. Lemery stopped Hall and Robinson in a rented 2013 gray Chevrolet Impala as it headed north on Interstate 75. As the vehicle slowed, Lemery watched the driver, Robinson, switch seats with Hall before finally stopping, the report read. Lemery opened the rear passenger door of the Chevrolet, smelling the strong odor of marijuana. Green marijuana residue was scattered across the rear floorboard, the report continued. Both Hall and Robinson were detained. A thorough search of the vehicle revealed two bundles held together by a rubber band stored behind the rear passenger side quarter panel. The bundles contained a passport, six checkbooks and a woman's wallet. The wallet held several women's driver's licenses, Social Security cards and credit cards, the report stated. According to the FHP report, it appeared as if someone had been practicing signatures on the back of the checks. Lemery also said he noticed that someone had been writing checks to the other victims for amounts at and above $1,000.
The report states that it is typical for people who look similar to the victim to attempt to cash illegally written checks. Hall and Robinson admitted to switching seats because Robinson had a suspended driver's license, but denied any knowledge of the illegal items. Hall is currently being held at the Columbia County Detention Facility on a $192,000 bond, while Robinson is being held at CCDF on a $201,000 bond.

Empty chairs at VA honor POWs, MIAs
By AMANDA WILLIAMSON, [email protected]
Six empty chairs surrounded a table set with black plates, a lit candle and a service hat from each branch of the military during the POW/MIA Recognition Day Ceremony at the Lake City VA Medical Center Friday. Held on a yearly basis, the ceremony remembers the men and women held hostage as prisoners of war or those who went missing in the line of duty. According to Nicky Adams, assistant chief of Voluntary Services, many of America's veterans made the ultimate sacrifice, a fact that the rest of the nation should not forget. "It takes a community to remember," she said. "We can't do it by ourselves. Each American needs to do his or her part." Rows of community leaders and military veterans filled the auditorium at the VA Medical Center. The Marine Corps League donned their signature red and gold caps, while on the opposite side of the room members of the VFW and the Military Order of the Purple Heart Chapter 772 watched the ceremony from the sidelines. As the Table Ceremony progressed, AmVets Chaplain Jerri Watkins described in detail what each aspect of the ceremony meant. The dinner table laden with plates, silverware and service hats is symbolic of the POW/MIA spirit. It is round to show Americans' everlasting concern for the missing comrades. The Bible represents the strength gained through faith to sustain those lost from our country, founded as one nation under God. "There are more than 88,000 warriors who are still unaccounted for from conflicts past, and still their families wait," she said. "Our work is not done. More than 140,000 Americans since WWI have endured the hardships of captivity. Those sacrifices for freedom must never be forgotten." She continued by thanking the U.S. Merchant Marines, the United States Navy, the United States Marines, the United States Coast Guard, the United States Air Force and the United States Army. As she listed each branch of military, members of the Florida Youth Challenge Academy carried the caps to the table, placing them next to each plate, facing the audience. "American warriors of yesterday and today have never failed to answer their nation's call," Watkins said. "Through their selfless sacrifice they have brought a concept most associated with American ideals: Freedom. A precious word. A word with so many meanings to so many people." After all the branches had been honored, Watkins pressed play on a portable CD player. A rendition of Taps filled the room. The Florida Youth Challenge Academy students, the Marine Corps League and the veterans spread throughout the audience rose to their feet and saluted the empty table. A World War II veteran in the front row stood from his wheelchair, a bit wobbly and aided by fellow audience members. He too saluted his missing comrades. Guest speaker Marvin Lane of Chaplain Service read a letter by his father-in-law, Vice Admiral Diego E. Hernandez, father of Lake City VA ambassador Dolores Lane. Hernandez was the first Hispanic to be named Vice Commander, North American Aerospace Defense Command. He flew two combat tours in Vietnam.
His military decorations and medals include the Silver Star, Purple Heart, Legion of Merit, Distinguished Flying Cross and more. "POWs are my heroes," Hernandez stated in his letter. "I have known many of them. I have flown with many of them. Some in my airwing, some in my squadron. When I knew them before they were shot down and captured, I saw them as typical naval aviators: confident, self-confident and perhaps a little cocky. But after they were shot down, each man became a beacon of hope to the other captives against the brutal actions of their captors." The senior officer of the North Vietnamese prison camp, Jim Stockdale, established the rules the other POWs were to follow, Hernandez continued. To avoid being paraded in front of anti-U.S. journalists, Stockdale slashed his scalp and disfigured himself by beating his face with a stool. Other POWs Hernandez honored in his letter were Bill Lawrence, who learned to read French while captured, and Sen. John McCain. McCain was offered release from the camp due to his family connections, but chose to stay a prisoner until all his fellow comrades were released. It took five years. "Can you imagine the courage and integrity it took to refuse an offer to be released from hell?" Hernandez asked in his letter. "I have learned that families too are affected deeply by war, and they merit our support and our thanks for what they too have endured. To those of you in the audience who have borne the brunt of our wars, I salute you."

AMANDA WILLIAMSON / Lake City Reporter: Makaila Lestenkof, of the Florida Youth Challenge Academy based at Camp Blanding, carries a service hat for the United States Coast Guard during a Table Ceremony Friday at the Lake City VA Medical Center. The ceremony honored current prisoners of war and military members missing in action on POW/MIA Recognition Day.

UNEMPLOYMENT: Continued From Page 1A
... people in Columbia County's labor force, 28,915 of whom had jobs. An estimated 2,052 people were unemployed. In July there were 30,932 people in the Columbia County labor force and 28,778 had jobs. There were an estimated 2,154 who were not employed, accounting for the 7.0 percent unemployment rate. In August 2012, Columbia County's unemployment rate was 8.3 percent. Wynne said seasonal employment in occupations such as agriculture, tourism and particularly education drives the employment numbers between July and September each year. "It is very common to see this fluctuation each year," she said of the unemployment numbers. Wynne also noted there are employment opportunities in other local fields of employment. "We have positions available in the medical field with various employers, aviation opportunities, and county, state and private employment have numerous postings," she said. "Of course those who are seeking employment, as well as employers seeking staff, can contact Florida Crown Workforce Board staff at 386-755-9026 for assistance, or visit [the board's website] for the latest job postings, labor market information and much more." Monroe County had the state's lowest unemployment rate at 4.0 percent. The highest unemployment rate in the state was Hendry County with 15.4 percent.

GIFTS: Teen gives his birthday presents away (Continued From Page 3A)
The 14-year-old said he took it hard when they passed away, but they inspired him to help other people's grandparents.
"He doesn't like to blow his own horn, but it's not about him, you know," Chris Colley said. "It's about the people he is helping." One of Skyler's friends, Garrett Cook, decided to contribute to the cause after Skyler asked him to help. Garrett and his mom donated socks, cream and other necessities. He also attends the nursing home with Skyler on occasion. Garrett believes his friend is doing a good job. "I want to get more kids involved," Helen Colley, Skyler's mom, said. "I want them to understand how important this is."

Yoho votes to defund ACA
From staff reports
WASHINGTON, D.C. - U.S. Rep. Ted Yoho, R-Gainesville, said he voted to defund the Affordable Care Act, or Obamacare, because the law "continues to be disastrous for America." The House of Representatives on Friday passed a stopgap bill that funds the government until Dec. 15 and permanently defunds Obamacare. The legislation keeps sequester-level cuts in place. The bill passed 230-189. "Every fear of the phrase we have to pass it to see what is in it has been realized," Yoho said after the vote.

[Page 8A weather package: regional forecast map for Sunday, Sept. 22, with Monday and Tuesday city forecasts; Lake City almanac, sun, moon and tide data; five-day forecast (highs in the mid-to-upper 80s, lows near 67, with a chance of storms); national extremes of 97 at Needles, Calif., and 26 at West Yellowstone, Mont.; national and world temperature tables. Forecast discussion: high pressure over northern New England as low pressure begins to move away to the northeast; showers and thunderstorms likely along a frontal boundary from portions of the Southeast to the central Gulf Coast. Weather history: on this date in 1890, a severe hailstorm hit Strawberry, Ariz. The size of the hailstones went unreported, but it was said that five days after the storm, hail remained 1 to 1.5 feet deep in some parts of the town.]
Homecoming bash for Indians
Fort White beats Chiles, 35-14
By TIM KIRBY, [email protected]
FORT WHITE - After a week of homecoming festivities, Fort White High's football team started a little slow against Chiles High on Friday. The Indians hit high gear and turned a 14-7 halftime lead into a 35-14 finish. One player to whom "slow" can never be applied is running back Tavaris Williams. After consecutive 200-plus yard games, Williams was even better against Chiles. He carried 18 times for 364 yards. Williams had touchdown runs of 63, 59, 80 and 44 yards. One run of 99 2/3 yards to the end zone was called back for an unsportsmanlike conduct penalty when Williams waved to the crowd at the 15-yard line. It turned the run into a mere 69-yarder. Williams said he saw his grandmother and was waving at her. (INDIANS continued on 3B)
JASON MATTHEW WALKER / Lake City Reporter: Fort White High's Tavaris Williams (2) rushed for 364 yards against Chiles High.

District destroyer
Columbia makes easy work of Terry Parker, 63-13
By BRANDON FINLEY, [email protected]
If Columbia High was looking to make a statement in District 4-6A, consider it done after a 63-13 demolishing of Terry Parker High. The Tigers scored early and often against the Braves and never punted in the contest. Lonnie Underwood matched a single-game rushing touchdown record he set last week with another five-touchdown performance, and Nathan Taylor had a breakout game with four touchdown passes. Underwood set the tone early with a 41-yard touchdown run on the Tigers' third offensive play with 8:02 remaining in the first quarter to give Columbia a 7-0 lead after Brayden Thomas connected on the extra point. Zedrick Woods sacked quarterback Mark McCoy on third down of the Braves' second possession and Columbia went back to work. This time, it was Taylor who finished the drive after Underwood had runs of 28 and 10 yards to put the Tigers in scoring position. Taylor found defensive standout Terry Calloway, in at fullback, for a nine-yard scoring play with 5:17 remaining in the first quarter. The highlight of Terry Parker's night came on the following kickoff when Cornelius Fleming returned it 98 yards for the score, but it was the last of the positives for the Braves until the game's final play. Taylor had his second touchdown pass of the game when he connected with Alex Weber from 44 yards out to give Columbia a 21-6 lead with 40 seconds remaining in the first quarter. The big plays kept coming in the second quarter, beginning with Underwood's 45-yard touchdown run with 10:22 remaining in the half. Carlos Vega's sack stalled the next Terry Parker drive, and it was back to work for the Tigers. This time, it only took one play for Taylor to find Akeem Williams for a 34-yard touchdown pass. (CHS continued on 3B)
BRENT KUYKENDALL / Lake City Reporter: Columbia High's Lonnie Underwood looks to score against Terry Parker High in the Tigers' 63-13 win in District 4-6A play in Lake City on Friday.

CHEAP SEATS, Tim Kirby
Great Lake City grands
Chiles High's head junior varsity football coach is Philip Browning.
Yes, the grandson of that Philip Browning, who ruled Lake City Junior High as its principal for many years. His grandmother was Ethel Browning, who taught at LCJH. Browning's maternal grandparents were G.T. (Doc) and Sarah Melton. Melton was a state senator from 1959 through the 1966 term. During World War II, in 1944 when Columbia High didn't have a football coach, Melton volunteered to do the job and the Tigers played a 10-game season. Browning is the son of Philip Browning Jr. and Patricia Melton Browning. He said his mom lives in Tallahassee and his dad still lives in Atlanta. Browning walked on at Florida State and later graduated as a Seminole. "My roommate at Florida State was a coach and as soon as I was done playing, he put me in touch with Coach (Mike) Lassiter," Browning said before the game against Fort White on Friday. "I had been a volunteer coach at the school and Coach Lassiter said he had an opening for a JV line coach." As strict as Mr. Browning was as principal (stay off the grass, stay out of the halls, lunchtime raids at Mrs. Lula Mae's store to catch smokers, etc.), Mrs. Browning was equally sweet. "I think he passed away when I was 9," Browning said. "I stayed at their house and saw a lot of them when we came to Lake City. I teach culinary arts at Chiles and my grandmother was the one who inspired me to get into the kitchen. She was such a good cook, I wanted to know how she did it." Browning was close to his other grandparents. Sen. Melton was a golfer who once tried to qualify for the U.S. Open. "We talked golf every time I was over there," Browning said. "I gave him a golf club for a gift and got it back and I still use it." Browning also remembered his late uncle, Tommy Browning, a bombastic legend in Lake City. "I had a lot of dealings with him," Browning said. "I figured there was no better way for him to go out than in a homemade pine box, wearing his orange Gator jacket, and with When the Saints Go Marching In playing." Browning is married to Jessica and they are expecting their first child in March. He is the youngest of three children, joining Callie (Ryan) Hudak, who lives in San Antonio, and Sarah (Edward) Hales, who lives in Atlanta. The Hudaks have a 1 1/2-year-old daughter and the Hales have a 2-year-old son, so Browning has had some practice before he becomes a dad.
Tim Kirby is sports editor of the Lake City Reporter. Phone: (386) 754-0421. [email protected]

[2B Scoreboard: Sunday and Monday TV sports listings; Major League Baseball AL and NL standings through Saturday (x-Boston clinched the AL East; x-Los Angeles clinched the NL West) with Sunday's and Monday's pitching matchups; career grand slam leaders; NFL standings; the auto racing week ahead (Sprint Cup Sylvania 300 at New Hampshire Motor Speedway, Formula One Singapore Grand Prix, Texas NHRA Fall Nationals, American Le Mans Series at Circuit of the Americas); WNBA playoff conference semifinal results and schedule.]

Top female at Stephen Foster 5k (COURTESY): Michelle Richards of the Step Fitness Run Club was the overall female runner with a time of 20:24 at the Stephen Foster 5k on Sept. 7.
Other Step Fitness runners were: Alex McCollum, first in the 11-19 men's division; Charlotte Amparo, first in the 30-39 women's division; Tony Richards, second in the 30-39 men's division; Mary Kay Mathis, second in the 40-49 women's division; also Shayne Morgan, Buddy Haas, Julio Amparo, Valerie Amparo and Savannah Amparo.

Wolves roughed up by Broncos
From staff reports
Richardson Middle School's football team has dealt out a couple of big defeats this season, but the Wolves got a taste of the other side with a 32-0 loss to Madison County Central School on Thursday. Teryon Henderson scored all five touchdowns for the Broncos. It was homecoming for Richardson, and Kamaya Bennett was crowned queen at halftime. Mackenzie Crews was Miss Congeniality. Diamond Ross also was a queen nominee. Kaden Jones was crowned king earlier in the week. Richardson (2-1) plays at Suwannee Middle School at 7 p.m. Thursday.

Lady Tigers golf
Columbia High's girls golf team lost 193-178 to Buchholz High at Haile Plantation Golf & Country Club on Thursday. Gillian Norris shot 42 for Columbia. Columbia (5-2) hosts Gainesville High at 4 p.m. Monday at Quail Heights Country Club.

Branford golf
Branford High's boys golf team had two wins last week at Quail Heights. The Bucs beat Aucilla Christian Academy 143-148 on Tuesday and Lafayette High 187-205 on Thursday. Aucilla Christian only brought four golfers, so the schools agreed to use three scores. Branford's Tyler Allen was medalist with a 45. Tyler Bradley shot 48 and Rylee McKenzie shot 50. Allen also was medalist against the Hornets with a 43. Hunter Hawthorne shot 44, McKenzie shot 46 and Bradley shot 54.

INDIANS: Off this week (Continued From Page 1B)
"I knew they didn't have the speed to run with him," Fort White head coach Demetric Jackson said. "They didn't tackle that well and he made a couple of guys miss." Williams' first touchdown came at 5:55 of the first quarter and he added another with 1:57 left before halftime. Chiles quarterback Trey Melvin hit Jonathan Scaringe on the first of two touchdown passes to pull the Timberwolves within seven points at intermission. That lasted 20 seconds when Williams broke his 80-yard run on the first play of the second half. After a TD pass from Melvin to Marcus Holton, Williams scored at 2:15 of the third quarter. Isaiah Sampson put on the exclamation point with a 40-yard interception return for a touchdown with 7:29 left in the game. Melton Sanders tacked on his fifth extra point. "I was just in the zone," Williams said. "I feel like I get better every week. I owe it all to my linemen." Two of those up-front blockers are seniors A.J. Kluess and Chris Waites, who were able to celebrate their last homecoming win. "That's all you can ask for as a senior," Waites said. "We put in so much work over the summer. It's good to get the whole community out and finish the work for them." "It feels great to win," Kluess said. "We worked hard." Both like blocking for Williams, and nine others also carried the ball for a total of 516 yards. "We are told to finish our blocks no matter what," Waites said. "It's a good feeling to know somebody can hit it and you don't have to wait on him." "He makes it easier," Kluess said. "He has good vision and is ridiculously fast." Jackson didn't like the Indians' three turnovers, but was pleased with the result and sending Chiles to 0-3. "We got a good win," Jackson said. "When you come out and beat a 7A team, it is always good for us. We had some miscues here and there and gave up a couple of big plays, but our defense played well. They swarmed to the ball. Our run defense is pretty solid."
Fort White (3-0) has a week off before starting District 2-4A play at Fernandina Beach High on Oct. 4.

CHS: Tigers open district with win (Continued From Page 1B)
Terry Calloway recovered a fumble for the Tigers at the Braves' 19-yard line and Underwood added to Columbia's lead with a six-yard score to make it 42-6. Taylor ended his 4-of-5 night with his fourth touchdown pass in as many completions, a 51-yard thriller to Michael Jackson. Taylor ended the game with 138 yards and four touchdowns. "It felt good to come in and perform, because coach (Brian Allen) put an emphasis on it in practice," Taylor said. "We wanted to show people that we could pass it too. It feels great." The second half began with a running clock, but Underwood wasn't done adding to his totals in the Tigers' two second-half possessions. His first attempt went to the house on the first play of the second half as Underwood cut upfield and outran the defense for a 60-yard score. Bryan Williams and Woods both had sacks on the Braves' following possession, and Roger Cray returned a punt 40 yards for a touchdown, but it was called back due to a penalty. Underwood capped off the night with a seven-yard rushing touchdown to round out the scoring for the Tigers, but Terry Parker did add a final score on the game's final play when Fleming scampered 50 yards for the 63-13 final. Still, it was a statement game for the Tigers in their first district contest. "This is why we practice so hard during the week," Allen told the team after the game.

GAMES
Monday: Columbia High girls golf vs. Gainesville High at Quail Heights Country Club, 4 p.m.; Fort White High volleyball at Chiefland High, 6 p.m. (JV-5)
Tuesday: Columbia High girls golf vs. Branford High at Quail Heights Country Club, 4 p.m.; Columbia High boys golf vs. Gainesville High at The Country Club at Lake City, 4 p.m.; Fort White High volleyball vs. Bradford High, 6 p.m. (JV-5)
Thursday: Columbia High swimming vs. Ridgeview High, Baker County High, 4:30 p.m.; Fort White High volleyball at Lafayette High, 6 p.m. (JV-5); Columbia High volleyball vs. Orange Park High, 6:30 p.m. (JV-5:30); Columbia High JV football vs. Suwannee High, 7 p.m.; Fort White High JV football at Union County High, 7 p.m.
Friday: Columbia High volleyball in Varsity Elite Tournament at Oak Hall School, TBA; Columbia High football at Englewood High, 7:30 p.m.
Saturday: Columbia High volleyball in Varsity Elite Tournament at Oak Hall School, TBA

BRENT KUYKENDALL / Lake City Reporter: Columbia High's Nathan Taylor scrambles against Terry Parker High on Friday.

Taylor shines in Tigers' 63-13 win
By BRANDON FINLEY, [email protected]
Columbia High had been looking to open up the passing attack all season coming into Friday's 63-13 win against Terry Parker High. Nathan Taylor, in his second start, finally helped the Tigers accomplish just that in impressive fashion. Taylor went 4-of-5 passing, and all of his completions went for touchdowns as he racked up 138 yards through the air during the first half. His lone incompletion bounced off the hands of a Tiger receiver. It was a performance that caught the eye of his coaches. "He did an outstanding job," Columbia head coach Brian Allen said. "He's a little gamer."
Offensive coordinator Mitch Shoup said that even when losing out on the starting job prior to the year, Taylor remained poised and knew his moment would come. "He told me that it was just going to make him work harder," Shoup said. "That's the kind of player that you want to have." And for Taylor, the moment to shine was on Friday. Still, he didn't want to take all the credit. "I think we played great," Taylor said. "The offensive line did a great job of protecting me. I had plenty of time. The wide receivers did a good job of getting open and making plays after they caught it. I feel like we're playing with confidence."

CHS WRESTLING: Tryouts are set for Tuesday
Columbia High wrestling tryouts begin at 3:30 p.m. (until 5:15 p.m.) Tuesday at the field house. Columbia is hosting a Ken Chertow wrestling camp on Oct. 12-13. Columbia and Suwannee county wrestlers will be offered a special rate. All proceeds from the camp go to support the Tigers. This is the second year for the camp at CHS. For details, call head coach Kevin Warner at (352) 281-0549 or coach Allen Worley at 965-7025, or e-mail monsta wrestling

SOFTBALL: Southern Pride seeks players
Southern Pride, a 12U softball travel team out of Valdosta, Ga., is looking for two position players and a seasoned pitcher for the remainder of its 2013-14 season. Southern Pride practices twice a week. The team plays ASA and USSSA competition. For details, contact [email protected].
From staff reports

Fort White 35, Chiles 14
Chiles: 0 7 7 0 (14)
Fort White: 7 7 14 7 (35)
First Quarter: FW, Williams 63 run (Sanders kick), 5:55.
Second Quarter: FW, Williams 59 run (Sanders kick), 1:57. C, Scaringe 13 pass from Melvin (Muldoon kick), :34.
Third Quarter: FW, Williams 80 run (Sanders kick), 11:40. C, Holton 12 pass from Melvin (Muldoon kick), 9:01. FW, Williams 44 run (Sanders kick), 2:15.
Fourth Quarter: FW, Sampson 40 interception return (Sanders kick), 7:29.
Team statistics (Fort White; Chiles): First downs 15; 11. Rushes-yards 36-516; 36-78. Passing yards 47; 100. Comp-Att-Int 5-11-2; 12-21-2. Punts-Avg. 2-29; 7-43.5. Fumbles-Lost 2-1; 2-0. Penalties 7-61; 7-33.
RUSHING: Fort White, Williams 18-364, Baker 3-46, Snider 4-37, Sanders 2-25, Chapman 1-16, Garrison 2-12, White 3-10, Bryant 1-5, Asuncion 1-2, Preston 1-(-1). Chiles, Scaringe 6-44, Wahlen 19-27, Koon 7-17, Melvin 4-(-10).
PASSING: Fort White, Baker 5-11-47-2. Chiles, Melvin 12-21-100-2.
RECEIVING: Fort White, Sanders 2-18, Chapman 1-17, Helsel 1-15, Williams 1-(-3). Chiles, Holton 3-40, Scaringe 3-4, Williams 2-18, Brown 2-13, Lane 1-13, Christmas 1-12.

Braves beatdown
BRENT KUYKENDALL / Lake City Reporter: Columbia High's Lonnie Underwood breaks into the open field in the Tigers' 63-13 win against Terry Parker High in District 4-6A play in Lake City on Friday.
BRENT KUYKENDALL / Lake City Reporter: Alex Weber runs free on a touchdown pass reception.
BRENT KUYKENDALL / Lake City Reporter: Carlos Vega (left) and Bryan Williams (right) combine for a sack in the Tigers' 63-13 win over Terry Parker High.
BRENT KUYKENDALL / Lake City Reporter: Columbia High's Carlos Vega sacks Terry Parker High quarterback Mark McCoy.
BRENT KUYKENDALL / Lake City Reporter: A group of Tigers attempt to block a Terry Parker High punt on Friday.
Indians trample Timberwolves
JASON MATTHEW WALKER / Lake City Reporter: Fort White High's football team breaks through a banner before the homecoming game against Chiles High on Friday.
JASON MATTHEW WALKER / Lake City Reporter: Blair Chapman celebrates after making a big tackle against Chiles High.
JASON MATTHEW WALKER / Lake City Reporter: Fort White High's Christian Helsel trucks through a number of tacklers on Friday.
JASON MATTHEW WALKER / Lake City Reporter: Fort White High's Tavaris Williams cuts in between two Chiles High defenders on his way to scoring one of his four touchdowns on Friday.
JASON MATTHEW WALKER / Lake City Reporter: Indian defenders attempt to strip the ball from a Chiles High runner on Friday.

JASON MATTHEW WALKER / Lake City Reporter: Florida's Darious Cummings (55) runs with the ball after making an interception against Tennessee at Ben Hill Griffin Stadium in Gainesville on Saturday.

Gators defeat Vols with new quarterback
By BRANDON FINLEY, [email protected]
Saturday Night Live doesn't premiere until next week, but the Southeastern Conference opener for Florida certainly looked like a joke in the first half before the Gators took control in a 31-17 win against Tennessee on Saturday. The two teams combined for seven turnovers (including a fumble on a punt) in the first half as Florida built a 17-7 lead, but the second half was played without a turnover and the Gators were able to pull away. Besides the turnovers, the story of the first half was Florida quarterback Jeff Driskel going down with an injury. Gator coach Will Muschamp said that Driskel wouldn't return for the rest of the season after an injury sustained while throwing an interception for a touchdown to Devaun Swafford with 10:05 remaining in the first quarter. "I hurt for him," Muschamp said. "I hurt for us, because Jeff's a good player.
It's going to hurt us." Driskel's replacement, Tyler Murphy, showed that he could be a dual threat, as he rushed for 84 yards and a touchdown while completing 8-of-14 passes for 134 yards and a touchdown. His first touchdown pass as a Gator was mostly the result of Solomon Patton; a screen pass to the receiver resulted in a 52-yard touchdown. It was one of the few positives in the first half for either team. Florida's next score came after defensive lineman Darius Cummings intercepted a Nathan Peterman pass and rumbled 30 yards to set the Gators up at the Volunteers' 40-yard line. A seven-play, 40-yard drive was capped off by Mack Brown from three yards away. Murphy again made a play with his legs on the drive with a 12-yard scamper on third-and-6 to keep Florida moving. Murphy did it with his arm on Florida's first possession of the second half with a 31-yard completion to Quinton Dunbar. It was part of a touchdown drive capped off by Matt Jones on a four-yard run. Tennessee had cut Florida's lead to seven points with a 44-yard field goal from Michael Palardy on its previous possession. Murphy had already found the end zone with his arm, but he found it with his legs to cap off Florida's second drive of the second half and put Tennessee away. The Gators used 5:26 of the game clock to take a 21-point lead early in the fourth quarter after Murphy rushed for seven yards and a touchdown on third-and-6. Justin Worley replaced Peterman in the second half for the Volunteers, and he was able to muster one touchdown without turning the ball over. Worley found Alton Howard for an 18-yard score with 10:20 remaining, but Tennessee wouldn't get any closer than 14 points. It wasn't pretty at times, but in the end it was a win in the SEC, something Muschamp and the Gators will take. "That was a true team victory, doing what we had to do to win the game," Muschamp said. "That's where we are."

Seminoles join in routs of Sunshine State programs
Associated Press
TALLAHASSEE - No. 8 Florida State rolled past Bethune-Cookman on Saturday. Karlos Williams finished with 83 yards rushing and two touchdowns, and James Wilder Jr. added 56 yards and a touchdown. The Wildcats (3-1) scored their lone touchdown off a seven-yard run from quarterback Jackie Wilson at 8:21 of the third quarter. Florida State ran away in the second quarter, but all three starting receivers had dropped passes, including two for touchdowns. The defense also missed several tackles.

No. 4 Ohio State 76, Florida A&M 0
COLUMBUS, Ohio - Kenny Guiton set a school record with six touchdown passes, all in the first half, to lead No. 4 Ohio State to a victory against Florida A&M on Saturday. It was an epic mismatch between a team with national-title aspirations and a Football Championship Subdivision member getting a $900,000 guarantee. FAMU (1-3), which suffered its worst loss ever, trailed 48-0 before picking up its initial first down in the second quarter. Guiton completed 24 of 34 passes for 215 yards. His TD passes went to five different receivers.

No. 7 Louisville 72, FIU 0
LOUISVILLE, Ky. - Teddy Bridgewater threw four touchdown passes and Louisville's defense allowed a school-record-low 30 yards, helping the Cardinals blow out Florida International. It was the highest scoring game for the Cardinals (4-0) since a 73-10 victory over Murray State in 2007.
The Lake City City Council accepted a bid on the project at the Tuesday, Sept. 3 meeting when city council mem-bers authorized a contract with Union LaSteel Metal Buildings of Lake Butler to erect the structure. The projected total cost is estimated at $187,492. The contractors shall fully complete all work required under this agree-ment within the first 90 calendar days from the date any equipment or building materials have been delivered to or placed on the construction site or 90 calendar days from the date the city issues contractor notice to proceed, said Jackie Kite, Lake City Community Redevelopment Agency administrator. The base contract amount for the pavilion was $146,889. Council accepted the bid for the pavilion as well as accepted bids for some additional costs associated with the project. The associated costs included 60x80 feet of roof liner panels for $10,103 Lake City Reporter Week of September 22-28, 2013 Section C Columbia, Inc. Your marketplace source for Lake City and Columbia County1CColumbia Inc. Multi-use structure will also house localfarmers market. JASON MATTHEW WALKER/ Lake City ReporterConstruction equipment is seen at Wilson Park in downtow n Lake City on Tuesday where a new pavilion is slated to be built. Coming soon: Lakeside pavilion PREP WORK CONTINUES PAVILION continued on 2C PAGE 16 2C LAKE CITY REPORTER BUSINESS WEEK OF SUNDAY, SEPTEMBER 22-28, 20132CBIZ/MOTLEYand other work. The CRA advisory committee decided it wanted a more finished look under the ceiling part of the building, Kite said. This 60x80 feet of roof liner panels is going to finish the underside of the build-ing. Additional costs also encompass three gabled ends with metal siding for $9,000 and a portion of the building will be enclosed with brick siding, estimated at $21,500. The brick will match the current brick buildings like the restrooms in downtown Olustee Park and other city brick, Kite said. The pavilion will be a 60 x 100-foot building and have a 20x60-foot enclosed area on its north end. Kite said the enclosed portion of the building will contain separate mens and womens restrooms, as well as a warm-ing/prep kitchen. If somebody wanted to have an event there and have that event catered, the pavilion will have stainless steel counter tops, sinks and that kind of thing, but its not a cooking kind of kitchen, Kite said. Construction is expected to start in the near future. The notice of the bid award was mailed out Sept. 11. Once the contract is returned, along with a payment and performance bond, then the notice to proceed will be issued, Kite said. The city public works department conducted some pre-construction work at the site, including bringing in fill dirt and relocating palm trees to Halpatter Park. The structure will be a multi-use pavilion, Kite said, noting it could be used for events sponsored by either the city, possibly the chamber of commerce or for public events. This pavilion is going to be used for public sponsored events but it can also be used for private events. Most notably, it will be the new home for the farmers market. Kite said the pavilion could also be used for weddings/wedding receptions. It will be something we can rent out, she said. The CRA plans to introduce Lake Fire Night on Oct. 25 where 10 cast iron bowls will be placed in Lake DeSoto with fires lit inside them. The first time well do it will be Trunkor-Treat event this year, Kite said. Those fire pits can be rented as well. 
Name That Company: Founded in 1970 and based in Kansas City, Mo., I'm a publisher and also the world's largest independently owned newspaper syndication company, distributing content to print, online and mobile platforms. Brands under my roof have included Doonesbury, Dear Abby, Miss Manners, Calvin and Hobbes, Garfield, Peanuts, Dilbert, For Better or For Worse, Cathy, Ziggy and The Motley Fool.

CLASSIFIEDS (Classified Department: 755-5440):
- DIRECTOR, WATER RESOURCES (Grant Funded): Direct all functions of the water resources programs; supervise staff; maintain constant rapport with industry; develop industry-oriented training and education programs; maintain an industry advisory committee; and do strategic planning. Manage all aspects of the non-credit, AS and BAS programs, courses and faculty. Requires a Bachelor's degree with five years of experience in water management issues or workforce education. Skill in people management; ability to interact positively with industry; ability to work with government agencies; ability to analyze and solve problems. Desirable qualifications: Master's degree in education or a relevant field; three years in a management position or related experience; knowledge of current issues related to the water industry and water quality. Salary: $50,000 annually plus benefits. Deadline for receiving applications: open until filled. Persons interested should provide a college application, vita, and photocopies of transcripts. All foreign transcripts must be submitted with official ...
- 1999 Allegro, 28 ft. Clean, 75K, one owner. No smoke/pets. Refrigerator, ice maker, electric/gas hot water, air with heat pump, 3-burner cooktop with oven. $11,500. 386-758-9863.
- Legal: NOTICE OF PUBLIC SALE: AUTO EMPORIUM OF LAKE CITY INC. gives Notice of Foreclosure of Lien and intent to sell these vehicles on 10/7: 1J4GW48S52C124416, 2002 Jeep.
- The Lake City Reporter, a daily newspaper, seeks an Independent Contractor Newspaper Carrier for the Fort White/Ellisville route. Apply in person during normal business hours, Monday to Friday, 8 a.m. to 5 p.m. No phone calls.
- Accountant/Auditor position open in local CPA firm. Accounting or related degree and experience required. A career position, competitive salary and benefits. Send resume to: [email protected]
- Experienced quail hunting guide from horseback for commercial preserve, Live Oak area. Housing and utilities furnished. Call 386-623-6129.
- Experienced welder needed. Must be able to read and understand assembly paperwork and drawings. Must be able to pass a measurement comprehension test. Apply in person at Grizzly Manufacturing, 174 NE Cortez Ter., Lake City, FL.
- F/T assistant to PR/Client Services needed. Excel, Word, and sales/marketing experience a must. A minimum 2-year college degree, driver's license, drug screening, and Level II background screening required. Apply at LEC, 628 SE Allison Ct., 32025. (386) 755-0235. EOE.
- Looking for experienced maintenance/painter. References needed. Mon.-Fri. Contact 386-697-4814.
- MECHANIC NEEDED with tools and experience. Southern Specialized Truck & Trailer. 386-752-9754.
- P/T LPN needed for medical practice, 2-3 days a week. Send resume to [email protected]
- Medical Billing: several years' experience in all aspects of medical insurance billing required. Please send resume to [email protected] or fax to 386-438-8628.
Schools & Education:
- INTERESTED in a medical career? Express Training offers courses for beginners and the experienced: Nursing Assistant, $479, next class 9/30/2013; Phlebotomy national certification, $800, next class 10/7/2013; LPN, April 2014.

For sale:
- Hospital-style bed, electric powered, single. Twin motors for multiple positions, like-new condition. $350. 758-240...
- Office furniture: 8-ft. conference table with 8 padded chairs, desk, armoire, art, plants, etc. Call Mary, 386-755-2040.
- RYOBI circular saw kit: saw, drill driver, work light and sander. Like new, $250. 386-752-5969.
- Set of 4 F150 Platinum 20-inch polished wheels, $400 OBO. Call 755-3667 or 623-5219.
- Miscellaneous: SMOKER CRAFT 1232 jon boat, 12 ft., $450. Contact 386-497-4643.

Rentals:
- $625-$750 plus security. 386-438-4600.
- 2BR/1.5BA townhouse, very clean, W/D. $875 a month and $875 deposit. Call 386-288-8401.
- 3/1, neat, clean, just completely redone inside. Eadie Street (in town). $785/month and $800 deposit. 386-752-4663 or 386-854-0686.
- 3BD/3BTH and more. $800 down, $800/month. CHA, corner lot, 2-car garage. Call (850) 386-3204. 397 NE Mont...

Homes for sale:
- $55,000: 3/2, CH&A, W/D hookup, 1,100-sq.-ft. concrete block home priced to sell, downtown Lake City. 386-752-9736 between 9 a.m. and 9 p.m.
- 2BD/1BA brick home, close in, available approx. 10/15/13. $69,900. 7 days, 7-7. A Bar Sales, Inc., 386-752-5035 ext. 32.

[Full-page Toyota dealership advertisement (Hwy. 90 West, Lake City): 2013 Tacoma, Tundra and Corolla LE lease offers and a list of pre-owned vehicle prices; the ad layout was lost in extraction.]

LIFE, Sunday, September 22, 2013, Section D. Story ideas? Contact Robert [email protected], Lake City Reporter.

GARDEN TALK, Nichelle [email protected]: Grow your own berries

North Florida is a great place to grow a pretty special home edible garden. Many fruits and vegetables will thrive here, and the enjoyment of growing your own just can't be beat. You may want to order your bareroot strawberry plants now so you can get them planted by mid-November. You'll soon be harvesting sun-ripened strawberries, and you will continue to enjoy these juicy treats well into April and May. Although strawberries are an early summer crop in most other states, in Florida they grow best during the cooler months of the year. Strawberry plants grow and produce when the temperatures remain between 50 and 80 degrees, and the daytime sunlight is less than 14 hours. This pretty much describes our fall, winter and spring seasons. In our soil and climate, strawberry plants are normally grown as annuals and replaced with fresh plants each year. This recommendation is made for several reasons. First, plants decline as temperatures heat up in the summer. The best quality and size fruit is produced by young, first-year garden plants.
But the most common problem with using year-old plants and runners is nematode infestation. Nematodes are tiny, microscopic worms that commonly migrate into the strawberry runners which form new plants. These tiny worms cause early problems with plants that are home-propagated from seemingly healthy looking plants. There are several strawberry varieties that UF recommends for the Florida home garden. Camarosa, Sweet Charlie, and Festival are all good varieties that produce great fresh-eating and freezing fruits. Currently, Camarosa is the best variety for North Florida, and finding just what you want may take some online and on-phone shopping. Buy them as bareroot plants or as small plugs that have already been started in small containers for you. Either way, starting with healthy plants is the first step to growing your own delicious strawberries. For the best results, locate your strawberry patch in a sunny area that has well-drained, slightly acidic soil. Raised beds or rows are preferred to in-ground flat rows because they provide a well-drained soil in which roots have plenty of oxygen during periods of extended rainfall. The crown should be set just barely above the ... (BERRIES continued on 4D)

Schools hoping to see family dinner time make comeback

By AMANDA [email protected]: Niblack Principal Melinda Moses sits down for dinner with her mother every Sunday, and sees her own children at least once every two weeks. Family time, she said, is extremely important for children of all ages. To encourage families to spend time together despite busy schedules, the Columbia County School District is supporting Family Day 2013 on Monday. The day calls families to action by suggesting they eat dinner with their children. "You can really tell the difference when dealing with children who have a strong support system at home," Moses said. "It doesn't have to be Leave It to Beaver, but if the children have that support system it makes such a big difference in both behavior and academics." "Research conducted by The National Center on Addiction and Substance Abuse at Columbia University consistently finds that the more often children eat dinner with their families, the less likely they are to smoke, drink or use illegal drugs," states the Informed Families Family Day 2013 website. Informed Families: The Florida Family Partnership, a nonprofit that helps educators engage parents and children in youth substance abuse prevention, organizes Family Day on a yearly basis. According to the organization, the number one reason kids give for not using drugs is their parents. "Eating dinner together gives the family time to unwind," said Gloria Spivey, Columbia County Safe Schools Coordinator. "They can sit and have a conversation for an extended period of time. The key to this being effective is that everyone leave the electronics somewhere else. You want all of the attention to be on each other." But Spivey said this time doesn't have to be focused on encouraging kids to avoid illegal substances. Dinner can be a time to talk about friends, interests, workdays and school. The school district wants parents and guardians to make family meal time part of the daily routine. Spending time with children leaves a lasting effect, she added. If a family member works the night shift, try to sit down for breakfast or lunch instead. "The meal doesn't have to be someone preparing a gourmet recipe," Spivey said. "It can be takeout. It's just a time to sit and share. We have hectic schedules. We have after-school activities.
We have parents who work different shifts. It's just really hard to find time. You have to make time." The district plans to ask all the principals to advertise Family Day 2013 at their school by placing it on signs, on the website or in school newsletters. The mobile app will also list the event on its calendar. "Even with all the electronics, children still need and desire that one-on-one facetime with their parents," Spivey said. "They not only listen to what you say, they can sense through your body language and voice tone how important the topic is to you." Keith Hatcher, director of adult education, truancy and charter schools, tries to make family dinner time a priority. He has three children, two of whom still live at home. Though his 16-year-old daughter has a busy schedule with extracurricular activities, the Hatcher family frequently spends time together. "I think it all comes down to structure," he said. "Kids need that time to sit down with their parents and tell them how their day was." District officials endorse Family Day 2013. METRO photo: Though this Norman Rockwell-esque scene may not be how modern-day dinners play out, spending time as a family reduces abuse by children of various illegal substances, including cigarettes, alcohol and drugs, experts say.

Big-dog owners a special breed

By JENNIFER PELTZ, Associated Press. NEW YORK: Life with ... Suzzane Kelleher-Ducketts wouldn't live without one. "As big as they are, they love that big," the Santa Clarita, Calif.-based breeder said Tuesday as one of her two Danes, a 3-year-old, 134-pound female named Vendetta who's 34 inches tall at the shoulder, eyed her owner's sandwich after the breed's competition at the Westminster Kennel Club dog show. Whatever dog wins America's most prestigious canine competition, giant breeds can't help but make a big impression on spectators who snap pictures of small children reaching up to pet huge dogs and ping the owners with queries: How much does he weigh? How much does she eat? What's it like to live with one? Here's what it's like for Chris Boltrek and Ashley Erlitz, who share their Sound Beach, N.Y., home with Huxley, a 190-pound mastiff wh... breed's contest. But for all the challenges, which can include health considerations particular to massive dogs, their owners say they're drawn to animals that can inspire both awe and awwwww. The Irish wolfhound is considered the tallest among the 175 breeds currently recognized by the American Kennel Club, but the lanky hound isn't necessarily the heaviest breed. (The Guinness World Record for the individual world's tallest dog belongs, at the moment, to a Michigan-dwelling great Dane named Zeus, who measures 44 inches from foot to shoulder, and 7-foot-4 when he ...

ASSOCIATED PRESS photo: In this file photo, a pair of Neapolitan bull mastiffs named Paparazzi and Ruben ride the elevator with their owners after checking into the Hotel Pennsylvania in New York in preparation for the Westminster Dog Show. The hotel is located directly across from Madison Square Garden, where the show is held. (DOGS continued on 4D)
[TV listings grids: Sunday evening, September 22, 2013; Monday evening, September 23, 2013; and the weekday daytime schedule (Comcast/Dish/DirecTV channel lineups). The grid layout was lost in extraction.]

DEAR ABBY: The other day, while backing out of a parking space, I nearly hit a woman who was walking behind my car with her toddler son. I didn't see them because I was dialing my cellphone and was distracted. The woman rightfully yelled at me to pay attention and get off my phone, and although she was gracious and encouraged me to consider it a wake-up call, I didn't react as kindly to her out of embarrassment. Instead, I became defensive and didn't apologize, even though it was my fault. I shudder to think of what might have happened, and I admit this wasn't the first close call I've had. I'm a married mother of two and should know better. While I can't go back and find her, I hope the woman sees this letter. I want her to know that because of that incident, I now lock my purse and phone in the trunk or place them on the backseat out of reach before I start my car. This way, I avoid the temptation to look at messages or make a call. I have also asked my kids to keep me accountable by reminding me if I happen to forget. They will be driving in a few years, and I want to set a good example for them. Please pass this idea along, especially to moms like me who try to multitask in the car. - HANDS ON THE WHEEL IN CALIFORNIA

DEAR HANDS ON THE WHEEL: Your suggestion of placing your purse and phone on the backseat out of reach is a good one. You are really lucky you didn't kill or seriously injure that mother and her child. Regardless of whether or not the woman sees your letter, I hope it will remind other drivers of the danger of driving while distracted. And while I'm ...

... been divorced for 13 years, and I often wonder how to fill out questionnaires that ask my marital status. I have recently started checking "single" because enough time seems to have passed, and I don't define myself by my divorce.
However, now I'm wondering if there's a certain etiquette recommended. - STATUS UNKNOWN IN OHIO

DEAR STATUS UNKNOWN: Honesty is recommended. As much as you might like to present yourself that way, you are no longer single. Calling yourself single is dishonest. As someone who has been married and divorced, you are a divorcee, and you will be until you remarry. Saying you are single is a misrepresentation of the facts. - DEAR ABBY, Abigail Van Buren

HOROSCOPES (THE LAST WORD, Eugenia Last)

ARIES (March 21-April 19): It's good to consider your options, but don't make a rash move. Wait and see what unfolds before you venture down a path you know little about. Ask questions and do your research. You can offer a little without jeopardizing your reputation. +++

TAURUS (April 20-May 20): You'll be drawn into an emotional situation. Don't overlook what others are doing. Size up what's being offered and consider how to benefit from the circumstances that unfold. +++

GEMINI (May 21-June 20): Get out and have some fun. Don't let the little things bother you or the people making demands get to you. Say what's on your mind and focus on whatever changes make you happy. +++

CANCER (June 21-July 22): Find ways to improve your domestic situation or offer solutions to those you wish to help. Keeping busy will feed your mind, enabling you to come up with some terrific plans that can improve your skills and your life. ++++

LEO (July 23-Aug. 22): Don't put up with anyone interfering with your private life. Embrace the changes that suit you, not the ones someone else wants you to make. Do what's best for you in order to get ahead, even if it includes a move. ++

VIRGO (Aug. 23-Sept. 22): Gather information, and you will know precisely what needs to be done in order to get what you want. Networking, socializing and attending a conference will bring you greater opportunities personally and professionally. Enjoy the moment. +++++

LIBRA (Sept. 23-Oct. 22): Do whatever it takes to plan for the future. Send out resumes or talk with people who have something to offer you. Taking the initiative will attract positive attention that could lead to options you may not have considered in the past. +++

SCORPIO (Oct. 23-Nov. 21): Don't limit what you can do because you don't want to face an emotional matter. Choose your words carefully and be precise in getting your point across. +++

SAGITTARIUS (Nov. 22-Dec. 21): An investment may interest you, but before jumping in, look at the practical aspect of what's involved. Don't jeopardize what you have for something that could lead to serious loss. Request a favor that will help you make a wise decision. +++

CAPRICORN (Dec. 22-Jan. 19): Ask and you shall receive. Fixing up your home or making a move that will improve your relationship or your position should be considered. +++++

AQUARIUS (Jan. 20-Feb. 18): Don't feel obligated to follow what others do. Being comfortable with whatever situation you are faced with is important if you are going to succeed. ++

PISCES (Feb. 19-March 20): Look at an old idea from a different perspective. Make creative adjustments and plan to move forward with your plans. Personal contracts will lead to happiness and a solid relationship with someone you can trust and count on. ++++

SUNDAY CROSSWORD
[Crossword clue list: most clues were lost to font-encoding damage in extraction; legible fragments include "Monopolizer," "The people vs.," "Fiscal exec," "Pulitzer winner James," "Lewis with Emmys," "Rondo maker," "Performance artist with a palindromic name," "Driver's lic. info," "Clueless," "Disney dog," "Like most fish," and "Bud."]

BERRIES: Grow your own (continued from Page 1D)

... ground and never be covered by mulch or soil. But homeowners can grow strawberries in a variety of ways. If space is a problem, try growing strawberries in containers, raised beds, or even hydroponically. Amend your garden soil with organic matter, water often for the first two weeks, and fertilize with a balanced fertilizer such as 6-8-8. Plants will benefit from a pre-plant fertilizer in which nitrogen is mostly in a time-released formulation. Caterpillars will probably be your first pest to control. Watch for any outbreaks so you can pick and destroy. When the new foliage becomes abundant, so might the aphids and thrips. A strong spray of water will dislodge aphids and destroy their feeding mouthparts. Spider mites can become a nuisance later in December. Diseases can be controlled with fungicides labeled for strawberries, and then kept in check by removing all dying and diseased plant parts. Whether your plants are in containers, raised beds or in the ground, if a frost or freeze is predicted, cover your plants with sheets or commercial protective cloth. The roots and crowns are tough, but the cold will set fruiting back by damaging tender flower and fruit tissue. Unless you want to share your crop with local wildlife, you may need to cover the ripening berries with netting made to keep the birds out. Pick your berries when they are nearly all red. They will not sweeten any more after they have been picked, but fully red-ripe fruit will rot rapidly. To learn more about growing your own delicious strawberries, visit growing_strawberries_in_the_flor.htm and contact the Master Gardeners at 752-5384. D. Nichelle Demorest is a horticulture agent with the Columbia County Extension of the University of Florida Institute of Food and Agricultural Sciences.

DOGS: Owners of big breeds see selves as living large (continued from Page 1D)

... stands on his hind legs.) A Scottish deerhound, a breed somewhat similar to the Irish wolfhound, won at Westminster in 2011. ... roommate's Newfoundland, "I fell in love with the big breed," said Hamilton, who has since owned Newfoundlands for 32 years. Caring for the 130-pound, heavy-coated Ares involves dealing with lots of hair, lots of slobber and keeping her Enola, Pa., home at 58 degrees year-round, she said, "because you don't want that panting in your face." Many dog breeds, big and small, are susceptible to certain health problems. Giant breeds can be prone to orthopedic troubles, heart problems and what's known as bloat, a dangerous stomach condition. And in general, smaller dogs tend to live longer than huge ones. Also, temperament and training are perhaps even bigger priorities for giant dogs than others because the big breeds' size and appearance can be off-putting if they're not well-behaved. "You want to be able to look them in the face and have it be inviting," said dog handler Melody Salmi, who showed the St.
Bernard best-of-breed winner, Aristocrat (or, formally, Jamelle's Aristocrat V Elba), Tuesday at Westminster. He's owned by Linda and Edward Baker of Hopewell, N.J. Afterward, Aristocrat snoozed placidly in his crate. "Oftentimes, I sell a puppy to people, and they say, 'Oh, it's so big,'" said Aristocrat's breeder, Michele Mulligan of Diamond Bar, Calif. But a year later, the same owners will say fondly, "They're not so big," she said. "They just grow on you."

ASSOCIATED PRESS photo: In this ... commitment than their smaller counterparts, large breeds remain popular among dog owners and the general public alike.

Canals run dry in Vegas

By HANNAH DREIER, Associated Press. LAS VEGAS: It's not often you can use the word "dry" to describe a Las Vegas landmark, but tourists hoping to cruise along the Venetian hotel-casino's indoor canals are finding them tapped out. The waterways were emptied for repainting earlier this month for the first time since the casino opened in 1999. When they reopen in mid-October, the water will once again appear to sparkle below the hotel's trompe-l'oeil sky. On Thursday, piped-in Italian music echoed off cement mixers and construction tools strewn around the bottom of the canals that meander through the hotel's shopping mall. Tourists leaned over ornate stone and iron railings, frowning at the gray concrete. "It's one of the things that it's most famous for, isn't it?" Will said, still smarting a little from the disappointment. One couple said they had come to Las Vegas exclusively to ride a gondola in air-conditioned splendor. The man behind the counter, whose job is to sell people on shows and activities outside the hotel, has been responding to inquiries with feigned shock, telling tourists that he still sees water flowing through the hotel. The gag didn't go over too well with a Frenchman who spoke limited English. The night-shift kiosk clerk has been keeping a tally of people who ask about the canals. One night's list had 90 check marks. More than 500,000 visitors ride the gondolas each year, paying either $18.95 for a 10-minute group ride, or $75.80 for a romantic couples ride. Tourists who aren't staying at the hotel seem to have a better attitude about the surprise. Before heading to the Venetian's luxury shops, Patricia Giles of northern England joked to her traveling companion that the canals had sprung a leak. Workers who labor in the canals at night are hiding hoses, tools and big orange buckets under blue tarps beneath bridges during the day. The costumed gondoliers whose baritone serenading provides a soundtrack to shopping and eating are gone for the month, either moved outside or temporarily laid off. The white wedding gondola decked out with an officiant is also out of commission.

Author Harper Lee, museum at odds

By PHILLIP RAWLS, Associated Press. MONT... Lee's New York attorney, Robert Clarida, said the 87-year-old author, who lives in Monroeville, has never received a penny from the museum's sale of T-shirts, caps and other souvenirs. "They want to continue selling the merchandise without Ms. Lee getting any money," he said Friday. Museum Director Stephanie Rogers said Lee's book drives tourism in the rural south Alabama county. She said the museum ... web site address. The organization's attractions include the old county courthouse housing the courtroom that served as the model for the movie "To Kill a Mockingbird."
The courthouse draws 25,000 to 30,000 visitors annually and features a display that tells Lee's story in her own words. In April and May, it will present its 25th annual production of the play "To Kill a Mockingbird." Rogers said the museum pays royalties to produce the play, but it has never paid for selling the souvenirs. She said tourists want a memento of their visit, and the proceeds are the key to the museum's continued operation and its educational programs. Museum attorney Matt Goforth said Friday, "We are hopeful this legal dispute, originally initiated by Ms. Lee's attorneys, will not damage our relationship." Lee's attorney said people occasionally show up online selling "To Kill a Mockingbird" merchandise, but a letter to cease usually takes care of that. He said the trademark application is aimed at the museum because of its continuous sale of merchandise. That merchandise is remaining on sale while the trademark application is pending. Attorneys on both sides said the timeline ... Lee's former literary agent, and companies he allegedly created. Two other defendants had been dropped from the suit a week earlier. Lee's trademark application was first reported by The Monroe Journal, the newspaper in Monroeville.
http://ufdc.ufl.edu/UF00028308/02179
CC-MAIN-2018-26
refinedweb
23,298
64.3
Created on 2011-08-18 08:57 by arnau, last changed 2014-05-30 02:44 by paul.j3. This issue is now closed.

When specifying a function to be called in the type keyword argument of add_argument(), the function is actually called twice: when a default value is set, and then when the argument is given. While this may not be a problem in most cases (such as converting to an int, for example), it is an issue, for example, when trying to open a file whose filename is given as a default value but is not accessible for whatever reason, because the first call will fail whereas only the second should be made. I know this may sound like a twisted example, but the type function should not be called twice anyhow, IMHO. I tested with Python 2.7 and 3.2 from Debian packages only, but the bug seems to be present in the py3k and 2.7 hg branches as well. I have attached a small script showing the issue and two patches (for the 2.7 and tip (py3k) hg branches), including an additional test case. All argparse tests pass with 2.7 and 3.2. Hope that's OK.

Thanks for the patch. I commented on the code review site; you should have received an email.

Thanks for the review. Sorry to send this here instead of the review page, but I get an error when replying: "Invalid XSRF token."

> This looks good, especially if all existing tests still pass as you report, but I wonder about one thing: you have removed the part where the conversion function was applied to the default value, so I expected you to edit the other line where the conversion function was already called, but that's not the case. Am I misunderstanding something?

Yes, sorry, I should have explained it in further detail. Here are some examples:

Example test case 1:
    parser = argparse.ArgumentParser()
    parser.add_argument('--foo', type=type_foo_func, default='foo')
    parser.parse_args('--foo bar'.split())
=> Before the patch, the type function is called in parse_known_args() for the default given in add_argument(), and then in _parse_known_args() for '--foo bar' given to parse_args above, whereas the type function should have been called only for the second one.

Example test case 2:
    parser = argparse.ArgumentParser()
    parser.add_argument('--foo', type=type_foo_func)
    parser.parse_args('--foo bar'.split())
=> This was already working well before my patch.

Example test case 3:
    parser = argparse.ArgumentParser()
    parser.add_argument('--foo', type=type_foo_func, default='foo')
    parser.parse_args('')
=> type_foo_func is called after parsing the arguments (none in this case) in my patch.

Therefore, my patch just moves the type function call to after parsing the arguments (given to parse_args()) instead of before, if and only if the argument was not given in parse_args().

> Lib/argparse.py:1985: if hasattr(namespace, action.dest) and \
> It is recommended to use parens to group multi-line statements; backslashes are error-prone.

I have just updated the patch on the bug report. Thanks.

Any news about applying these patches?

This is annoying: I'm creating a reindentation script that reindents any valid Python script. The user can specify if, and how many, spaces he/she wants to use per indentation level. `0` or leaving the option out means "one tab per level". If the argument is given, the appended code works as intended, but in the default case the code fails for both of the default values I tried. I would expect one of the default values to work: either `0`, if the default value *is* converted via the `type` function, or `"\t"` if the default value bypasses it.
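A minimal sketch of the failure mode described in this report; the converter and the paths are illustrative, not taken from the attached script:

    import argparse

    def readable_file(path):
        # In the affected versions argparse also applies this converter to
        # the string default, so a bad default makes parsing error out even
        # when a valid --log is supplied on the command line.
        return open(path)

    parser = argparse.ArgumentParser()
    parser.add_argument('--log', type=readable_file,
                        default='/nonexistent/log.txt')
    # Errors out before the fix, although a valid value is given:
    # args = parser.parse_args(['--log', 'existing.txt'])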
It seems that argparse applies the `type` function to the default instantly, and then to the argument (be it the already-converted default or a passed option). This breaks `type` functions which aren't reflexive for values from their result set, i.e. t(x) = y => t(y) = y must be true for all x that the function can handle.

Does the patch I attached fix your issue?

I don't know, since I get Python from the Ubuntu repositories, sorry. In which Python release will this patch first be integrated?

It would definitely help if you could apply the patch for Python 2.7 manually on your local installation (after making a backup, of course). You can just download the patch for Python 2.7 then (only the first part of the patch can be applied; the second part is for the test, so it doesn't matter):

    # cd /usr/lib/python2.7/
    # patch -b -p2 -i /PATH/TO/THE/PATCH

Thanks much.

Mucking with your installed Python is probably a bad idea, and it may also be an old version (compared to the current development version, which has seen hundreds of changes) where testing the patch would not give useful results. Please see the devguide.

I have had a look at the issue more closely, and my initial patch was not completely right, as it didn't work properly with argparse_test.py despite all tests passing. Therefore, I have amended my patch to not check whether action.default is a basestring, which didn't make sense at all, but to check instead whether action.default is None (action.default's default value is None if not given to add_argument, as far as I understand). I also added a test for the issue reported above, as it was missing, and ran patchcheck to make sure everything was fine. All the tests (including argparse_test.py) pass without problem. Could you please apply them? Many thanks.

BTW, about argparse_test, the default should be either '0' or 0 but not '\t', AFAIK, because even the default value is converted using the given type function. It fails even with the last 2.7 version, but it works well with my patch.

> Also, "action.default == getattr(namespace, action.dest)" should probably use "is" instead of "==". Other than that, the patch looks okay.

There seems to be already a test for that, namely TestActionUserDefined, which uses type=float and type=int. The value is properly converted to {int, float} when passed to __call__(). Just in case, I also tested with a 'type' function I defined myself (which only returns float()) for OptionalAction, and it's working fine.

Good point, it would be much better. Thanks for the advice. I have just modified the patch with that.

Ping? Could you please apply this patch? It's been 4 months without reply now...

I've just verified that this patch also fixes 13824 and 11839. The attached patchfile adds a test to verify that using a non-existent default file fails if you don't specify the argument, and succeeds if you do. Could someone please apply it?

Sorry, got ahead of myself. It doesn't fix 13824. A deeper reading reveals that the problem wasn't quite what I thought it was on first glance.

The patch looks good to me. I've updated it for trunk and to include Mike Meyer's additional test. All argparse tests pass. Anyone who's able to commit and backport, please do. (I should be able to commit myself, but it's now been too long and my SSH key seems to no longer work.
I'll eventually get this sorted out, but as you may have noticed, I don't have much time for argparse these days, so best not to wait on me.) New changeset 1b614921aefa by R David Murray in branch '3.2': #12776,#11839: call argparse type function only once. New changeset 74f6d87cd471 by R David Murray in branch 'default': Merge #12776,#11839: call argparse type function only once. New changeset 62b5667ef2f4 by R David Murray in branch '2.7': #12776,#11839: call argparse type function only once. Thanks, Arnaud and Mike. (And Steven, of course :) FTR a contributor to #13271 (--help should work even if a type converter fails) indicated that it’s fixed by this patch, so it may be good to add a regression test.
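The thread closes by suggesting a regression test for the #13271 interaction. A sketch of the kind of check meant here (hypothetical, not the test that was committed): count converter calls and assert that the default is no longer converted when the option is supplied.

    import argparse

    calls = []

    def spy(value):
        calls.append(value)  # record every conversion the parser performs
        return value

    parser = argparse.ArgumentParser()
    parser.add_argument('--indent', type=spy, default='broken-default')
    args = parser.parse_args(['--indent', '4'])
    assert calls == ['4'], calls   # the default must not be converted here
    assert args.indent == '4'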
https://bugs.python.org/issue12776
CC-MAIN-2018-26
refinedweb
1,345
75.5
Encapsulate a field in an attribute table or data source.

#include <qgsfield.h>

QgsField stores metadata about an attribute field, including name, type, length, and, if applicable, precision.

- ~QgsField(): Destructor.
- comment(): Returns the field comment.
- convertCompatible(): Converts the provided variant to a compatible format.
- displayString(): Formats a string for display.
- length(): Gets the length of the field.
- name(): Gets the name of the field.
- precision(): Gets the precision of the field. Not all field types have a related precision.
- setComment(): Sets the field comment.
- setLength(): Sets the field length.
- setName(): Sets the field name.
- setPrecision(): Sets the field precision.
- setType(): Sets the variant type.
- setTypeName(): Sets the field type.
- type(): Gets the variant type of the field as it will be retrieved from the data source.
- typeName(): Gets the field type. Field types vary depending on the data source. Examples are char, int, double, blob, geometry, etc. The type is stored exactly as the data store reports it, with no attempt to standardize the value.
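The same class is exposed through the QGIS Python bindings; a short usage sketch (the field values are illustrative, and the constructor signature shown is the QGIS 2.x one):

    from PyQt4.QtCore import QVariant
    from qgis.core import QgsField

    # name, variant type, provider type name, length, precision, comment
    field = QgsField("population", QVariant.Int, "integer", 10, 0,
                     "people per census tract")
    print(field.name())       # 'population'
    print(field.typeName())   # 'integer', exactly as the provider reports it
    print(field.length())     # 10
    print(field.precision())  # 0
    field.setName("pop_2013") # rename before adding it to a layer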
https://api.qgis.org/2.6/classQgsField.html
CC-MAIN-2020-34
refinedweb
157
80.88
This project aims to replace the front-desk cashier of food trucks and fast food joints with a voice-activated vending machine that can understand the order, check for allergies, take payment, and deliver orders through vending machines (soft drinks, water bottles) and through a window at the counter. It uses facial recognition to deliver each order to the correct person, and it also remembers your allergies and your last order.

Steps:
1. Speech Recognition
2. Face Recognition
3. NFC card reader
4. Everloop
5. GUI for front desk and kitchen
6. Controlling Motors

1. Speech Recognition

First, we need to make an account at snips.ai.

Creating an app:
- If you are logging into the console for the first time, it will ask for the name of the assistant and the language. Give it a name of your choice and select English.
- After creating the assistant, our first job is to create an app; you can also add apps made by others.
- Go to "add app" and then "create a new app". Initially our app will not have any training data, hence it will be shown as weak.

Creating new intents:
- In the next step, we are required to create new intents. Go to "create new intent" and give it a name and a short description.
- Intents refer to the intention of the user. In our project, we have to recognize intents like add items, remove items, add allergies, etc.
- Each intent will contain some slots that can be numbers, times, names, etc.; these will appear in the sentences of that intent and will be used in the intent callback code that we discuss further on.
- Start adding slots by clicking on "add new slot". We can use the slot types provided by default or add a custom slot.
- Give a name to your slot type and define its values, for example, dish names. Make sure to define all the dishes that you want to recognize.
- Add the different slots and train your model by adding enough sentences and highlighting the respective slots (most of the time this is done automatically).
- Similarly, we need to add intents like:
  - addItems: when a person wants to add new items to the order.
  - removeItems: when a person wants to remove items.
  - response: when a person responds to a question with words like no, yes or continue.
  - allergies: when a person tells us their allergies.
  - specialRequest: when a person wants to add special requests like less spicy, sweet, etc.
  - suggestion: when a person asks for suggestions while making an order, like a bestseller, today's special, etc. (currently not implemented).
- Once we have trained our model with enough examples, we are ready for the next step, i.e. downloading it to the Raspberry Pi.

Installing the SAM CLI tool for offline speech recognition:

Follow this tutorial for installing the SAM CLI tool on your Raspberry Pi, for working with the MATRIX Creator.

Installing the assistant on the Raspberry Pi:
- After installing SAM successfully, we need to download our assistant to the Raspberry Pi for offline speech recognition (all sam commands are run from the PC terminal, not over ssh).
- Connect to the Raspberry Pi with: sam connect <raspi ip address>
- Log in to your snips account: sam login
- Install the assistant: sam install assistant
- It will fetch your assistant; if you have more than one, it will ask you to select one.
- After installing the assistant, you can check the result by saying "Hey snips" and any sentence. You can watch the snips output with: sam watch
- You can check that every service is running fine with: sam status
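Besides sam watch, you can observe the detected intents directly on the MQTT bus: snips publishes them under hermes/intent/<intentName>. A small paho-mqtt sketch (the Pi's hostname is an assumption):

    import paho.mqtt.client as mqtt

    def on_connect(client, userdata, flags, rc):
        client.subscribe("hermes/intent/#")  # every intent the assistant detects

    def on_message(client, userdata, msg):
        # The payload is the JSON the intent callbacks will later consume
        print(msg.topic, msg.payload.decode())

    client = mqtt.Client()
    client.on_connect = on_connect
    client.on_message = on_message
    client.connect("raspberrypi.local", 1883)  # assumed hostname of the Pi
    client.loop_forever()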
At the time of writing this article, the snips audio server and TTS had some issues, so I downgraded them:

    sudo apt install snips-audio-server=0.63.3
    sudo apt install snips-tts=0.63.3

Coding intents:
- It is better to start with a code template; you can download one from here.
- Now we have to code the functions for our intent callbacks. For example, when someone asks to add items to their order, snips returns a message with the intent name addItems and the slots that we defined while training our intents; each slot contains the detected words that the user said.
- We extract the information about the items and their quantity from the slots and, if they are present in our menu, give a confirmatory reply to the user:

    def addItems(self, intent_message):
        for name, values in intent_message.slots.items():
            if name == "item":
                items = list(map(lambda x: str(x.value), values.all()))
            if name == "amount":
                amount = list(map(lambda x: int(x.value), values.all()))
        try:
            if len(items) == len(amount):
                add = {}
                add = dict(zip(items, amount))
                dialogue = ""
                for dish, amount in add.items():
                    if dish in self.order.keys():
                        self.order[dish] = self.order[dish] + amount
                    else:
                        if dish in kiosk.menu.keys():
                            self.order[dish] = amount
                    dialogue += str(amount) + " " + str(dish)
                    dialogue += " is added to your order. "
            else:
                dialogue = " Sorry, please use numbers for quantity. "
        except:
            dialogue = " Sorry, I didn't get that. "
        self.state = 0
        return dialogue

- Similarly, we need to write code for all our callbacks. Some syntaxes for sending dialogue to the TTS are:

    hermes.publish_continue_session(intent_message.session_id, "Say this",
                                    ["intents of next dialogue"], "")
    hermes.publish_end_session(intent_message.session_id, "Say this")
    hermes.publish_start_session_notification("site.id", "Say this", "")

2. Face Recognition

- First, make sure you have OpenCV 4 installed; if not, you can get some help here.
- To confirm the installation, run python in a shell and import cv2:

    import cv2
    >>> cv2.__version__
    '4.0.0'

- We will be doing three things in this script. First, we detect a human face with a Haar cascade classifier; if the detected face cannot be identified by the LBPH face recognizer using the current model, we capture some images of the face to train our model, give it a unique id, and save it to our database.
- The face recognition script publishes detected face ids to the topic "camera/recognisedIds" and subscribes to the topic "camera/addId". When a new user gives an order, the "main script" publishes a unique user id to "camera/addId" and the face recognition script trains its model to add this new id.
- Here all the functions are squeezed into one script; for a more detailed explanation, please follow the project "Real-Time Face Recognition: An End-to-End Project". The camera-side script is reproduced next.
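The "main script" half of that ID exchange is not listed in the post; a minimal paho-mqtt sketch of it (topic names from the bullets above; the numeric id scheme is an assumption):

    import random
    import paho.mqtt.client as mqtt

    def on_message(client, userdata, msg):
        # ids the camera script recognised in front of the kiosk
        print("[main]: recognised ids:", msg.payload.decode())

    client = mqtt.Client()
    client.on_message = on_message
    client.connect("localhost", 1883)
    client.subscribe("camera/recognisedIds")
    # For a brand-new customer, hand the camera script a fresh id to learn;
    # it captures the face, retrains and saves the model itself.
    client.publish("camera/addId", str(random.randint(1, 9999)))
    client.loop_forever()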
''' Based on code by Marcelo Rovai - MJRoBot.org and on code by Anirban Kar '''

import os
import cv2
import numpy as np
from PIL import Image
import paho.mqtt.client as mqtt

client = mqtt.Client("faceRecognition")
cam = cv2.VideoCapture(0)
detector = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

def on_connect(client, userdata, flags, rc):
    print("[faceRecognition]: Connected")
    client.subscribe("camera/addId")

def on_message(client, userdata, msg):
    global face_detected
    global face_add
    global face_id
    if str(msg.topic) == "camera/addId":
        face_id = str(msg.payload)
        face_add = 1

def initMqtt():
    client.on_connect = on_connect
    client.on_message = on_message
    client.connect("localhost", 1883)
    client.loop_start()

def getImagesAndLabels(path):
    imagePaths = [os.path.join(path, f) for f in os.listdir(path)]
    faceSamples = []
    ids = []
    for imagePath in imagePaths:
        PIL_img = Image.open(imagePath).convert('L')  # convert it to grayscale
        img_numpy = np.array(PIL_img, 'uint8')
        id = int(os.path.split(imagePath)[-1].split(".")[1])
        faces = detector.detectMultiScale(img_numpy)
        for (x, y, w, h) in faces:
            faceSamples.append(img_numpy[y:y+h, x:x+w])
            ids.append(id)
    return faceSamples, ids

def trainer():
    path = 'dataset'
    recognizer = cv2.face.LBPHFaceRecognizer_create()
    print("\n [FaceRecognition] Training faces. It will take a few seconds. Wait ...")
    faces, ids = getImagesAndLabels(path)
    recognizer.train(faces, np.array(ids))
    # Save the model into trainer/trainer.yml
    recognizer.write('trainer/trainer.yml')

def faceAddition(face_id):
    face_detector = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
    count = 0
    while count < 30:
        count += 1
        ret, img = cam.read()
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        faces = face_detector.detectMultiScale(gray, 1.3, 5)
        for (x, y, w, h) in faces:
            cv2.rectangle(img, (x, y), (x+w, y+h), (255, 0, 0), 2)
            # Save the captured image into the datasets folder
            cv2.imwrite("dataset/User." + str(face_id) + '.' + str(count) + ".jpg",
                        gray[y:y+h, x:x+w])
            print("[FaceRecognition]: Face capture count = " + str(count))
    trainer()

def faceDetection():
    global cam
    recognizer = cv2.face.LBPHFaceRecognizer_create()
    try:
        recognizer.read('trainer/trainer.yml')
    except:
        return [0]
    cascadePath = "haarcascade_frontalface_default.xml"
    faceCascade = cv2.CascadeClassifier(cascadePath)
    font = cv2.FONT_HERSHEY_SIMPLEX
    # initiate id counter
    id = 0
    # Define min window size to be recognized as a face
    minW = 0.1 * cam.get(3)
    minH = 0.1 * cam.get(4)
    counter = 0
    recognised_ids = []
    while counter <= 20:
        counter += 1
        ret, img = cam.read()
        # grayscale the frame, find faces, and ask the LBPH recognizer for an
        # id plus a confidence score (per the referenced MJRoBot tutorial)
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        faces = faceCascade.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=5,
                                             minSize=(int(minW), int(minH)))
        for (x, y, w, h) in faces:
            id, confidence = recognizer.predict(gray[y:y+h, x:x+w])
            if (confidence < 100):
                pass
            else:
                id = 0
            # storing list of recognised/unrecognised faces
            if not id in recognised_ids:
                recognised_ids.append(id)
    return recognised_ids

3. NFC Card Reader

- First, make sure you have the NFC Reader Library for PN512 installed; if not, follow this.
- In our project payment is done by reading NFC tags; the data.txt file contains the information regarding each UID and its respective balance.
- We read the NFC data and extract the UID in hex format, then call a function that opens data.txt, searches for the required UID and its balance, and, if the UID has sufficient balance, updates the file with the new balance.
- Payment also fails if the user is unable to scan a card within 60 seconds.

int scan(double amount){
  matrix_hal::NFC nfc;
  matrix_hal::NFCData nfc_data;
  std::cout << "[NFC]: NFC started!" << std::endl;
  int sucess = 0;
  auto past_time = std::chrono::system_clock::now();
  auto current_time = std::chrono::system_clock::now();
  std::chrono::duration<double> duration = (current_time - past_time);
  while(duration.count() < 60){
    current_time = std::chrono::system_clock::now();
    duration = current_time - past_time;
    nfc.Activate();
    nfc.ReadInfo(&nfc_data.info);
    nfc.Deactivate();
    if (nfc_data.info.recently_updated) {
      std::cout << "[NFC] : " + nfc_data.info.ToString() << std::endl;
      std::string user_id = nfc_data.info.UIDToHex();
      sucess = payment(user_id, amount);
      break;
    }
    std::this_thread::sleep_for(std::chrono::microseconds(10000));
  }
  return sucess;
}

Each line of data.txt holds one card:

<UID in HEX> <BALANCE>
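The payment() helper that scan() calls is not shown in the article (the project implements it in C++). Purely to illustrate the data.txt lookup-and-update logic described above, here is a Python sketch; the file format comes from the article, while the function signature and return convention are assumptions.

# Hypothetical Python re-creation of the balance lookup/update the article
# describes; the real project does this in C++. data.txt holds one
# "<UID in HEX> <BALANCE>" pair per line, as shown above.
def payment(user_id, amount, path="data.txt"):
    with open(path) as f:
        lines = [line.split() for line in f if line.strip()]
    success = 0
    for entry in lines:
        uid, balance = entry[0], float(entry[1])
        if uid == user_id and balance >= amount:
            entry[1] = str(balance - amount)  # deduct the bill
            success = 1
            break
    if success:
        # Rewrite the file only when the payment went through.
        with open(path, "w") as f:
            f.writelines(" ".join(entry) + "\n" for entry in lines)
    return success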
- The NFC program subscribes to the topic payment/start, which contains the bill amount, and publishes to the topic payment/status, which represents whether the payment was successful or not.

4. Everloop

- Before we continue, make sure you have MATRIX HAL installed; if not, follow this link.
- For the everloop we have to set the RGB values of each LED in an everloop_image object before finally writing it to the bus. For example:

for (matrix_hal::LedValue &led : everloop_image.leds) {
  led.red = 0;
  // Set green to 100
  led.green = 100;
  led.blue = 0;
  led.white = 0;
}
// Updates the Everloop on the MATRIX device
everloop.Write(&everloop_image);

- You also need to make sure you have paho mqtt c installed, because we will be subscribing to the topic "everloop" so that we can change the colors of the everloop according to the situation.
- The functions used for MQTT communication are written in the mqtthelper.cpp file, and this file should be added while compiling the code.

#include <iostream>
#include <string.h>
#include "MQTTClient.h"
#include "mqtthelper.h"

volatile double msg = 0;
MQTTClient_deliveryToken deliveredtoken;
MQTTClient client;
MQTTClient_message pubmsg = MQTTClient_message_initializer;
MQTTClient_deliveryToken token;

int msgarrvd(void *context, char *topicName, int topicLen, MQTTClient_message *message)
{
    msg = std::stod((char*)message->payload);
    MQTTClient_freeMessage(&message);
    MQTTClient_free(topicName);
    return 1;
}

void delivered(void *context, MQTTClient_deliveryToken dt)
{
    deliveredtoken = dt;
}

void connlost(void *context, char *cause)
{
    printf("\n[Everloop]: Connection lost\n");
    printf("    cause: %s\n", cause);
}

void initMqtt(char *ADDRESS, char *CLIENTID, char *TOPIC, int QOS){
    MQTTClient_connectOptions conn_opts = MQTTClient_connectOptions_initializer;
    int rc;
    int ch;
    MQTTClient_create(&client, ADDRESS, CLIENTID, MQTTCLIENT_PERSISTENCE_NONE, NULL);
    conn_opts.keepAliveInterval = 20;
    conn_opts.cleansession = 1;
    MQTTClient_setCallbacks(client, NULL, connlost, msgarrvd, delivered);
    if ((rc = MQTTClient_connect(client, &conn_opts)) != MQTTCLIENT_SUCCESS)
    {
        printf("[Everloop]: Failed to connect, return code %d\n", rc);
        exit(EXIT_FAILURE);
    }
    std::cout << "[Everloop]: Connected" << std::endl;
    MQTTClient_subscribe(client, TOPIC, QOS);
}

void publishStatus(char *topic, char *payload){
    int rc;
    pubmsg.payload = payload;
    pubmsg.payloadlen = 1;
    pubmsg.qos = 0;
    pubmsg.retained = 0;
    deliveredtoken = 0;
    MQTTClient_publishMessage(client, topic, &pubmsg, &token);
    printf("Waiting for publication of %s\n"
           "on topic %s \n", payload, topic);
    while (deliveredtoken != token);
    printf("[payment]: Message with delivery confirmed\n");
}

- Compile the file using the command:

g++ -o everloop everloop.cpp mqtthelper.cpp -std=c++11 -lmatrix_creator_hal -lpaho-mqtt3c

- Run your everloop program and check that it works by publishing to the topic everloop with this Python script:

import paho.mqtt.client as mqttClient
import time

def on_connect(client, userdata, flags, rc):
    if rc == 0:
        global Connected
        Connected = True

Connected = False
broker_address = "raspberrypi.local"
port = 1883
user = ""
client = mqttClient.Client("Python")       # create new instance
client.on_connect = on_connect             # attach function to callback
client.connect(broker_address, port=port)  # connect to broker
client.loop_start()                        # start the loop
while Connected != True:                   # wait for connection
    time.sleep(0.1)
try:
    while True:
        value = raw_input()
        client.publish("everloop", value)
except KeyboardInterrupt:
    client.disconnect()
    client.loop_stop()

5. GUI for Front Desk and Kitchen

- First, make sure you have PyQt installed; if not, run this command:

sudo apt-get install python-qt4

- In the front GUI we display the menu and the current order; the script gets its information from the topic guiFront/text, which is published by the main script.
- The GUI for the kitchen contains the orders that are yet to be served. Once an order is ready and the "completed" button beneath the order is pressed, it publishes the order id to the topic guiBack/completedOrder, which is subscribed to by the main script; the main script then calls the user and delivers the order after recognizing them through facial recognition.

6. Controlling Motors

- The ideal delivery mechanism will ensure that every order reaches its correct customer. The main problem arises when people don't reach the counter in the same order as they were called; catering to this kind of problem is out of the scope of this project.
- Currently, motor.cpp runs two motors whenever the user called to collect an order is recognized by the camera. One motor delivers bottles from the vending machine and the other motor opens the window to collect the order from the counter.
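Looking back at the GUI step: the article names the topic but doesn't show the kitchen-side widget. Purely as a sketch (PyQt4 and paho-mqtt assumed; the widget layout and payload format are guesses, and only the topic name guiBack/completedOrder comes from the article):

# Hypothetical kitchen-GUI fragment: a "completed" button that publishes the
# order id on guiBack/completedOrder. Only the topic name is from the article.
import sys
import paho.mqtt.client as mqtt
from PyQt4 import QtGui

client = mqtt.Client("kitchen-gui")
client.connect("localhost", 1883)
client.loop_start()

class OrderCard(QtGui.QWidget):
    def __init__(self, order_id, summary):
        super(OrderCard, self).__init__()
        layout = QtGui.QVBoxLayout(self)
        layout.addWidget(QtGui.QLabel(summary))
        done = QtGui.QPushButton("completed")
        # Tell the main script this order can be handed over.
        done.clicked.connect(
            lambda: client.publish("guiBack/completedOrder", str(order_id)))
        layout.addWidget(done)

app = QtGui.QApplication(sys.argv)
card = OrderCard(42, "Order 42: 2 burgers, 1 cola")  # illustrative data
card.show()
app.exec_()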
https://www.hackster.io/rishabh-verma/voice-activated-vending-machine-b8098b
CC-MAIN-2021-17
refinedweb
2,211
50.33
I have Spark installed properly on my machine and am able to run Python programs with the pyspark modules without error when using ./bin/pyspark as my Python interpreter. However, when I attempt to run the regular Python shell and try to import pyspark modules with:

from pyspark import SparkContext

it says "No module named pyspark". How can I fix this? Is there an environment variable I need to set to point Python to the pyspark headers/libraries/etc.? If my Spark installation is /spark/, which pyspark paths do I need to include? Or can pyspark programs only be run from the pyspark interpreter?

Add the export line below to your bashrc file, and hopefully your modules will be correctly found:

# Add the PySpark classes to the Python path:
export PYTHONPATH=$SPARK_HOME/python/:$PYTHONPATH

There is one more method: use findspark.

1. Go to your Python shell:

pip install findspark

import findspark
findspark.init()

2. Import the necessary modules:

from pyspark import SparkContext
from pyspark import SparkConf

Now you will find no errors, and the Spark modules will import successfully.
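As a quick end-to-end check, a short smoke test can be run from the regular Python shell. This is only a sketch; it assumes SPARK_HOME points at your Spark installation (findspark reads it to locate the install):

# Smoke test: run from the regular Python shell, not ./bin/pyspark.
import findspark
findspark.init()  # prepends $SPARK_HOME/python to sys.path

from pyspark import SparkConf, SparkContext

conf = SparkConf().setMaster("local[*]").setAppName("import-check")
sc = SparkContext(conf=conf)
print(sc.parallelize(range(100)).sum())  # prints 4950 if everything is wired up
sc.stop()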
https://intellipaat.com/community/6577/importing-pyspark-in-python-shell
CC-MAIN-2021-04
refinedweb
219
62.17
TDD
-----

-On the internet, you often hear the term TDD. From its definition --
-Test-Driven Development -- you could think that writing unit tests and
-TDD are the same, but they aren't.
+The most common mistake in the testing world is to confuse TDD with
+(unit-) testing in general. TDD is not just unit testing; it's much
+more. TDD is a way to formalize the process of developing software into
+something that looks more like compiler-driven development, where,
+instead of a compiler trying to figure out errors as you develop small
+pieces, you write tests to figure them out.

-TDD is a methodology where you first write your test, and only then
-change your code until it satisfies the test. As the creators of TDD
-(probably) would say, this encourages you to write better tests, not
-to skip anything, to keep the design simple to test, and so on.
+Robert Martin defines the `3 rules of TDD
+<>`_ as
+this:

-My vision here is -- I don't use TDD, but I also don't write the whole
-code first.
+-.

-What I do is both things simultaneously. And the reason for this is
-that TDD is a great idea that makes sense, but it's hard for me to
-design functions and classes without first prototyping them. It's much
-easier and faster to prototype with real code, not with how it will be
-used, when you need to create something more complex than adding just
-one new simple method to a class.
+As you can see, if your whole team of programmers worked this way,
+your codebase would stay stable all the time, and it would
+evolve little by little, no matter how complex the tasks are.

-So I first go and prototype the function with its parameters, and only
-then go and look at what a test for it would look like. Then I go
-and implement the function. Then I review the changes and see what else
-needs to be covered by tests.
+From my observations, when you work with lots of "state" or when you
+need to prototype as you develop, TDD is hard to adopt, and it
+sometimes even becomes absurd to see how people transform TDD into
+"almost-TDD" or "better-TDD" by weakening some of the TDD rules (or
+adding exceptions). I actually did that too. And now I see that in
+places like "best software practices.ppt" inside my company.

-I should also mention that it's a very good idea to first be sure that
-your test fails (write it before you implement something and make sure
-it fails), because often your logic is not so simple and your test may
-really not test anything.
-
-So as a conclusion: I may be a bad (or not experienced enough) person,
-but I don't use TDD. Also, you should definitely go and read more
-about TDD, not just here, and speak to people who use it a lot (I hope
-I will get the opportunity to work with a real person who uses it and
-knows how to do TDD well).
-Also, there's a technique called "ping-pong programming" where one
-person writes a test, and another implements it. Then they
-switch. Nice idea :)
+I believe you shouldn't add any exceptions to these rules of TDD or
+try to make it "more real-world from your perspective". Programmers
+are smart enough to understand if it's not fit for their tasks or if
+they want to add some exceptions to these rules.

BDD

-Different idea is Behaviour-Driven Development. As I've heard on some
-podcast, it was born as the idea that the big question in testing is
-not "how do I test things", but "what do I need to test", "what am I
-testing". And to focus on the things that you test, instead of calling
-tests "test_foo_bar" you should name tests starting with the "should"
-keyword, like "should_feed_pony()" or something, and the author of BDD
-created a fork of JUnit that made this happen.
+A different idea for upgrading the testing experience is `Behaviour-Driven
+Development
+<>`_. The main
+idea is that you need to keep a constant focus on what you test and
+why you test it; as a result Dan North created the `JBehave`
+framework, which was the same as `JUnit` but with test methods named
+with the prefix `should_*` instead of `test_*`. BDD has a whole
+philosophy behind it; you can go and read wikipedia and other things
+about BDD (for example, frameworks that let you describe your tests as
+plain text).

 As a lesson from that super-idea, I now name all my tests with the prefix
 ``test_should_`` (as you have probably already seen). That really helps
-focusing on what the test does.
+focusing on what the test does at test-naming time. Of course, BDD is
+not just about naming your tests, but the prefix is the only visible
+part (in my tests). I am still investigating frameworks that let you
+describe tests as text, but I don't have experience with that yet.
+
+-----------------------
+ Ping-pong programming
+-----------------------
+
+As a bonus, there's a technique called "ping-pong programming" where
+one person writes a test, and another implements it. Then they
+switch.

-----------------------------------
 Writing test from action to mocks
-----------------------------------

-When I write tests, I start from the test's name (and focus on what it
-should do), like:
+To write tests, you start from the test's name (and focus on what it
+should do), like:

.. code-block:: python

    def test_should_sort_by_name(self):
        pass

-Then I go and write what it should actually do (call the action) with
-a "``# do``" comment before that.
+Then you go and write what it should actually do (call the action)
+with a "``# do``" comment before that.

 query.order_by.assert_called_with('-name', '-updated_at')

-And at last, if necessary, you add all the ``@patch`` decorators before the method.
+And at last, if necessary, you add all the ``@patch`` decorators before
+the method. That's a way of building a test without a lot of thinking
+about where to start, moving step by step from smaller pieces to the
+whole picture of the test.

Let's move on to :doc:`change-the-way`.
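For illustration only (not part of the diff above): a finished test assembled in exactly that order -- name first, then the action under a ``# do`` comment, then the assert -- might look like the sketch below; get_sorted and the query double are illustrative names, not code from the post.

# A finished "action to mocks" test, built name-first as the post describes.
from unittest import TestCase
from unittest.mock import MagicMock

def get_sorted(query):
    # illustrative unit under test: newest names first
    return query.order_by('-name', '-updated_at')

class SortingTest(TestCase):
    def test_should_sort_by_name(self):
        query = MagicMock()  # stands in for a dependency you would @patch
        # do
        get_sorted(query)
        # check
        query.order_by.assert_called_with('-name', '-updated_at')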
https://bitbucket.org/k_bx/blog/diff/source/en_posts/testing_python/tdd-bdd.rst?diff2=80bf4dbb727d&at=default
CC-MAIN-2015-48
refinedweb
1,054
67.99
Casting

Bindesh Vijayan
Ranch Hand
Posts: 104

posted 18 years ago

Hi,
While I am trying to assign an int to a byte, I get an error:

int i = 90;
byte b = i;

And rightly so, since byte is smaller than int, and there is a loss of bits (narrowing conversion). But when I try to convert a long to a float, I get no errors:

long l = 1234555;
float f = l;

The explanation given is: "The internal representation of the integer types (byte, short, char, long) and the floating-point types (float, double) are totally different. Converting from long to float is not a narrowing conversion, even though it may lose some significant digits."

I looked at the internal representation of all the types, but still couldn't make out the reason for such behavior. Please help.

David Weitzman
Ranch Hand
Posts: 1365

posted 18 years ago

Not the real representations, but this should give you the general idea.

Think of a long representation as '10000000000'.
Think of a float/double representation as '10^10'.

1234567890 in integerese becomes 12.3*10^8 in floatese.

When you convert it back to long, you have the significant digits, but not necessarily the less significant ones:

12.3*10^8 in floatese
1230000000 in longese

See the Java Language Specification 2nd Ed., 5.1.2 'Widening Primitive Conversion'.

Ragu Sivaraman
Ranch Hand
Posts: 464

posted 18 years ago

Narrowing primitive conversions are relatively picky. Let me explain why. Basically, when you narrow an int to a byte, there are two ways you can do it:

1. byte b = 90; // declaration and assignment in a single line
2. final int i = 90;
   byte b = i;  // observe the int being declared as final

But in both cases the int value MUST be in the range of the destination type (i.e. byte/short/char). Above all, this implicit narrowing applies to byte/short/char/int only.

Anonymous
Ranch Hand
Posts: 18944

posted 18 years ago

Ragu,
The compiler liked what you said. It'd be great if you could tell me why the code

public class a {
    public static void main(String args[]) {
        final int i = 90;
        byte b = i;
        System.out.println(" Compiled successfully : i = " + i);
    }
}

works fine if "i" is declared final and not otherwise. Or I'd rather put my question as: why does declaring "i" as final free the variable of its obligation to be cast explicitly to a byte?

Thanks in advance,
Shyam

[This message has been edited by Shyamsundar Gururaj (edited August 25, 2001).]

Bindesh Vijayan
Ranch Hand
Posts: 104

posted 18 years ago

Hi,
Thanks David & Ragu, I got it all.

Jane Griscti
Ranch Hand
Posts: 3141

posted 18 years ago

Hi Shyamsundar,

When you declare a variable as 'final' the compiler knows the value will never change. It can then substitute the actual value in the bytecode instead of producing bytecode that will look up the value at runtime. In the example, the compiler knows 'i' will always be '90', so it uses '90' wherever it finds 'i'.

Hope that helps.

------------------
Jane Griscti
Sun Certified Programmer for the Java 2 Platform

Jane Griscti
SCJP, Co-author Mike Meyers' Java 2 Certification Passport

Anonymous
Ranch Hand
Posts: 18944

posted 18 years ago

Thanks Jane,
You're a great help.
-Shyam

Jimmy Blakely
Ranch Hand
Posts: 57

posted 18 years ago

Some rules to remember: int and char literals are the only literals that the compiler will perform implicit narrowing conversions with at compile time.
For example:

short s = 12;   // an int literal being assigned to a short
byte b = 19;    // an int literal being assigned to a byte
short s2 = 'c'; // a char literal being assigned to a short
int i = 12.0;   // not okay: double literals are not implicitly casted
float f = 14.0; // not okay: double literals are not implicitly casted
https://coderanch.com/t/208258/certification/Casting
CC-MAIN-2020-16
refinedweb
708
59.13
VapourSynth GAN implementation using RRDBNet, based on ESRGAN's implementation

Project description

VSGAN

VapourSynth Single Image Super-Resolution Generative Adversarial Network (GAN)

Introduction

This is a single-image super-resolution generative adversarial network handler for VapourSynth. Since VapourSynth takes the frames from a video and feeds them to VSGAN, it is essentially a single-video super-resolution GAN. It is a direct port of ESRGAN by xinntao, so all the results and accomplishments of ESRGAN apply to VSGAN too.

Using the right pre-trained model on the right image can produce tremendous results. Here's an example from a US Region 1 (NTSC) DVD of American Dad running with VSGAN (model not public).

Qualitative Comparisons against other Super-Resolution Strategies

The following comparisons were taken from ESRGAN's repo.

Installation

Requirements

- NVIDIA GPU that has support for CUDA 9.2+. A CUDA Compute Capability score of 6 or higher is recommended; a score <= 2 will be incredibly slow, if it works at all.
- CPU that isn't from the stone age. While VSGAN does 90% of the work on the GPU, a heavily bottlenecking CPU could limit your GPU's potential.
- An ESRGAN model file to use. Either train one or get an already-trained one. There are new models being trained every day in all kinds of communities, with all kinds of specific purposes for each model, like denoising, upscaling, cleaning, inpainting, black-and-white to color, etc. You can find models on the Game Upscale Discord or their Upscale.wiki Model Database. The model database may not be as active as the Discord, though.

Dependencies

Install the dependencies in the listed order:

- Python 3.6+ and pip. The required pip packages are listed in the requirements.txt file.
- VapourSynth. Ensure the Python version you have installed is supported by the version you are installing. The supported Python versions may differ per OS.
- NVIDIA CUDA.
- PyTorch 1.6.0+; the latest version is always recommended.

Important information when installing Python, VapourSynth, and PyTorch

- Ensure the Python version you have installed (or are going to install) is supported by the version of VapourSynth and PyTorch you are installing. The supported Python versions in relation to a VapourSynth or PyTorch version may differ per OS, noticeably on Windows due to its Python environment in general.
- When installing Python and VapourSynth, you will be given the option to "Install for all users" by both. Make sure your chosen answer matches for both installations, or VapourSynth and Python won't be able to find each other.

Important note for Windows users: It is very important for you to tick the checkbox "Add Python X.X to PATH" while installing. The Python installer's checkbox that states "Install launcher for all users" is not referring to the Python binaries. To install for all users you must click "Customize installation" and, after the "Optional Features" section, there will be a checkbox titled "Install for all users", unticked by default, so tick it.

Tips on Installing PyTorch

Go to the Get Started Locally page and choose the following options:

- PyTorch Build: Stable
- Package: Pip
- Language: Python
- CUDA: Latest available version, must match the installed version.

Then run the command provided by the "Run this Command:" text field.

Tips on Installing NVIDIA CUDA

Go to the NVIDIA CUDA Downloads page and download and install the version you selected on the PyTorch page earlier.
If you chose, for example, 11.0, then 11.0 and >= 11.0 versions should work, but if you chose, for example, 10.2, then chances are you need specifically version 10.2 and not > 10.2. However, I cannot confirm whether this is the case.

Finally, Installing VSGAN

It's as simple as running:

pip install vsgan

Usage (Quick Example)

from vapoursynth import RGB24
from vsgan import VSGAN

# ...

# Create a VSGAN instance, which creates a pytorch device instance
vsgan = VSGAN("cuda")  # available options: "cuda", 0, 1, ..., etc.
# Load an ESRGAN model into the VSGAN instance
# Tip: You can run load_model() at any point to change the model
vsgan.load_model(r"C:\Users\PHOENiX\Documents\ESRGAN Models\PSNR_x4_DB.pth")
# Convert the clip to RGB24 as ESRGAN can only work with linear RGB data
clip = core.resize.Point(clip, format=RGB24)  # RGB24 is an int constant that was imported earlier
# Use the VSGAN instance (with its loaded model) on a clip
clip = vsgan.run(clip)
# Convert back to any other color space if you wish.
# ...
# don't forget to set the output in your vapoursynth script
clip.set_output()

Documentation

VSGAN([device: int/str="cuda"])

Create a PyTorch device instance using VSGAN for the provided device.

- device: Acceptable values are "cuda" and a device id number (e.g. 0, 1). "cpu" is not allowed, as it's simply too slow and I don't want people hurting their CPUs.

VSGAN.load_model(model: str)

Load a model into the VSGAN device instance.

- model: A path to an ESRGAN .pth model file.

Tip: You can run load_model() at any point to change the model.

VSGAN.run(clip: VideoNode[, chunk: bool=False])

Executes VSGAN on the provided clip, returning the result in a new clip.

- clip: Clip to use as the input frames. It must be RGB. It will also be returned as RGB.
- chunk: If your system is running out of memory, try enabling chunk, as it will split the image into smaller sub-images, render them one by one, and finally merge them back together, trading speed and accuracy for lower memory requirements. WARNING: Since the images will be processed separately, the result may have issues on the edges of the chunks; see an example of this issue.

VSGAN.execute(n: int, clip: VideoNode)

Executes the GAN model on the nth frame from clip.

- n: Frame number.
- clip: Clip to get the frame from.
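For instance, on a GPU that runs out of VRAM on full frames, the documented chunk option can be enabled directly in the quick-example pipeline. A sketch (the model path is a placeholder, and the source filter is whichever one you have installed):

# Same pipeline as the quick example above, with chunk mode enabled.
from vapoursynth import core, RGB24
from vsgan import VSGAN

clip = core.ffms2.Source("video.mkv")  # any source filter you have installed
vsgan = VSGAN("cuda")
vsgan.load_model(r"C:\models\PSNR_x4_DB.pth")  # placeholder path
clip = core.resize.Point(clip, format=RGB24)
# chunk=True splits each frame into sub-images to cut peak memory use,
# at the cost of speed and possible seams at chunk edges (see WARNING above).
clip = vsgan.run(clip, chunk=True)
clip.set_output()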
https://pypi.org/project/vsgan/
CC-MAIN-2021-04
refinedweb
989
56.15
[02 - adding dark mode to your gatsby site with emotion js]

March 23, 2019 by alex christie

In finalizing the current version of my portfolio, I wanted to offer a dark mode toggle as a proof of concept, as practice working with emotion.js, and just for usability, because my blog and portfolio are so light. To implement this, I utilize a toggle component to control state, emotion.js's <Global /> styles to inject some styles at :root, and browser-supported variables to change everything from font color to backgrounds to link colors.

Set Up

Big picture things you'll need to do:

- Add state to your parent component/container
- Pass state down to a ThemeToggle and ThemeWrapper
- Use emotion.js to pass themes into :root in the ThemeWrapper component

I'm assuming you have Gatsby.js and have emotion installed. I'm also using browser variables to style text and background colors. You can read an excellent write-up on theming with variables at CSS Tricks.

Adding State

I built my portfolio as a standalone component, so I'm just adding state to the uppermost component:

class Portfolio extends React.Component {
  constructor(props) {
    super(props);
    this.state = { dark: false };
    this.toggleTheme = this.toggleTheme.bind(this);
  }

  componentWillUnmount() {
    this.setState({ dark: false });
  }

  toggleTheme() {
    this.setState({ dark: !this.state.dark });
  }

  render() {
    return (
      <Wrapper>
        <ThemeWrapper dark={this.state.dark} />
        <Header>
          {...}
          <ThemeToggle toggleTheme={this.toggleTheme} dark={this.state.dark} />
        </Header>
        {...}
      </Wrapper>
    );
  }
}

Here, I'm initializing this.state.dark to false and binding a toggle function to state. I'm using componentWillUnmount() in case someone moves away from the portfolio to the rest of the site (which isn't set up for dark mode yet), and then using toggleTheme() to set dark to true or false.

I'm passing state into ThemeWrapper as a prop so it knows whether or not to return our darkUI. Because ThemeWrapper is going to change :root level variables, we don't need to wrap the rest of the page in it, but keeping it near the top of the component makes it syntactically clear what it's for. I'm also passing our toggleTheme() function and state into the ThemeToggle component so we can visually render what state is set to.

Theme Wrapper and Themes

Before we build our ThemeWrapper, we need a javascript file with our themes. I wrote a module that exports two themes, lightUI and darkUI. lightUI repeats the variables I have in my global.css and functions as a fallback. darkUI repeats these variables with new colors:

const lightUI = {
  textColor: `#333`,
  {...}
  backgroundColor: `#fcfcfc`,
}

const darkUI = {
  textColor: `#fcfcfc`,
  {...}
  backgroundColor: `#333`,
}

module.exports = {
  lightUI: lightUI,
  darkUI: darkUI
}

ThemeWrapper takes these themes and uses emotion.js's global styles to inject our theme at the :root level:

import React from 'react';
import { Global, css } from '@emotion/core'
import { darkUI, lightUI } from './styles/themes'

const ThemeWrapper = props => {
  const { dark } = props;
  return (
    <Global
      styles={css`
        :root {
          --textColor: ${dark ? darkUI.textColor : lightUI.textColor};
          --backgroundColor: ${dark ? darkUI.backgroundColor : lightUI.backgroundColor};
        }
      `}
    />
  );
};

export default ThemeWrapper;

Here, we're importing Global and css from @emotion/core as well as our themes. We've passed state into our ThemeWrapper as a prop that's either true or false. We return a Global component with a style that initializes our variables at :root.
We then use a ternary operator to return the associated variable depending on whether dark is set to true or false. You can read more about Global here.

Theme Toggle

So far we've set up state, written some themes, and built a component that renders our theme based on state. The last thing to do is provide a toggle that users can actually press to change the theme.

import React from 'react';

const ThemeToggle = props => {
  const { toggleTheme, dark } = props;
  return (
    <button onClick={toggleTheme}>
      {dark ? `light` : `dark`}
    </button>
  );
}

export default ThemeToggle;

Remember that we've passed our toggleTheme() function and state into this component. So we're returning a button that, when clicked, toggles state and, based on that state, toggles the words light and dark. This is a very minimal implementation; you could add a sun and moon icon to signify the same thing.

Wrapping Up and Further Thinking

You can check out my implementation here and the source code on GitHub. There's more thinking to be done here about translating variables from dark to light UI. For example, the slider navigation could be tweaked a bit more for better contrast in dark mode, but this would require renaming or adding variables. As I note above, we could also simplify state by using hooks, but that might be for another weekend. If you have a different, better, or slicker implementation, I'd love to hear about it on twitter or via email (alexj [dot] christie [at] gmail [dot] com).
https://www.inadequatefutures.com/blog/02-dark-mode-with-emotion-js/
CC-MAIN-2020-24
refinedweb
794
64.41
- BI raggzy + 15 comments

Well, sadly, this problem isn't really about sparse arrays or whatever the intention was. It's silly. If you're reading this comment, then I'm pretty sure you have the same opinion. I propose to have some fun! Let's write the shortest/cleanest code which solves this stuff in our favourite languages. Here's mine for Java 8 :P

List<String> strings = IntStream.range(0, in.nextInt()).mapToObj(i -> in.next()).collect(Collectors.toList());
IntStream.range(0, in.nextInt()).mapToObj(i -> in.next()).mapToLong(q -> strings.stream().filter(q::equals).count()).forEach(System.out::println);

AakanxuShah + 1 comment

Can you please break down and explain these two lines? I am really curious to know how you did it with just two lines of code!

- BI raggzy + 9 comments

Ok)

IntStream.range(0, in.nextInt()) - constructs a stream of ints [0..N). Think of it like (for i = 0; i < N; i++)
.mapToObj(i -> in.next()) - for each i from the stream we read a word
.collect(Collectors.toList()); - gather into a list

So at this point we have just read N words into List<String> strings.

IntStream.range(0, in.nextInt()).mapToObj(i -> in.next()) -- same as before: read the count, then read M words
.mapToLong(q -> strings.stream().filter(q::equals).count()) - for each of the M words: iterate over the N words, keep those equal to the current word, and count them.
.forEach(System.out::println) - output that count.

That is called 'functional style'. If you really want to learn this approach, I'd suggest learning Haskell. It's the purest functional language.

- BI raggzy + 1 comment

Funny, because it's copypasted from an AC solution without the imports and class boilerplate. If you need, I can provide you with the full copypaste so you can "just copypaste" and execute.

- CA ChrisAyad + 2 comments

Thank you for your helpful comment. I can't find the imports. Actually the problem is with the parsing of the input, as mentioned previously. Even a simple loop had trouble. It seems "hit and miss" with this website.

- BI raggzy + 0 comments

Just in case, here is the version with boilerplate. Java 8, copypaste, ..., profit!

import java.io.*;
import java.util.*;
import java.util.stream.*;

public class Solution {
    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        // the code from above
        List<String> strings = IntStream.range(0, in.nextInt()).mapToObj(i -> in.next()).collect(Collectors.toList());
        IntStream.range(0, in.nextInt()).mapToObj(i -> in.next()).mapToLong(q -> strings.stream().filter(q::equals).count()).forEach(System.out::println);
    }
}

- LD ldanielmadariaga + 2 comments

That's not amazing, it's a clusterfuck just to make it work in 2 lines.

- MS manmeet_lon + 0 comments

Totally agree. Besides, a simpler approach would probably run quicker :)

- LP loganphanxnt1 + 0 comments

Agree with you. Lambda is great, but not in cases with complex biz logic inside.... Thanks for the link you shared.

- M thricefall + 2 comments

Besides a shorter solution, what are the advantages of a functional approach compared to, say, a HashMap? Does the functional approach take more memory and/or run time?

- BI raggzy + 1 comment

What you are looking for is the classical "Functional-vs-Imperative" holy war :). There are tons on that on the internet. It's "managing calculations"-vs-"managing state". In functional style you feel like a mathematician constructing a complete function. In imperative you feel like a programmer thinking "how will the state change if I write this". The classical "what it is"-vs-"how to do it".

The HashMap solution could be written in functional style too. My personal thoughts on that: functional features are good and give better readability. I'd suggest using them in things like list comprehensions, e.g. when you want to filter, count, or aggregate something over elements. But personally I prefer mostly imperative code, because it's self-contained and self-explaining. E.g. you know what's going on for sure, though you have to write more code.

Also, functional-styled code is less buggy in terms of multithreading and multicore/multinode processing; e.g. filters and aggregators/reducers make a well-parallelizable architecture. That's why it's often used in things like map-reduce, data processing and so on.

In terms of memory/runtime, it depends on the environment and the implementation of the functional language. In most cases they are roughly equal. But there are some cases when functional code under the hood generates lots of objects which "annoy" garbage collection. And there are some problems which require managing state directly, e.g. immutability would be an overdose. Let's say you can always do better in imperative, but in most cases you gain more because of the shortness and cleanliness of functional. However, you still need to be sure what it's doing under the hood and use it carefully.

- UT umangthusu + 1 comment

What exactly do you mean by a functional approach? I mean, if you mean just breaking code into different methods such that one method does one job, then we can use a HashMap in that way also. But other than that, are there any serious issues in using a HashMap instead of a char array as discussed in the editorial?

- NG ngokli + 0 comments

"Functional programming" doesn't mean breaking code into smaller methods. Functional programming is a pretty different way to think about software (compared to writing a list of instructions to be followed). I did a little googling and came up with this: . Maybe that helps? I don't have much experience with functional programming myself, but I did enjoy reading a cute little book called The Little Schemer: . I haven't had a chance to read the rest of the series.

JimB6800 + 1 comment

I think he was just trying to get a 2 line solution. The code is terribly inefficient when compared with a HashMap solution, much harder to understand, and (if implemented in any real system) nearly impossible to maintain. As the original poster indicated, he was going for the "shortest/cleanest" code. He might have succeeded with "shortest", but "cleanest" is a stretch.

- BI raggzy + 1 comment

Despite Java's verbosity, if you get used to functional style, what you see there is just "col1.each | e1 -> col2.filter(==e1).count | sout". Isn't that clean, easy to read, and maintainable?

Terribly inefficient compared to HashMap? Disagree. If the author had wanted, he'd have made a test case where all strings have the same hashCodes, and the HashMap solution would've been even 2x worse than straight O(N*N). But in general, for randomly generated input (which was not stated in the task), "terribly inefficient" is true, because the HashMap solution gives you O(N) instead of O(N*N).

JimB6800 + 1 comment

raggzy, no intent to inflame here; I can tell you're a true believer; I work with another. I've heard the "once you learn how to read it, it's much simpler" argument before. I do know how to read it, and sometimes use Java streaming and functional programming. That said, here are a couple of anecdotal observations. 1) A few months back, I informally polled a dozen colleagues (all experienced developers) with small samples of code using Java streaming vs. more traditional style. About 80% found the streaming code more difficult to read. 2) In some recent performance analysis work, I found issues with streaming code. IMO, it becomes so easy to just stream one structure into another, filter/collect, etc., that developers (even ones very experienced with functional programming) begin to forget about the cost of the operations. Data structures are easily created and discarded, without thought.

Regarding the efficiency of the example (where N is the number of strings, and Q is the number of queries): typical hashmap O(N + Q). You're correct that it might be possible (though extremely difficult) to create strings with the same hash, which could cause HashMap performance to degenerate into a TreeMap - which would give a worst case performance of O((N log UniqueN) + (Q log UniqueN)). It is likely that this is still better performance than O(N + N * Q) - unless, of course, the number of queries is very small and most values of N are unique. More plausible than strings with the same hash would be repeated queries of the same string. With the hashmap solution we get: typical O(Q), worst case O(Q log UniqueN). For the streaming solution, O(Q * N).

I recommend using streaming/functional programming where the TYPICAL reader will find it easily understandable. I recommend being careful to watch for tendencies to create and trash structures without thinking about the algorithm. And nice work on doing this with only 2 LOC. I wouldn't have thought to use IntStream.range() as a technique for reading the input like that. Pretty cool.

- BI raggzy + 0 comments

Ah, I thought we'd have some small imperative-functional holy war :) Someone asked me the same above in this thread, and I find your reply here pretty similar to what I replied. Which means we're actually on the same, imperative side :) Just sometimes using Collectors.join() and IntStream.range(), .max(), .min(), .anyMatch() and so on, because they're not bad.

- KV 148W1A1231 + 1 comment

Super, bro. How can I learn functional-style programming languages?

- A amitan2000 + 0 comments

Well, you can also use a trie data structure, just in case the input is very big. But in this question the input set is very small, so you can just use simple code.

- HR h500023029 + 0 comments

Can't we use parallelStream in place of stream for better performance?

- RG tierrarara + 0 comments

WOW +1000

- C porglezomp + 7 comments

Here's mine in Python.

import collections

values = collections.defaultdict(int)
for _ in range(int(input())):
    values[input()] += 1
for _ in range(int(input())):
    print(values[input()])
filters, aggregators/reducers is good paralellable architecture. That's why it's often used in things like map-reduce, data processing and so on. In terms of memory/runtime - depends on environment and implementation of functional language. In most cases they are constantly equal. But there are some cases when functional code under-the-hood generates a lots of object which "annoy" garbage collection. And also, there are some problems which require managing state directly. E.g. immutability would be an overdose. Let's say you can always do better in imperative. But in most cases you gain more because of shortness and cleaniness of functional. However, you still need to be sure what it's doing under-the-hood and use it carefully. - UT umangthusu + 1 comment what exactly do you mean by functional approach? I mean if you mean just breaking code into different methods such that one method does one job then we can use Hashmap in that way also. But other than that are there any serious issues in using a Hashmap instead of char array as discussed in the editorial? - NG ngokli + 0 comments "Functional programming" doesn't mean breaking code into smaller methods. Functional programming is a pretty different way to think about software (compared to writing a list of instructions to be followed). I did a little googling and came up with this: . Maybe that helps? I don't have much experience with functional programming myself, but I did enjoy reading a cute little book called The Little Schemer: . I haven't had a chance to read the rest of the series. JimB6800 + 1 comment I think he was just trying to get a 2 line solution. The code is terribly inefficient when compared with a HashMap solution, much harder to understand, and (if implemented in any real system) nearly impossible to maintain. As the original poster indicated he was going for "shortest/cleanest" code. He might have succeeded with "shortest", but "cleanest" is a stretch. - BI raggzy + 1 comment Despite of java verbosity, if you get used to functional style, what you see there is just "col1.each | e1 -> col2.filter(==e1).count | sout". Isn't that clean, easy to read, and maintainable? Terribly inefficient comparing to HashMap? Disagree. If author wanted, he'd made a test case where all strings have same hashCodes, and HashMap solution would've been even 2xtimes worse than straight O(N*N). But in general, for randomly generated (which was not stated in task) "terribly inefficient" is true, because HashMap solution gives you O(N) instead O(N*N) JimB6800 + 1 comment raggzy, no intent to inflame here, I can tell you're a true believer; I work with another. I've heard the "once you learn how to read it, it's much simpler" argument before. I do know how to read it, and sometimes use Java streaming and functional programming. That said, here are a couple of anecdotal observations. 1) A few months back, I informally polled a dozen colleagues (all experienced developers) with small samples of code using Java streaming vs. more traditional style. About 80% found the streaming code more difficult to read. 2) In some recent performance analysis work, I found issues with streaming code. IMO, it becomes so easy to just stream one structure into another, filter/collect, etc., that developers (even ones very experienced with functional programming) begin to forget about the cost of the operations. Data structures are easily created and discarded, without thought. 
Regarding efficiency of the example (where N is the number of strings, and Q is the number of queries): Typical hashmap O(N + Q). You're correct that it might be possible (though extremely difficult) to create strings with the same hash which could cause HashMap performance to degenerate into a TreeMap - which would give a worst case performance of O((N log UniqueN) + (Q log UniqueN)). It is likely that this is still better performance than O(N + N * Q) - unless, of course, the number of queries is very small and most values of N are unique. More plausible than strings with the same hash, would be repeated queries of the same string. With the hashmap solution we get: typical O(Q), worst case O(Q log UniqueN). For the streaming solution O(Q * N). I recommend using streaming/functional programming, where the TYPICAL reader will find it easily understandable. I recommend being careful to watch for tencencies to create and trash structures without thinking about the algorithm. And nice work on doing this with only 2 LOC. I wouldn't have thought to use IntStream.range() as a technique for reading the input like that. Pretty cool. - BI raggzy + 0 comments Ah, I thought we'd have some small imperative-functional holywar :) Someone asked me above in this thread about the same, and I find your reply here pretty similar to what I replied. Which means we're actually on the same, imperative side :) Just sometimes using Collectors.join() and IntStram.range() .max() .min() .anyMatch() and so on because it's not bad. - KV 148W1A1231 + 1 comment super bro how can i learn functional style programming language - A amitan2000 + 0 comments Well you can also use trie Data Structure Just in case if the input is very big. But in this question the input set is very small so you can just use simple code. - HR h500023029 + 0 comments Can't we use parallelStream in place to stream for better performance. - RG tierrarara + 0 comments WOW +1000 - C porglezomp + 7 comments Here's mine in Python. import collections values = collections.defaultdict(int) for _ in range(int(input())): values[input()] += 1 for _ in range(int(input())): print(values[input()]) - PB polmki99 + 3 comments Python brofist! Here's my python. from collections import defaultdict words = defaultdict(lambda: 0) for _ in range(int(input().strip())): words[input().strip()] += 1 for _ in range(int(input().strip())): print(words[input().strip()]) - AD a2_darwazeh + 1 comment Wow i really went the hard route ay, forgot about dictionaries. import re no_strings = int(input()) strings = [] no_q = 0 queries = [] for _ in range(no_strings): strings.append(input()) no_q = int(input()) for _ in range(no_q): queries.append(re.compile(r'^'+input()+'$')) for pattern in queries: found = 0 for s in strings: found += len(pattern.findall(s)) print(found) - MG mounikavas + 0 comments Can you please explain your code - CU CleverChuk + 0 comments Can you explain your code please pdog1111 + 2 comments Counter makes it even easier from collections import Counter, defaultdict counter = defaultdict(int, Counter([raw_input() for _ in range(input())])) for _ in range(input()): print counter[raw_input()] - D DanHaggard + 2 comments I haven't checked in python2 - but in 3 you don't need defaultdict if you use Counter. 
from collections import Counter, defaultdict cnt = Counter([input().strip() for _ in range(int(input().strip()))]) for _ in range(int(input().strip())): print(cnt[input().strip()]) - JT PapaThor + 1 comment Condensed down to 2 statements (plus the import): from collections import Counter d = Counter([input().strip() for _ in range(int(input()))]) print(*[d[input().strip()] for _ in range(int(input()))], sep='\n') - TS tusharkailash29 + 1 comment I am new to python and I had difficulty understanding the code. Why are two same two for-loops needed? And how does this statement get us the solution : print(cnt[input().strip()]) ? Thanks. - ZC CaptnCrunchCode + 0 comments @tusharkailash29 ... or anyone else currently wondering. We are comparing two variable input ranges in this question, and the ranges may vary in length. The "for loops" are used to organize the inputs as "S" strings followed by "Q" queries. cnt = Counter([input().strip() for _ in range(int(input()))]) The Counter() function creates a dictionary (key,value) pairs. The "S" string inputs are the keys, and their occurences (a.k.a counts) are the value. For every duplicate key that appears in this loop, it's value (count) increases by one. for _ in range(int(input().strip())): print(cnt[input().strip()]) In the second & third lines, "Q" query inputs are the keys we search in the "cnt" dictionary. Think of "input().strip()" in print(cnt[input().strip()]) as the key in (key,value) pair. Searching by cnt[key] provides the dictionary value associated with it. The second "for loop" repeats the cnt[key] search step for the remaining "Q" query inputs. - P philippkipp + 0 comments Python 3: You don't really need to import anything additionally: result = [] for q in queries: result.append(strings.count(q)) return result I don't know how to put a codeblock into the comments rishabh_ags + 2 comments Here's mine :) N = int(input()) strArray = [] for _ in range(N): strArray.append(input().strip()) Q = int(input()) for _ in range(Q): query = input() print(strArray.count(query)) #count how many times query occured in strArray - SZ - PN pratik_nalage + 1 comment What is the use of underscore in for loop? - KL kevyn_liang + 1 comment Python with no external libraries. lst = [] for i in range(int(input())): lst.append(input()) for i in range(int(input())): query_string = input() count = 0 for i in range(len(lst)): if query_string == lst[i]: count += 1 print(count) - PN pratik_nalage + 0 comments Can you elobrate the code. dgodfrey + 7 comments You can use an unordered_multiset (C++11) and its countmethod to make this easier: #include <iostream> #include <iterator> #include <unordered_set> using namespace std; int main() { int n, q, i; string str; unordered_multiset<string> s; cin >> n; for (i = 0; i < n; ++i) { cin >> str; s.insert(str); } cin >> q; for (i = 0; i < q; ++i) { cin >> str; cout << s.count(str) << '\n'; } } - [U - PR pulkit_rathi17 + 1 comment We can also use map with almost similar implementation. Key is required string and value is the number of occurance of that string. If a key is not present in map , it's value is '0' by default. #include <string> #include <iostream> #include <map> using namespace std; int main() { map<string,int> m; int n,q; string s; cin>>n; for(int i=0; i<n; i++){ cin>>s; m[s]++; } cin>>q; for(int i=0;i<q;i++){ cin>>s; cout<<m[s]<<endl; } return 0; } - TL david_daverio + 0 comments I dont understand one thing: unordered_set sould contain no duplicated elements, so count would return 0 or 1.... 
How can this issue be overcome?

- ST SonJohn + 5 comments

ArrayList<String> a = new ArrayList<String>();
Scanner scan = new Scanner(System.in);
int n = scan.nextInt();
for (int i = 0; i < n; i++) {
    a.add(scan.next());
}
int q = scan.nextInt();
for (int i = 0; i < q; i++) {
    int count = 0;
    String s = scan.next();
    for (String str : a) {
        if (str.equals(s)) count++;
    }
    System.out.println(count);
}

LOL

- MA khaleelawn + 0 comments
[deleted]

- MA khaleelawn + 2 comments

for(String str : a)

Can you explain it?

surya_teja0210 + 0 comments

It is similar to a for loop in C/C++. We are comparing each element in the ArrayList with the given strings and incrementing the count value.

bsaharsh85 + 0 comments

It's a for-each loop, the same as a basic for loop with auto-increment (auto i++). It wasn't in earlier versions of Java; it was added in Java 5.

- AB aishik_swarez + 0 comments

It's still O(N^2).

- AM andrexmueller + 0 comments

N = int(raw_input())
page = list()
for x in xrange(N):
    line = raw_input()
    page.append(line)
Q = int(raw_input())
for x in xrange(Q):
    query = raw_input()
    print page.count(query)

solomkinmv + 2 comments

Your solution is very inefficient. Here I suggest a functional approach with better efficiency and readability:

Scanner sc = new Scanner(System.in);
Map<String, Long> counterMap = Stream.generate(sc::next)
        .limit(sc.nextInt())
        .collect(groupingBy(identity(), counting()));
Stream.generate(sc::next)
        .limit(sc.nextInt())
        .map(query -> counterMap.getOrDefault(query, 0L))
        .forEach(System.out::println);

P.S. imports are omitted.

solomkinmv + 0 comments

Stream.generate(sc::next)
        .limit(sc.nextInt())

Reads the first N words, where N is the result of sc.nextInt().

.collect(groupingBy(identity(), counting()))
So in java8 it would give you O(Q*logN) if using HashMap solution. - ND wryan42675 + 1 comment Your solution runs in O(n*q). It is possible to do O(n+q). Here is a faster algorithm. Create a HashMap<String,Integer> for each string in the first set if the map contains the string increment the associated int else put it in the map associated with 1 for each string in the second set if the map contains the string print the associated int else print 0 I believe this also could be implemented in two lines of code, but that's not my style. I used 7, thought it easier to read. - BI raggzy + 0 comments Best you can do with map-based is O((n+q)*logN). Think about HashMap performance when all strings have same hashcodes, your solution is O(n*q) as well. Only in java8 they introduced "treemap inside hashmap" for equi-hash comparable objects. Here is reference to a 3-liner for hashmap solution in java8, maybe you'll like it jersey355 + 0 comments Agreed. I don't see what this problem has to do with sparse arrays either. I implemented to simplest solution using an array and iterating thorugh it to get the counts (since it is in the array problem section), but obviously there are other solutions that are much more efficient such as leveraging a hash table/map, or even a trie if you want to save space. Eitherway, not a very challenging problem. johnny_nonsense + 0 comments If you're going to use a Trainwreck, at least format it more like this: Scanner in = new Scanner(System.in); List<String> strings = IntStream.range(0,in.nextInt()) .mapToObj(i -> in.next()) .collect(Collectors.toList()); IntStream.range(0, in.nextInt()) .mapToObj(i -> in.next()) // over here, you can comment individual functions where appropriate. .mapToLong(q -> strings.stream() .filter(q::equals) .count()) .forEach(System.out::println); The line count metric is such a poor one. - SS shivanshsahu753 + 0 comments[deleted] grinya_guitar + 3 comments Come on guys =) why you think of it as the fewer lines or seconds you spent — you reach the better solution? This challenge is about fundamental computer science techniques, google about Sparse Arrays and make a decision why this challenge is named so (this is not without purpose). Believe me it's interesting and valuable =) HashMap is the right pill here and of course we would hardly find any programming language not giving HashMaps out of the box. But how HashMaps actually work under the hoods? Go through this! - AG alexfoxgill + 0 comments It's a nice thought but I wish it had been approached like e.g. the Insertion Sort series, where the challenges steer you towards the correct solution rather than setting a problem with no reference to the subject except the title yujinred + 3 comments I used the map function in C++ with every string mapped to their occurence. Once I stored everything to the map I can just extract the number of occurences and print it out. - D abhinavkanoria + 1 comment What does it mean? Could you explain it in detail? What is a map in C++, how to implement it, what is being mapped and to what? Would be really helpful. Thanks :) haruelrovix + 1 comment hi @abhinavkanoria , Map in C++ is associative containers that store elements formed by a combination of a key value and a mapped value. 
You can see the rest explanation from here -> My submission below is the implementation of Map to solve the challenge -> swetashah10 + 1 comment Yes and this approach is actually O(n) unlike some comments above who have achieved this in O(n^2) - AV AnvilDev + 1 comment None of these solutions are O(n), actually. You have to take into consideration the time it takes to read a string, O(c), where c is the number of characters in the string. jjlearman + 0 comments You're partly right and partly wrong. These are indeed O(N) [or really, O(N+M)], assuming O(C) is constant, which it is if C is bound -- an often reasonable assumption. These days, though, strings can sometimes be extraordinarily long, so it is an important point to keep in mind, in which case it's O(N*C) [or really O(N*C + M), assuming you optimize by hashing each string]. For example, it's not unusual to run a regular expression on arbitrary input, and the input can be many megabytes long -- in which case grepstrings that used to work just fine fail spectacularly. I learned this the hard way! Matt_Scandale + 8 comments This challenge is categorized as "Difficult"? I solved it in just a few minutes using a straight-ahead O(n^2) loop and it worked. Then I went back and re-did it with an associative array (hash map) in just a few more minutes and it worked as well. I don't think I learned anything about data structures or sparse arrays. - AA adfsdfadsf + 1 comment speak for yourself. your arrogance is extremely demeaning. nobody cares how fast you solved it you coding monkey. - SC samclearman + 1 comment Matt_Scandale is correct, this is not a difficult problem. And I appreciated his comment, as I came here specifically because I was wondering why it had been classified as difficult when it isn't (was I missing something?). You on the other and are contributing nothing to the discussion other than insults. IAmTheLight + 1 comment After some light research, it seems like SparseArrays are a memory lighter alternative to HashMaps in a subset of circumstances. However, HashMaps are faster and work for larger collections. I have no idea how to use SparseArrays, and since I already know how to use HashMaps, I also found the problem rather easy. I'm a little surprised the O(n^2) loop worked, but that's exactly the reason I came here (because at 1000x1000 it seemed feasible). This problem could use some improvement. Rather than re-classifying the difficulty, the problem statement and test cases should be improved to better illustrate the point, which may also raise the difficulty and block out the brute force method. Additionally, the problem statement would probably need to block the use of HashMaps, again, to illustrate the lesson on using SparseArrays. 
- GB gerald_edward_b1 + 4 comments Here is a solution that actually uses the notion of "Sparse Arrays" (if you used Maps or LINQ you really missed the point of the exercise): using System; using System.Collections.Generic; using System.IO; class Solution { public class Node { private int count = 0; private int depth; private Node[] subnodes = null; private Node[] Subnodes { get { if ( subnodes == null ) { subnodes = new Node[52]; // support A-Z a-z } return subnodes; } } public Node () : this(0) {} private Node ( int depth ) { this.depth = depth; } public int Add ( string str ) { Node current = this; for( int i=0; i<str.Length; i++ ) { int index = CharNodeIndex(str[i]); if ( current.Subnodes[index] == null ) { current.Subnodes[index] = new Node( depth + 1 ); } current = current.Subnodes[index]; } current.count++; return current.count; } public int CountOf( string str ) { Node current = this; for( int i=0; i<str.Length; i++ ) { int index = CharNodeIndex(str[i]); if ( current.Subnodes[index] == null ) { return 0; } current = current.Subnodes[index]; } return current.count; } private int CharNodeIndex ( char c ) { int index = 0; if ( c >= 'A' && c <= 'Z' ) { index = c - 'A'; } else if ( c >= 'a' && c <= 'z' ) { index = c - 'a' + 26; } else { throw new ArgumentException("String may only contain the letters A-Z and a-z"); } return index; } } static void Main(String[] args) { Node node = new Node(); int strCount = int.Parse(Console.In.ReadLine()); for ( int i=0; i<strCount; i++ ) { string str = Console.In.ReadLine(); node.Add( str ); } int queryCount = int.Parse(Console.In.ReadLine()); for ( int i=0; i<queryCount; i++ ) { string str = Console.In.ReadLine(); Console.Out.WriteLine( node.CountOf(str) ); } } } - NM nicholas_mahbou1 + 0 comments I used your solution to help me with my generic one but using LinkedLists instead of fixed size arrays using System; using System.Collections.Generic; using System.IO; class Solution { static void Main(String[] args) { int numStrings = Int32.Parse(Console.ReadLine()); var strings = new SparseArray<string, char>(); for(int i = 0; i < numStrings; i++) strings.Add(Console.ReadLine()); int numQueries = Int32.Parse(Console.ReadLine()); for(int i = 0; i < numQueries; i++) { string query = Console.ReadLine(); Console.WriteLine(strings.Count(query)); } } } class SparseArrayNode<T> { private T _item; private LinkedList<SparseArrayNode<T>> _list = new LinkedList<SparseArrayNode<T>>(); public int Count { get; set; } = 0; public SparseArrayNode() { } public SparseArrayNode(T item) { _item = item; } public SparseArrayNode<T> AddChild(T item) { var sparseArrayNode = _list.AddLast(new SparseArrayNode<T>(item)); return sparseArrayNode.Value; } public SparseArrayNode<T> Find(T item) { var currentNode = _list.First; while(currentNode != null) { if(currentNode.Value._item.Equals(item)) return currentNode.Value; currentNode = currentNode.Next; } return null; } } class SparseArray<TCollection, TItem> where TCollection : IEnumerable<TItem> { SparseArrayNode<TItem> _root = new SparseArrayNode<TItem>(); public void Add(TCollection value) { var current = _root; foreach(var item in value) { var node = current.Find(item); if(node == null) node = current.AddChild(item); current = node; } current.Count++; } public SparseArrayNode<TItem> Find(TCollection value) { var current = _root; foreach(var item in value) { current = current.Find(item); if(current == null) break; } return current; } public int Count(TCollection value) { var node = Find(value); return node != null ? 
node.Count : 0; } } - HF guangzefrio + 0 comments That You gived the solution solved my doubt on this problem.It's much better than using map directly as it used less memory. - PA - ZA azeyad + 0 comments I tried to find a way to implement a fix that somehow relates to the "Sparse Array" concept, but in sparse array we convert the array into a linked a list with the non empty entries only to save storage, so I imagined that converting the sparse array of word characters to a linked list is the way to go, here is my try: using System; using System.Collections.Generic; using System.IO; class Solution { static void Main(String[] args) { /* Enter your code here. Read input from STDIN. Print output to STDOUT. Your class should be named Solution */); } } static WordNode StartWordNode = null; static WordNode LastWordNode = null; public static void AddWord(string word) { //Every new word add new word node if (StartWordNode == null) LastWordNode = StartWordNode = new WordNode(); else LastWordNode = LastWordNode.NextWordNode = new WordNode(); CharacterNode currentCharacter = null; for (int i = 0; i < word.Length; i++) { if (currentCharacter == null) currentCharacter = LastWordNode.Character = new CharacterNode(); else currentCharacter = currentCharacter.NextCharacterNode = new CharacterNode(); currentCharacter.Char = word[i]; currentCharacter.Position = i; } } private static void actual_WordCount(string word) { int count = 0; WordNode currentWord = StartWordNode; while (currentWord != null) { CharacterNode currentCharNode = currentWord.Character; bool allCharsFound = true; for (int i = 0; i < word.Length; i++) { if (currentCharNode.Char != word[i] || currentCharNode.Position != i || (i == word.Length - 1 && currentCharNode.NextCharacterNode != null)) { allCharsFound = false; break; } currentCharNode = currentCharNode.NextCharacterNode; } if (allCharsFound) count++; currentWord = currentWord.NextWordNode; } Console.WriteLine(count); } public static void countWords() {); } } } public class CharacterNode { public char Char { get; set; } public int Position { get; set; } public CharacterNode NextCharacterNode { get; set; } } public class WordNode { public int WordCount { get; set; } public CharacterNode Character { get; set; } public WordNode NextWordNode { get; set; } } tangtianjie + 1 comment hi,guy,I totally agree with you, i dont't think i learned anything about data structure or sparse arrays. ramchandra_stri1 + 2 comments[deleted] balajianoopgupta + 0 comments Please dont post such comments. Discussion forum is there to learn. If you cant help others, stop commenting on others posts - JW rabbitfighter + 0 comments You deserve a medal. I had no problems with this either, but you could have just said "it had nothing to do with sparse arrays" without bragging. I think @ adfsdfadsf was a little pissed, and I don't name call but I kind of agree. Your ego is getting away with you and for newcomers, it can intimidate them. Maybe you're the class bully, or some affluent person, I don't know your circumstances, but you are not being helpful to learners with your comment. I don't think your comment was constructive. We're all trying to learn here, so maybe in future comments lose the bravado, and concentrate on substance. All comments here should be content related, not a forum to brag about how quickly you performed a task. chromano + 1 comment Well, I guess the idea was to search about Sparse Arrays (hence the title) and implement the challenge using it. 
chromano: Well, I guess the idea was to search about Sparse Arrays (hence the title) and implement the challenge using it. Hashmaps are the obvious choice, even though Sparse Arrays will have the best compromise IMHO (storage space vs. performance). You still can go ahead and learn about sparse arrays and perhaps re-implement the solution in order to learn something out of this challenge :)

Alexhans: I think the goal was to teach us about sparse arrays, but without sufficient memory or speed constraints we can freely decide to go with either a hash/dictionary or a sparse array. For me the quickest solution in C++ was obviously using std::map. I'll try to implement it with sparse arrays and see what the differences are, time- and memory-wise.

mhadaily: I'd like to share my solution in JavaScript:

function processData(input) {
    input = input.split('\n');
    var onlyNumber = input.filter(item => !isNaN(Number(item))).map(Number);
    var onlyStrings = input.filter(item => isNaN(Number(item)));
    var strings = onlyStrings.slice(0, onlyNumber[0]);
    var query = onlyStrings.slice(-onlyNumber[1]);
    query.map(item => {
        console.log(strings.filter(current => current === item).length);
    });
}

1. Get the strings and the queries respectively.
2. Iterate over the queries and filter the strings with the same value; the length of the filtered list is the number of repetitions.

It could be better with a reduce function; however, it passes all test cases.

shivams359: Here is my simple code for this:

int main() {
    int n, i, m, j;
    cin >> n;
    string s[n];
    for (i = 0; i < n; i++) {
        cin >> s[i];
    }
    cin >> m;
    string t[m];
    for (i = 0; i < m; i++) {
        cin >> t[i];
    }
    for (i = 0; i < m; i++) {
        int flag = 0;
        for (j = 0; j < n; j++) {
            if (t[i] == s[j])
                flag++;
        }
        cout << flag << "\n";
    }
    return 0;
}

bcreane: The topic implies an interesting data structure (e.g. a "trie"), but a hash with the string as key and count as value is a simple and efficient solution. See github/bcreane for the hash approach. Maybe there's a way to lead us toward the data structure you were thinking of, and require intermediate steps to be documented in stdout? Btw, thanks for the problems :-)

romashka: The problem is referring to sparse arrays. A prefix tree is a totally different structure. The main usage of the prefix tree (trie) is dynamic string matching, while sparse arrays are good for storing big sparse data in a compact form. Sparse arrays and matrices are very common in linear solvers.

akik22: Pythonic approach. Any modification is welcome.

N = []
Q = []
for _ in range(int(input())):
    N.append(input())
for _ in range(int(input())):
    Q.append(input())
for i in Q:
    print(N.count(i))

roger_erens: Maybe merge the last two for loops into one?

for _ in range(int(input())):
    print(N.count(input()))

shabeena18: Here is my code:

import java.io.*;
import java.util.*;

public class Solution {
    public static void main(String[] args) {
        Scanner sc = new Scanner(System.in);
        int n = sc.nextInt();
        ArrayList<String> name = new ArrayList<String>();
        for (int i = 0; i < n; i++)
            name.add(sc.next());
        int q = sc.nextInt();
        for (int i = 0; i < q; i++)
            System.out.println(Collections.frequency(name, sc.next()));
    }
}

kchaitanya863: Two lines in Python do the work:

s = [input() for s in range(int(input()))]
[print(s.count(input())) for j in range(int(input()))]

The first line stores the inputs in a list. The second line counts the occurrences of the present input in the previous list.
All cloud vendors offer APIs for accessing their services -- if they don't, they're not a genuine cloud vendor in my book at least. The onus is on you as a system administrator to learn how to use these APIs, which can vary wildly from one provider to another. Enter libcloud, a Python-based package that offers a unified interface to various cloud provider APIs. The list of supported vendors is impressive, and more are added all the time. Libcloud was started by Cloudkick but has since migrated to the Apache Foundation as an Incubator Project.

One thing to note is that libcloud goes for breadth at the expense of depth, in that it only supports a subset of the available provider APIs -- things such as creating, rebooting, and destroying an instance, and listing all instances. If you need to go in-depth with a given provider's API, you need to use other libraries that cover all or at least a large portion of the functionality exposed by the API. Examples of such libraries are boto for Amazon EC2 and python-cloudservers for Rackspace.

Introducing libcloud

The current stable version of libcloud is 0.4.0. You can install it from PyPI via:

# easy_install apache-libcloud

The main concepts of libcloud are providers, drivers, images, sizes and locations. A provider is a cloud vendor such as Amazon EC2 or Rackspace. Note that currently each EC2 region (US East, US West, EU West, Asia-Pacific Southeast) is exposed as a different provider, although they may be unified in the future.

The common operations supported by libcloud are exposed for each provider through a driver. If you want to add another provider, you need to create a new driver and implement the interface common to all providers (in the Python code, this is done by subclassing a base NodeDriver class and overriding/adding methods appropriately, according to the specific needs of the provider).

Images are provider-dependent, and generally represent the OS flavors available for deployment for a given provider. In EC2-speak, they are equivalent to an AMI. Sizes are provider-dependent, and represent the amount of compute, storage and network capacity that a given instance will use when deployed. The more capacity, the more you pay and the happier the provider.

Locations correspond to geographical data center locations available for a given provider; however, they are not very well represented in libcloud. For example, in the case of Amazon EC2, they currently map to EC2 regions rather than EC2 availability zones. However, this will change in the near future (as I will describe below, proper EC2 availability zone management is being implemented). As another example, Rackspace is represented in libcloud as a single location, listed currently as DFW1; however, your instances will get deployed at a data center determined at your Rackspace account creation time (thanks to Paul Querna for clarifying this aspect).

Managing instances with libcloud

Getting a connection to a provider via a driver

All the interactions with a given cloud provider happen in libcloud across a connection obtained via the driver for that provider.
Here is the canonical code snippet for that, taking EC2 as an example:

from libcloud.types import Provider
from libcloud.providers import get_driver

EC2_ACCESS_ID = 'MY ACCESS ID'
EC2_SECRET_KEY = 'MY SECRET KEY'

EC2Driver = get_driver(Provider.EC2)
conn = EC2Driver(EC2_ACCESS_ID, EC2_SECRET_KEY)

For Rackspace, the code looks like this:

USER = 'MyUser'
API_KEY = 'MyApiKey'

Driver = get_driver(Provider.RACKSPACE)
conn = Driver(USER, API_KEY)

Getting a list of images available for a provider

Once you get a connection, you can call a variety of informational methods on that connection, for example list_images, which returns a list of NodeImage objects. Be prepared for this call to take a while, especially in Amazon EC2, which in the US East region returns no less than 6,982 images currently. Here is a code snippet that prints the number of available images, and the first 5 images returned in the list:

EC2Driver = get_driver(Provider.EC2)
conn = EC2Driver(EC2_ACCESS_ID, EC2_SECRET_KEY)
images = conn.list_images()
print len(images)
print images[:5]

6982
[<NodeImage: id=aki-00806369, name=karmic-kernel-zul/ubuntu-kernel-2.6.31-300-ec2-i386-20091001-test-04.manifest.xml, driver=Amazon EC2 (us-east-1) ...>,
 <NodeImage: id=aki-00896a69, name=karmic-kernel-zul/ubuntu-kernel-2.6.31-300-ec2-i386-20091002-test-04.manifest.xml, driver=Amazon EC2 (us-east-1) ...>,
 <NodeImage: id=aki-008b6869, name=redhat-cloud/RHEL-5-Server/5.4/x86_64/kernels/kernel-2.6.18-164.x86_64.manifest.xml, driver=Amazon EC2 (us-east-1) ...>,
 <NodeImage: id=aki-00f41769, name=karmic-kernel-zul/ubuntu-kernel-2.6.31-301-ec2-i386-20091012-test-06.manifest.xml, driver=Amazon EC2 (us-east-1) ...>,
 <NodeImage: id=aki-010be668, name=ubuntu-kernels-milestone-us/ubuntu-lucid-i386-linux-image-2.6.32-301-ec2-v-2.6.32-301.4-kernel.img.manifest.xml, driver=Amazon EC2 (us-east-1) ...>]

Here is the output of the same code running against the Rackspace driver:

23
[<NodeImage: id=58, name=Windows Server 2008 R2 x64 - MSSQL2K8R2, driver=Rackspace ...>,
 <NodeImage: id=71, name=Fedora 14, driver=Rackspace ...>,
 <NodeImage: id=29, name=Windows Server 2003 R2 SP2 x86, driver=Rackspace ...>,
 <NodeImage: id=40, name=Oracle EL Server Release 5 Update 4, driver=Rackspace ...>,
 <NodeImage: id=23, name=Windows Server 2003 R2 SP2 x64, driver=Rackspace ...>]

Note that a NodeImage object for a given provider may have provider-specific information, stored in most cases in a variable called 'extra'. It pays to inspect the NodeImage objects by printing their __dict__ member variable. Here is an example for EC2:

print images[0].__dict__

{'extra': {}, 'driver': <libcloud.drivers.ec2.EC2NodeDriver object at 0x...>, 'id': 'aki-00806369', 'name': 'karmic-kernel-zul/ubuntu-kernel-2.6.31-300-ec2-i386-20091001-test-04.manifest.xml'}

In this case, the NodeImage object has an id, a name and a driver, with no 'extra' information. The same code running against Rackspace, with similar information being returned:

print images[0].__dict__

{'extra': {'serverId': None}, 'driver': <libcloud.drivers.rackspace.RackspaceNodeDriver object at 0x...>, 'id': '4', 'name': 'Debian 5.0 (lenny)'}

Getting a list of sizes available for a provider

When you call list_sizes on a connection to a provider, you retrieve a list of NodeSize objects representing the available sizes for that provider.
Amazon EC2 example:

EC2Driver = get_driver(Provider.EC2)
conn = EC2Driver(EC2_ACCESS_ID, EC2_SECRET_KEY)
sizes = conn.list_sizes()
print len(sizes)
print sizes[:5]
print sizes[0].__dict__

9
[<NodeSize: id=m1.large, name=Large Instance, ram=7680, disk=850, bandwidth=None, price=.38, driver=Amazon EC2 (us-east-1) ...>,
 <NodeSize: id=c1.xlarge, name=High-CPU Extra Large Instance, ram=7680, disk=1690, bandwidth=None, price=.76, driver=Amazon EC2 (us-east-1) ...>,
 <NodeSize: id=m1.small, name=Small Instance, ram=1740, disk=160, bandwidth=None, price=.095, driver=Amazon EC2 (us-east-1) ...>,
 <NodeSize: id=c1.medium, name=High-CPU Medium Instance, ram=1740, disk=350, bandwidth=None, price=.19, driver=Amazon EC2 (us-east-1) ...>,
 <NodeSize: id=m1.xlarge, name=Extra Large Instance, ram=15360, disk=1690, bandwidth=None, price=.76, driver=Amazon EC2 (us-east-1) ...>]
{'name': 'Large Instance', 'price': '.38', 'ram': 7680, 'driver': <libcloud.drivers.ec2.EC2NodeDriver object at 0x...>, 'bandwidth': None, 'disk': 850, 'id': 'm1.large'}

The same code running against Rackspace:

7
[<NodeSize: id=1, name=256 server, ram=256, disk=10, bandwidth=None, price=.015, driver=Rackspace ...>,
 <NodeSize: id=2, name=512 server, ram=512, disk=20, bandwidth=None, price=.030, driver=Rackspace ...>,
 <NodeSize: id=3, name=1GB server, ram=1024, disk=40, bandwidth=None, price=.060, driver=Rackspace ...>,
 <NodeSize: id=4, name=2GB server, ram=2048, disk=80, bandwidth=None, price=.120, driver=Rackspace ...>,
 <NodeSize: id=5, name=4GB server, ram=4096, disk=160, bandwidth=None, price=.240, driver=Rackspace ...>]
{'name': '256 server', 'price': '.015', 'ram': 256, 'driver': <libcloud.drivers.rackspace.RackspaceNodeDriver object at 0x...>, 'bandwidth': None, 'disk': 10, 'id': '1'}

Getting a list of locations available for a provider

As I mentioned before, locations are somewhat ambiguous currently in libcloud. For example, when you call list_locations on a connection to the EC2 provider (which represents the EC2 US East region), you get information about the region and not about the availability zones (AZs) included in that region:

EC2Driver = get_driver(Provider.EC2)
conn = EC2Driver(EC2_ACCESS_ID, EC2_SECRET_KEY)
print conn.list_locations()

[<NodeLocation: id=0, name=Amazon US N. Virginia, country=US, driver=Amazon EC2 (us-east-1)>]

However, there is a patch sent by Tomaž Muraus to the libcloud mailing list which adds support for EC2 availability zones. For example, the US East region has 4 AZs: us-east-1a, us-east-1b, us-east-1c, us-east-1d. These AZs should be represented by libcloud locations, and indeed the code with the patch applied shows just that:

print conn.list_locations()

[<EC2NodeLocation: id=0, name=Amazon US N. Virginia, country=US, availability_zone=us-east-1a driver=Amazon EC2 (us-east-1)>,
 <EC2NodeLocation: id=1, name=Amazon US N. Virginia, country=US, availability_zone=us-east-1b driver=Amazon EC2 (us-east-1)>,
 <EC2NodeLocation: id=2, name=Amazon US N. Virginia, country=US, availability_zone=us-east-1c driver=Amazon EC2 (us-east-1)>,
 <EC2NodeLocation: id=3, name=Amazon US N. Virginia, country=US, availability_zone=us-east-1d driver=Amazon EC2 (us-east-1)>]

Hopefully the patch will make it soon into the libcloud github repository, and then into the next libcloud release.
(Update 02/24/11: The patch did make it into the latest libcloud release, which is 0.4.2 at this time.)

If you run list_locations on a Rackspace connection, you get back DFW1, even though your instances may actually get deployed at a different data center. Hopefully this too will be fixed soon in libcloud:

Driver = get_driver(Provider.RACKSPACE)
conn = Driver(USER, API_KEY)
print conn.list_locations()

[<NodeLocation: id=0, name=Rackspace DFW1, country=US, driver=Rackspace>]

Launching an instance

The API call for launching an instance with libcloud is create_node. It has 3 required parameters: a name for your new instance, a NodeImage and a NodeSize. You can also specify a NodeLocation (if you don't, the default location for that provider will be used).

EC2 node creation example

A given provider driver may accept other parameters to the create_node call. For example, EC2 accepts an ex_keyname argument for specifying the EC2 key you want to use when creating the instance.

Note that to create a node, you have to know what image and what size you want to use for that node. Here the code snippets I showed above for retrieving images and sizes available for a given provider can come in handy. You can either retrieve the full list and iterate through the list until you find your desired image and size (either by name or by id), or you can construct NodeImage and NodeSize objects from scratch, based on the desired id.

Example of a NodeImage object for EC2 corresponding to a specific AMI:

image = NodeImage(id="ami-014da868", name="", driver="")

Example of a NodeSize object for EC2 corresponding to an m1.small instance size:

size = NodeSize(id="m1.small", name="", ram=None, disk=None, bandwidth=None, price=None, driver="")

Note that in both examples the only parameter that needs to be set is the id, but all the other parameters need to be present in the call, even if they are set to None or the empty string.

In the case of EC2, for the instance to be actually usable via ssh, you also need to pass the ex_keyname parameter and set it to a keypair name that exists in your EC2 account for that region. Libcloud provides a way to create or import a keypair programmatically. Here is a code snippet that creates a keypair via the ex_create_keypair call (specific to the libcloud EC2 driver), then saves the private key in a file in /root/.ssh on the machine running the code:

keyname = sys.argv[1]
resp = conn.ex_create_keypair(name=keyname)
key_material = resp.get('keyMaterial')
if not key_material:
    sys.exit(1)
private_key = '/root/.ssh/%s.pem' % keyname
f = open(private_key, 'w')
f.write(key_material + '\n')
f.close()
os.chmod(private_key, 0600)

You can also pass the name of an EC2 security group to create_node via the ex_securitygroup parameter. Libcloud also allows you to create security groups programmatically by means of the ex_create_security_group method specific to the libcloud EC2 driver.

Now, armed with the NodeImage and NodeSize objects constructed above, as well as the keypair name, we can launch an instance in EC2:

node = conn.create_node(name='test1', image=image, size=size, ex_keyname=keyname)

Note that we didn't specify any location, so we have no control over the availability zone where the instance will be created. With Tomaž's patch we can actually get a location corresponding to our desired availability zone, then launch the instance in that zone.
Here is an example for us-east-1b:

locations = conn.list_locations()
for location in locations:
    if location.availability_zone.name == 'us-east-1b':
        break
node = conn.create_node(name='tst', image=image, size=size, location=location, ex_keyname=keyname)

Once the node is created, you can call the list_nodes method on the connection object and inspect the current status of the node, along with other information about that node. In EC2, a new instance is initially shown with a status of 'pending'. Once the status changes to 'running', you can ssh into that instance using the private key created above.

Printing node.__dict__ for a newly created instance shows it with 'pending' status:

{'name': 'i-f692ae9b', 'extra': {'status': 'pending', 'productcode': [], 'groups': None, 'instanceId': 'i-f692ae9b', 'dns_name': '', 'launchdatetime': '2010-12-14T20:25:22.000Z', 'imageId': 'ami-014da868', 'kernelid': None, 'keyname': 'k1', 'availability': 'us-east-1d', 'launchindex': '0', 'ramdiskid': None, 'private_dns': '', 'instancetype': 'm1.small'}, 'driver': <libcloud.drivers.ec2.EC2NodeDriver object at 0x...>, 'public_ip': [''], 'state': 3, 'private_ip': [''], 'id': 'i-f692ae9b', 'uuid': '76fcd974aab6f50092e5a637d6edbac140d7542c'}

Printing node.__dict__ a few minutes after the instance was launched shows the instance with 'running' status:

{'name': 'i-f692ae9b', 'extra': {'status': 'running', 'productcode': [], 'groups': ['default'], 'instanceId': 'i-f692ae9b', 'dns_name': 'ec2-184-72-92-114.compute-1.amazonaws.com', 'launchdatetime': '2010-12-14T20:25:22.000Z', 'imageId': 'ami-014da868', 'kernelid': None, 'keyname': 'k1', 'availability': 'us-east-1d', 'launchindex': '0', 'ramdiskid': None, 'private_dns': 'domU-12-31-39-04-65-11.compute-1.internal', 'instancetype': 'm1.small'}, 'driver': <libcloud.drivers.ec2.EC2NodeDriver object at 0x...>, 'public_ip': ['ec2-184-72-92-114.compute-1.amazonaws.com'], 'state': 0, 'private_ip': ['domU-12-31-39-04-65-11.compute-1.internal'], 'id': 'i-f692ae9b', 'uuid': '76fcd974aab6f50092e5a637d6edbac140d7542c'}

Note also that the 'extra' member variable of the node object shows a wealth of information specific to EC2 -- things such as security group, AMI id, kernel id, availability zone, private and public DNS names, etc. Another interesting thing to note is that the name member variable of the node object is now set to the EC2 instance id, thus guaranteeing uniqueness of names across EC2 node objects.

At this point (assuming the machine where you run the libcloud code is allowed ssh access into the default EC2 security group) you should be able to ssh into the newly created instance using the private key corresponding to the keypair you used to create the instance. In my case, I used the k1.pem private key file created via ex_create_keypair, and I ssh-ed into the private IP address of the new instance, because I was already on an EC2 instance in the same availability zone:

# ssh -i ~/.ssh/k1.pem domU-12-31-39-04-65-11.compute-1.internal

Rackspace node creation example

Here is another example of calling create_node, this time using Rackspace as the provider. Before I ran this code, I already called list_images and list_sizes on the Rackspace connection object, so I know that I want the NodeImage with id 71 (which happens to be Fedora 14) and the NodeSize with id 1 (the smallest one).
The code snippet below will create the node using the image and the size I specify, with a name that I also specify (this name needs to be different for each call of create_node):

Driver = get_driver(Provider.RACKSPACE)
conn = Driver(USER, API_KEY)
images = conn.list_images()
for image in images:
    if image.id == '71':
        break
sizes = conn.list_sizes()
for size in sizes:
    if size.id == '1':
        break
node = conn.create_node(name='testrackspace', image=image, size=size)
print node.__dict__

The code prints out:

{'name': 'testrackspace', 'extra': {'metadata': {}, 'password': 'testrackspaceO1jk6O5jV', 'flavorId': '1', 'hostId': '9bff080afbd3bec3ca140048311049f9', 'imageId': '71'}, 'driver': <libcloud.drivers.rackspace.RackspaceNodeDriver object at 0x...>, 'public_ip': ['184.106.187.226'], 'state': 3, 'private_ip': ['10.180.67.242'], 'id': '497741', 'uuid': '1fbf7c3fde339af9fa901af6bf0b73d4d10472bb'}

Note that the name variable of the node object was set to the name we specified in the create_node call. You don't log in with a key (at least initially) to a Rackspace node; instead you're given a password you can use to log in as root at the public IP that is also returned in the node information:

# ssh root@184.106.187.226
root@184.106.187.226's password:
[root@testrackspace ~]#

Rebooting and destroying instances

Once you have a list of nodes in a given provider, it's easy to iterate through the list and choose a given node based on its unique name -- which as we've seen is the instance id for EC2 and the hostname for Rackspace. Once you identify a node, you can call destroy_node or reboot_node on the connection object to terminate or reboot that node.

Here is a code snippet that performs a destroy_node operation for an EC2 instance with a specific instance id:

EC2Driver = get_driver(Provider.EC2)
conn = EC2Driver(EC2_ACCESS_ID, EC2_SECRET_KEY)
nodes = conn.list_nodes()
for node in nodes:
    if node.name == 'i-66724d0b':
        conn.destroy_node(node)

Here is another code snippet that performs a reboot_node operation for a Rackspace node with a specific hostname:

Driver = get_driver(Provider.RACKSPACE)
conn = Driver(USER, API_KEY)
nodes = conn.list_nodes()
for node in nodes:
    if node.name == 'testrackspace':
        conn.reboot_node(node)

The Overmind project

I would be remiss if I didn't mention a new but very promising project started by Miquel Torres: Overmind. The goal of Overmind is to be a complete server provisioning and configuration management system. For the server provisioning portion, Overmind uses libcloud, while also offering a Django-based Web interface for managing providers and nodes. EC2 and Rackspace are supported currently, but it should be easy to add new providers. If you are interested in trying out Overmind and contributing code or tests, please send a message to the overmind-dev mailing list. Next versions of Overmind aim to add configuration management capabilities using Opscode Chef.

Further reading

- libcloud Getting Started page
- libcloud API documentation
- libcloud JIRA issue tracker
- libcloud mailing list archives
This discussion is a continuation of managing closed worlds of symbols via alpha-renaming in loosely coupled concurrent apps, where a number of different topics are touched, most recently a Lisp sub-thread at the end, in addition to a bunch of stuff I was posting about lightweight user-space process architecture involving threads and fibers. When that discussion gets less useful due to sheer size, I'll continue here when I have anything novel to say, which probably won't be that much. You're welcome to continue here anything started there, too.

(A number of things I said in the other discussion might be expected to bother folks who want to see fewer development systems around, with less competition, fewer languages, and narrower choices due to existing "winners" who will guide us to a managed promised-land where we can be tenants for a modest fee. Is that enough hyperbole? I can't be bothered to get worked up about it, but I see many things as subject to change, especially along lines of choice in portable code for app organization not dictated by trending vendors. I expect to mostly ignore politics and advocacy.)

The first thing I want to continue is discussion of performance among options for implementing fibers (single-threaded user-space control flows that multiplex one OS thread), because I like heap-based spaghetti stacks, while others recommend options I think are poor choices, but to each his own.

This is a reply to the opening question in Keean Schupke's Variables post yesterday. I don't expect to discuss this further, unless I see something surprising. Correcting misapprehension is not one of my compulsions, and neither is teaching how to do things that seem obvious to me, beyond friendly summary in sketch form. Infinite patience is more of a workplace habit.

"Could you explain how you handle variables across async calls that return to the event loop, so that I can benchmark this against the setjmp method?"

Variables are in stack frames, same as always when locals live in stack frames. The only difference is whether frames are in a contiguous stack array or in a discontiguous stack linked list. When a thread suspends by blocking, variables stay where they are on the stack. When a fiber suspends by parking, variables also stay where they are on the stack. You are proposing to copy parts of a stack array, and want to benchmark this against zero-copy stack lists, because you have a hypothesis that copying might be faster. That answer appears earlier in the other discussion, but there's a lot to read, so I'll boil it down.

As prerequisite, one should read about stacks, call discipline, and activation frames in plain vanilla old-school implementations of programming languages that use frames, so backtraces are found easily by following frame pointer (fp) list linkages. Old treatises on programming 68K in assembler give fine frame-discipline explanations that are easy to read. Suppose you look at some C code whose stack is contiguous in the address space, with frames going up to main() and beyond, laid end-to-end in contiguous memory, typically linked in a list via frame pointers (which is where a debugger gets a backtrace when you ask).
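For concreteness, here is a minimal sketch of walking such a frame-pointer chain, the way a debugger does for a backtrace. The layout (caller's saved fp at the frame pointer, return address next to it) matches common 68K/x86 conventions and is an assumption of the sketch; it breaks under optimizations like -fomit-frame-pointer.

#include <stdio.h>

/* Assumed activation-record layout: saved caller fp, then return address. */
struct frame {
    struct frame *caller;     /* the fp list linkage */
    void         *ret_addr;   /* where this activation returns to */
};

/* Follow the linkage toward main(), one activation record per hop. */
void backtrace_walk(struct frame *fp)
{
    while (fp != NULL) {
        printf("frame %p returns to %p\n", (void *)fp, fp->ret_addr);
        fp = fp->caller;
    }
}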
If you rewrite such code to use CPS (continuation passing style), one way to do this instead is via heap-based cactus stacks (aka spaghetti stacks), where each frame is separately allocated, so the stack is a linked list of frames instead of a contiguous array. Some folks make the mistake of assuming this means all calls change this way, but calls near the bottom of a call tree can remain as they were before, using a conventional contiguous C stack that unwinds to the list version upon return to trampoline. (This is not return to an event loop; find out what is meant by trampoline -- it's not complex compared to anything abstract, but I do find the writing style of functional language folks using trampolines almost impossible to follow when the implementation is not described imperatively in terms of which memory locations change.)

One way to implement fibers is via setjmp/longjmp after copying part of the stack to and from a location on the heap. Another way is by switching which frame list is now the "top of stack" without doing anything at all to the contiguous C stack. Schupke proposes to benchmark which is faster, because I claimed the bottleneck was memory bandwidth, where touching more cache lines is slower.

A good benchmark can probably be constructed, if realistic stack populations are represented, with a range of optimistic and pessimistic sizes likely bracketing what actually happens. You would need to implement the heap frame approach efficiently, using frames pooled in standard sizes that are always cache-line aligned (and preferably powers of two in size, aligned to addresses that are a multiple of that size). There's not much good in having a frame smaller than, say, a 64-byte cache line. Obviously cache lines vary to larger sizes.

A valid benchmark must also cause realistic cache-line pressure via memory load corresponding to an actual application where you want to use far more fibers than you can afford to have threads. Your benchmark should model live traffic from active connections in a server with many thousands of connections, where only some are currently active, but each is moving a lot of data through memory and therefore completely flushing the cache almost continuously. Assume it's gigabit-plus transfer rates.

Say a stack is typically about sixteen to thirty frames deep in some application, like a network stack that is perhaps implementing IPsec or deduplication. You can just go sample a population of stack frames in some application, or make some assumptions about distribution in frame count and size. In server-side network code, it's common to see a lot of temporary buffers on the stack for staging messages, logging control flow, analyzing IPv6 addresses, and consulting a lot of policy description. (When debugging I log a backtrace everywhere something significant happens; very deep stacks are common.) Small and shallow stacks in this context are rare to the point of non-existence.

Depth (in frame count) in a fiber stack would depend on where fibers are introduced: toward the top of a network stack, or at the top of some nested feature like IPsec and dedup. Closer to the leaf end of the call stack is less than half, but that end has larger frames because temp bufs show up more as the mix moves from policy to mechanism and binary formats get staged for concrete processing. Total number of fibers depends on how many things occur concurrently per connection activity: not very many for IPsec, but a lot for dedup with high-granularity fragments, like every 2K on average.
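To make the zero-copy alternative concrete, here is a minimal sketch of heap-allocated frames and a trampoline. All names (yf_frame, yf_step, and so on) are invented for illustration; this is not any particular library's API. The key point: parking a fiber is just saving its top frame pointer, and resuming it is just handing that pointer back to the loop -- no stack bytes move.

#include <stddef.h>

typedef struct yf_frame yf_frame;
typedef yf_frame *(*yf_step)(yf_frame *self);   /* returns next frame to run */

struct yf_frame {
    yf_frame *caller;   /* list linkage replacing the contiguous fp chain */
    yf_step   step;     /* code to resume for this activation */
    /* locals for this activation follow, in a pooled power-of-two size */
};

/* The trampoline: an ordinary loop, no privilege change, no setjmp.
   Each step runs to completion on the normal C stack (leaf calls stay
   plain, fast C) and unwinds back here before the next dispatch. */
void yf_run(yf_frame *top)
{
    while (top != NULL)
        top = top->step(top);   /* NULL means the fiber parked or exited */
}

/* Example step: locals live in the frame, so they survive a park. */
typedef struct { yf_frame base; int n; } count_frame;

yf_frame *count_step(yf_frame *self)
{
    count_frame *f = (count_frame *)self;
    if (f->n-- == 0)
        return f->base.caller;  /* "return": pop to the caller's frame */
    return self;                /* run this activation again */
}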
At one end you need a handful of fibers per connection; at the other you need dozens of fibers per connection. For a benchmark with any relation to reality in server loads causing high memory pressure, where cache lines turn over continuously, you need at least thousands of fibers whose sizes average somewhere between 2K and 16K, where 4K to 8K is plausible. This is exclusive of actual i/o buffer space, which is bigger, and isn't relevant to a benchmark, beyond causing cache-line pressure much greater than that caused by fibers.

Awakening a fiber makes its stack active. Normally this will be all or nearly all cache-line misses if a lot of i/o happened while a fiber was parked. For stack frame lists, this means the topmost frame has at least one cache-line miss. For contiguous stack frame arrays, this means every frame you copy before calling longjmp is a cache-line miss. I would expect the setjmp method to come in at more than an order of magnitude worse. About 50 times as many cache-line misses would not be surprising, but seat-of-pants guesses are often not very good. Maybe you can copy large swathes of memory using instructions with no effect on cache lines at the destination, but when awakening a fiber we still expect the heap-based source stack to be out-of-cache when read.

Edit: Regarding trampoline efficiency, it's okay with me that async calls dispatch up to several times slower when they might park a fiber. (Extra cache-line misses are worse than extra function calls.) Highest priority is efficiency at or near leaves of a call tree, which are far more numerous than inner nodes. Tight loops and local graphs of calls with bounded non-parking behavior don't get rewritten to dispatch via trampoline; instead they stay the same and operate as efficiently as an optimizing C compiler makes them, using the normal C stack that unwinds upon return to trampoline. Local areas of in-the-small code are normal C; only wide areas of in-the-large code need CPS rewrite around points where context switches can occur. It's a hybrid tactic of old and new stack style. Not every sort of trampoline requires special privilege, or costs more than any other kind of branch; the only extra cost in my model is branching more times and making more implicit calls.

I once wrote my own operating system from scratch in 80x86 assembler, so I am familiar with most of that, except the cactus stack. Really this never occurred to me as an efficient option, because trampolining requires a CPU mode change (to a higher privilege ring) and that causes an MMU page flush, which is seriously bad for performance. Exceptions can take 1000s of clock cycles compared to 10s for a regular function call. However, I now realise that an OS trampoline for exceptions is not the only way to do it, and it only occurred to me first because I did a lot of programming with that particular kind of thing.

To do what you are doing requires efficient trampolines; how do you do that? Are you loading the stack register directly with inline assembly? This isn't really an option for me, as I am aiming for portability. setcontext/getcontext offer the ability to set the stack pointer, but they are obsolete, whereas setjmp/longjmp are still in the standard. I think I could use setcontext/getcontext to make a trampoline, as Linux still supports them, hopefully implemented in user space -- probably not going to perform as well as inline assembly to just load the stack register.
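For reference when benchmarking, here is a rough sketch of the copy-the-C-stack technique under discussion (in the spirit of GNU Pth-style portable threading). The names, the downward-growing stack, and the address-of-a-local trick for approximating the stack pointer are all assumptions of this sketch, and a real implementation must also keep the scheduler's own frames clear of the region it restores. The memcpy traffic on every park and wake is exactly the cache-line cost being compared against the zero-copy frame list.

#include <setjmp.h>
#include <string.h>

typedef struct {
    jmp_buf ctx;     /* saved registers, including the stack pointer */
    char   *base;    /* stack address at fiber entry (high end) */
    char   *save;    /* heap buffer holding the parked stack bytes */
    size_t  used;    /* live bytes between sp and base when parked */
} copy_fiber;

static jmp_buf scheduler;   /* where parked fibers longjmp back to */

void copy_fiber_park(copy_fiber *f)
{
    char here;                          /* approximates the current sp */
    f->used = (size_t)(f->base - &here);
    memcpy(f->save, &here, f->used);    /* stack -> heap */
    if (setjmp(f->ctx) == 0)
        longjmp(scheduler, 1);
    /* woken: copy_fiber_wake restored our stack bytes before jumping here */
}

void copy_fiber_wake(copy_fiber *f)
{
    memcpy(f->base - f->used, f->save, f->used);   /* heap -> stack */
    longjmp(f->ctx, 1);                 /* every restored line likely missed */
}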
It sounds like the same thing I was thinking about, with pointer switching rather than copying, and not using a heap pointer for 'variables', so my previous comments are probably not relevant. You can probably attribute this to lack of time to think about it properly (and it being a while since I have thought about these particular things).

The main user class I think about is a developer, usually one who will see C source, before and after rewrite for fibers. Call this user Jim: a C dev who is working with multithreaded effect. Another class of dev may compile another language to C without caring about C any more than assembler; say this is Ann (my sister), who writes in Scheme, a sidekick of Ivy (who focuses on C++). Last is end user Tom (one of my three brothers), who doesn't code much, unless it's writing tiny HyperCard scripts. Tom understands no C whatsoever; Ann can scrape by in C; Jim lives and breathes complex concurrent server-side C. My main audience is Jim, until I write a Scheme or prototype a HyperCard; only then would I write docs for Ann and Tom. Jim will be able to see everything reified after rewrite. I only use the term fiber here at LtU because it's a standard term. In my docs I would only use fiber to explain a job, in more abstract terms.

Replying to Keean Schupke's "Thorn is fine":

"It was more the part about abstraction; that is, if as a user I can see the difference between a Thorn fibre (job) and a generic fibre, there is something wrong."

I can see that for Tom driving the GUI of an end user app. Ann needs to know both words. Jim has to debug job infrastructure when trying to prove it isn't jobs that are broken when some other bug occurs.

The term job has a long tradition in computing. I first heard it when submitting Fortran punched card decks in the 70's, before I learned Pascal in college, where we still used punched cards. (No, I learned Dartmouth Basic first in 1975 on a paper terminal. C I learned by 83.) To schedule a run you needed to add job control cards to a deck. Systems understood a job control language. The concept of job is essentially one run of a program. This also applies to how job is used to describe job control in unix shells. And when you think about it, an executing fiber is one run of a control flow, and that's what a Thorn job is. The idea that code might run as a control flow in the future is fiber: a blueprint for execution. But when actually running, it's a job. Sounds clear to me anyway. Having more than one way to say it also affords definitions that are not quite so painfully circular.

"because it doesn't matter to me (a Thorn user) how they are implemented, and my intuitions about fibres from other fibre systems should apply without modification."

That's pretty reasonable. The ways in which you might need to know how they are implemented would involve situations where you directly parked or unparked a job yourself, because of privileged information you had, not represented some other way by existing interfaces.

"In other words we are really discussing generic fibres and LWPs, and the platform being Thorn is irrelevant unless discussing the internals of Thorn."

Sounds good to me. The acronym LWP has two problems. First is a three-syllable letter within. Second is ambiguity if you pronounce it loop, which is said often when discussing code, so that doesn't work. It's a good name in print though.

"but sometimes an outside perspective can be useful, or shine light on things in a different and unexpected way."
Yes, this is one reason I like talking to people: I want to be surprised. I'm especially interested in new spin on old ideas too, if outright surprise can't be had.

With any language, I have the most trouble with the abstraction system of a person showing me sample code. In my favorite languages, people can write the weirdest looking things, just because of the way they think. (Often from cargo-cult copy-and-paste, when a dev doesn't know what it means, exactly.) If you haven't implemented a Lisp or used one much, I'd expect the weirdness of Lisp to often be from the person who wrote the code. For example, if a Haskell programmer wrote Lisp in favorite idioms, I'd have no idea what it does. Because Lisp is low in syntax, you're likely to get a naked view of the abstraction model.

"I would like to discuss this too. I think Erlang handles this in a very elegant way."

A lot of my current approach to fiber design started out from aiming to make C more like Erlang in some way. I'm not sure exactly when I started that. Maybe ten years ago, plus or minus a little. I haven't used it though. I'm more interested in the story about how things work, rather than playing with them, unless I need to get something done.

People famously make a lot of mistakes with shared-mem concurrency, I suspect partly because not enough diagnostics are included. I was once unable to get folks to add the identity of who held a semaphore, so we could diagnose deadlocks clearly. Usually the advice is: stop making mistakes! I think we should pay a tax in both space and time to reveal concurrency bugs often, even if sometimes there are false negatives (when you fail to report something is wrong). Just catching a good percentage of them in a run will weed them out.

I'm pretty interested in ensuring resources are released, especially when it comes to locks. Holding locks anonymously is not okay. I generally don't like shared multiple writers; I prefer one writer, who takes and executes requests to make changes on behalf of other parties, when a shared state model is really necessary. A central server is often effective when you can trust just one writer. Blame is distributed and diluted when multiple writers could have done something wrong.

[I think it only becomes really complex when database semantics are present, and you can't lose data. I actually don't want to talk about that. :-) It's just as involved as programming language discussions. Certain things must be done carefully with layers of backup, and can't be done with glib one-layer locking schemes. But we need to provide tools following clear and precise rules for folks wielding them to do hard things. I think mainly people want a guarantee locks cannot be leaked, because they can handle redundancy in guaranteeing copies for transactions themselves.]

For any given layer that makes conflict under concurrency possible, there's a suitable way to synchronize for access control. Inter-process, inter-thread, and inter-fiber concurrency all need mechanisms that accurately follow rules advertised by an interface. It's simpler in user space, so fibers are easy. If you're going to roll your own process and thread mechanisms, you have harder ones to define and implement. I'd prefer using an existing api though. For example, I only plan to code to a posix thread interface, though one abstracted through a vtable of function pointers in an env.
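As a sketch of the pay-a-small-tax idea for lock diagnostics: a wrapper that records who holds a lock and where it was taken, so a stuck waiter can name the culprit instead of deadlocking anonymously. The diag_mutex name and fields are invented for illustration over plain pthreads.

#include <pthread.h>

typedef struct {
    pthread_mutex_t m;
    pthread_t       owner;   /* valid only while held */
    const char     *file;    /* acquire site, for deadlock reports */
    int             line;
} diag_mutex;

#define DIAG_LOCK(dm) diag_mutex_lock((dm), __FILE__, __LINE__)

void diag_mutex_lock(diag_mutex *dm, const char *file, int line)
{
    pthread_mutex_lock(&dm->m);
    dm->owner = pthread_self();  /* never held anonymously */
    dm->file = file;
    dm->line = line;
}

void diag_mutex_unlock(diag_mutex *dm)
{
    dm->file = NULL;             /* small space/time tax, paid always */
    pthread_mutex_unlock(&dm->m);
}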
What do you want to hear about a vfs (virtual file system) as it pertains to hosting single-app-daemon-process collections of green processes and green threads? Do posts about system effects on PL design and runtime have a chilling effect on all other LtU discussion? (Why would a few messages about one subject undermine desire to talk about other things?) Is getting your hands dirty a sign of lower class breeding? Or is it that only math is cool, and anything that smells like operations research is vulgar? This comes from curiosity rather than intent to challenge (a desire to draw out rather than suppress). My best guess is that realistic design plans sound complex, even with a focus on simplicity, and this depresses a person who hopes thinking about exactly one thing will be sufficient. Does undesirable realism intrude more on setting goals, or on safe abstract world views? Of the two I prefer to upset any assumption something can be perfect, which is silly, but easy to believe by avoiding practical evidence.

Right now I'm going over standard tools for portable async i/o, like libevent and libev, and everything has problems, caused by model assumptions about memory management and control flow interface like callback organization. Of the two, libevent is worse; libev was designed specifically to address libevent's imposed model effects, but it still presents an event loop design and abstracts memory use no further than allowing plugins to replace malloc, realloc, and free, etc. And standard system types like descriptors are not abstracted (no one will ever need more than an integer to identify a resource).

At the very least, I want nearly every native type upgraded to a larger-sized struct that adds a namespace and a generation number as a weak capability, to catch lifetime and cross-domain mistakes with IDs. A 32-bit file descriptor becomes the 64-bit triple (fd, ns, gen), where gen is assigned at fd creation time, and that layer knows the gen associated with each fd, so it can be stringently enforced (no access without the correct gen); a sketch of such a widened descriptor appears at the end of this post. There also needs to be a way to clean up and release a resource belonging to a green process that goes away, so things like file descriptors cannot leak in long-running apps where processes get killed.

Apparently on Windows one must use iocp (i/o completion ports) to scale anywhere near as well as posix alternatives, but they have the odd character of reporting only accomplished tasks (your read or write was done) instead of saying a non-blocking operation can now safely occur. That is, iocp usage is async but blocking; while async is good, non-blocking is better. This likely means entry points for i/o in an abstract environment interface must be able to handle both "i/o can occur" and "i/o is done" (before and after i/o) in a common way, which might be an abstract valve interface for streams or datagrams. The after-io style of iocp can be tested using the before-io OS api via a shim, but the reverse is not true, as folks have noted. That might be why libevent feels free to impose a specific buffering interface, which is implied by the after-io style associated with iocp. That would force an extra copy for all i/o when the intended destination is an immutable refcounted data structure for zero-copy streaming effects.

Edit: no, I said that wrong. Async means non-blocking, so the problem is that the iocp api is early-resource-pinning, with fixed-size capacity. So buffer space pins early, and you can't dynamically change the size when ready without risk of blocking.
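Here is a minimal sketch of that widened descriptor; the field widths and the yd_fd name are assumptions for illustration.

#include <stdint.h>

/* 64-bit descriptor triple: a weak capability, not just an int. */
typedef struct {
    int32_t  fd;    /* underlying OS (or simulated) descriptor slot */
    uint16_t ns;    /* namespace: which domain serves this id */
    uint16_t gen;   /* generation assigned when the slot was issued */
} yd_fd;

/* The issuing layer keeps the current gen per slot and refuses stale
   handles, catching use-after-close and cross-domain mixups early. */
int yd_fd_valid(const uint16_t *gen_by_slot, yd_fd h)
{
    return gen_by_slot[h.fd] == h.gen;   /* 0 means stale or foreign */
}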
A programming language can show an OS-neutral interface for efficient async (and maybe non-blocking) i/o, provided a sufficiently hands-off general api is defined. Maybe most PL i/o designs assume an existing OS interface is wrapped to hide sharp edges, often aiming for a high level with an assumption that low-level performance won't be needed.

Topic change: queueing need not imply excessive buffering. There's an idea called "credit-based flow control" whose purpose is to effectively limit queue buffering sizes, by contracting ahead of time instead of assuming everything will be fine. Note how often control is overloaded to mean different things. Word order matters too. A control flow like a thread or fiber is a line of control that flows, while flow control is data flow that needs control. In both cases the first noun is primary, and the second noun after it modifies.

Controlling flow with credit is basically a pessimistic strategy for bounded buffers (assume it won't work until reserved space is signalled), compared to blocking when buffers become full, which is a more optimistic strategy (assume it will work and block-or-park when a limit is hit). In both cases, green or OS condition variables can be used; for fibers, both strategies involve parking until a green condition variable signals, but one approach uses more buffer space than the other. Complexity comes from using a window with credit systems, to avoid stalling a producer just to get permission to go; latency to clear queues can be smoothed by window size and credit extended.
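A minimal sketch of such a credit gate between one producer fiber and one consumer, assuming the green condition variables described above; green_cond_t, enqueue, and message are stand-in names invented for the sketch, not an existing api. The producer may send only while it holds credit, so queue depth is bounded by the window instead of by luck.

typedef struct message message;
typedef struct green_cond green_cond_t;    /* parks and wakes fibers */

void green_cond_wait(green_cond_t *c);     /* park until signalled */
void green_cond_signal(green_cond_t *c);
void enqueue(message *m);

typedef struct {
    unsigned      credit;   /* messages the producer may still send */
    unsigned      window;   /* credit granted per replenishment */
    green_cond_t *more;     /* producer parks here at zero credit */
} credit_gate;

void producer_send(credit_gate *g, message *m)
{
    while (g->credit == 0)             /* pessimistic: no credit, no send */
        green_cond_wait(g->more);
    g->credit--;
    enqueue(m);
}

/* Consumer reports drained queue entries; a real system would batch
   replenishment up to the window size to smooth producer wakeups. */
void consumer_drained(credit_gate *g, unsigned n)
{
    g->credit += n;                    /* extend credit as space frees */
    green_cond_signal(g->more);
}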
Most discussion is fun when both sides are willing to contribute near equal amounts of work in writing. I'll let you know how it goes. I must want a different api for some reason, which doesn't map cleanly. My first note implies I plan use iocp on Windows too. The Rust runtime based on libuv didn't scale, so they threw it out of the main runtime into a separate 'libgreen' and just went with kernel threads. They discussed this on the reddit thread about the Rust 1.0 alpha release. Rust's future seems to be a C#-like await/async function annotations that performs a CPS transform. This is more inline with their goal of a zero-overhead FFI with C. libgreen requires expensive stack switching for FFI. I am interested that you found the performance of green threads to be a problem. In theory cooperative scheduling in user space should be lower overhead than preemptive switching in kernel space. Were you using Cactus-stacks to change contexts or copying the stack? It's not me, this was the conclusion of the Rust devs based on data. The problem wasn't green threads per se, it was the interaction of some competing requirements. They wanted a zero-overhead FFI for calling out to C (meaning raw pointers), and they didn't want to require GC so there was no way to relocate pointers on the stack or to support cactus stacks. Thus the FFI required a separate C stack from the Rust stack, inducing user space context switches for FFI calls (as I understand it, which isn't uncommon). Furthermore, they said something about libuv not being multithreaded, so perhaps it doesn't implement work stealing, which would be needed to properly scale above a certain point. I can't find the thread I'm referring to after a quick search, so you'll probably have to look up the discussions on the Rust lists. When I'm making prototype or throw-away code, I usually regard Windows as irrelevant. It's insufficiently documented, development tools are not installed by default, and most of its I/O stuff is, as you've been noticing, obtuse. It's not worth my time when I'm just trying to get something to work. And it seems like most new stuff requires asynchronous I/O, which is exactly what it's worst at. That said, I/O and system interaction in general seems to be way harder than it needs to be in virtually all computer systems, and doing it in a semantics-neutral building-block compatible way is especially hard in third-party libraries for those systems. It's hard to even check for anything's existence, for example, unless compiled in the presence of an assumption that everything required to support its existence is already available. In too many cases you have rather harsh options. You can attempt to use it and crash if it's not available, or not attempt to use it and never learn whether it's available. Or, you just can't build the library at all unless everything that would be necessary for it to become available is already in place. When I'm at the bottom level of a project -- putting things in place that will become the building blocks for other things -- I want as close to neutral semantics as I can get, and most libraries don't provide it. As Rys says, I don't want anything that interferes with prototype structure. Most libraries of these things require jumping through additional hoops to support conveniences that the prototype structure will not be taking advantage of. That would be okay except that they almost universally follow the use-it-or-break problem I outlined above. 
They either provide no way to opt out of those conveniences which would release me from the requirement to still have them available, or provide no way to opt out of them that doesn't require me to still jump through those hoops and have available all the infrastructure that they'd need. It's sort of like driving a Jaguar - the car can be a marvel of engineering, but if the airconditioner breaks, you'll wind up vapor locked at the side of the road, because that damn airconditioner, whether you want to aircondition the interior or not, is necessary to the function of its engine cooling system! (I agree and this is how it works out in my situation, for folks like Schupke.) I want a daemon prototype with no planned throw-away code, as sample code and as a harness for hosting experiments or driving other things via connections. I won't bother with Windows at first, starting with Linux and MacOS, using primarily posix interfaces plus as few other things as possible. I can simulate Windows iocp with an after-io shim on top of the before-io interfaces on Mac and Linux, exposing the same interface one would use on Windows. My approach involves at least two libraries: one for fibers and green processes, and another using it that builds up the whole daemon OS process prototype. Pretend momentarily that libyf is a fiber library and libyd is a daemon library, which depends on libyf. Both of them see the outside world using only abstract interfaces: C-based objects using vtables filled with function pointers that abstract everything, so no dependencies exist on system headers or anything outside those libraries. The smaller libyf won't know how an OS process is organized, and the larger libyd won't know how i/o works, for example, only that there's an interface the host app can satisfy. A host app where main() is actually located puts pointers to functions (satisfying the interface and contracts) into vtables used when constructing the daemon runtime. This is where dependency on specific system calls is injected. Slots remain null (zero) when no way to satisfy api exists in an app; each platform plugs in the way things work for that OS. If anything critical is missing, the daemon can't work, and runtime failure occurs if you execute anyway. This sort of organization is sometimes called "don't call us, we'll call you", otherwise known as the Hollywood principle (HoP), as well as several closely related names like Inversion of control (IoC) and the Dependency inversion principle (DIP). The OS is not in charge, in the sense of driving things, other than satisfying requests for i/o when they occur. (All OS signals propagate via self-pipe pattern.) The daemon itself is the driver, which makes only calls using api it defined, which the host app satisfied by installing function pointers conforming to expected behavior. Usually I work bottom-up when coding, but with at least one top-down phase to meet in the middle. This i/o architecture segment of design is the top-down phase, but with control inverted so the daemon is in charge as far as specification goes. Anything that interferes with this prototype goal is throw-away code because it cannot be kept long term, with dependencies minimized to suit the daemon. The identity of OS resources cannot be under OS control, as far as the daemon shows to code that it hosts inside, because the daemon will simulate things that don't exist in the OS and use the same ID space, but with namespace extensions to prevent ID collisions. 
If one daemon peers with another, they can expose descriptors to each other, so the same OS descriptor appears more than once, but additional disambiguation says which address space actually serves a specific descriptor. It's basically illegal to make system calls in the daemon, and if you try, it won't work, because IDs like descriptors will be obfuscated or swizzled. (Yes, in the same address space you can circumvent this, but if you're going to subvert the model there's little reason to use it.) This calls for whole-program transformation of code hosted in the daemon, done either by hand or by tool, so only daemon-defined api is used anywhere inside. Exceptions can be white-listed at compile time, and this is a good idea for anything simple, bounded, non-blocking, and fast that works fine as a leaf call. A primary avenue of interacting with the world outside is via sockets, or anything else the daemon can change into dataflow resembling a socket.

So you are defining an API for IO and will implement a backend for each platform. Leaving Hollywood aside, this is what I suggested several posts ago when I suggested you might implement your own library, though I probably didn't explain it very well. I meant a library as a method of code organisation, not something you necessarily have to publish separately and maintain, although if it were me I would put it in a separate source code repository with its own unit tests from the start. After this initial suggestion I took your response about not needing another library to imply that you were going to use an existing async IO library, hence why I mentioned libuv. As you can see, my initial response was the same as yours: build a minimal and natural abstraction. I suppose if you want to load platform IO drivers at runtime a vtable is necessary, but I wouldn't bother. Instead I would have a static API; after all, you need to recompile for different CPUs on different platforms anyway. Unnecessary indirection via a vtable is bad for performance, and does not change the actual contract definition.

Sorry for misreading intent too far toward new standalone aio polling library support. An inverse relation holds between velocity and inter-person sync (rephrasing Fred Brooks' idea of communication tax). I hope what I come up with seems clear. Any code style used that people hate can be fixed by a code-rewriter pass. Often the opposite of what I say works too, if you revise contracts to match; but saying each one drives people crazy.

The host env abstraction can be static, yes, and that's likely for some methods folks want to dispatch via macro instead of vtable indirection. A header for the env interface can be hacked to statically dispatch directly to OS api instead of using the env vtable. The fully general way might be better for a shared library. That way of saying it (all vtable all the time for the world outside) helps clarify that a daemon merely has references to things defined by the host env, logically outside so it's replaceable. I know devs who seem to believe you can't replace standard entry points (yes you can), because they never see alternatives to naked direct calls. The hard alternative is vtables; you can always macro it back to direct. It's also a common belief that direct (without vtable) is measurably faster, even when a system call with much higher overhead will occur.
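To illustrate the static-dispatch escape hatch both sides are describing, a header could route each env call either through the vtable or straight to a direct function chosen at compile time. Everything here is a sketch with invented names, and the contract stays the same either way:

    /* yd_env_call.h -- illustrative only */
    #ifdef YD_STATIC_ENV
      /* direct: resolved at link time, no indirection, host function hard-wired */
      long yd_host_read_bytes(struct yd_env *env, int id, void *buf, long len);
      #define YD_READ(env, id, buf, len)  yd_host_read_bytes((env), (id), (buf), (len))
    #else
      /* general: dispatch through whatever vtable the host app injected */
      #define YD_READ(env, id, buf, len)  ((env)->vt->read_bytes((env), (id), (buf), (len)))
    #endif

Daemon source reads identically under both builds, which is what makes "you can always macro it back to direct" a cheap promise; a shared-library build would simply leave YD_STATIC_ENV undefined.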
Twenty years ago devs argued about whether C++ must be slower than C just because of virtual function call overhead; it's a lot less relevant now, when cache line misses are a bigger deal than indirection. I focus less on libraries that cannot change, and more on source code that can be completely transformed by a dev building an app, by running it through a tool that rewrites it to a form better suiting what is desired at runtime. This POV is unusual, and lets you do odd things like publish multiple versions of a library (and the rewriter too, plus rules, telling folks to give it a go themselves). Test suites you can still run after rewrite seem more important than meticulous standardization. (My intended audience is someone like me, perhaps much younger, who finds this useful.)

A while ago I figured out some rules affecting aio library API, as induced by resource reclamation requirements for a lightweight process (lwp, also known as a lane). A process should release all resources held when it dies, and so should an lwp. So memory, locks, and IDs (descriptors) must all be freed if held on exit. After I considered union file systems next, I felt no need to go back and rethink, so the plan seems stable.

A fiber holds only green locks, because only the runtime interfacing with threads has any OS lock issues, unrelated to fiber lifetime. So a fiber unlocks each green lock held on exit. Further, a fiber can park on at most one green lock, so it can be a member of at most one wait list. So if present on a wait list, a fiber removes itself on exit. Thus no stale pointers can remain in the green lock runtime when a green process is killed, and no locks can be held by a fiber after it goes, so killing cannot induce deadlock.

The memory story is short: incremental refcount release in a cleanup green process. The location of all handles is known in reflection metainfo for perfect collection. No destructors run, just return of storage to allocating vats. Release is one-way dataflow, so errors that would normally kill an lwp, if any, are just counted in statistics.

IDs allocated by an env (environment) are also tracked by the env when release must occur, and the env knows every ID held by a green process, so when it dies the runtime releases them all, or tasks a cleanup entity with this. In effect, the env implementation is required to maintain an index of all IDs held, indexed by lane, so they can be freed en masse. Each index entry has enough metainfo about a resource to release it, even if the resource is owned by some remote entity outside the current OS process. Among other things, every resource has an associated generation number, so it can only be released when the gen is correct (so a fiber cannot double free anything). Every ID is owned by some particular lwp; if two different lwp instances need to share an external resource by ID, one of them dups the original ID so it has its own private ID that needs release. Presumably the common resource is refcounted by index entries, so the last one out turns off the lights. While you can use another lwp's ID for a resource directly without dup, it will become invalid when the original owner dies, thus killing you on first use afterward when you depend on it being valid.

As a consequence of this scheme, no ID need be large in the abstract API presented by an env, because indirection is used any time an ID (say for something remote) would need to be larger.
Actually, indirection is always used, so any ID needing a lot of bits can live in the env's ID table, while a fiber sees only a local ID for the table entry. In general, no more than 32 bits more is necessary per abstract ID, so it can include 16 bits for gen and another 16 bits for namespace. Any descriptor normally described by an int in OS system call interfaces can be replaced with the following opaque 64-bit struct, also passed as a scalar value. We omit unique namespace prefixes here for brevity and clarity:

    struct id64 {       /* opaque abstraction of int resource descriptor */
        u32 id_key;     /* key identifying resource within the namespace */
        u16 id_ns;      /* namespace scope for the interpretation of key */
        u16 id_gen;     /* (nonzero) low bits of key's generation number */
    };

For example, when this represents a local OS socket descriptor, the value of id_ns might say how to extract the descriptor from id_key, or it might only describe a table entry where the socket descriptor actually lives. Field id_gen acts like a weak capability, to protect against memory corruption, or accidentally targeting a resource owned by some other green process. There's no way to tell it apart from any other sort of stream, unless you make a call that tells you how this particular resource is managed, if for some reason you care. Any resource needing a lot of bits to represent correctly (in the local process address space) can hide that via table indirection, so this 64-bit ID is the interface everywhere. So there's no reason an aio library need use a bigger ID than this, even when long chains of plumbing are involved. You can ask for io notification, but you needn't see how it's done or have any reason to care, unless you implement an inspector for debugging.

Suppose you spawn green processes to service an actor protocol, then exchange messages using an ID with this 64-bit format to name an actor. Some range of values in id_ns might name local threads hosting lwp instances, where id_key names the target green process. Another range of values in id_ns might mean an actor is remote, perhaps an lwp running in another daemon, whose full addressing path is spelled out in a table entry described by the ID.

In effect, the purpose of uniform ID use for descriptors is loose coupling: to prevent dependency on physical representation, and to put cooperative firewalls between mutable lwp fiber storage space and environment-managed resources. Not very many bits are needed for scaling purposes, because indirection creates flexibility to use IDs as big as necessary internally.

The HN topic "84% of a single-threaded 1KB write in Redis is spent in the kernel" has a number of interesting comments about user space scheduling as performance optimization.
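Pulling those pieces together, here is a sketch of the env-side check that makes id_gen act like a weak capability. The table layout and function are invented for illustration (only struct id64 comes from the discussion above), and it assumes the same u32/u16 typedefs:

    struct env_entry {
        u16   gen;       /* current generation; bumped on free so stale IDs miss */
        u16   ns;        /* namespace tag that must match the presented ID */
        void *resource;  /* whatever big representation hides behind the ID */
    };

    /* Returns null for out-of-range, stale, or foreign IDs, so a fiber that
       double frees, or guesses another lane's ID, gets a refusal rather than
       someone else's resource. */
    static struct env_entry *env_resolve(struct env_entry *table, u32 cap,
                                         struct id64 id)
    {
        struct env_entry *e;
        if (id.id_key >= cap)
            return 0;
        e = &table[id.id_key];
        if (id.id_gen == 0 || e->gen != id.id_gen || e->ns != id.id_ns)
            return 0;
        return e;
    }

Here id_key is read as a direct table index, which is only one of the interpretations mentioned above; an id_ns range could equally select among several tables, including ones whose entries spell out remote addressing paths.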
http://lambda-the-ultimate.org/node/5105
Azure HPC Scheduler: Running an SOA client outside of Azure

This question is in regards to the recently released Windows Azure HPC Scheduler. I've been able to deploy a simple SOA service within Azure using the AzureSampleService project. The included sample code (SOAHelloWorldAzureClient) is made to run from the head node. How would I modify this code to run from outside of Azure? I've tried to use "mynamespace.cloudapp.net" as the head node when calling Session.CreateSession, but I get the error "Fail to get the cluster name". I've specified the correct username and password. I've also tried setting the TransportScheme to TransportScheme.WebAPI. My understanding is that CreateSession makes a call to a WebAPI exposed by the Azure Scheduler. Is it possible that this WebAPI isn't mounted by the AzureSampleService project by default?

All replies:

- I've resolved this issue. Looking at the samples, I needed to set the ServicePointManager.ServerCertificateValidationCallback property. Thanks for your help.
- Proposed as answer by Jeremy Espenshade, Tuesday, November 22, 2011 22:31
https://social.microsoft.com/Forums/es-ES/20ab7be3-3c24-417e-bd1e-b3b11d49528c/azure-hpc-scheduler-running-an-soa-client-outside-of-azure?forum=windowshpcdevs
When you look at the Manage Add-ons UI, you'll probably feel comfortable with it quickly – it looks a lot like a Windows File Explorer window or the Control Panel in Windows Vista. You choose a category of object types from the left to view that list on the right. Select any item in the list and the details pane at the bottom will display information about the selected add-on. Most changes you make in Manage Add-ons take effect immediately, although some (like disabling a toolbar or explorer bar) might still require you to restart Internet Explorer.

… with lots of improvements over IE7. Add-on authors do not need to make changes to existing controls for them to continue to be managed in IE8. However, with the richer set of information and controls put in the hands of the user in IE8, control authors might wish to provide more detailed information with their controls. While the same set of information (such as publisher or version) is available in IE8 as was available in IE7, it's now easier for users to view it. Add-ons without sufficient information (like an empty publisher name or version number) are often removed or disabled by users. Add-on developers should read this article and this blog post about ActiveX best practices for more information on how to properly develop IE add-ons.

We've kept the functionality of the management experience for Search Providers in IE8, but moved it here. IE8 helps you to quickly see which Search Providers are installed, which is your default, and where it is sending information when you submit a search. Additionally, you can change the order that Search Providers are listed in (IE7 always sorted them alphabetically). Internet Explorer 8 continues to support the OpenSearch standard for Search Providers. You can read more about OpenSearch here.

Activities

Activities, which are new to IE8, are also managed from the Manage Add-ons window. Just like Search Providers, you can view, manage, and remove installed Activities, find new Activities, and learn more about Activities directly from this window.

Managing Add-ons in No Add-ons Mode

IE7 and IE8 support "No Add-ons Mode," a troubleshooting mode. When you run IE this way, no 3rd-party code runs, which allows you to do things like disable troublesome controls or repair Windows via Windows Update (which is why that control is allowed to run in this mode). You can start No Add-ons Mode in a few ways:

So much attention to add-ons... and no attention to the options dialog, or the prompt/print dialogs. What gives?

Ah, so now I have an easier way to turn off BHOs when they infect my computer, instead of removing them with anti-adware software.

@Aubrey: A much better idea is not to install such junk in the first place. Install Windows Defender if you need Windows to help you identify unsafe downloads.

These improvements are great! Thank you very much. This is a late request, but it would be cool if you could somehow make profiles of enabled/disabled add-ons and tie them to certain sites. There are some sites which require add-ons that do not play nice with other add-ons. I always have to go in and enable/disable combinations to get these sites to work. Great work so far, the IE8 beta is working well for me.

@Steve: Thanks for the feedback. Have you tried IE8's per-site ActiveX control feature? You can now allow or disallow ActiveX controls only at certain sites, instead of allowing permissions universally. That way sites that need a given ActiveX control for legitimate reasons can be given permission to run them, but sites that don't can be blocked.
The profile idea is interesting, I'll think about it. Thanks, -Christopher

This is a vast improvement over the maze of finding things in previous versions of IE; congrats again on the great improvements added to IE8 thus far. What would be cool is the ability for the user/admin to make a text comment about what the add-on is. This should be completely restricted from software vendors. So for example, I can mark something as an antivirus, but if malware somehow got onto a client's system it couldn't mark itself as something. Good work.

Firefox should also do the logical thing and move the search manager to the Add-ons manager.

The add-ons manager seems to use 16-color icons; is this a known bug?

Looks great! Please kill off IE6. It shouldn't be in XP SP3. WU should force it off everything. All these improvements are better when more people actually use them... Sorry for the rant... looks great!

Now all that's needed is for MSIE to lower the bar for add-on development itself; e.g. pure script-based add-ons instead of having to resort to .NET, COM or other sorts of Windows-proprietary programming and having to deal with registry settings and other install complexities...

@Kris: We've looked at this, and don't believe it's our bug. We're asking the binaries for their default icons, and that's what they're returning. I suspect that the authors need to make changes to give us higher-resolution icons when asked. We certainly support high-res icons, and we're not doing anything special to avoid them.

@Christopher Vaughan [MSFT]: Isn't each icon an array of icon images for each color-size pair? If the authors are not making them so that the "best" ones are default, can't each image of the icon be queried and checked so the nicest one can display? Windows Explorer has little problem with this, so the IE team should be asking the "WE" team for some advice.

When you reduce the width of the Add-ons window, the help text at the top gets truncated. The big blue title is redundant, as the title of the window is already in the _title_ bar. The 'category' column in the Activities list is redundant, as everything is grouped into categories by default. The link to find more add-ons is very obscure and probably shouldn't use an ellipsis, as it's just a normal link like the one next to it (Learn more about add-ons). The light blue text for headings is very faint and has no contrast (obviously). The listing order column in the search providers list is/should be redundant. When clicking "Find more whatever", a new IE window opens instead of using an existing window. The UI for reordering search providers is unconventional and rudimentary. I mean, it uses plain 'hyperlinks' for 'Move up' and 'Move down'. What about drag and drop? Or at least some up/down arrows. Still in the search providers list, the 'Alphabetic sort' link is a bit confusing; you should just be able to click the 'Name' column to rearrange the providers by name. The usability of the search providers UI gets worse when you order the list by name, because then the 'Move up' and 'Move down' links don't do what they say. I am absolutely flabbergasted with the search providers list right now; I don't know what's happening.

Still waiting for a good extensions/plugin system. Yes, yes, there is a complex ActiveX system that very few people have figured out how to use, but NOTHING on the scale of Firefox. Which is just plain ABSURD considering that MS has all that technology at its disposal.
True, it isn't all dedicated to the IE team, but frankly it doesn't need to be. A good dotNet-based system would go a LONG LONG way to improving things, and perhaps even improve on the original promise of dotNet. It doesn't HAVE to be rocket science, but for some reason the typical MS mentality can't seem to break away from doing things the hardest possible way. Please, PLEASE consider finding a way to make it easy to write dotNet extensions for IE. If you don't understand what I mean, check out Firefox. (Come on, we KNOW you've looked at the competition - admit they got a few things right and be willing to follow their lead!) And while you are at it, how about a SANE freaking way to plug in a download manager? Nice and simple? "Here's a program path, run it with the URL..." None of this ActiveX/COM plug in here, register there, support these badly documented interfaces here, pray there, fail randomly...

In my experience the eighth version is very good and more than wonderful. Thanks for your efforts.

Hi! I second the request for a managed plugin system in IE 8. It would be really nice if the IE 8 team adopted the common Add-In framework already available in the .NET 3.5 libraries. Beat the competition - just do it! go go go! (cue rally ..)

It's awesome to see such great improvement in IE. I have a small suggestion/request. Please improve the find option. Replace the existing not-so-good dialog box with a trendy and slick find bar.

Internet Explorer 8 is better than 7, and I hope the beta period doesn't drag on.

The filter select field obviously doesn't fit well in that side panel; move it to the top, above the border. The headings appear to be a non-standard (for Vista) colour; other Vista windows have a much darker blue. There are two close buttons; one should be enough. Here's a few of these ideas put into a mock-up:

@Christopher: On IE8 Beta 1 on XP we see an issue with one of our controls (on corporate clients in its millions) and some others such as Shockwave. Can't uninstall from Downloaded Program Files: "MeadCo ScriptX will be permanently uninstalled": OK. "Failed to remove MeadCo ScriptX". We see occasional crashing in occache.dll to accompany this. This also breaks our .msi installation, which looks for and uninstalls any existing .cab-based install before proceeding. More detail is available if required, including other issues with uninstall on IE8 Beta 1 on Windows Vista. Regarding "Can't uninstall from Downloaded Program Files": sorry, I should have said that we already submitted a bug report on Connect, so this is for extra visibility.

Glad to see the updates. I hope we continue to get improved developer add-ons. I choose my primary browser based on the developer add-ons available, and use other browsers for testing.

I've commented on a couple of other IE blog postings, but this one seems the most promising. I noticed Eric Lawrence's link and his statement that .NET "extensions" have been possible since .NET 1.1/IE6, but only through a six-year-old band object wrapper on codeproject.com. It doesn't do much to hide the underlying COM plumbing from the .NET developer. Is there a plan for the IE team to create some supported plumbing for managed band objects and BHOs? The Office team did something like that (a managed shim project) for Office add-ons before VSTO was available, and it definitely spurred Office development. I have some IE add-on ideas, and call me lazy, but I'd love to be able to try some things without needing to learn C++ and COM.

Could we have a download manager please?
Even my cell phone's browser supports pausing and resuming of downloads...

As Rowan's comment details, the IE UI is still plagued by countless small problems which add up to a very poor user experience. Rowan's mockup is an improvement, imho. I have additional observations. At the start of the first screenshot, all this text is redundant: "View and manage Add-ons that are installed on your computer. Disable Add-ons to troubleshoot problems with Windows Internet Explorer."

* The window is titled "Add-ons" in the title bar.
* You can see the Add-ons in the ListView without being told you can see them.
* Where would they be installed other than on the user's computer?
* The term "troubleshoot" is only going to be known by people who understand that disabling Add-ons is one way (not the only way) of fixing a problematic install of IE.
* Indeed, advanced users are the only people who will find this window in the first place and have any idea what any of the information in it means.
* There's a "Learn more about Add-ons" link in the bottom right if they want to know what useful things they can do with this window.

Removing the heading text (as Rowan suggests) and this unhelpful explanatory text would salvage a lot of vertical space. It would also make the window better match successful Windows conventions, i.e. not covering your windows with text which is not actually going to help people.

It might look like Windows Explorer to you. It doesn't to me. There's no menu bar, toolbar or status bar. There are lots of command buttons, text, headings, side-by-side text, and the ListView does not take up nearly all the space. As there are only 3 types of add-on, you could use tabs. That would:

* save a lot of space to make the window usable at narrower widths (especially important as you need wide ListView columns to view all the details);
* be more consistent with other settings windows in IE and throughout Windows (at least in Windows XP);
* and be a more familiar way to move between the views (the Folder sidebar in Windows Explorer is only used by a few power users).

There is a great deal of wasted space around the bottom of the window. In the first screenshot, the "Status" column is redundant due to the "Enabled" and "Disabled" grouping. I would expect "Enabled" add-ons to be displayed first. Those are the ones which may have done something that prompted you to want to manage them. The window is resizeable, which is good. But there's no Maximise/Restore button, Minimise button or control box. The lack of a control box prevents resizing for users who cannot use a mouse, AFAIK. Oh, and make it look like an XP application in the XP build. Obviously. These user interface and accessibility issues are elemental. It's rather sad to see these mistakes in brand-new UI for a version 8 application which has the resources of Microsoft behind it. Even in a beta. But it's very good to know someone is thinking about these things, implementing some of them and then blogging about them. More attention to detail and thinking more clearly about what actually benefits users will improve things further, imho.

Gotta love Ben's comments. Please do what he says, it makes sense.

You might want to look at the choice of colours used in that dialog under XP. If, like me, you have a light highlight colour, the headings become virtually invisible. It really looks a mess under XP as it matches nothing else in IE or the system.

I love everyone's enthusiasm and passion around this feature.
We have some bugs and issues in our B1 UI that we're working on, but what's in B1 is basically our final design - we don't have plans to change our layout to use tabs instead, for instance. In general what I'm hearing is that people like the technical improvements, which is great.

@Christopher Vaughan [MSFT]: Why release a beta to the public if you are not open to constructive criticism on design flaws and usability issues? This is almost as bad as the core IE folks that think that since the prompt() dialog kinda works, it shouldn't be touched, even though it is a usability nightmare! (bugs 109 & 139) I just don't get the feeling of community involvement at all from Microsoft. The DOM fixes in IE8 Standards mode are about the only good thing in IE8 so far!

What I don't like about Microsoft: incompetent lies. This blog starts with a picture and a statement: "When you look at the Manage Add-ons UI, you'll probably feel comfortable with it quickly - it looks a lot like a Windows File Explorer window". Hands up everyone: does this dialog look anything like Windows Explorer? Answer: no, it doesn't. It has no tool-bar-strip-thingie at the top. The left side doesn't have a favorite links/folder pair. The 'detail' area uses different algorithms for displaying everything. Windows Explorer, for example, carefully shows me the entire date/time of a file (Date Modified: 5/2/2005 12:27 PM). This one uses a totally different string to represent a date/time ("Sunday, October 22, 2006...") which is then cut off. The list is split into enabled/disabled. Windows Explorer never does that. I'm not saying it's a bad interface (except for the date/time thing -- you should be consistent). But it's sure as (bleep) not Windows Explorer. Hence: incompetent lie. It doesn't look like what you say it looks like. And to make it easy for everyone to tell, you put a giant picture of it right there for us to see.

I like the improvement. But just like other people said, can we have scripting support? It takes too much trouble to make an add-on. You guys are not thinking of leaving it to the Live Toolbar team to do this, are you? And I certainly hate COM objects because I don't know where to delete them. If it were just a scripting object, you could simply store the text code file in an easy-to-find "Add-On Script" folder. If I don't like it, I can delete the code and I am 100% sure that means the program is gone. Rather than deleting a Yahoo toolbar using the uninstaller and finding out that the toolbar is still there. It would be great to have something like VBA in Excel for IE.

"We have some bugs and issues in our B1 UI that we're working on, but what's in B1 is basically our final design - we don't have plans to change our layout to use tabs instead, for instance." That's ok, I only focused on how to improve the UI in ways that wouldn't require an overhaul of the design. I haven't touched on the details pane yet because it seems so useless; my only suggestion would be to remove the pane completely and present the details another way. But that would require too much work. "I love everyone's enthusiasm and passion around this feature." Should be: I love everyone's concerns about our UI.

- Wow! Way to be a team player! If you have no plans to fix it, then why bother releasing a beta? Just go straight to RTM so that we can all suffer with this. 1.) Most users aren't on Vista yet (or avoid it like the plague), thus it isn't familiar at all. 2.) Why show disabled ones first? This typically isn't what I care about, because I disabled them!
Just show them in context, grayed out. 3.) Could you use any more whitespace at the top of this dialog? Surely we can reduce the table view of add-ons to something with only 5 results (/sarcasm). 4.) The Disabled/Enabled "fieldset" groupers are unnecessary; the column indicates their status. 5.) Whitespace at the bottom of the screen is also abundant, at the cost of hiding details with ellipses...

Ben & others: This UI is more like Vista than XP. Obviously they're making IE work on XP, Vista, and Win7, and biasing toward the future rather than the past. Hence the more explanatory text and greater whitespace - more like a task dialog than a message box. Clearly they have no interest in making it "more like XP", which by the time IE8 is in common use will be two versions back. Also:

* Few people use or see help links, so while they add some value, the dialog needs to be usable without them. This is where Vista was heading...
* Three types of add-ons today, yes, but almost certainly more tomorrow. It's obvious they designed the new dialog for future expandability. Another three or four types would be easy to add. Not only are tabs like those in Internet Options archaic UI elements, once a tab control has enough items to need multiple rows or scrolling it becomes a UI disaster.
* The bottom of the window varies from having very little to having quite a bit, depending on what's selected. Sometimes it's downright crowded.
* If you have grouping enabled based on Status, then yes, the Status column is redundant. Otherwise it's not. Same as any other listview with grouping support.
* If you're a keyboard user, then use the system menu (Alt+Spacebar) and then use Move or Size from there. Now you know a little bit more...

User interface design is not "elemental", it's super freakin' hard. What's obviously easy is to merely dissect a blog post and a set of screenshots, probably without even using the feature.

UI dude: multi-row tabs are unwieldy. But that fact, along with the age of tabs, does not mean there's no place for tabs at all. Quite the contrary -- tabs in a single row are and remain a far more intuitive control than list selectors. Any good UI textbook will tell you that.

So long as we're talking about add-ons, here's something you can pass along to the Windows Live Toolbar engineering team: it causes IE to behave sluggishly. Often, when closing an IE tab or window, there'll be a noticeable delay of several seconds where nothing happens before the browser actually exits; occasionally, there'll be a much longer delay when starting IE before the browser actually loads (I would assume that the Toolbar is attempting to retrieve data from the Internet before it starts; if the Toolbar has trouble communicating with its servers, then the browser simply hangs until the Toolbar is finished loading, then accesses its home page of MSN). Disable the toolbar, and these delays go away. I've noticed this behavior in both IE7 and IE8, and am running Windows XP SP2.

This looks quite good, except for substituting lists for tabs. Since the amount of choice in the left list is clearly limited, it seems to me as well that tabs would be a better and more intuitive choice. Functionality-wise it looks good. Now to the off-topic parts (only because these topics have never been discussed on this blog so far): Any news on what IE8 will and will not be able to render:

- proper support for application/xhtml+xml? (Including the refusal of non-well-formed XHTML)
- SVG Tiny?

Will the following be fixed:

- proper ARIA support without proprietary twists?
Especially concerning accessibility, please be interoperable!

- proper, synchronous XmlHttpRequest without proprietary twists?
- also, xhtml-like namespaces should be removed from html parsing mode, or such an extension should be discussed within the HTML5 WG

Does it make a difference whether it's tabs or a side thingy? Most users have a screen space of 1024x768 at LEAST, so they could probably care less. Though I completely agree with putting the enabled add-ons first and the disabled ones last, and with removing the status column because they're grouped anyway.

I would like to be able to maximize the add-on manager window, and also, I don't like the fact that some of the text within the add-on manager just cuts off, rather than wraps.

Would definitely use tabs over the list on the left side. It's an early (in fact the first!) beta, so there is plenty of time to fix this bug. If tabs are used, there is much more real estate available for information about each add-on, so cropping the text won't be necessary. The "Vista" view, versus the column view of exploring before Vista, was a usability downgrade. Custom columns gave users lots of flexibility, sorting and personalized views, with nothing truncated. In this new dialog, with the Vista-like view, data is missing, data is truncated, and you can't establish a quick "grid" view of everything in your browser. Definitely a downgrade in user experience. Final note: to really improve the experience here, a subscription to a blacklist of add-ons would be ideal! This way spyware, viruses, malware and other toolbars/BHOs could be completely avoided, since they would never get the chance to install! (Or you could allow them to install, but have them disabled by default.) It's time that Microsoft stepped forward to limit the malware out there that only installs on Windows, due to a lack of foresight in the security realm. A simple Google search will bring up all the BHO identifier GUIDs that should be blocked. See here for a simple list: ... "CoolWebSearch" anyone?

Does anyone else find it odd that in Microsoft's IE Feedback site Connect (the pseudo-public bug tracker), the Status options are: 1 Active, 2 Resolved, 3 Closed, 4 Not Active, 5 Not Closed. I don't understand 4 and 5, but where is the "Fixed" status? I was hoping to query for which bugs have already been fixed internally, so that I can avoid worrying about making workarounds... but I can't even run such a search! I also thought that maybe the "Resolved" status might be doubled up to indicate "Resolved-not-a-bug", "Resolved-works-as-designed", AND "Resolved-we-fixed-this-internally"... But the number of resolved bugs is Zarro. Just wondering when "Fixed" is going to be "Fixed" in Connect.

A download manager would be really, really nice. Or some simple resume support. And yeah, I know that there are programs like IE Pro that support it. But built-in would be much nicer.

>Though I completely agree with putting the enabled add-ons first and the disabled ones last, and with removing the status column because they're grouped anyway

Uh, did you use the feature or just read the post, the comments, and look at the screenshot? Right-click on the listview, and you can change sort order, add/remove columns, change grouping, etc. So if you want to sort by status and group by status, feel free. One thing you can't do is group or sort by columns you've hidden, so you can't remove the status field. That's a classic limitation of the listview.
Also, there's the usual listview inconsistency - if you right-click on the header row vs. a selected item vs. a non-selected item, what should happen? Windows apps (as well as Windows itself) do this differently. Explorer windows, for example, when you right-click an unselected item will select it then show the selected-item context menu, unless you clicked a blank field, in which case you get the generic context menu for the list. But that's not the same context menu you get if you right-click the header.

And BTW, UI design books (textbooks or otherwise) don't sing the praises of tab-type controls. They'll talk about the pros and cons of each type of general UI control. There are places where they are useful and places where they are not. I don't have their design guidelines, but it does seem obvious the Windows designers are staying away from tabs more and more. But you need to look at this not as list vs. tab control. You need to look at the high-level design, and if you do you'll probably see that this layout is a classic three-pane design. Category selection in a list on the left, item selection in a list on the top right, details on that item on the bottom. There are some high-level details that vary (for example, should the category selection list go all the way to the bottom of the window or should the item details pane go all the way to the left; here they chose the latter). A three-pane solution is common where the differences between items vary only in details (type of columns, for example), and the categories are similar (explorer bar vs. toolbar is a user distinction; they could have chosen to put those in the same category). The tab vs. column argument is really an argument against the three-pane approach. And since I think three panes work great in this situation, I wouldn't use a tab control.

Now that search providers are included in Manage Add-ons, are you going to remove the search provider option in Internet Options and replace it with Manage Add-ons? A UI update for Internet Options is always welcome.

MIA: DOWNLOAD MANAGER. Ability to view two tabs in one Internet Explorer window? Super Drag and Drop? Right-click to add a keyword for search? These shouldn't be extensions but built into the browser. Let's see, IE8 still doesn't FULLY support the following web standards that MS should have supported long back:

- XHTML
- DOM Level 2 - partial support
- DOM Level 3
- Various XML standards (XForms, EXSLT etc)
- SVG
- JavaScript 1.8
- CSS 3
- APNG - Firefox 3 and Opera 9.5 support it

>Now that search providers are included in Manage Add-ons, are you going to remove the search provider option in Internet Options and replace it with Manage Add-ons?

Your question is answered by actually *using the IE8 beta* rather than just reading blog posts. The answer is "yes". The Search Defaults Settings button in Internet Options launches this new dialog with the "Search Providers" category selected.

Hey IE team, I understand that the final version of IE8 is not yet determined, but can you at least tell: 1. Whether it will release simultaneously with Windows 7 or much earlier than that (as expected). 2. Whether it will release in 2008 or 2009?

Surprising you've gone for the Unix convention of -extoff when /extoff would be more in keeping with Windows. And also surprising you haven't used the familiar Windows terminology of safe mode - e.g. /safemode.

UI Dude, I appreciate your views on the pros and cons of a Tabs+ListView+Details design versus a ListBox+ListView+Details design.
I understand and fully agree that the Vista build of IE8 must follow Vista conventions. But, equally, the XP build must follow XP conventions. Being consistent with the OS you release on makes the app instantly familiar to users of that OS. It enables skills to be transferred from app to app. As the window does not display an icon in the top left of the title bar, there is no visual indication that it has a control box. As such, a user has no visible clue that the control box keyboard shortcut will do anything. Showing the icon and the common sizing buttons is the right thing to do here, I think you'll agree?

I am familiar with the customisations this ListView has. From the default view (non-grouped and ordered by "Name"):

1. Right-click a header.
2. Select "Group by" > "Status". The list is now grouped by "Status", disabled items first, ordered by "Name".
3. Select "Group by" > "Status" again. The items remain grouped by "Status", disabled items first, but the ordering by "Name" is now reversed.

Reversing the "Sort by" order upon reselecting a "Group by" item is unintuitive. For me, at least... As the "Group by" > "Status" item has a tick next to it, that indicates an on/off state. Clicking it the first time turns on the grouping and ticks the item, which is fine. Clicking it again should turn off that grouping, unticking the item. The "None" item and separator should not be present. Alternatively, the tick could be replaced by a bullet, indicating only one can be set at a time. Like a radio button group. In this case, there should be no separator between the "None" and "Name" items. Clicking an already-selected "Group by" option could reverse the order of the grouping. I'd say the columnar sort order should remain as it was. When grouping is on, the ListView items cannot be sorted differently. When grouping is off, the "Sort by" items from "Publisher" to "Date added" affect the column *after* the correct one. "Sort by" > "Version" does nothing. Clicking an already-selected "Sort by" item does nothing. It could reverse the sort order. This would be consistent with reversing the sort order by clicking the ListView header a 2nd time. Alas, defeat!

Right-clicking a selected row provides a context menu with curious item ordering. 2 of the 3 items which affect the row are at the top, which makes sense because the row is what you right-clicked on. But the 3rd item, "Copy", is right at the bottom. It's as far away as possible from what you right-clicked on. Additionally, it's in the group of menu items which affects columns and ordering. Putting this in the top group of items would be better, wouldn't it?

Your defence of several of the UI choices certainly has merits. But the quality of this particular implementation is beyond defence, afaict. The "squint test" gives a blur of misaligned text scattered above and below the main controls. And the closer you look, the more odd and confusing things you find. In a product to be used by ~0.5 billion people, getting every UI feature to work properly and intuitively is especially important. Christopher Vaughan [MSFT] was true to his word about reading the comments which are left here. But it seems the comments will not affect IE8's UI, no matter how salient the points being raised?

Ben, it's obvious that the IE team isn't going to make IE8 be "XP look-and-feel on XP" and "Vista on Vista" and "Win7 on Win7". That would just be dumb. XP and Vista are both done deals for the Windows team. So the Windows team is working on Win7 right now.
Obviously IE is going to be part of Win7, and since the IE team is part of the Windows team, it only stands to reason they're writing for Win7, with a backward eye on Vista and XP. And since they're starting with the IE7 codebase, chances are much of it will still look like IE7. The new stuff will probably look like Win7 as much as it can, within whatever limits still running on XP poses. Perhaps they'll change icons or color gradients or twiddly stuff like that, but overall, they're coding for Win7.

I think developers love to get their minds wrapped around some notion of "XP conventions" that are set in stone. The web uses a lot of different UI concepts, and it toddles along just fine. The world won't fall apart because IE8 looks more like Vista on XP than it "should" according to some nebulous XP guidelines. I think this dialog has some visual sloppiness, but it's beta 1. They probably have a stack of polish bugs to go fix. BTW, I'm not defending their UI. I'm explaining my view of why they chose what they chose. I think a three-pane design works here, so I agree with their overall design. Not the same thing. I have to say many of the comments here show a lack of analytical effort and plain laziness. If you want to critique someone's UI, then at least actually install it and use it for a while. It's clear only a minority of commenters actually did that. That's just lazy, and that's what motivated me to comment in the first place.

Secure IE cannot connect to the internet.

Thanks for implementing some requests. But I do have a question about this interface in combination with group policies. Within our company we're using the policy setting "deny all add-ons unless specifically allowed in the add-on list". This policy has proven its use: it has reduced helpdesk calls and made the browser much more stable. The only downside is the management of the allow list. I've made suggestions in the past to allow management based on the publisher information, which would greatly reduce the overhead. The biggest issue is when a user navigates to a web site that uses an add-on which is not yet installed and isn't on the allow list. In this scenario the add-on management interface is completely useless, because it doesn't show any information about this add-on, which makes it hard to add the add-on to the allow list. Until now we've been using a sort of debugging tool called kapimon which allows us to see which add-ons are blocked. Although this works, it's rather difficult and requires administrator rights. Wouldn't it be possible to show the class ID of a blocked add-on in the add-on interface? I know other fields will be difficult to show, but you could leave them blank or display "unknown". This would really help us in troubleshooting add-on management via group policies. Could you take a look at management of add-ons based on other criteria such as publisher, etc.? I would also be very interested in seeing detailed information on what per-user ActiveX controls are all about. Kris

Great, an easy way to disable that POS Acrobat Reader plugin.

Hey IE team, IE7 added the option to prevent using JavaScript to modify the status bar. Can you add the ability for users to override pages that disable right-click, so right-click cannot be disabled?

@someone: CSS 3 isn't even fully finalized yet, to my knowledge. How could they "FULLY" support it?

I missed the IE chat! :( Will you be posting the transcript soon? Is the bug tracking public yet?
I wish to be able to enable an add-in only for some websites. E.g., on some websites Flash provides value, but on most websites it just slows me down and crashes IE. I think I am looking for something like a toolbar that lets me turn on/off all add-ins that are in the "sometimes enable" category, and then for IE to remember my choice for the given web site.

It looks really good! Hope there will not be a lot of bugs when this IE version comes out!

Guess what, I get to fix your mistakes again! UI Dude wrote: [...] Perhaps it should be modeless and shown in the taskbar, then? The Bookmarks Manager and Downloads Manager in Firefox are good examples of how well this can work. A modeless Add-ons Manager in IE8 would be similarly effective, I think. There are many badly designed applications on any OS you care to mention. Bad design needs to be reduced, not encouraged. There *are* guidelines for GUIs on Windows XP. There *is* a wide range of standard controls. Users *do* find it harder when they are completely ignored. For example, inconsistent iconography makes it harder to figure out at a glance what a toolbar button with no text label will do. Sure, you can hover the mouse and get a tooltip. But this isn't helping users get things done. And if the tooltip text is inconsistent with the platform? Then it's no help anyway, and users lose faith in waiting for tooltips next time there's something they don't recognise immediately. OS consistency is an open-and-shut case, imho. Products excel when they heed it. Yes, you could look at it that way. Then again, if these features are too difficult to get right, perhaps they should be dropped? Clicking column headers covers the most useful sorting abilities already. It takes fewer clicks on larger hit areas, too. I wonder if the IE team have discussions like this in the corridors and cafeteria? :)

A built-in Download Manager is an absolute must for any good browser. Whenever I need to download files of more than 500MB, I install DAP and put the download request into it. As soon as the download completes, I uninstall DAP! Why can't a basic Pause/Resume be implemented in IE, if not an extensive DM?

I AM NOT A DEVELOPER, SO THESE COMMENTS ARE TOO MUCH FOR ME.

Seriously, what is the improvement in experience? They've added a left nav. Sounds like snake oil to me.
http://blogs.msdn.com/ie/archive/2008/03/20/add-on-management-improvements-in-internet-explorer-8.aspx
Render Children in React Using Fragment or Array Components

What comes to mind when React 16 comes up? Context? Error Boundaries? Those are on point. React 16 came with those goodies and much more, but in this post we'll be looking at the rendering power it also introduced - namely, the ability to render children using Fragments and Array Components. These are new and really exciting concepts that came out of the React 16 release, so let's look at them more closely and get to know them.

Fragments

It used to be that React components could only return a single element. If you have ever tried to return more than one element, you know you'll be greeted with this error:

    Syntax error: Adjacent JSX elements must be wrapped in an enclosing tag.

The way out of that is to make use of a wrapper div or span element that acts as the enclosing tag. So instead of doing this:

    class Countries extends React.Component {
      render() {
        return (
          <li>Canada</li>
          <li>Australia</li>
          <li>Norway</li>
          <li>Mexico</li>
        )
      }
    }

...you have to add either an ol or ul tag as a wrapper on those list items:

    class Countries extends React.Component {
      render() {
        return (
          <ul>
            <li>Canada</li>
            <li>Australia</li>
            <li>Norway</li>
            <li>Mexico</li>
          </ul>
        )
      }
    }

Most times, this may not be the initial design you had for the application, but you are left with no choice but to compromise on this ground. React 16 solves this with Fragments. This new feature allows you to wrap a list of children without adding an extra node. So, instead of adding an additional element as a wrapper like we did in the last example, we can throw <React.Fragment> in there to do the job:

    class Countries extends React.Component {
      render() {
        return (
          <React.Fragment>
            <li>Canada</li>
            <li>Australia</li>
            <li>Norway</li>
            <li>Mexico</li>
          </React.Fragment>
        )
      }
    }

You may think that doesn't make much difference. But imagine a situation where you have a component that lists different items, such as fruits and other things. These items are all components, and if you are making use of old React versions, the items in these individual components would have to be wrapped in an enclosing tag. Now, however, you can make use of Fragments and do away with that unnecessary markup. Here's a sample of what I mean:

    class Items extends React.Component {
      render() {
        return (
          <React.Fragment>
            <Fruit />
            <Beverages />
            <Drinks />
          </React.Fragment>
        )
      }
    }

We have three child components inside of the Fragment and can now create a component for the container that wraps it. This is much more in line with being able to create components out of everything and being able to compile code with less cruft.

Fragment Shorthand

There is a shorthand syntax you can use when working with Fragments. Staying true to its fragment nature, the syntax is like a fragment itself, leaving only empty brackets behind. Going back to our last example:

    class Fruit extends React.Component {
      render() {
        return (
          <>
            <li>Apple</li>
            <li>Orange</li>
            <li>Blueberry</li>
            <li>Cherry</li>
          </>
        )
      }
    }

Question: Is a fragment better than a container div?

You may be looking for a good reason to use Fragments instead of the container div you have always been using. Dan Abramov answered the question on StackOverflow. To summarize:

- It's a tiny bit faster and has less memory usage (no need to create an extra DOM node). This only has a real benefit on very large and/or deep trees, but application performance often suffers from death by a thousand cuts. This is one less cut.
- Some CSS mechanisms like flexbox and grid have a special parent-child relationship, and adding divs in the middle makes it harder to maintain the design while extracting logical components. - The DOM inspector is less cluttered. Keys in Fragments When mapping a list of items, you still need to make use of keys the same way as before. For example, let’s say we want to pass a list of items as props from a parent component to a child component. In the child component, we want to map through the list of items we have and output each item as a separate entity. Here’s how that looks: const preload = { "data" : [ { "name": "Reactjs", "url": "", "description": "A JavaScript library for building user interfaces", }, { "name": "Vuejs", "url": "", "description": "The Progressive JavaScript Framework", }, { "name": "Emberjs", "url": "", "description": "Ember.js is an open-source JavaScript web framework, based on the Model–view–viewmodel pattern" } ] } const Frameworks = (props) => { return ( <React.Fragment> {props.items.data.map(item => ( <React.Fragment key={item.id}> <h2>{item.name}</h2> <p>{item.url}</p> <p>{item.description}</p> </React.Fragment> ))} </React.Fragment> ) } const App = () => { return ( <Frameworks items={preload} /> ) } See the Pen React Fragment Pen by Kingsley Silas Chijioke (@kinsomicrote) on CodePen. You can see that now, in this case, we are not making use of any divs in the Frameworks component. That’s the key difference! Render Children Using an Array of Components The second specific thing that came out of React 16 we want to look at is the ability to render multiple children using an array of components. This is a clear timesaver because it allows us to cram as many into a render instead of having to do it one-by-one. Here is an example: class Frameworks extends React.Component { render () { return ( [ <p>JavaScript:</p> <li>React</li>, <li>Vuejs</li>, <li>Angular</li> ] ) } } You can also do the same with a stateless functional component: const Frontend = () => { return [ <h3>Front-End:</h3>, <li>Reactjs</li>, <li>Vuejs</li> ] } const Backend = () => { return [ <h3>Back-End:</h3>, <li>Express</li>, <li>Restify</li> ] } const App = () => { return [ <h2>JavaScript Tools</h2>, <Frontend />, <Backend /> ] } See the Pen React Fragment 2 Pen by Kingsley Silas Chijioke (@kinsomicrote) on CodePen. Conclusion Like the Context API and Error Boundary feature that were introduced in React 16, rendering children components with Fragment and multiples of them with Array Components are two more awesome features you can start making use of as you build your application. Have you started using these in a project? Let me know how in the comments so we can compare notes. 🙂 The post Render Children in React Using Fragment or Array Components appeared first on CSS-Tricks.
http://design-lance.com/render-children-in-react-using-fragment-or-array-components/
#include <blkid.h>

cc file.c -lblkid

DESCRIPTION

The libblkid library is used to identify block devices (disks) as to their content (e.g. filesystem type) as well as extracting additional information such as filesystem labels/volume names, unique identifiers/serial numbers. A common use is to allow use of LABEL= and UUID= tags instead of hard-coding specific block device names into configuration files.

Block device information is normally kept in a cache file and is verified to still be valid before being returned to the user (if the user has read permission on the raw block device, otherwise not). The cache file also allows unprivileged users (normally anyone other than root, or those not in the "disk" group) to locate devices by label/id. The standard location of the cache file can be overridden by the environment variable BLKID_FILE. Unprivileged users cannot probe all visible devices themselves, so the use of the cache file is required in this situation.

The standard location of the /etc/blkid.conf config file can be overridden by the environment variable BLKID_CONF. The following options control the libblkid library:

SEND_UEVENT=<yes|not>
    Sends a uevent when a /dev/disk/by-{label,uuid}/ symlink does not match the LABEL or UUID on the device. Default is "yes".

AUTHOR

libblkid was written by Andreas Dilger for the ext2 filesystem utilities, with input from Ted Ts'o. The library was subsequently heavily modified by Ted Ts'o. The low-level probing code was rewritten by Karel Zak.

FILES

/etc/blkid.tab    caches data extracted from each recognized block device
/etc/blkid.conf   configuration file

AVAILABILITY

libblkid is part of the util-linux package since version 2.15 and is available from the kernel.org util-linux archive.

COPYING

libblkid is available under the terms of the GNU Library General Public License (LGPL), version 2 (or at your discretion any later version).

SEE ALSO

blkid(8), findfs(8)
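As a small usage sketch (not from the man page itself): blkid_get_cache(), blkid_get_tag_value() and blkid_put_cache() are real high-level libblkid entry points, while the default device path below is just an assumption for illustration:

    /* build with: cc probe.c -lblkid */
    #include <stdio.h>
    #include <stdlib.h>
    #include <blkid.h>

    int main(int argc, char **argv)
    {
        const char *devname = (argc > 1) ? argv[1] : "/dev/sda1"; /* assumed */
        blkid_cache cache;
        char *label, *uuid;

        /* NULL means the standard cache file location (or $BLKID_FILE). */
        if (blkid_get_cache(&cache, NULL) != 0) {
            fprintf(stderr, "cannot open blkid cache\n");
            return 1;
        }

        /* Returned strings are allocated; the caller frees them. */
        label = blkid_get_tag_value(cache, "LABEL", devname);
        uuid  = blkid_get_tag_value(cache, "UUID", devname);

        printf("%s: LABEL=%s UUID=%s\n", devname,
               label ? label : "(none)", uuid ? uuid : "(none)");

        free(label);
        free(uuid);
        blkid_put_cache(cache);
        return 0;
    }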
http://www.linuxguruz.com/man-pages/libblkid/
In my last post we identified some of the key elements needed for enabling a navigation model for our "BSM Dashboard". We identified a basic structure model as part of that activity to enable us to display key content and have logical components to click on for navigation. This basic structure model resembles what would be instantiated within TBSM's service model. The actual navigation actions will require us to think about our template model, which I'll touch on in an upcoming post.

The Business Service Composer (BSC) is a new capability of the previously released component registry viewer tool, also known as the "crviewer". It's part of the IBM Solution for BSM as well as fix pack 1 for TBSM v6.1. We'll be walking through the practical use of the BSC in this blog posting. For more information on the BSC, please see the documentation here. For information on the crviewer tool, please see the documentation here.

To start, the BSC only works upon data that's been loaded into the SCR. It doesn't work on anything you've manually built via the GUI or RADSHELL APIs, or instantiated through an automated method such as an ESDA or AutoPop type rule. There is a configuration option that will enable an AutoPop rule to send instance information to the SCR database, but this is off by default. The typical ways to get content into the SCR database prior to TBSM v6.1 were via an IDML book (aka DLA book) or via TADDM. In TBSM v6.1 a new capability within Netcool/Impact v6.1 is the "SCR API", which provides a pathway for Netcool/Impact to collect information and send it into the SCR database.

For our demo, I've hacked up a DLA that was exported from an ITM monitoring environment. It contains the Tivoli Common Data Model (CDM) descriptions for six Linux systems. I've given these systems a simple node name label that represents their functional role within our online sales application technology infrastructure. Having a basic understanding of the Tivoli CDM, or the custom namespaces used in your DLA books, will be important as you use the BSC. I'll touch more on this in our next posting when we create the policies for automatic instance placement against our static structure model. The demo DLA is available here.
https://www.ibm.com/developerworks/mydeveloperworks/blogs/7d5ebce8-2dd8-449c-a58e-4676134e3eb8/entry/creating_a_structure_model_for_the_bsm_dashboard_using_the_business_service_composer_bsc_bsm_solution_development_series_and_demo_development9?lang=pt_br
Java Program

Budget: $15-20 USD

The area of an ellipse is πab, where a is the length of the semi-minor (smaller) axis and b is the length of the semi-major (larger) axis. Assume the following class Circle with x and y coordinates specifying the center point and a color attribute. Assume for an ellipse we will use the diameter as the minor axis. Please create a class Ellipse that inherits from Circle and extends it appropriately. Use overriding to define the area.

```java
public class Circle {
    private int x; // coords
    private int y;
    private String color;
    private int diameter; // in pixels
    private final double pi = 3.1416;

    public double getArea() {
        return pi * ((diameter / 2.0) * (diameter / 2.0)); // pi r squared
    }

    // Assume a bunch of get/set methods below here to set the attributes.
    // public void setX(... etc., etc.
    // public void setY(... etc., etc.
}
```

Java 1.4
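The posting leaves the solution open, but a minimal sketch of the requested subclass might look like the following. The majorAxis field name and the getDiameter() accessor are assumptions, since the assignment only says that get/set methods exist:

```java
public class Ellipse extends Circle {
    // Per the assignment, the inherited diameter serves as the minor axis,
    // so the subclass only needs to add the major axis.
    private int majorAxis; // in pixels

    private final double pi = 3.1416;

    public int getMajorAxis() {
        return majorAxis;
    }

    public void setMajorAxis(int majorAxis) {
        this.majorAxis = majorAxis;
    }

    // Override the area calculation: pi * a * b, where a and b are the
    // semi-minor and semi-major axes (half of each full axis).
    public double getArea() {
        double a = getDiameter() / 2.0; // semi-minor axis
        double b = majorAxis / 2.0;     // semi-major axis
        return pi * a * b;
    }
}
```

Note that the posting targets Java 1.4, which predates the @Override annotation, so the override works purely by matching the getArea() signature.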
https://www.br.freelancer.com/projects/php-java/java-program.3001206/
Arg... If I leave the above code out completely, neither 0.6 nor 0.7 work (as expected). Can you point out what, if anything, I'm doing wrong, since I thought setting enthought.__path__ is supposed to work with 0.6?

I think I have three options at this point:

1.) have enstaller require setuptools>=0.7 (which enthought is hosting and will be found automatically by the enstaller install script)

2.) write a function in the enstaller egg build script that bundles all the packages into a namespace other than 'enthought.' inside of the egg, say 'bundle.' for example, and does a global search-and-replace of 'enthought.' with 'bundle.' Assuming the script gets everything, this seems like it would be the most straightforward and would not require me to fiddle with the python/setuptools import machinery.

3.) find the silly mistake I made which is causing the initial fix to break for 0.6, if possible.

Thanks in advance!

> > I've tried removing all the "enthought." eggs from sys.path before anything
> > else gets imported, but even that doesn't work (and it just felt wrong, and
> > probably is).
>
> If "enthought" is a namespace package, you would need to either be
> using an 0.7-development version of setuptools (i.e., a trunk SVN
> version), or else you'd need to remove those eggs from the path
> *before* pkg_resources is first imported. Otherwise, it would indeed not work.
>
> Not without using an 0.7 version of setuptools, or manipulating
> enthought.__path__. The straightforward way to do it would be for
> enstaller's __init__.py to do this:
>
>     import enthought, os
>     enthought.__path__ = os.path.dirname(os.path.dirname(__file__))
>
> This will ensure that all enthought.* modules imported from then on
> will only be from within the parent directory of enstaller.
>
> I don't think I'd recommend this to anyone in the general case, but
> this might be a reasonable workaround if you truly want to
> "de-namespace" the package. (It will *not*, however, prevent new
> eggs from being added to __path__ if you add anything to sys.path or
> the working set afterward.)

--
Rick Ratzel - Enthought, Inc.
515 Congress Avenue, Suite 2100 - Austin, Texas 78701
512-536-1057 x229 - Fax: 512-536-1059
https://mail.python.org/pipermail/distutils-sig/2007-June/007725.html
Java Lambda Streams and Groovy Closures Comparisons

Want to learn more about the differences between lambda streams in Java and closures in Groovy? Check out this post to learn more.

In this blog post, we will look at some of the proverbial operations on a list data structure and make some comparisons between the Java 8/9 and Groovy syntax.

First, let's talk about the data structure. Think of it as just a simple rugby player who has a name and a rating.

Java

```java
class RugbyPlayer {
    private String name;
    private Integer rating;

    RugbyPlayer(String name, Integer rating) {
        this.name = name;
        this.rating = rating;
    }

    public String toString() {
        return name + "," + rating;
    }

    public String getName() {
        return name;
    }

    public Integer getRating() {
        return rating;
    }
}

//...
List<RugbyPlayer> players = Arrays.asList(
    new RugbyPlayer("Tadgh Furlong", 9),
    new RugbyPlayer("Bundee Aki", 7),
    new RugbyPlayer("Rory Best", 8),
    new RugbyPlayer("Jacob Stockdale", 8)
);
```

Groovy

```groovy
@ToString
class RugbyPlayer {
    String name
    Integer rating
}

//...
List<RugbyPlayer> players = [
    new RugbyPlayer(name: "Tadgh Furlong", rating: 9),
    new RugbyPlayer(name: "Bundee Aki", rating: 7),
    new RugbyPlayer(name: "Rory Best", rating: 8),
    new RugbyPlayer(name: "Jacob Stockdale", rating: 8)
]
```

Find a Specific Record

Java

```java
// Find Tadgh Furlong
Optional<RugbyPlayer> result = players.stream()
    .filter(player -> player.getName().indexOf("Tadgh") >= 0)
    .findFirst();
String outputMessage = result.isPresent() ? result.get().toString() : "not found";
```

Groovy

```groovy
println players.find { it.name.indexOf("Tadgh") >= 0 }
```

- The Java lambda has just one parameter, player. This doesn't need to be typed, since its type can be inferred. Note: this lambda only uses one parameter. If there were two parameters in the parameter list, parentheses would be needed around the parameter list.
- In Java, a stream must be created from the list first. A lambda is then used before performing a function that returns an Optional.
- The lambda definition doesn't need a return statement. It also doesn't need {} braces or a semicolon to complete a Java statement. However, you can use {} if you want; if you do include braces, you must include the ; and the return statement. Note: if your lambda is more than one line, you don't have a choice, you must use {}. It is a recommended best practice to keep lambdas short, ideally just one line.
- Java 8 supports fluent APIs for pipeline stream operations. This is also supported in Groovy collection operations.
- While Java specifies the player variable for the lambda, the Groovy closure doesn't need to specify a variable. It can just use "it", which is the implicit reference to the parameter (similar to _ in Scala).
- The Java filter API takes a parameter of type Predicate. A functional interface can be used as the assignment target for a lambda expression or method reference, and Predicate is such a functional interface. Its one abstract method is boolean test(T t). In this case, when using the lambda, player corresponds to t. The body definition should evaluate to true or false; in our case, player.getName().indexOf("Tadgh") >= 0 always evaluates to either true or false. True corresponds to a match.
- Java 8 has other types of functional interfaces:
  - Function: takes one argument and returns a result
  - Consumer: takes one argument and returns no result (represents a side effect)
  - Supplier: takes no arguments and returns a result
- Java 8 can infer the type of the lambda input parameters. Note that if you have to specify the parameter type, the declaration must be in brackets, which adds further verbosity.
- Groovy can println directly. No System.out is needed, and there is no need for subsequent braces.
- Like Java, Groovy doesn't need a return statement. However, this isn't just for closures; in Groovy it extends to every method. Whatever is evaluated as the last line is automatically returned.
- Groovy has no concept of a functional interface. This means that if you forget to ensure your last expression is an appropriate boolean expression, you get unexpected results and bugs at runtime.
- The arrow operator is used in both Groovy and Java to mean essentially the same thing: separating the parameter list from the body definition. In Groovy, it is only needed if you need to declare the parameters (when the default it doesn't suffice). Note: in Scala, => is used.

Find All Players with a Rating of 8 or Above

Java

```java
// Find all players with a rating of 8 or above
List<RugbyPlayer> ratedPlayers = players.stream()
    .filter(player -> player.getRating() >= 8)
    .collect(Collectors.toList());
ratedPlayers.forEach(System.out::println);
```

Groovy

```groovy
println players.findAll { it.rating >= 8 }
```

- In the Java version, the iterable object ratedPlayers has its forEach method invoked. This method takes a parameter of the functional interface type Consumer (see the Javadoc here). Consumer's method takes an input parameter and returns nothing; it is void.
- In Java, the stream.filter() will return another stream. Stream.collect() is one of Java 8's stream terminal methods. It performs a mutable fold operation on the data elements held inside the stream instance returned by the filter method. Collectors.toList() returns a Collector, which collects all stream elements into a List.
- When using the toList() collector, you can't assume the type of list that will be used. If you want more control, you need to use toCollection(). For example: .collect(toCollection(LinkedList::new))
- Note: we could have omitted the .collect() operation and invoked forEach straight on the stream, which makes the Java code shorter:

```java
players.stream()
    .filter(player -> player.getRating() >= 8)
    .forEach(System.out::println);
```

- System.out::println is a method reference, a new feature in Java 8. It is syntactic sugar to reduce the verbosity of some lambdas. It essentially says: for every element in ratedPlayers, execute System.out.println, passing in the current element as a parameter.
- Again, we get less syntax from Groovy. The function can operate on the collection, and there is no need to create a stream.
- We could have just printed the entire list in the Java sample, but heck, I wanted to demo forEach and method references.

Map From Object Type to Another

Java

```java
// Map the rugby players to just names.
// Note the way we convert the list to a stream and then back to a list using the collect API.
System.out.println("Names only...");
List<String> playerNames = players.stream()
    .map(player -> player.getName())
    .collect(Collectors.toList());
playerNames.forEach(System.out::println);
```

Groovy

```groovy
println players.collect { it.name }
```

- A stream needs to be created first, before executing the lambda. Then, the collect() method is invoked on the stream.
This is needed to convert it back to a list, which also makes the code more verbose.

- That said, if all you are doing is printing the list, you can just do:

```java
players.stream()
    .map(player -> player.getName())
    .forEach(System.out::println);
```

Perform a Reduction Calculation

Java

```java
System.out.println("Max player rating only...");
Optional<Integer> maxRatingOptional = players.stream()
    .map(RugbyPlayer::getRating)
    .reduce(Integer::max);
String maxRating = maxRatingOptional.isPresent() ? maxRatingOptional.get().toString() : "No max";
System.out.println("Max rating=" + maxRating);
```

Groovy

```groovy
def here = players.inject(null) { max, it -> it.rating > max?.rating ? it : max }
```

- In the Java version, the reduce operation is invoked on the stream. There are three different versions of this method. In this version, no initial value is specified, meaning that an Optional type is returned. The input parameter of type BinaryOperator is a functional interface, which means a lambda expression or method reference can be used to specify its value. In this case, the method reference Integer::max is used.
- The null-safe operator (?.) is used in the Groovy inject closure so that the first comparison, against the null initial value, will work.
- In Java, it is possible to avoid the isPresent check on the Optional by just doing:

```java
players.stream()
    .map(RugbyPlayer::getRating)
    .reduce(Integer::max)
    .map(Object::toString)
    .orElse("No Max");
```

Summary

- Groovy is still far more terse.
- However, some of the operations in Java are lazily run. For example, map() and filter() are considered intermediate. They won't execute unless a terminal operation, e.g. forEach, collect or reduce, is invoked on the stream. This made the code more verbose in some cases, but it also means that it can be more performant.
- Groovy also offers some lazy functions.

The full Java code can be found here. And the full Groovy code can be found here.

If you enjoyed this article and want to learn more about Java Streams, check out this collection of tutorials and articles on all things Java Streams.

Published at DZone with permission of Alex Staveley, DZone MVB.
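A brief aside the post doesn't mention: for this particular reduction, both languages have a built-in shortcut, so the inject/reduce versions above are mainly illustrative. A sketch:

```groovy
// Groovy: Collection.max(Closure) returns the element with the highest rating.
def best = players.max { it.rating }
println best

// The Java equivalent would be:
// players.stream().max(Comparator.comparing(RugbyPlayer::getRating))
```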
https://dzone.com/articles/java-lambda-streams-and-groovy-clauses-comparisons
GSwR V: Methods to the Madness

Getting Started with Ruby

We've covered some of Ruby's most important object types in the last three posts: Strings, Integers & Floats, and collections such as Arrays, Ranges and Hashes. We've also looked at the methods that give these objects their functionality. In this post, we're going to look at creating our own methods.

To define a method we use the def keyword, followed by the name of the method. The code for the method comes next, before finishing with the end keyword. Let's have a look at some examples in IRB:

```ruby
def say_hello
  "Hello Ruby!"
end
```

This method is called say_hello and returns the string "Hello Ruby!". The last line of a method in Ruby is its return value, which is the value that is returned by the method when it is called. To call a method, simply enter its name:

```ruby
say_hello
=> "Hello Ruby!"
```

Methods are useful. They can make your code easier to read (as long as you give your methods descriptive names) and mean that you don't have to write repetitive blocks of code over and over again. Also, if you want some of the functionality to change, then you only need to update the method in one place. This is known as the DRY (Don't Repeat Yourself) principle, and it is important to keep in mind when programming.

Adding Parameters

Methods can be made more effective by including parameters. These are values that are provided for the method to use. For example, we can improve the say_hello method by adding a name parameter:

```ruby
def say_hello(name)
  "Hello #{name}!"
end
```

Parameters are added to the method definition after the method name (the parentheses are optional but recommended). In the body of the method, the name of the parameter acts like a variable equal to the value that is passed to the method as an argument when it is called:

```ruby
say_hello "Sinatra" # Again, parentheses are optional
=> "Hello Sinatra!"
```

In this case, the string "Sinatra" is provided as an argument and we use string interpolation to insert the argument into the string that is returned by the method.

We can provide a default value for a parameter by putting it equal to the default value in the method definition:

```ruby
def say_hello(name="Ruby")
  "Hello #{name}!"
end
```

This means we can now call the say_hello method with or without arguments:

```ruby
say_hello
=> "Hello Ruby!"
say_hello "DAZ"
=> "Hello DAZ!"
```

We can add more parameters, some with default arguments and some without (but the ones without must come first). Here is another function called greet that uses multiple parameters:

```ruby
def greet(name, job, greeting="Hello")
  "#{greeting} #{name}, Your job as a #{job} sounds fun"
end

greet("Walt", "Cook")
=> "Hello Walt, Your job as a Cook sounds fun"
greet("Jessie", "Cook", "Hey")
=> "Hey Jessie, Your job as a Cook sounds fun"
```

We can also create methods with an unspecified number of arguments by placing a * in front of the last parameter in the method definition. Any number of arguments will then be stored in an array with the same name as the parameter. The following method will take any number of names as arguments and store them in an array called names:

```ruby
def say_hello(*names)
  names.each { |name| puts "Hello #{name}!" }
end

say_hello("Walt", "Skylar", "Jessie")
Hello Walt!
Hello Skylar!
Hello Jessie!
=> ["Walt", "Skylar", "Jessie"]

say_hello "Sherlock", "John"
Hello Sherlock!
Hello John!
=> ["Sherlock", "John"]

say_hello "Heisenberg"
Hello Heisenberg!
=> ["Heisenberg"]
```

Notice that the return value of the method is the names array.
Another option is to use keyword arguments instead (although these are only available from Ruby version 2.0 onwards). These use a hash-like syntax for the parameters, as demonstrated in an updated version of our greet method:

```ruby
def greet(name="Ruby", job: "programming language", greeting: "Hello")
  "#{greeting} #{name}, Your job as a #{job} sounds fun"
end
```

The arguments can then be entered in any order using the keywords:

```ruby
greet greeting: "Hi"
=> "Hi Ruby, Your job as a programming language sounds fun"
greet "Sherlock", job: "detective", greeting: "Greetings"
=> "Greetings Sherlock, Your job as a detective sounds fun"
greet "John", greeting: "Good Day", job: "doctor"
=> "Good Day John, Your job as a doctor sounds fun"
```

Notice that the order of the keyword arguments doesn't matter and, if some of them are omitted, the default value is used instead.

We can also add an extra parameter at the end with ** in front of it. This will collect any extra keyword arguments that are not specified in the method definition into a hash with the same name as the parameter:

```ruby
def greet(name="Ruby", job: "programming language", greeting: "Hello", **options)
  "#{greeting} #{name}, Your job as a #{job} sounds fun. Here is some extra information about you: #{options}"
end

greet "Saul", job: "Colonel", human: false
=> "Hello Saul, Your job as a Colonel sounds fun. Here is some extra information about you: {:human=>false}"
greet "Kara", job: "Viper Pilot", callsign: "Starbuck", human: true
=> "Hello Kara, Your job as a Viper Pilot sounds fun. Here is some extra information about you: {:callsign=>\"Starbuck\", :human=>true}"
```

It's also possible to add a block as a parameter to a method, by placing a & before its name. The block can then be accessed in the method definition by referring to it. This is useful if you want to run some specific code when a method is called. Here's a basic example of a method called repeat which accepts a block of code as well as an argument telling it how many times to run that code:

```ruby
def repeat(number=2, &block)
  number.times { block.call }
end

repeat(3) { puts "Ruby!" }
Ruby!
Ruby!
Ruby!
=> 3
```

The block is optional, and there is a handy method called block_given? that allows us to check if a block was given when the method is called. Here is a method called roll_dice that takes the number of sides of the dice as an argument, as well as a block that can be used to do something with the value rolled on the dice:

```ruby
def roll_dice(sides=6, &block)
  if block_given?
    block.call(rand(1..sides))
  else
    rand(1..sides)
  end
end
```

If the method is called without a block, then it simply returns the number rolled on the dice:

```ruby
roll_dice
=> 6
```

If we want to mimic a 20-sided dice then we can enter a sides argument:

```ruby
roll_dice(20)
=> 13
```

Now say we want to roll the dice, but then double the result and then add 5. We can use a block to do this:

```ruby
roll_dice { |result| 2 * result + 5 }
```

Another example of using a block might be if we want to return whether the result of rolling the dice was odd or even:

```ruby
roll_dice do |result|
  if result.odd?
    "odd"
  else
    "even"
  end
end
```

Using blocks as parameters can make methods extremely flexible and powerful. The route handlers in Sinatra take blocks as arguments. The method definition for a GET request looks similar to this:

```ruby
def get(route, options={}, &block)
  # ... code goes here
end
```

The route argument is a string that tells us the route to match. This is followed by a hash of options, set to an empty hash by default.
Last of all, the method takes a block, which is the code that we want to run when the route is visited.

Refactoring Play Your Cards Right

We're going to have a go at refactoring the code for the 'Play Your Cards Right' Sinatra app that we created in the last tutorial. Refactoring code is the process of improving its structure and maintainability without changing its behaviour. What we're going to do is replace some of the chunks of code with methods. This will make the code easier to follow and easier to maintain: if we want to make a change to the functionality, we just need to change the method in one place.

Sinatra uses helper methods to describe methods that are used in route handlers and views. These are placed in a helpers block that can go anywhere in the code, but is usually placed near the beginning, and looks like this:

```ruby
helpers do
  # helper methods go here
end
```

To start with, we're going to rewrite the app using the names of methods to describe the behaviour we want. Create a new file called 'play_your_cards_right_refactored.rb' and add the following code:

```ruby
require 'sinatra'

enable :sessions

configure do
  set :deck, []
  suits = %w[ Hearts Diamonds Clubs Spades ]
  values = %w[ Ace 2 3 4 5 6 7 8 9 10 Jack Queen King ]
  suits.each do |suit|
    values.each do |value|
      settings.deck << "#{value} of #{suit}"
    end
  end
end

helpers do
  # helper methods will go here
end

get '/' do
  set_up_game
  redirect to('/play')
end

get '/:guess' do
  card = draw_card
  value = value_of card
  if player_has_a_losing value
    game_over card
  else
    update_session_with value
    ask_about card
  end
end
```

This is very similar to the last piece of code (and it has exactly the same functionality) except that it's much easier to see what's going on in each route handler. This is because we have replaced a lot of the Ruby code with methods and chosen some descriptive method names, making the code more readable. It almost looks like pseudocode. Let's have a look at each route handler in turn to see what it does:

```ruby
get '/' do
  set_up_game
  redirect to('/play')
end
```

This route handler sets up the game then redirects to the '/play' route … in fact, this shouldn't even need explaining because it says it right there in the code! The method names tell us exactly what is happening: all of the code has been extracted into a method called set_up_game, which needs to be created. Place the following inside the helpers block:

```ruby
def set_up_game
  session[:deck] = settings.deck.shuffle
  session[:guesses] = -1
  session[:value] = 0
end
```

This sets up the session variables that we'll need in the game. The deck is shuffled, the guesses are set to -1 (because the count gets incremented by 1 on the first play, even though a correct guess hasn't been made yet) and the value variable is set to 0. redirect and to are both methods built into Sinatra and have been purposefully named so that the code in a route handler reads almost like English.
The following code also goes in the helpers block: def value_of card case card[0] when "J" then 11 when "Q" then 12 when "K" then 13 else card.to_i end end Next, we have an if statement to check if the player has guess correctly or not: if player_has_a_losing value game_over card else update_session_with value ask_about card end This also uses the name of the methods to make the code very descriptive. The first method is called player_has_a_losing and it has a parameter named value. This combination of name and parameter name makes it read nicely. This method also needs to go in the helpers block: def player_has_a_losing value (value < session[:value] and params[:guess] == 'higher') or (value > session[:value] and params[:guess] == 'lower') end This returns true or false depending on whether the value entered as an arguement is higher or lower than the value stored in session[:value] (the previous card’s value) compared to the player’s guess, which is stored in the params hash with a key of :guess. If the player_has_a_losing method returns false, then we move on to two more functions. The first is update_session_with, which takes an argument of value. This does exactly as it says, as well as updating the number of guesses by 1. It also goes in the helpers block: def update_session_with value session[:value] = value session[:guesses] += 1 end The next method that needs to go in the helpers block is ask_about. This simply asks the player whether the card is higher or lower. It takes the current card as the argument: def ask_about card "The card is the #{ card }. Do you think the next card will be <a href='/higher'>Higher</a> or <a href='/lower'>Lower</a>?" end That’s the last of all our helper methods. If you try running the code by typing ruby play_your_cards_right_refactored.rb into a terminal and then visit you should see the same game as before. This is exactly what we want to happen when we refactor our code – no change on the outside, but more readable and maintainable code on the inside. Scope In some of the helper methods we just used, we had to supply either the card or value as a parameter to the methods. You might be wondering why we had to do this when card and value both existed already as variables. This is because a variable only exists inside a method if it has been created in the method or if it has been entered as an argument. This can be seen in the following example (check it in IRB): name = "Walt" job = "teacher" def say_my_name name = "Heisenberg" job = "cook" puts "You're #{name} and you're a #{job}" end puts "You're #{name} and you're a #{job}" => You're Walt and you're a teacher say_my_name => You're Heisenberg and you're a Cook When puts is called outside the method name is ‘Walt’ and job is ‘teacher’, but as soon as you go inside the method name becomes ‘Heisenberg’ and job is ‘cook’. A method doesn’t have any access to any variable created outside of it (they need to be entered as arguments to the method)· You also can’t access any variable created inside a method from outside of the method either – any values you want to access after the method has been called should be returned by the method. The places in the code where a variable is accessible is known as the variable’s scope. That’s All Folks Hopefully this tutorial has helped to introduce methods and shown how useful they can be in making your code more flexible, maintainable, reusable, and easier to read (as long as they are well named). 
The methods we were writing in this tutorial were actually more like functions. As I mentioned in the very first post, Ruby is actually an object orientated language and the methods should be methods of objects. The methods we have been writing are actually all methods of the special main object (See this post by Pat for more in-depth info about this). In the next post, we’ll be getting classy and looking at how Ruby’s class system works. We’ll go over how to add methods to existing classes such as Strings and Integers and how to create your own classes with their own public and private methods. In the meantime, please leave any comments or questions that you might have in the comments section below. - Matt
http://www.sitepoint.com/gswr-v-methods/
Convert Python's pickled data into XML or Excel format

Budget: €30-250 EUR

Hello folks, I have multiple data files, stored in Python's .pkl format. I need to create some utility which would unpickle the data, decode it and save it in XML or any other readable format (or into MS Excel), so it could later be imported into an Oracle database. A test file which needs to be converted is attached.

The expected output is a utility (script) which will read all files in a selected directory and convert them to the desired format (XML) in an output directory. Should be an easy task for somebody who is familiar with Python.

Thanks, Tomas

A basic script to start (the dotted names and the filename below were mangled by the site's link filter into "[url removed, login to view]"; this is the obvious reconstruction):

```python
import pprint, pickle, json

pkl_file = open('test.pkl', 'rb')  # the original filename was stripped by the filter
data1 = pickle.load(pkl_file)
pprint.pprint(data1)  # prints the unpickled, structured data.
                      # This output needs to be converted into XML format.
data_json = json.dumps(data1)  # >> this fails
pkl_file.close()
```

Import into the Oracle database is not part of this project.

Selected bid:

Hello, I am a Python developer. I will be glad to help. I can use the openpyxl package to create an Excel output file for you. Or use the lxml library for XML generation. Regards,
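For reference, a minimal sketch of the requested utility. The posting fixes no XML schema, so element names are derived from dict keys (assumed to be valid XML names), and the directory names and root tag are placeholders:

```python
import os
import pickle
import xml.etree.ElementTree as ET

def to_element(tag, value):
    """Recursively convert unpickled dicts/lists/scalars into XML elements."""
    elem = ET.Element(str(tag))
    if isinstance(value, dict):
        for key, val in value.items():
            elem.append(to_element(key, val))
    elif isinstance(value, (list, tuple)):
        for item in value:
            elem.append(to_element("item", item))
    else:
        elem.text = str(value)
    return elem

def convert_directory(in_dir, out_dir):
    """Unpickle every .pkl file in in_dir and write one .xml file per input to out_dir."""
    os.makedirs(out_dir, exist_ok=True)
    for name in os.listdir(in_dir):
        if not name.endswith('.pkl'):
            continue
        with open(os.path.join(in_dir, name), 'rb') as fh:
            data = pickle.load(fh)
        root = to_element('root', data)
        out_path = os.path.join(out_dir, name[:-4] + '.xml')
        ET.ElementTree(root).write(out_path, encoding='utf-8', xml_declaration=True)

if __name__ == '__main__':
    convert_directory('input', 'output')
```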
https://www.tr.freelancer.com/projects/excel-python/convert-python-pickled-data-into/
In the last few posts I have outlined in great detail how to make a simple Revit Add-in using the IExternalCommand implementation. Doing that is a great and really fast way of adding new tools to Revit, but after a while we will realize that we just need a little more organization. Luckily for us, the Revit API offers a way to create our own tab in Revit and an array of different ways to add buttons to it. For the sake of keeping this tutorial simple and easy to follow, I will only show you how to convert our already completed CurveTotalLength tool to a button. In this post we will cover a few things:

- IExternalApplication implementation
- new RibbonPanel
- new PushButton
- resource management

First let's open our Visual Studio session that we worked on last week. On the right-hand side, in our Solution Explorer, I will first change the name of the CurveTotalLength cs file from Class1.cs to CurveTotalLength.cs:

- Right click on Class1.cs
- Click on Rename and type in "CurveTotalLength.cs"

Now let's just "Build it" and we can close that file. This was just a quick maintenance procedure to make sure that our file is named properly, because next what we will do is import that file into a new project in Visual Studio. Unfortunately there isn't an easy way to just take an existing project in Visual Studio and rename it. The easiest way is to actually just create a new project and add an existing one to it if needed. At least that's what I have been doing.

So let's create a new project in Visual Studio. This step was described before here. We will call it GrimshawRibbon, but you can call it whatever you want; just remember that it's not easy to rename the project folder structure after it was created. Next, just like we discussed before, we need to reference in the RevitAPI and RevitAPIUI libraries. Again, look at this link here. Now, let's just quickly rename the default Class1.cs to App.cs like so (see image above). Click Yes to confirm.

Next, let's add our existing CurveTotalLength project to this solution (SHIFT + ALT + A). When the window pops up we need to navigate to our CurveTotalLength project location and look for the CurveTotalLength.cs file. You should see a new file appear in your Solution Explorer.

Next, let's quickly add a new folder to our project and then move (drag and drop) the CurveTotalLength file to that folder to keep our application nicely organized:

- Right click on our project.
- Hover over Add…
- Click on New Folder

Before we start implementing the IExternalApplication, we need to make sure that we have all of the "using" statements, as well as one more assembly loaded (PresentationCore) that we will need to define an icon/image for our button.

- Right click on References in Solution Explorer, then click on Add Reference…
- Click on "Assemblies" on the left side.
- In the search field type in "Presentation".
- Select PresentationCore from the list.
- Click OK.

Now that we have all of the assemblies needed*, we can get to implementing IExternalApplication. First, let's create a "road map" using pseudo code before we start filling in the blanks; the outline survives as the numbered comments in the code sketch further below. This is a basic outline for our method. We haven't done any heavy lifting yet, but we have a good idea about what needs to be done.

*You will notice that we also defined using System.Reflection in the code below. System is loaded in by default, so there was no need to add it in, and all we needed to do was just type "using System.Reflection" to start using methods defined in that assembly.

We already loaded in PresentationCore, and I mentioned that we will need it to define an image for our button. Before we get to code and explain how to define that image, let's create the image and load it into our project first.

I usually make my images roughly 320 x 320 pixels. That's too big for the icon (32 x 32), but it's much easier to work with in Illustrator or Photoshop. Save your image to PNG (with transparency) and then you can easily create a 32px icon version of the file using an online service called ICO Converter. For a single push button, we need a 32 pixel image, so these settings will be fine:

- Select a PNG file.
- Select the pixel size needed.
- Select the Bit Depth.
- Click Convert.

Files will be automatically saved in our Downloads folder and will be called favicon.ico, so all we need to do is rename that file to totalLength.png. You will be prompted if you want to change the file extension to PNG, so please click yes to confirm. Now, let's add it into our Visual Studio project. First create a new folder in our Solution Explorer and call it "Resources", then add our file like so:

- Right click on the Resources folder
- Hover over Add
- Click on Existing Item…

Now we just need to navigate to the image that we want to use as our icon and select it. Once the image is inserted we need to change its Build Action:

- Select our image in the Solution Explorer
- Change Build Action to Resource

Our image is ready, and so are all of the assemblies needed, so we can go ahead and define our method that will create the tab and a new button (the original screenshot is gone, so a reconstruction follows below):
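The code was shown as a screenshot in the original post. Based on the line-by-line notes that follow and the snippet quoted later in the comments, here is a faithful sketch; the tab name, method name and namespace are assumptions, and the numbered comments double as the pseudo-code road map mentioned earlier:

```csharp
using System;
using System.Reflection;
using System.Windows.Media.Imaging;
using Autodesk.Revit.UI;

namespace GrimshawRibbon
{
    public class App : IExternalApplication
    {
        // Called by Revit at startup; this is where we build our ribbon UI.
        public Result OnStartup(UIControlledApplication a)
        {
            AddRibbonPanel(a);
            return Result.Succeeded;
        }

        public Result OnShutdown(UIControlledApplication a)
        {
            return Result.Succeeded;
        }

        static void AddRibbonPanel(UIControlledApplication application)
        {
            // 1. This will be the name displayed on our tab.
            string tabName = "Grimshaw";

            // 2. Create an instance of a new tab with the given name.
            application.CreateRibbonTab(tabName);

            // 3. Create a new panel, called Tools, added to our tab.
            RibbonPanel ribbonPanel = application.CreateRibbonPanel(tabName, "Tools");

            // 4. Use Reflection to get the path of the folder that our
            //    application is being compiled to.
            string thisAssemblyPath = Assembly.GetExecutingAssembly().Location;

            // 5. Create the PushButtonData: a unique id, the text displayed under
            //    the button (two lines, hence System.Environment.NewLine), the dll
            //    to call, and the namespace-qualified command class.
            PushButtonData b1Data = new PushButtonData(
                "cmdCurveTotalLength",
                "Total" + System.Environment.NewLine + " Length ",
                thisAssemblyPath,
                "TotalLength.CurveTotalLength");

            // 6. Add the new PushButton to our panel.
            PushButton pb1 = ribbonPanel.AddItem(b1Data) as PushButton;

            // 7. Tooltip shown when the user hovers over the button.
            pb1.ToolTip = "Select Multiple Lines to Obtain Total Length";

            // 8. The icon has to be defined from a URI source.
            BitmapImage pb1Image = new BitmapImage(new Uri(
                "pack://application:,,,/GrimshawRibbon;component/Resources/totalLength.png"));
            pb1.LargeImage = pb1Image;
        }
    }
}
```

The addin manifest discussed further below was also a screenshot; an Application registration of this shape is the standard form (generate your own GUID for AddInId, and pick your own VendorId):

```xml
<?xml version="1.0" encoding="utf-8" standalone="no"?>
<RevitAddIns>
  <AddIn Type="Application">
    <Name>GrimshawRibbon</Name>
    <Assembly>GrimshawRibbon.dll</Assembly>
    <AddInId>11111111-2222-3333-4444-555555555555</AddInId>
    <FullClassName>GrimshawRibbon.App</FullClassName>
    <VendorId>GRIM</VendorId>
  </AddIn>
</RevitAddIns>
```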
Just to make sure that it all makes sense here’s where our variable are coming from: Now that we have a method defined that will create the tab, all we have to do is call it when our plug-in is loaded into Revit – every time Revit starts. Here’s the full code: The last step before we can fire up Revit is making sure that we have a addin manifest file that will register our application with Revit. Just like we did in the previous tutorial let’s just add a new TextFile to our project and call it GrimshawRibbon.addin. You can reference the steps from here. Here’s the code for this addin file (it differs slightly from an ExternalCommand registration): Make sure that you are changing the addin file Build Action to Content and its Copy to Output Directory to Copy if newer. Now, if we build our project we should have two files, that we can copy to our Revit addins location. Again, if you don’t know where that is, please reference previous tutorials here. Truth is that you should really read the previous three tutorials before attempting this one. :-) Now, if you fire up Revit, you will have something like this: Adding your own tab will probably make you realize that having a whole new tab just for one tool is kind of meaningless and you will set out on the path to fill it up with things that will make your life as a Revit user that much more easier. I have been doing this for a few months now, but I am up to 15 custom tools that I created for our office. Some are really simple like the one we were working with during this tutorial series, and some are a little bit more involved. Either way, having your own toolbar in your favorite application will for sure give you at least a small reason to be proud of yourself. Thanks for sticking it out with me. It has been a long tutorial, lots of steps involved, but if you got through then its all it matters. Also, if I have missed anything – forgot to post an important image, skipped an important step – please let me know. You can download the final DLL files from here:GrimshawDT - GrimshawRibbon2016 (1483 downloads) Absolutely enjoy these posts Konrad. It’s about time someone explains the process in a easy to follow way! Kudos! Agree, I had to learn the hard way so you guys have it easy. :-) You are welcome! Konrad, I’ve been struggling months trying to figure out a proper way to include my macros in an organized panel within the ribbon. Thanks a lot and congrats for the amazing job. You are welcome! I do accept gifts of gratitude, usually in form of free drinks. :-) Hi Konrad! Great post, thank you! For Revit 2016 it work like a charm, but when I’ve tested it for Revit 2014 it doesn’t work. Have you any advice how apopt it for Revit 2014? Adam, Yes, I would recommend going to Start>Control Panel> Programs> Revit 2014 and hitting that Uninstall button on top of the window. :-) After the initial shock and nostalgia passes, believe me, life is better in year 2016 AD. But, seriously I can’t help you here. I refuse to provide backwards compatibility and localization issues help. I know, I am a horrible person. I will burn in hell, but at least it will be a modern one. Peace! Konrad, you make me laugh! I’m totally agree with you, but unfortunately, those who are creating content for Revit MEP are forced to use Revit 2014 or even worse Revit 2012 to insure backward compatibility. That’s why I was looking for a solution at least for Revit 2014. Nonetheless, thank you for your reply! 
Adam, first thing that I would check is to make sure that you are referencing the RevitAPI and RevitAPIUI dlls from the 2014 location. Then try to compile. If everything works, then you won't have to do anything else. If, however, it doesn't compile because I used some methods that were added to the Revit API after the 2014 version (I am not sure if I did, since I never built this particular plug-in for older versions), then you will have to replace them with their older equivalents, or it might not even be possible. I can't tell because I never tried it myself. Good luck!

Konrad – thanks for the great tutorial. I was able to get my first button and got it to work great. What I am struggling with is adding a second button. I'm trying to add the Legend Duplicator and I keep getting a lot of errors when I copy the Push Button Data section and try to add it for a second button… Thanks again

Please see the answer below.

Konrad, anything special when adding multiple push buttons to the same panel (separate panels seem to work OK)? I defined the PB data and image. "AddItem" will not produce another button on the panel. It will build without errors; just all buttons after that will not load. Should I be able to copy (using your code above as an example):

```csharp
// create push button for CurveTotalLength
PushButtonData b1Data = new PushButtonData(
    "cmdCurveTotalLength",
    "Total" + System.Environment.NewLine + " Length ",
    thisAssemblyPath,
    "TotalLength.CurveTotalLength");
PushButton pb1 = ribbonPanel.AddItem(b1Data) as PushButton;
pb1.ToolTip = "Select Multiple Lines to Obtain Total Length";
BitmapImage pb1Image = new BitmapImage(new Uri(
    "pack://application:,,,/GrimshawRibbon;component/Resources/totalLength.png"));
pb1.LargeImage = pb1Image;
```

and redefine a new button, then add it to the same panel? I am looking at some examples here: It doesn't look like these examples are doing anything different. Defining the push button data, then adding it to the panel. Next button: defining the push button data, then adding it to the panel. Looks like I'm missing something somewhere. If you have any thoughts on this, much appreciated. Thanks,
The issue above was solved by changing the debugger target framework to 4.5 and processor to x64. However I have one more question. Is there a way to add python scripts to these buttons. I’ve written a few python scripts that I would like to make into a button. Advice? There is a great project called pyRevit that makes this super easy. Hi, Thanks for this very useful tutorial. Finally a good example :-) But i did everything you said and when i start Revit 2018 i get the message shown in the picture? Can this be solved or has it something to do with the Revit version? Thanks! Attachment: Capture.jpg This is the first time i am visiting your website. And this post is really helpful for beginners like me. Thank you Konrad! Thanks Konrad, great work! You are welcome!
https://archi-lab.net/create-your-own-tab-and-buttons-in-revit/?replytocom=2248
February 2010, Volume 25, Number 02

CLR Inside Out - Formatting and Parsing Time Intervals in the .NET Framework 4

By Ron Petrusha | February 2010

In the Microsoft .NET Framework 4, the TimeSpan structure has been enhanced by adding support for both formatting and parsing that is comparable to the formatting and parsing support for DateTime values. In this article, I'll survey the new formatting and parsing features, as well as provide some helpful tips for working with TimeSpan values.

Formatting in the .NET Framework 3.5 and Earlier Versions

In the Microsoft .NET Framework 3.5 and earlier versions, the single formatting method for time intervals is the parameterless TimeSpan.ToString method. The exact format of the returned string depends on the TimeSpan value. At a minimum, it includes the hours, minutes and seconds components of a TimeSpan value. If it is non-zero, the day component is included as well. And if there is a fractional seconds component, all seven digits of the ticks component are included. The period (".") is used as the separator between days and hours and between seconds and fractional seconds.

Expanded Support for Formatting in the .NET Framework 4

While the default TimeSpan.ToString method behaves identically in the .NET Framework 4, there are now two additional overloads. The first has a single parameter, which can be either a standard or custom format string that defines the format of the result string. The second has two parameters: a standard or custom format string, and an IFormatProvider implementation representing the culture that supplies formatting information. This method, incidentally, provides the IFormattable implementation for the TimeSpan structure; it allows TimeSpan values to be used with methods, such as String.Format, that support composite formatting.

In addition to including standard and custom format strings and providing an IFormattable implementation, formatted strings can now be culture-sensitive. Two standard format strings, "g" (the general short format specifier) and "G" (the general long format specifier), use the formatting conventions of either the current culture or a specific culture in the result string. The example in Figure 1 provides an illustration by displaying the result string for a time interval formatted using the "G" format string and the en-US and fr-FR cultures.
Figure 1 Time Interval Formatted Using "G" Format String (VB)

```vb
Imports System.Globalization

Module Example
   Public Sub Main()
      Dim interval As New TimeSpan(1, 12, 42, 30, 566)
      Dim cultures() As CultureInfo = {New CultureInfo("en-US"),
                                       New CultureInfo("fr-FR")}
      For Each culture As CultureInfo In cultures
         Console.WriteLine("{0}: {1}", culture, interval.ToString("G", culture))
      Next
   End Sub
End Module
```

Figure 1 Time Interval Formatted Using "G" Format String (C#)

```csharp
using System;
using System.Globalization;

public class Example
{
   public static void Main()
   {
      TimeSpan interval = new TimeSpan(1, 12, 42, 30, 566);
      CultureInfo[] cultures = { new CultureInfo("en-US"),
                                 new CultureInfo("fr-FR") };
      foreach (CultureInfo culture in cultures)
         Console.WriteLine("{0}: {1}", culture, interval.ToString("G", culture));
   }
}
```

The example in Figure 1 displays the following output:

```
en-US: 1:12:42:30.5660000
fr-FR: 1:12:42:30,5660000
```

Parsing in the .NET Framework 3.5 and Earlier Versions

In the .NET Framework 3.5 and earlier versions, support for parsing time intervals is handled by the static System.TimeSpan.Parse and System.TimeSpan.TryParse methods, which support a limited number of invariant formats. The example in Figure 2 parses the string representation of a time interval in each format recognized by the method.
Two standard format strings (“g” and “G”) are culture-sensitive, while the remaining standard format strings (“c”, “t” and “T”) as well as all custom format strings are invariant. Support for parsing and formatting time intervals will be further enhanced in future releases of the .NET Framework. The example in Figure 3 illustrates how you can use the ParseExact method to parse time interval data in the .NET Framework 4. It defines an array of seven custom format strings; if the string representation of the time interval to be parsed does not conform to one of these formats, the method fails and throws an exception. Figure 3 Parsing Time Interval Data with ParseExact Method (VB) Module modMain Public Sub Main() Dim formats() As String = { "hh", "%h", "h\:mm", "hh\:mm", "d\.hh\:mm\:ss", "fffff", "hhmm" } Dim values() As String = { "16", "1", "16:03", "1:12", "1.13:34:15", "41237", "0609" } Dim interval As TimeSpan For Each value In values Try interval = TimeSpan.ParseExact(value, formats, Nothing) Console.WriteLine("Converted '{0}' to {1}", value, interval) Catch e As FormatException Console.WriteLine("Invalid format: {0}", value) Catch e As OverflowException Console.WriteLine("Overflow: {0}", value) Catch e As ArgumentNullException Console.WriteLine("No string to parse") End Try Next End Sub End Module Figure 3 Parsing Time Interval Data with ParseExact Method (C#) using System; public class Example { public static void Main() { string[] formats = { "hh", "%h", @"h\:mm", @"hh\:mm", @"d\.hh\:mm\:ss", "fffff", "hhmm" }; string[] values = { "16", "1", "16:03", '1:12', "1.13:34:15", "41237", "0609" }; TimeSpan interval; foreach (var value in values) { try { interval = TimeSpan.ParseExact(value, formats, null); Console.WriteLine("Converted '{0}' to {1}", value, interval); } catch (FormatException) { Console.WriteLine("Invalid format: {0}", value); } catch (OverflowException) { Console.WriteLine("Overflow: {0}", value); } catch (ArgumentNullException) { Console.WriteLine("No string to parse"); } } } } The example in Figure 3 displays the following output: Converted ‘16’ to 16:00:00 Converted ‘1’ to 01:00:00 Converted ‘16:03’ to 16:03:00 Converted ‘1:12’ to 01:12:00 Converted ‘1.13:34:15’ to 1.13:34:15 Converted ‘41237’ to 00:00:00.4123700 Converted ‘0609’ to 06:09:00 Instantiating a TimeSpan with a Single Numeric Value Interestingly, if these same seven time interval strings were passed to the TimeSpan.Parse(String) method in any version of the .NET Framework, they would all parse successfully, but in four cases, they would return a different result. Calling TimeSpan.Parse(String) with these strings produces the following output: Converted ‘16’ to 16.00:00:00 Converted ‘1’ to 1.00:00:00 Converted ‘16:03’ to 16:03:00 Converted ‘1:12’ to 01:12:00 Converted ‘1.13:34:15’ to 1.13:34:15 Converted ‘41237’ to 41237.00:00:00 Converted ‘0609’ to 609.00:00:00 The major difference in the TimeSpan.Parse(String) and TimeSpan.ParseExact(String, String[], IFormatProvider) method calls lies in the handling of strings that represent integer values. The TimeSpan.Parse(String) method interprets them as days. The interpretation of integers by the TimeSpan.ParseExact(String, String[], IFormatProvider) method depends on the custom format strings supplied in the string array parameter. 
In this example, strings that have only one or two integer digits are interpreted as the number of hours, strings with four digits are interpreted as the number of hours and minutes, and strings that have five integer digits are interpreted as a fractional number of seconds. In many cases, .NET Framework applications receive strings containing time interval data in an arbitrary format (such as integers representing a number of milliseconds, or integers representing a number of hours). In previous versions of the .NET Framework, it was necessary to manipulate this data so that it would be in an acceptable format before passing it to the TimeSpan.Parse method. In the .NET Framework 4, you can use custom format strings to define the interpretation of time interval strings that contain only integers, and preliminary manipulation of string data is not necessary. The example in Figure 4 illustrates this by providing different representations for integers that have from one to five digits. Figure 4 Representations of Integers with 1 to 5 Digits (VB) Module Example Public Sub Main() Dim formats() As String = { "%h", "hh", "fff", "ffff', 'fffff' } Dim values() As String = { "3", "17", "192", "3451", "79123", "01233" } For Each value In values Dim interval As TimeSpan If TimeSpan.TryParseExact(value, formats, Nothing, interval) Then Console.WriteLine("Converted '{0}' to {1}", value, interval.ToString()) Else Console.WriteLine("Unable to parse {0}.", value) End If Next End Sub End Module Figure 4 Representations of Integers with 1 to 5 Digits (C#) using System; public class Example { public static void Main() { string[] formats = { "%h", "hh", "fff", "ffff", "fffff" }; string[] values = { "3", "17", "192", "3451", "79123", "01233" }; foreach (var value in values) { TimeSpan interval; if (TimeSpan.TryParseExact(value, formats, null, out interval)) Console.WriteLine("Converted '{0}' to {1}", value, interval.ToString()); else Console.WriteLine("Unable to parse {0}.", value); } } } The example in Figure 4 displays the following output: Converted ‘3’ to 03:00:00 Converted ‘17’ to 17:00:00 Converted ‘192’ to 00:00:00.1920000 Converted ‘3451’ to 00:00:00.3451000 Converted ‘79123’ to 00:00:00.7912300 Converted ‘01233’ to 00:00:00.0123300 Handling OverflowExceptions When Parsing Time Intervals These new TimeSpan parsing and formatting features introduced in the .NET Framework 4 retain one behavior that some customers have found inconvenient. For backward compatibility, the TimeSpan parsing methods throw an OverflowException under the following conditions: - If the value of the hours component exceeds 23. - If the value of the minutes component exceeds 59. - If the value of the seconds component exceeds 59. There are a number of ways to handle this. Instead of calling the TimeSpan.Parse method, you could use the Int32.Parse method to convert the individual string components to integer values, which you can then pass to one of the TimeSpan class constructors. Unlike the TimeSpan parsing methods, the TimeSpan constructors do not throw an OverflowException if the hours, minutes or seconds value passed to the constructor is out of range. This is an acceptable solution, although it does have one limitation: It requires that all strings be parsed and converted to integers before calling the TimeSpan constructor. If most of the data to be parsed does not overflow during the parsing operation, this solution involves unnecessary processing. 
Another alternative is to try to parse the data, and then handle the OverflowException that is thrown when individual time interval components are out of range. Again, this is an acceptable solution, although unnecessary exception handling in an application can be expensive. The best solution is to use the TimeSpan.TryParse method to initially parse the data, and then to manipulate the individual time interval components only if the method returns false. If the parse operation fails, you can use the String.Split method to separate the string representation of the time interval into its individual components, which you can then pass to the TimeSpan(Int32, Int32, Int32, Int32, Int32) constructor. The example in Figure 5 provides a simple implementation:

Figure 5 Handling Nonstandard Time Interval Strings (VB)

Module Example
   Public Sub Main()
      Dim values() As String = { "37:16:45.33", "0:128:16.324", "120:08" }
      Dim interval As TimeSpan
      For Each value In values
         Try
            interval = ParseIntervalWithoutOverflow(value)
            Console.WriteLine("'{0}' --> {1}", value, interval)
         Catch e As FormatException
            Console.WriteLine("Unable to parse {0}.", value)
         End Try
      Next
   End Sub

   Private Function ParseIntervalWithoutOverflow(value As String) As TimeSpan
      Dim interval As TimeSpan
      If Not TimeSpan.TryParse(value, interval) Then
         Try
            ' Handle failure by breaking the string into components.
            Dim components() As String = value.Split( {"."c, ":"c } )
            Dim offset As Integer = 0
            Dim days, hours, minutes, seconds, milliseconds As Integer
            ' Test whether days are present.
            If value.IndexOf(".") >= 0 AndAlso value.IndexOf(".") < value.IndexOf(":") Then
               offset = 1
               days = Int32.Parse(components(0))
            End If
            ' Parse the individual components.
            hours = Int32.Parse(components(offset))
            minutes = Int32.Parse(components(offset + 1))
            If components.Length >= offset + 3 Then
               seconds = Int32.Parse(components(offset + 2))
            End If
            If components.Length >= offset + 4 Then
               milliseconds = Int32.Parse(components(offset + 3))
            End If
            ' Call the constructor.
            interval = New TimeSpan(days, hours, minutes, seconds, milliseconds)
         Catch e As FormatException
            Throw New FormatException(String.Format("Unable to parse '{0}'", value), e)
         Catch e As ArgumentOutOfRangeException
            Throw New FormatException(String.Format("Unable to parse '{0}'", value), e)
         Catch e As OverflowException
            Throw New FormatException(String.Format("Unable to parse '{0}'", value), e)
         Catch e As ArgumentNullException
            Throw New ArgumentNullException("value cannot be null.", e)
         End Try
      End If
      Return interval
   End Function
End Module

Figure 5 Handling Nonstandard Time Interval Strings (C#)

using System;

public class Example
{
   public static void Main()
   {
      string[] values = { "37:16:45.33", "0:128:16.324", "120:08" };
      TimeSpan interval;
      foreach (var value in values)
      {
         try {
            interval = ParseIntervalWithoutOverflow(value);
            Console.WriteLine("'{0}' --> {1}", value, interval);
         }
         catch (FormatException) {
            Console.WriteLine("Unable to parse {0}.", value);
         }
      }
   }

   private static TimeSpan ParseIntervalWithoutOverflow(string value)
   {
      TimeSpan interval;
      if (! TimeSpan.TryParse(value, out interval))
      {
         try {
            // Handle failure by breaking the string into components.
            string[] components = value.Split( new Char[] {'.', ':' } );
            int offset = 0;
            int days = 0;
            int hours = 0;
            int minutes = 0;
            int seconds = 0;
            int milliseconds = 0;
            // Test whether days are present.
            if (value.IndexOf(".") >= 0 && value.IndexOf(".") < value.IndexOf(":"))
            {
               offset = 1;
               days = Int32.Parse(components[0]);
            }
            // Parse the individual components.
            hours = Int32.Parse(components[offset]);
            minutes = Int32.Parse(components[offset + 1]);
            if (components.Length >= offset + 3)
               seconds = Int32.Parse(components[offset + 2]);
            if (components.Length >= offset + 4)
               milliseconds = Int32.Parse(components[offset + 3]);
            // Call the constructor.
            interval = new TimeSpan(days, hours, minutes, seconds, milliseconds);
         }
         catch (FormatException e) {
            throw new FormatException(
               String.Format("Unable to parse '{0}'", value), e);
         }
         catch (ArgumentOutOfRangeException e) {
            throw new FormatException(
               String.Format("Unable to parse '{0}'", value), e);
         }
         catch (OverflowException e) {
            throw new FormatException(
               String.Format("Unable to parse '{0}'", value), e);
         }
         catch (ArgumentNullException e) {
            throw new ArgumentNullException("value cannot be null.", e);
         }
      }
      return interval;
   }
}

As the following output shows, the example in Figure 5 successfully handles hour values that are greater than 23, as well as minute and second values that are greater than 59:

'37:16:45.33' --> 1.13:16:45.0330000
'0:128:16.324' --> 02:08:16.3240000
'120:08' --> 5.00:08:00

Application Compatibility

Paradoxically, enhanced formatting support for TimeSpan values in the .NET Framework 4 has broken some applications that formatted TimeSpan values in previous versions of the .NET Framework. The following code, for example, executes normally in the .NET Framework 3.5, but throws a FormatException in the .NET Framework 4:

string result = String.Format("{0:r}", new TimeSpan(4, 23, 17));

To format each argument in its parameter list, the String.Format method determines whether the object implements IFormattable. If it does, it calls the object's IFormattable.ToString implementation. If it does not, it discards any format string supplied in the index item and calls the object's parameterless ToString method. In the .NET Framework 3.5 and earlier versions, TimeSpan does not implement IFormattable, nor does it support format strings. Therefore, the "r" format string is ignored, and the parameterless TimeSpan.ToString method is called. In the .NET Framework 4, on the other hand, TimeSpan.ToString(String, IFormatProvider) is called and passed the unsupported format string, which causes the exception.

If possible, this code should be modified by calling the parameterless TimeSpan.ToString method, or by passing a valid format string to a formatting method. If that is not possible, however, a <TimeSpan_LegacyFormatMode> element can be added to the application's configuration file so that it resembles the following:

<?xml version ="1.0"?>
<configuration>
   <runtime>
      <TimeSpan_LegacyFormatMode enabled="true"/>
   </runtime>
</configuration>

By setting its enabled attribute to true, you can ensure that TimeSpan uses legacy formatting behavior.

Ron Petrusha is a programming writer on the .NET Framework Base Class Library team. He is also the author of numerous programming books and articles on Windows programming, Web programming and programming with VB.NET.

Thanks to the following technical experts for reviewing this article: Melitta Andersen and Josh Free
https://docs.microsoft.com/en-us/archive/msdn-magazine/2010/february/formatting-and-parsing-time-intervals
CC-MAIN-2019-47
refinedweb
2,991
50.84
Ok, I have Visual C++ 2008 and I made a new project and made an app. It's a basic window with a menu, icon, etc. Now I have the following program, for instance:

#include <iostream>
#include <cstdlib>
#include <math.h>
using namespace std;

int main()
{
    char answer;
    float c, e, x, Derivative;

    cout << "\nWhat is the coefficient, exponent, and x; " << endl
         << "Separate each with a space.\t";
    cin >> c >> e >> x;

    Derivative = ((c * e) * (pow((x), (e - 1))));

    cout << "\n Your derivative is: " << Derivative << "." << endl
         << "Thanks for using our program!";
    cout << "\nPress <0> to exit the program:\t";
    cin >> answer;

    if (answer == '0')
    {
        exit(1);
    }
    else
    {
        if (answer != '0')
        {
            main();
        }
    }
    return 0;
}

What I want to do is have this run within the window I already have made. So it will run like it does in the command prompt, but in the window: get it? Thanks in advance.
https://www.daniweb.com/programming/software-development/threads/144984/how-to-integrate-a-program-into-a-win32-app
CC-MAIN-2017-26
refinedweb
147
82.54
I'm writing this on 9/14/2016. I make note of the date because the request to get the size of an S3 bucket may seem a very important bit of information, but AWS does not have an easy method with which to collect that info. I fully expect them to add that functionality at some point. As of this date, I could only come up with 2 methods to get the size of a bucket. One could list all bucket items and iterate over all the objects while keeping a running total. That method does work, but I found that for a bucket with many thousands of items, this method could take hours per bucket.

A better method uses AWS CloudWatch metrics instead. When an S3 bucket is created, it also creates 2 CloudWatch metrics and I use that to pull the Average size over a set period, usually 1 day. Here's what I came up with:

import boto3
import datetime

now = datetime.datetime.now()
cw = boto3.client('cloudwatch')
s3client = boto3.client('s3')

# Get a list of all buckets
allbuckets = s3client.list_buckets()

# Header Line for the output going to standard out
print('Bucket'.ljust(45) + 'Size in Bytes'.rjust(25))

# Iterate through each bucket
for bucket in allbuckets['Buckets']:
    # For each bucket item, look up the corresponding metrics from CloudWatch
    response = cw.get_metric_statistics(Namespace='AWS/S3',
                                        MetricName='BucketSizeBytes',
                                        Dimensions=[
                                            {'Name': 'BucketName', 'Value': bucket['Name']},
                                            {'Name': 'StorageType', 'Value': 'StandardStorage'}
                                        ],
                                        Statistics=['Average'],
                                        Period=3600,
                                        StartTime=(now-datetime.timedelta(days=1)).isoformat(),
                                        EndTime=now.isoformat()
                                        )
    # The CloudWatch metrics will have the single datapoint, so we just report on it.
    for item in response["Datapoints"]:
        print(bucket["Name"].ljust(45) + str("{:,}".format(int(item["Average"]))).rjust(25))
        # Note the use of "{:,}".format.
        # This is a new shorthand method to format output.
        # I just discovered it recently.

Works like a charm. Awesome.

Well, this saved my day! Thank you very much! :)

This is exactly what I needed. Thanks so much, it's really awesome.

Can we send the print output by e-mail using boto.ses? I am new to Python; if you can share the code it will be a great help.

To send mail through SES, you don't need a Boto3 call. SES, once set up, is just another SMTP email server. You would sendmail to the SES host just like you would any other mail server. You have the SES host, port, id, and password.

Wow!!! this is awesome! just what I needed man....

That's great work, but I want to put this in a CSV file. How can I do it?
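For comparison, here is a minimal sketch (my own, not from the post) of the slower list-and-sum approach mentioned at the top: it pages through every object and totals the sizes, which is exactly what makes it take hours on large buckets. The bucket name is a placeholder:

import boto3

s3client = boto3.client('s3')

def bucket_size_bytes(bucket_name):
    """Sum the size of every object in the bucket (slow on big buckets)."""
    total = 0
    paginator = s3client.get_paginator('list_objects_v2')
    for page in paginator.paginate(Bucket=bucket_name):
        for obj in page.get('Contents', []):
            total += obj['Size']
    return total

# Example usage with a hypothetical bucket name:
# print(bucket_size_bytes('my-example-bucket'))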
https://www.slsmk.com/getting-the-size-of-an-s3-bucket-using-boto3-for-aws/
CC-MAIN-2020-05
refinedweb
491
69.18
Sounds fair to me. Indeed the duck-typing argument makes sense, and I have a hard time imagining any namespace conflicts or other confusion. Should this attribute return None for non-structured arrays, or simply be undefined?

On Tue, Sep 30, 2014 at 12:49 PM, John Zwinck <jzwinck at gmail.com> wrote:
> I first proposed this on GitHub:
> ; jaimefrio requested that
> I bring it to this list for discussion.
>
> My proposal is to add a keys() method to NumPy's array class ndarray.
> The behavior would be to return self.dtype.names, i.e. the "column
> names" for a structured array (and None when dtype.names is None,
> which it is for pure numeric arrays without named columns).
>
> I originally proposed to add a values() method also, but I am tabling
> that for now so we needn't discuss it in this thread.
>
> The motivation is to enhance the ability to use duck typing with NumPy
> arrays, Python dicts, and other types like Pandas DataFrames, h5py
> Files, and more. It's a fairly common thing to want to get the "keys"
> of a container, where "keys" is understood to be a sequence of values
> one can pass to __getitem__(), and this is exactly what I'm aiming at.
>
> Thoughts?
>
> John Zwinck
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at scipy.org
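For reference, a small sketch (mine, not from the thread) of what the proposed keys() would map onto today; dtype.names already behaves exactly this way:

import numpy as np

# A structured array with two named fields, and a plain numeric array.
people = np.array([("alice", 30), ("bob", 25)],
                  dtype=[("name", "U10"), ("age", "i4")])
plain = np.arange(5)

print(people.dtype.names)  # ('name', 'age') -- what keys() would return
print(plain.dtype.names)   # None for arrays without named columns

# The names are valid __getitem__ keys, which is the duck-typing goal:
for key in people.dtype.names:
    print(key, people[key])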
https://mail.python.org/pipermail/numpy-discussion/2014-September/071246.html
CC-MAIN-2021-39
refinedweb
229
72.76
Jeff Bay's Object Calisthenics is an aggregate constraint combining the following nine rules:

- Use only one level of indentation per method.
- Don't use the else keyword.
- Wrap all primitives and strings (in public API).
- Use only one dot per line.
- Don't abbreviate (long names).
- Keep all entities small.
- Don't use any classes with more than two instance variables.
- Use first-class collections.
- Don't use any getters/setters/properties.

Object Calisthenics is an exercise in object orientation. How is that? One of the core concepts of OOP is Abstraction: A class should capture one and only one key abstraction. Obviously primitive and built-in types lack abstraction. Wrapping primitives (rule #3) and wrapping collections (rule #8) drive the code towards more abstractions. Small entities (rule #6) help to keep our abstractions focused. Further, objects are defined by what they do, not what they contain. All data must be hidden within its class. This is Encapsulation. Rule #9, No Properties, forces you to stay away from accessing the fields from outside of the class.

Next to Abstraction and Encapsulation, these nine rules help Loose Coupling and High Cohesion. Loose Coupling is achieved by minimizing the number of messages sent between a class and its collaborator. Rule #4, One Dot Per Line, reduces the coupling introduced by a single line. This rule is misleading, because what Jeff really meant was the Law Of Demeter. The law is not about counting dots per line, it is about dependencies: "Only talk to your immediate friends." Sometimes even a single dot in a line will violate the law. High Cohesion means that related data and behaviour should be in one place. This means that most of the methods defined on a class should use most of the data members most of the time. Rule #5, Don't Abbreviate, addresses this: When a name of a field or method gets long and we wish to shorten it, obviously the enclosing scope does not provide enough context for the name, which means that the element is not cohesive with the other elements of the class. We need another class to provide the missing context. Next to naming, small entities (rule #6) have a higher probability of being cohesive because there are fewer fields and methods. Limiting the number of instance variables (rule #7) also keeps cohesion high. The remaining rules #1 and #2, One Level Of Indentation and No else, aim to make the code simpler by avoiding nested code constructs. After all, who wants to be a PHP Street Fighter? ;-)

Checking Code for Compliance with Object Calisthenics

When facilitating coding exercises with composite constraints, I noticed how easy it is to overlook certain violations. We are so used to conditionals or dereferencing pointers that we might not notice them when reading code. Some rules like the Law Of Demeter or a maximum size of classes need a detailed inspection of the code to verify. To check Java code for compliance with Object Calisthenics I use PMD. PMD contains several rules we can use:

- Rule java/coupling.xml/LawOfDemeter for rule #4.
- Rule #6 can be checked with NcssTypeCount. A NCSS count of 30 is usually around 50 lines of code.

<rule ref="rulesets/java/codesize.xml/NcssTypeCount">
  <properties>
    <property name="minimum" value="30" />
  </properties>
</rule>

- And there is TooManyFields for rule #7.
<rule ref="rulesets/java/codesize.xml/TooManyFields">
  <properties>
    <property name="maxfields" value="2" />
  </properties>
</rule>

The remaining rules have no PMD counterpart, so I created them myself and combined everything into the rule set object-calisthenics.xml with these rules:

- java/constraints.xml/NoElseKeyword is very simple. All else keywords are flagged by the XPath expression //IfStatement[@Else='true'].
- java/codecop.xml/FirstClassCollections looks for fields of known collection types and then checks the number of fields.
- java/codecop.xml/OneLevelOfIndentation (discussed below).
- java/constraints.xml/NoGetterAndSetter needs a more elaborate XPath expression. It is checking MethodDeclarator and its inner Block/ BlockStatement/ Statement/ StatementExpression/ Expression/ PrimaryExpressions.
- java/codecop.xml/PrimitiveObsession is implemented in code. It checks PMD's ASTConstructorDeclaration and ASTMethodDeclaration for primitive parameters and return types.

These rules are spread over two rule sets, codecop.xml and constraints.xml.

When I read Jeff Bay's original essay, the rules were clear. At least I thought so. Verifying them automatically showed some areas where different interpretations are possible. Different people see Object Calisthenics in different ways.

In comparison, the Object Calisthenics rules for PHP_CodeSniffer implement One Level Of Indentation by allowing a nesting of one. For example there can be conditionals and there can be loops, but no conditional inside of a loop. So the code is either formatted at method level or indented one level deep. My PMD rule is more strict: Either there is no indentation - no conditional, no loop - or everything is indented once: for example, if there is a loop, then the whole method body must be inside this loop. This does not allow more than one conditional or loop per method. My rule follows Jeff's idea that each method does exactly one thing. Of course, I like my strict version, while my friend Aki Salmi said that I went too far as it is more like Zero Level Of Indentation. Probably he is right and I will recreate this rule and keep the Zero Level Of Indentation for the (upcoming) Brutal version of Object Calisthenics. ;-)

Wrap All Primitives

There is no PHP_CodeSniffer rule for that, as Tomas Votruba considers it "too strict, vague or annoying". Indeed, this rule is very annoying if you use primitives all the way and your only data structure is an associative array or hash map. All containers like java.util.List, Set or Map are considered primitive as well. Samir Talwar said that every type that was not written by yourself is primitive because it is not from your domain. This prohibits the direct usage of Files and URLs to name a few, but let's not go there. (Read more about the issue of primitives in one of my older posts.)

My rule allows primitive values in constructors as well as getters to implement the classic Value Object pattern. (The rule's implementation is simplistic and it is possible to cheat by passing primitives to constructors. And the getters will be flagged by rule #9, so there is no use for them in Object Calisthenics anyway.) I agree with Tomas that this rule is too strict, because there is no point in wrapping primitive payloads, e.g. strings that are only displayed to the user and not acted on by the system. These will be false positives. There are certain methods with primitives in their signatures like equals and hashCode that are required by Java. Further we might have plain numbers in our domain or we use indexing of some sort; both will be false positives, too.

One Dot Per Line

As I said before, I use PMD's LawOfDemeter to verify rule #4.
The law allows sending messages to objects that are:

- the immediate parts of this or
- the arguments of the current method or
- objects created inside the current method or
- objects in global variables.

(Even indexing can be such a message: Ruby's Array, for example, defines a method def [](index).)

Another interesting fact is that (at least in PMD) the law flags calling methods on enums. The enum instances are not created locally, so we cannot send them messages. On the other hand, an enum is a global variable, so maybe it should be allowed to call methods on it.

The PHP_CodeSniffer rule follows the rule's name and checks that there is only one dot per line. This creates better code, because train wrecks will be split into explaining variables which make debugging easier. Also Tomas is checking for known fluent interfaces. Fluent interfaces - by definition - look like they are violating the Law Of Demeter. As long as the fluent interface returns the same instance, as for example basic builders do, there is no violation. When following a more relaxed version of the law, the Class Version of the Law Of Demeter, then different implementations of the same type are still possible. The Java Stream API, where many calls return a new Stream instance of a different class - or the same class with a different generic type - is likely to violate the law. It does not matter. Fluent interfaces are designed to improve readability of code. Law Of Demeter violations in fluent interfaces are false positives.

Don't Abbreviate

I found it difficult to check for abbreviations, so rule #5 is not enforced. I thought of implementing this rule using a dictionary, but that is prone to false positives as the dictionary cannot contain all terms from all domains we create software for. The PHP_CodeSniffer rules check for names shorter than three characters and allow certain exceptions like id. This is a good start but is not catching all abbreviations, especially as the need to abbreviate arises from long names. Another option would be to analyse the name for its camel case patterns, requiring all names to contain lowercase characters between the uppercase ones. This would flag acronyms like ID or URL but no real abbreviations like usr or loc.

Small Entities

Small is relative. Different people use different limits depending on programming language. Jeff Bay's 50 lines work well for Java. Rafael Dohms proposes to use 200 lines for PHP. PHP_CodeSniffer checks function length and number of methods per class, too. Fabian Schwarz-Fritz limits packages to ten classes. All these additional rules follow Jeff Bay's original idea and I will add them to the rule set in the future.

Two Instance Variables

Allowing only two instance variables seems arbitrary - why not have three or five? Some people have changed the rules to allow five fields. I do not see how the choice of language makes a difference. Two is the smallest number that allows composition of object trees. In PHP_CodeSniffer there is no rule for this because the number depends on the "individual domain of each project". When an entity or value object consists of three or more equal parts, the rule will flag the code but there is no problem. For example, a class BoundingBox might contain four fields top, left, bottom, right. Depending on the values, introducing a new wrapper class Coordinate to reduce these fields to topLeft and bottomRight might make sense.

No Properties

My PMD rule finds methods that return an instance field (a getter) or update it (a setter). PHP_CodeSniffer checks for methods using the typical naming conventions.
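As an illustration (my own sketch, with a made-up class, not taken from the rule set), this is the shape of code such a detector flags: methods whose whole body just reads or writes an instance field, as opposed to methods with real behaviour:

// A minimal sketch of what a getter/setter detector looks for.
public class Account {
    private int balance;

    public int getBalance() {          // flagged: merely returns the field
        return balance;
    }

    public void setBalance(int b) {    // flagged: merely updates the field
        this.balance = b;
    }

    public void deposit(int amount) {  // not flagged: actual behaviour
        balance += amount;
    }
}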
PHP_CodeSniffer further forbids the usage of public fields, which is a great idea. As we wrapped all primitives (rule #3) and we have no getters, we can never check their values. So how do we create state based tests? Mark Needham has discussed "whether we should implement equals and hashCode methods or use a HashSet." From what I have seen, most object oriented developers struggle with that constraint. Getters and setters are very ingrained. In fact some people have dropped that constraint from Object Calisthenics. There are several ways to live without accessors. Samir Talwar has written why avoiding Getters, Setters and Properties is such a powerful mind shift.

Java Project Setup

I created a repository containing the starting point for the LCD Numbers Kata: Apache Maven projects. The projects are set up to check the code using the Maven PMD Plugin on each test execution. Here is the relevant snippet from the pom.xml:

<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-pmd-plugin</artifactId>
      <version>3.7</version>
      <configuration>
        <printFailingErrors>true</printFailingErrors>
        <linkXRef>false</linkXRef>
        <typeResolution>true</typeResolution>
        <targetJdk>1.8</targetJdk>
        <sourceEncoding>${encoding}</sourceEncoding>
        <includeTests>true</includeTests>
        <rulesets>
          <ruleset>/rulesets/java/object-calisthenics.xml</ruleset>
        </rulesets>
      </configuration>
      <executions>
        <execution>
          <phase>test</phase>
          <goals>
            <goal>check</goal>
          </goals>
        </execution>
      </executions>
      <dependencies>
        <dependency>
          <groupId>org.codecop</groupId>
          <artifactId>pmd-rules</artifactId>
          <version>1.2.3</version>
        </dependency>
      </dependencies>
    </plugin>
  </plugins>
</build>

You can add this snippet to any Maven project and enjoy Object Calisthenics. The Jar file of pmd-rules is available in my personal Maven repository. To test your setup there is sample code in both projects and mvnw test will show two violations:

[INFO] PMD Failure: SampleClass.java:2 Rule:TooManyFields Priority:3 Too many fields.
[INFO] PMD Failure: SampleClass:9 Rule:NoElseKeyword Priority:3 No else keyword.

It is possible to check the rules alone with mvnw pmd:check. (Using the Maven Shell, the time to run the checks is reduced by 50%.) There are two run_pmd scripts, one for Windows (.bat) and one for Linux (.sh).

Obviously code analysis cannot find everything. On the other hand - as discussed earlier - some violations will be false positives, e.g. when using the Stream API. You can use // NOPMD comments and @SuppressWarnings("PMD") annotations to suppress false positives. I recommend using exact suppressions, e.g. @SuppressWarnings("PMD.TooManyFields"), to skip violations, because other violations at the same line will still be found. Use your good judgement. The goal of Object Calisthenics is to follow all nine rules, not to suppress them.

Learnings

Object Calisthenics is a great exercise. I used it in all of my workshops on Object Oriented Programming and in several exercises I did myself. The verification of the rules helped me and the participants to follow the constraints and made the exercise more strict. (If people were stuck I sometimes recommended to ignore one or another PMD violation, at least for some time.) People liked it and had insights into object orientation: It is definitely a "different" and "challenging way to code". "It is good to have small classes. Now that I have many classes, I see more structure." You should give it a try, too. Jeff Bay even recommends to run an exercise or prototype of at least 1000 lines for at least 20 hours.
The question whether Object Calisthenics is applicable to real working systems remains. While it is excellent for an exercise, it might be too strict to be used in production. On the other hand, in his final note, Jeff Bay talks about a system of 100,000 lines of code written in this style, where the "people working on it feel that its development is so much less tiresome when embracing these rules".
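To illustrate the state-based-testing question raised under No Properties above, here is a small sketch of my own (the class is invented): a wrapped primitive with equals and hashCode, so tests compare whole objects instead of reading fields through getters:

// Sketch: a wrapped primitive (rule #3) without accessors (rule #9).
// Tests assert on whole-object equality instead of querying state.
public final class Money {
    private final int cents;

    public Money(int cents) { this.cents = cents; }

    public Money plus(Money other) { return new Money(cents + other.cents); }

    @Override public boolean equals(Object o) {
        return o instanceof Money && ((Money) o).cents == cents;
    }

    @Override public int hashCode() { return cents; }
}

// e.g. in a test: assertEquals(new Money(30), new Money(10).plus(new Money(20)));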
http://blog.code-cop.org/2018/01/
CC-MAIN-2020-50
refinedweb
2,317
57.98
Pointer

1. A pointer is a place in memory that keeps the address of another memory location.
2. A pointer can't be initialized at definition.
3. A pointer is dynamic in nature. The memory allocation can be resized or freed later.
4. The assembly code of a pointer is different than that of an array.

Array

1. An array is a single, pre-allocated chunk of contiguous elements (all of the same type), fixed in size and location.
2. An array can be initialized at definition. For example: int num[] = { 2, 4, 5 }
3. Arrays are static in nature. Once memory is allocated, it cannot be resized or freed dynamically.
4. The assembly code of an array is different than that of a pointer.

Pointers are used for storing addresses of dynamically allocated arrays and for arrays which are passed as arguments to functions. In other contexts, arrays and pointers are two different things; see the following programs to justify this statement.

Behavior of sizeof operator

// 1st program to show that array and pointers are different
#include <stdio.h>
int main()
{
   int arr[] = {10, 20, 30, 40, 50, 60};
   int *ptr = arr;

   // sizeof(int) * (number of elements in arr[]) is printed
   printf("Size of arr[] %d\n", sizeof(arr));

   // sizeof a pointer is printed, which is the same for all types
   // of pointers (char *, void *, etc)
   printf("Size of ptr %d", sizeof(ptr));
   return 0;
}

Output:
Size of arr[] 24
Size of ptr 4

Although array and pointer are different things, the following properties of arrays make them look similar.

1) The array name gives the address of the first element of the array. Consider the following program for example.

#include <stdio.h>
int main()
{
   int arr[] = {10, 20, 30, 40, 50, 60};
   int *ptr = arr;  // Assigns address of array to ptr
   printf("Value of first element is %d", *ptr);
   return 0;
}

Value of first element is 10

2) Array elements are accessed using pointer arithmetic, and a pointer can be indexed like an array:

#include <stdio.h>
int main()
{
   int arr[] = {10, 20, 30, 40, 50, 60};
   int *ptr = arr;
   printf("arr[2] = %d\n", arr[2]);
   printf("*(arr + 2) = %d\n", *(arr + 2));
   printf("ptr[2] = %d\n", ptr[2]);
   printf("*(ptr + 2) = %d\n", *(ptr + 2));
   return 0;
}

arr[2] = 30
*(arr + 2) = 30
ptr[2] = 30
*(ptr + 2) = 30

3) Array parameters are always passed as pointers, even when we use square brackets.

#include <stdio.h>
int fun(int ptr[])
{
   int x = 10;

   // size of a pointer is printed
   printf("sizeof(ptr) = %d\n", sizeof(ptr));

   // This is allowed because ptr is a pointer, not an array
   ptr = &x;
   printf("*ptr = %d ", *ptr);

   return 0;
}
int main()
{
   int arr[] = {10, 20, 30, 40, 50, 60};
   fun(arr);
   return 0;
}

sizeof(ptr) = 4
*ptr = 10

Difference of Pointer and Array in C/C++

Array can be initialized at definition. Example: int num[] = { 1, 2, 3, 4, 5 }

Pointer is dynamic in nature. The memory allocation can be resized or freed later.

int arr[ ] = { 1, 2 };
p = arr; /* p is pointing to arr */

How will pointer p behave?

Can someone help me with an example of when to use a normal function call and when to use a call through a function pointer? What was the reason for introducing function pointers in C?
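Picking up the last question above about p = arr, a short sketch (mine, not from the thread): p can be indexed and advanced just like the array, but unlike the array name it can also be reseated to point elsewhere:

#include <stdio.h>

int main(void)
{
    int arr[] = { 1, 2 };
    int other = 99;
    int *p = arr;                  /* p points to arr[0] */

    printf("%d %d\n", p[0], p[1]); /* 1 2 - indexing works like arr */
    printf("%d\n", *(p + 1));      /* 2   - pointer arithmetic */

    p = &other;                    /* legal: a pointer can be reseated... */
    printf("%d\n", *p);            /* 99 */
    /* arr = &other;                  ...but this would not compile:
                                      an array name is not a modifiable lvalue */
    return 0;
}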
https://www.queryhome.com/tech/109210/what-is-the-difference-between-array-and-pointer-in-c
CC-MAIN-2021-10
refinedweb
519
61.16
|--==> Jurij Smakov writes: JS> Hello, JS> On Thu, 9 Dec 2004, Free Ekanayaka wrote: >> >> >>I've applied the following patch to the Makefile: JS> [patch skipped] >>Before uploading the package I'd like to test it, but I've not access >>to any sparc machine. >> >>Would somebody kindly try to build the package? JS> The solution you've proposed fails due to at least three different JS> reasons: JS> * The changes to the CC_FLAGS which the patch implements are inside the JS> ifeq ($(UNAME),SunOS) ... endif block, so they would never be triggered on JS> Debian. JS> * The attempt to determine the architecture is done by looking at the JS> result of 'uname -p' command. On Debian uname does not support a '-p' JS> option. JS> * Since you have forgot to create the 00list or 00list.sparc file in the JS> debian/patches directory, the patch you describe does not even get applied JS> during the build :-) Big oooops! Thanks for your patch. I've modified the package accordingly: I'll ask to my usual uploader to upload it, but if somebody is willing to do it please just contact me. Cheers, Free JS> The corrected patch looks like this: JS> --snip-------------------------------------------------------------------- JS> diff -urNad brutefir-1.0/Makefile /tmp/dpep.9PBbmd/brutefir-1.0/Makefile JS> --- brutefir-1.0/Makefile 2004-12-10 02:23:34.545793224 +0000 JS> +++ /tmp/dpep.9PBbmd/brutefir-1.0/Makefile 2004-12-10 02:31:29.741552488 +0000 JS> @@ -73,6 +73,9 @@ JS> ifeq ($(UNAME_M),i686) JS> BRUTEFIR_OBJS += $(BRUTEFIR_IA32_OBJS) JS> endif JS> +ifneq (,$(findstring sparc,$(UNAME_M))) JS> +CC_FLAGS += -Wa,-xarch=v8plus JS> +endif JS> BRUTEFIR_LIBS += -ldl JS> LDMULTIPLEDEFS = -Xlinker --allow-multiple-definition JS> # assume that we have alsa, osss and jack JS> --snip--------------------------------------------------------------------- JS> You can find the corresponding dpatch scriplet at [0]. In order to get JS> it applied, you need to create the debian/patches/00list.sparc file, which JS> is also given there (by the way, if you want the amd64 patch to apply as JS> well, creating debian/patch/00list.amd64 is a good idea). With these JS> changes the package builds fine in a sid chroot, build log and the JS> sparc deb are available at [0] as well. Unfortunately, I cannot test JS> the package, since I do not have sound configured. JS> [0] JS> Best regards, JS> Jurij Smakov [email protected] JS> Key: KeyID: C99E03CC
https://lists.debian.org/debian-sparc/2004/12/msg00096.html
CC-MAIN-2016-30
refinedweb
401
73.07
Created on 2010-02-01 22:12 by cgohlke, last changed 2019-04-26 17:26 by BreamoreBoy.

The last line of my previous post should actually read

python.exe setup.py bdist_wininst

Anyway, here are three files (also attached) that can reproduce the problem:

1) setup.py

from distutils.core import setup, Extension

setup(name='testpyd',
      scripts = ["testpyd_postinstall.py"],
      ext_modules=[Extension('testpyd', ['testpyd.c'],)],
      options = {"bdist_wininst": {"install_script": "testpyd_postinstall.py",
                                   "user_access_control": "auto"},})

2) testpyd.c

#include "Python.h"
PyMethodDef methods[] = {{NULL, NULL},};
void inittestpyd() {(void)Py_InitModule("testpyd", methods);}

3) testpyd_postinstall.py

#!python
import testpyd

Build the installer with python 2.6 and the Issue4120 patch applied:

python setup.py bdist_wininst

Run the installer:

dist\testpyd-0.0.0.win32-py2.6.exe

The postinstall script fails with:

Traceback (most recent call last):
  File "<string>", line 1, in <module>
ImportError: DLL load failed: The specified module could not be found.

According to Sysinternals process monitor, python26.dll is loaded from the system32 directory, the testpyd.pyd extension is found at the right place, and then the Windows path is searched in vain for MSVCR90.DLL. Tested on Windows 7 Pro 64 bit.

I thought one conclusion of the discussion on issue4120 was that any executable which embeds Python and imports MSVCR9-dependent extensions must now provide the manifest for the MSVCR9 runtimes, either embedded or as a separate file. See <> and responses. Why shouldn't this apply to wininst executables?

Another observation: importing the winsound extension in the postinstall script works even though it also depends on MSVCR90.DLL. Winsound.pyd does not have an embedded manifest. It is linked with /MANIFEST:NO. Apparently there is a difference between an "empty" manifest and no manifest.

How about a patch that does:

1) not embed any manifest into PYD files if the manifest contains only a single MSVCR90 dependentAssembly element.

2) remove the MSVCR90 dependentAssembly element from manifests embedded into PYD files if the manifest defines additional dependentAssembly elements (e.g. for MFC or Common Controls). This is essentially what the msvc9compiler_stripruntimes patch does now.

3) not embed manifests into any file (EXE, PYD, or DLL) if '/MANIFEST:NO' is defined in extra_link_args. Extension developers can then provide external manifest files.

4) not touch the default manifests embedded into EXE and DLL files.

The msvc9compiler_stripruntimes patch currently produces DLLs that cannot be used standalone, such as pythoncom26.dll from the pywin32 package (I can file a separate bug if requested). Pythoncom26.dll is meant to be placed in the system32 folder and to be used outside of a Python environment, i.e. from the Windows Scripting Host. Several pywin32 tests fail with the pythoncom26.dll built with the msvc9compiler_stripruntimes patch. Placing a MSVCR9 manifest file into the system32 folder next to the pythoncom26.dll is not an option.

A tentative patch against the Python 2.6 branch is attached. I will test it further. (1) and (4) will solve the original pywin32 wininstaller problem without changing wininst.exe. As it is now, the installer and some functionality of the pywin32 package will likely break if pywin32 is built on Python 2.6.5.
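(As a crude way to see whether a manifest, and a CRT reference, actually got embedded into a built extension, here is a quick sketch of mine; the proper tool is mt.exe, which can extract the manifest resource, while this just scans the binary for the marker strings:)

# Rough check: look for manifest markers inside a built extension.
data = open("testpyd.pyd", "rb").read()
print(b"<assembly" in data)            # any embedded manifest at all?
print(b"Microsoft.VC90.CRT" in data)   # a CRT dependentAssembly reference?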
There should be no manifest embedded into wininst, because then the cases which Issue 4120 fixed (a CRT installed into a local folder, instead of system-wide, due to limited access rights) will 'break' again: the installer can then no longer work unless there is a system-wide installation of the CRT. Options #1, #2 and #3 all sound reasonable (and #2 is the current situation). I have some doubts about option #4: it is a very specific use case, and then the whole benefit of issue 4120 is lost, because stripping runtimes would have to be removed again. Why is putting a separate manifest file next to the DLL not an option? Combined with #3 (allow extension developers to disable embedding of manifests), a separate manifest can fix the problem.

> I have some doubts about option #4: it is a very specific use case, and then the whole benefit of issue 4120 is lost

Pythoncom and pywintypes are indeed special cases: out of the 170 DLL files in my Python site-packages directory, these seem to be the only ones built with distutils. All other DLLs are apparently built without Python involvement using make, nmake, CMake, or Visual Studio, and most of them contain embedded manifests, which is the default when using nmake, CMake, or Visual Studio. Practically, making a standalone distribution of any Python 2.6/3.1 application with external DLL dependencies likely requires providing external manifest files. The issue4120 patch does not change this situation and I don't see any sane way to patch Python/distutils that could. The main benefit of the issue4120 patch, as I see it, is that PYD files produced by distutils work in a standalone distribution without any further attention. Msvc9compiler_stripruntimes_revised.patch does not change this. My reasoning for this patch (besides fixing the bdist_wininst installer issue) was to allow the popular pywin32 package to build without changes, and offer a way for other extension packages to exclude manifests from DLL files if required (apparently not that common). Alternatively one could provide a mechanism to embed specific manifests into DLLs. Is that currently possible? Then pywin32 setup.py could be fixed.

> Why is putting a separate manifest file next to the DLL not an option?

Because the pythoncom dll is currently installed into the Windows system directory. Putting manifest files there will pollute the system directory even more and possibly interfere with other system components if not done right (not tested). But again, pywin32 setup.py could be fixed to not install the DLL/manifest files into the system directory. Which Python packages other than pywin32 build DLL files via distutils? I don't know any. Can anyone provide a minimal setup.py script and C file that produces a DLL file for testing?

The pywin32 DLLs have 2 heads. They are Python extension modules as well as regular DLLs. They are built by distutils and therefore have no manifests - I think many packages use distutils to build their extension modules - it is just that they usually don't have a .dll extension. I fear that simply adding a manifest to those DLLs will put them in the same position we had before issue4120 was addressed, and these .dlls do need a way to be installed into System32 (or somewhere else on the global PATH) to function as COM servers etc. I need to experiment with this.

The bdist_wininst and DLL build issues also exist in Python 2.7b2. A patch against svn trunk is attached. The pywin32 v214 package fails as reported earlier when built with Python 2.7b2.
It installs and functions when built with this patch. I see that this issue mentions --user-access-control option. Can somebody also check if issue8870 and issue8871 are related to this one? issue8870 and issue8871 are not related to this one. There, the UAC elevation fails, here the issue is with the MS runtimes, elevation is working fine. I'm failing to get a new pywin32 out of the door due to this issue. I've spent a few hours playing with this and I think the analysis is generally correct here. The key thing is that when using distutils, the extensions end up with a manifest (albeit one without a reference to the CRT) whereas the extensions shipped with Python have no manifest at all. I agree with Martin that it seems strange the CRT fails to be used even though the CRT is obviously already loaded, but it seems to be a fact. I can't find much on this, but suspect it relates to the different "activation contexts" in use and how the activation contexts are designed to allow side-by-side loading of DLLs; Windows doesn't know if the version of the DLL already loaded is suitable. I also guess that the fact the DLL has *any* manifest means they use a more strict interpretation of things (ie, refuse to use already loaded ones) whereas a dll with no manifest gets given a little more slack. I can confirm that with the attached patch, pywin32 builds and installs fine on a machine without the CRT installed globally - so I'm +1 on this patch with one caveat: The check for '.pyd' should either be expanded to include '.dll', or else the check should just use the 'target_desc == CCompiler.EXECUTABLE' condition already used. I'm happy to make the change once I get some feedback and/or guidance about where I should check this in - I believe it is too late for python 2.6 which is a shame... Thinking more about this, I think the approach of this patch is more complex than necessary. I think a better patch would be one which *unconditionally* removes the manifest from extension modules. For maximum flexibility though, we should probably allow a hook so setup.py can specify the name of (or contents of) a manifest file to use when linking with the default being None. The proposed patch was meant to be backwards compatible. Unconditionally removing the whole assembly/manifest from extensions could break extensions that have additional dependencies, such as MFC or Common Controls. PyQt4 extensions for example depend on Common Controls. Btw, there is a discussion at <> and <> about the need to put the msvcr90 manifest back into the psycopg.pyd file in order to work properly under Apache's mod_wsgi, which is linked against msvcrt. I'm wondering if, in practice, extensions which need a manifest can have the manifest being generated completely by the linker - ie, I expect that in most cases where something other than the CRT or MFC is needed in the manifest, the author will want to specify an explicit manifest file rather than one generated by the linker. And worse, those extensions are going to be screwed anyway - the fact the manifest remains at all will mean they probably fail to load for reasons already discussed in this bug. IOW, not having a manifest at all is about the only way to ensure the module will be loaded correctly. Re adding a manifest back in - I've struck the exact same problem with pywin32 - any DLLs which are "entry points" into Python need to have a manifest - eg, pythoncomxx and a few other pywin32 ones. 
This left me with a dilemma for pythoncom - as it is both an "entry-point" (ie, COM loads that DLL) and a regular Python module, I simultaneously needed a manifest (to work when loaded by COM) and needed to *not* have one (to work when loaded by Python). My solution has been to introduce another DLL with a manifest and have COM servers register using that. This strategy seems to be working well in all my tests. Actually, PyQt4 was not a good example since it is not build using distutils. Pywin32 and wxPython extensions are the only ones on my system specifying a dependency on Common Controls or MFC. No dependencies other than CRT, MFC, and Common Controls are specified in any assemblies of over 1000 pyd and dll files in my Python27 directory. Just to follow up. My case is an application that is almost all statically linked, but it loads in the python library, and at runtime it needs to load the tables module, and as distributed by Python, I get the load-time error on Windows. Using Christoph Gohlke's exe's works great for me, but I cannot redistribute due to the linking in of szip for tables and MKL for numpy. So I build my own version of tables without szip and numpy without MKL, but that failed until I applied Christoph's patch on Python. (I also have to patch tables' setup.py not to include szip.dll and zlib1.dll in dll_files.) So the result is that with Christoph's patch, all is working for me. I hope it (or something similar) makes it into 2.7. John Cary This is biting people (including me :) so I'm going to try hard to get this fixed. One user on the python-win32 mailing list resorts to rebuilding every 3rd party module he uses with this patch to get things working again (although apps which use only builtin modules or pywin32 modules - which already hacks this basic functionality in - don't see the problem.) I'm attaching a different patch that should have the same default effect as Christoph's, but also allows the behaviour to be overridden. Actually overriding it is quite difficult as distutils isn't setup to easily allow such compiler customizations - but at least it *is* possible. To test this out I hacked both the pywin32 and py2erxe build processes to use those customizations and it works fine and allows them both to customize the behaviour to meet various modules' requirements. Finally, this whole thing is still fundamentally broken for extensions which need a manifest (eg, to reference the common controls or the requestedExecutionLevel cruft). These extension will still need to include the CRT reference in their manifest and thus will need a copy of the CRT next to each of them. I *think* this also means they basically get a private copy of the CRT - they are not sharing the CRT with Python, which means they are at risk of hitting problems such as trying to share FILE * objects. In practice, this means such modules are probably better of just embedding the CRT statically. This is the route I've taken for one pywin32 module so the module can have a manifest and still work without a complete, private copy of the CRT needing to live next to it. But even with that problem I think this patch should land. It would be great if someone can review and test this version of the patch and I'll check it in. Can the patch include regression tests? I'm reluctant to commit to adding test infrastructure for the distutils build commands - if I've missed existing infrastructure and adding such tests would actually be relatively simple, please educate me! 
Or if someone else would like to help with the infrastructure so I can test just this patch, that would be awesome. But I don't think this fix should block on tests given it can easily be tested and verified manually.

There is already a test for manifest things in Lib/distutils/tests/test_msvc9compiler.py, and higher-level tests for building extension modules in Lib/distutils/tests/test_build_ext.py and Lib/distutils/tests/test_install.py. Let me add that I don't know the MS toolchain at all, but if you tell me what a test should do (e.g. compile a C extension, open some compiled file and look for some bytes) I can write a test.

My apologies Eric - I had completely overlooked those tests. Attaching a new patch with a test. Note the existing test doesn't actually perform a build so the new test also doesn't, but it does check the core logic (ie, that a manifest with only the msvcrt reference gets it scrubbed).

> Note the existing test doesn't actually perform a build so the new
> test also doesn't, but it does check the core logic

Sounds good to me.

+ def manifest_setup_ldargs

I'd make all new methods private ones (i.e. leading underscore).

> an embedded manifests

Typo: extra s

> return None if not temp_manifest else (temp_manifest, mfid)

Using a ternary expression runs afoul of PEP 291: distutils should remain compatible with 2.3. (I'm not sure it is right now, we use modern unittest methods in tests and all, but it is no reason for making it worse in new code :) Your patch will also need an entry in Misc/NEWS; at first glance, there is no documentation file to edit. Will you port the patch to packaging in 3.3? I can do it if you don't have the time, but I'm not set up yet to test on Windows, so I can ask you to test a patch. Also, for the distutils2 backport (which I can do too), we would need to run tests with all versions from 2.4 to 3.3.

Thanks for the review. One note:

| + def manifest_setup_ldargs
| I'd make all new methods private ones (i.e. leading underscore).

They aren't strictly private and are designed to be overridden by subclasses (although in practice, subclassing the compiler is much harder than it should be, so pywin32 monkey-patches the instance.) This is actually the entire point of my updated patch - to give setup.py authors some level of control over the manifest behaviour. I do intend forward-porting to 3.3 and also to check if it is too late for 3.2 (a quick check before implied it might be OK, but I'm not sure).

> They aren't strictly private and are designed to be overridden by
> subclasses

OK.

> I do intend forward-porting to 3.3 and also to check if it is too late
> for 3.2 (a quick check before implied it might be OK, but I'm not sure)

2.7 and 3.2 are open for bug fixes, as indicated by the versions field of this bug (it's actually a matrix: component distutils + versions 2.7, 3.2, 3.3, component distutils2 + version 3.3 == packaging and distutils2 + third-party == the distutils2 backport :)

New version of the patch with the small tweaks requested plus a NEWS entry.

Looks good. Style nit: I don't put backslashes at end-of-lines in parens (in one re.compile call you have that). Also, I use -- where I can't use —.

New changeset fb886858024c by Mark Hammond in branch '2.7':
Issue #7833: Ext. modules built using distutils on Windows no longer get a manifest

New changeset 9caeb7215344 by Mark Hammond in branch '3.2':
Issue #7833: Ext.
modules built using distutils on Windows no longer get a manifest

New changeset 3073ef853647 by Mark Hammond in branch 'default':
Issue #7833: Ext. modules built using distutils on Windows no longer get a manifest

I pushed the changes to 2.7, 3.2 and 3.3. I'm happy to help with distutils2/packaging but I'll need to do that later once I work out where to start :) Therefore I'm not yet closing this issue.

To port the patch to packaging, go into your CPython 3.3 checkout and edit Lib/packaging/compiler/msvc9compiler.py (and its test :). To port the patch to distutils2, clone and edit distutils/compiler/msvc9compiler.py (same :). Test with Python 2.4, 2.5, 2.6 and 2.7. Then hg update python3, hg merge default, test with Python 3.1, 3.2 and 3.3. Then you can push :) If you don't have the necessary Pythons or roundtuits, I'll port the patch when I get a Windows VM. There's plenty of time before Python 3.3 is out.

s/distutils/distutils2/ !

Mark: Possibly a stupid question, but in your commit (see snippet below), why is the processorArchitecture hard coded to "x86"? Is it appropriate to replace this with "amd64" for 64-bit builds, or should it always be "x86"?

+    <dependentAssembly>
+        <assemblyIdentity type="win32" name="Microsoft.VC90.CRT"
+            version="9.0.21022.8" processorArchitecture="x86"
+
+        </assemblyIdentity>
+    </dependentAssembly>

ack - that is a really good point. IIRC it can be "*". I'll try and look at this over the next day or 2.
https://bugs.python.org/issue7833
CC-MAIN-2020-34
refinedweb
3,509
64.3
The trick to this program is understanding what is it asking and getting it to work as it wants. In my solution I created a second function that holds all the numbers plus the probability function within it. I think this is one of the quickest solution to this exercise. The output is truncated to that of a long int, as I think the book format is quite ugly. Another way I had though of doing this originally was to calculate the odds of 1 in 47, store it. Calculate the 1 in 27 odds, store it, then find the product of those all in their own functions. That idea seemed clumsy when coding it and all roads were pointing to the solution provided for me. Many state lotteries use a variation of the simple lottery portrayed by Listing 7.4. In these variations you choose several numbers from one set and call them the field numbers. For example, you might select 5 numbers from the field of 1–47). You also pick a single number (called a mega number or a power ball, etc.) from a second range, such as 1–27. To win the grand prize, you have to guess all the picks correctly. The chance of winning is the product of the probability of picking all the field numbers times the probability of picking the mega number. For instance, the probability of winning the example described here is the product of the probability of picking 5 out of 47 correctly times the probability of picking 1 out of 27 correctly. Modify Listing 7.4 to calculate the probability of winning this kind of lottery. #include <iostream> using namespace std; // Note: some implementations require double instead of long double long double probability(unsigned numbers, unsigned picks); long int TotalOdds(int, int, int, int, long double(*p)(unsigned, unsigned)); int main() { cout << "Welcome to the Powerball!\n"; cout << "Odds of winning are one in " << TotalOdds(47, 5, 27, 1, probability); cout << "\nThanks for playing!" << endl; return 0; } // the following function calculates the probability of picking picks // numbers correctly from numbers choices long double probability(unsigned numbers, unsigned picks) { long double result = 1.0; // here come some local variables long double n; unsigned p; for (n = numbers, p = picks; p > 0; n--, p--) result = result * n / p; return result; } long int TotalOdds(int FirstSet, int x, int PowerBall, int y, long double(*p)(unsigned, unsigned)) { long double odds; odds = p(FirstSet, x)*p(PowerBall, y); return odds; }
https://rundata.wordpress.com/tag/lottery/
CC-MAIN-2017-26
refinedweb
417
60.65
Need help setting up Application and Component Monitor for a servicekatarina Apr 19, 2013 6:05 PM Hello, I am far from being a computer geek and am just very slowly starting to be able to "read" Orion but I need to set up monitoring of MSMQ services on Windows 2008 server so I can set up an alert based on the count of the items in the MSMSQ queue. (Using APM 4.2) There is actually Application Monitor using Performance Counter Monitor already in place on it but the Performance Counter shows status down with the error Bad input parameter. Category does not exist. Considering that Performance Counter Monitors read Windows Performance Counter data using Remote Procedure Calls (RPC) instead of Windows Management Instrumentation (WMI). I am assuming I need to check if it exists in in the root/CIMV2 namespace. I am not quite sure if I need to look just for the variable defined in the Category field of the Performance Counter Monitor or do I need to have more than that in there. But I may be completely off. Any direction and suggestion you may have will be, as always, very much appreciated. Thanks for your time! Re: Need help setting up Application and Component Monitor for a servicePetr Vilem Apr 22, 2013 2:56 AM (in response to katarina) If you want to use RPC method (Performance Counter Monitor), then you can use perfmon on target machine to make sure that category "MSMQ Queue" exists and gives you information you are interested in (e.g. value of counter "Messages in Queue") for instance with desired queue name. Example of Performance Counter Monitor configuration: Category: MSMQ Queue Counter: Messages in Queue Instance: Computer Queues If you want to use WMI access (WMI Monitor), then you can use wbemtest utility on target machine to ensure that query "SELECT * FROM Win32_PerfRawData_msmq_MSMQQueue" gives you some result. Example of WMI Monitor configuration: WMI Namespace: root\CIMV2 Query: SELECT MessagesinQueue FROM Win32_PerfFormattedData_msmq_MSMQQueue WHERE Name='Computer Queues' Both methods should give you the same information. There should be wizards for both these monitor types that can help you to start monitoring using browsing method. Re: Need help setting up Application and Component Monitor for a serviceaLTeReGo Apr 23, 2013 2:43 PM (in response to katarina) I would recommend upgrading to SAM at your earliest convenience. SAM 5.5 is the current release and SAM 6.0 is already in beta, so APM 4.2 is actually fairly old at this point. To the heart of your question though, have you tried using the following application templates for monitoring MSMQ? They're included in SAM 5.5 so you should see them listed once you upgrade. Microsoft Message Queuing (Performance) Microsoft Message Queuing Events Re: Need help setting up Application and Component Monitor for a servicekatarina May 1, 2013 6:00 PM (in response to aLTeReGo) Thank you guys. I have not been able to work on this in the last week but will get back to it and will keep you posted. Thanks again for suggestions!
Java Notes

javax.swing.Timer

A javax.swing.Timer object calls an action listener at regular intervals or only once. For example, it can be used to show frames of an animation many times per second, repaint a clock every second, or check a server every hour. Java 2 added another class by the same name, java.util.Timer. To prevent ambiguous references, always write the fully qualified name.

To Use a Timer

import java.awt.event.*; // for ActionListener and ActionEvent

- Create a Timer object, giving a time interval in milliseconds and an action listener. This would usually be done once in a constructor. The prototype usage is:

javax.swing.Timer yourTimer = new javax.swing.Timer(int milliseconds, ActionListener doIt);

The following example creates a Timer that calls an anonymous action listener to repaint a panel every second (1000 milliseconds).

javax.swing.Timer t = new javax.swing.Timer(1000, new ActionListener() {
    public void actionPerformed(ActionEvent e) {
        p.repaint();
    }
});

- Start the timer by calling the timer's start method. For example:

t.start();

To Stop a Timer

Call the timer's stop method. For example:

t.stop();

See the discussion below to see when it's important to stop a timer in an applet.

Fixed interval versus fixed rate. Starting and Stopping Animation in an Applet.

Writing start and stop methods for Applets

Starting and stopping a timer is very simple. For example, just add the following lines to your applet (assuming t is a Timer):

public void start() { t.start(); }
public void stop()  { t.stop(); }

Additional Timer methods

t.setRepeats(boolean flag);
t.setInitialDelay(int initialDelay);
if (t.isRunning()) ...

And others.
Idea by martinkoch · python · scripting · file path

I use scripted parameters from a configuration file, similar to what is explained in many blog posts. The location of the script is passed through a public parameter of the type 'file(existing)'. This parameter sometimes contains an absolute path, when the config file is stored somewhere on the filesystem, but sometimes part of it is an FME parameter such as $(FME_MF_DIR), when the config file is stored close to the workspace. Also, on Server, I deliberately use a server parameter such as $(FME_SHAREDRESOURCE_DATA). To cope with this, I have had to hack a quite nasty looking bit of Python:

import fme
import ConfigParser
import os
import re

fmeParam = re.compile("\A\$\(FME_[A-Z]+_[A-Z]+\)")
fmeParamName = re.compile("FME_[A-Z]+_[A-Z]+")

def findAbsolutePath(fileParamName):
    paramValue = fme.macroValues[fileParamName]
    fmeParamList = fmeParam.findall(paramValue)
    if fmeParamList != []:
        fmeParamKey = fmeParamName.findall(fmeParamList[0])[0]
        fmeBasePath = fme.macroValues[fmeParamKey]
        relFilePath = fmeParam.split(paramValue)[1]
        absFilePath = os.path.join(fmeBasePath, relFilePath)
    else:
        absFilePath = paramValue
    return absFilePath

filePath = findAbsolutePath("conffile")
conf = ConfigParser.ConfigParser()
conf.optionxform = lambda option: option
conf.read(filePath)
return conf.getint("Parameters", "MyBeautifulNumber")

It is testing for the presence of FME parameters, filtering out the key I need to get them from macroValues, and gluing it all together into an absolute path where I can find my config file. Could there be a function like fme.getAbsolutePath('[key_of_filepath-parameter]')?

Kind regards, Martin

fmemeister commented: This has been released as part of FME 2017.0. Download it and give it a spin.

Hi, I tried to implement @david_r's solution in a PythonCaller; sadly the resulting workspace won't run and complains about an undefined macro %s. I needed the solution since I read a parameter set containing FME parameter references and use one of them to pass to a FeatureReader (for Windows paths) as a filename. If I pass one of the parameter file strings, it complains that it cannot find the file, although the file verily exists.

fme.resolveFMEMacros(value)
fme.getAbsolutePath(fileName)

Hi, perhaps I am missing something here, but when I run the following in a Startup Python Script:

print "The value of FME_MF_DIR is $(FME_MF_DIR)."

The output I get is:

The value of FME_MF_DIR is C:\Users\SWu\AppData\Local\Temp/.

What build of FME is being run that doesn't auto-evaluate the $(FME_MF_DIR)? Thanks, Stephen

Yes, I've also noticed that; FME tries to be pretty clever about macros in hard-coded strings in your code. But if you read a string containing an FME macro from e.g. a configuration file, it does not get expanded automatically. That is a potential use case for the function above.

daleatsafe commented: Feels like we should include @david_r's functions in the Python FME API. I'll discuss with the appropriate authorities.
Hi, I've written the following two helper functions to accomplish exactly this:

import re
import os.path

def expand_fme_macros(input):
    # Expands all FME macro values in input string
    matches = re.findall("\$\((\w+)\)", input)
    for group in matches:
        input = input.replace("$(%s)" % group, FME_MacroValues[group])
    return input

def get_fully_qualified_path(input):
    # Expands input string into an absolute path; a bare file name
    # is resolved against the workspace directory (FME_MF_DIR)
    input = expand_fme_macros(input)
    if not os.path.dirname(input):
        input = os.path.join(FME_MacroValues['FME_MF_DIR'], input)
    return os.path.abspath(input)

Use expand_fme_macros() for strings in general, and get_fully_qualified_path() for path and file names. Usage, given that the parameter value "FME_MF_DIR" = "c:\my_data_dir\", is shown in the sketch below. This works for any value found in FME_MacroValues.

Hope this helps. David

Thanks, quite an elegant solution for replacing every parameter-like value surrounded by $( ). I will probably use your findall and .replace("$(%s)") to tidy up at least the double regex and list indices in my code. It still leaves the clear sign that more people are dealing with this matter using regex wizardry. That makes me a bit uneasy, as there might be a real value which matches the regex but is no FME parameter, making the workspace barf with a "key not found" error or something alike when searching the fme.macroValues dictionary. I would like Safe Software to take that code responsibility. I found the cause of this problem this afternoon and cooked up the above code, but I have to get used to it a little before I trust it in a high-load production environment. Kind regards, Martin (edit: Made my reply vegetarian by having 'values meet' instead of 'meat'.)

I agree that expand_fme_macros() could use some error checking:

def expand_fme_macros(input):
    # Expands all FME macro values in input string
    matches = re.findall("\$\((\w+)\)", input)
    for group in matches:
        try:
            input = input.replace("$(%s)" % group, FME_MacroValues[group])
        except:
            pass
    return input

This way it'll just silently ignore any unknown macros $(...). David
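To show how the two helpers behave, here is a small hedged usage sketch; the macro table below is only a stand-in for the real FME_MacroValues dictionary that FME injects at runtime, and the paths and file names are made up:

import os.path

# Hypothetical stand-in for the dictionary FME provides at runtime
FME_MacroValues = {"FME_MF_DIR": r"c:\my_data_dir"}

print(expand_fme_macros("$(FME_MF_DIR)"))
# -> c:\my_data_dir

print(get_fully_qualified_path(r"$(FME_MF_DIR)\config.ini"))
# macro expanded, then made absolute -> c:\my_data_dir\config.ini

print(get_fully_qualified_path("config.ini"))
# bare file name resolves against FME_MF_DIR -> c:\my_data_dir\config.ini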
How This Works, in Code

Let's imagine you want to display a list of products. You've got a backend API that answers to GET /products, so you create a Redux action to do the fetching, in productActions.js. One thing to watch out for: fetch doesn't reject on HTTP errors, which is really confusing if you're used to something like axios. Read here for more about fetch and error handling.

Redux actions that fetch data will typically be written with begin/success/failure actions surrounding the API call. This isn't a requirement, but it's useful because it lets you keep track of what's happening – say, by setting a "loading" flag true in response to the Begin action, and then false after Success or Error. Here's what the reducer side looks like, in productReducer.js:

export default function productReducer(state = initialState, action) {
  switch (action.type) {
    // ... the FETCH_PRODUCTS_BEGIN and FETCH_PRODUCTS_SUCCESS cases go here ...
    case FETCH_PRODUCTS_ERROR:
      // The request failed, but it did stop, so set loading to "false".
      // Save the error, and we can display it somewhere
      // Since it failed, we don't have items to display anymore, so set it empty.
      // This is up to you and your app though: maybe you want to keep the items
      // around! Do whatever seems right.
      return { ...state, loading: false, error: action.payload.error, items: [] };
    default:
      // ALWAYS have a default case in a reducer
      return state;
  }
}

Finally, we just need to pass the products into a ProductList component that will display them, and also be responsible for kicking off the data fetching.

ProductList.js

import React from "react";
import { connect } from "react-redux";
import { fetchProducts } from "productActions";

class ProductList extends React.Component {
  render() {
    // ... render the items, a loading indicator, or the error ...
  }
}

Kicking off the fetch from the component gives you an easy opportunity to make those decisions. As always, there are tradeoffs. Magic data loaders are magic (harder to debug, harder to remember how/when/why they work). They might require more code instead of less.

Many Ways To Skin This Cat

A few things you could try:

- Move the API call out of the Redux action and into an api module, and call it from the action. (better separation of concerns)
- Have the component call the API module directly, and then have it dispatch the action when that data is returned.
- Pull the componentDidMount hook out into its own higher-order wrapper component.

Like I said, there are a lot of ways to do this :)

Action Steps

Try implementing this yourself! Practice is the best way to learn. If you don't have your own backend server, just use Reddit – their URLs will return JSON if you append ".json" to the end. Make a tiny React + Redux app that displays the posts from /r/reactjs.

Where and When to Fetch Data With Redux was originally published by Dave Ceddia at Dave Ceddia on February 02, 2018.

Top comments (3)

Nice post! This is something I've struggled with. Thanks for your perspective.

Great post, thanks!

Great article! Thanks! I think you'll like this alt to redux as well. dev.to/chadsteele/eventmanager-an-...
Hi together,

I have started to develop a bean service. To get started with SwitchYard I took a look at the examples. In all these examples WSDL files are already available, also in the bean-service. In the documentation I found how to refer to the WSDL in the switchyard.xml and also how to get the switchyard.xml, but never how I can generate the WSDL from the bean-service source files. Is there a way to generate the WSDL / XSD for these services, or do I have to implement them myself?

Regards, Michel

Hey Michel,

I find wsprovide (located in the AS7 bin/ directory) to be pretty useful in this type of situation. Using bean-service as an example:

cd bean-service
$AS_HOME/bin/wsprovide.sh -w --classpath target/classes org.switchyard.quickstarts.bean.service.OrderService

Run that and check the output directory for the generated WSDL and XSD. Let me know if that doesn't satisfy your need.

cheers, keith

Hello Keith,

Thank you very much! This is a good entry point to generate the resources I need. After a little bit of work by hand (correcting namespaces and adding the service in the composite tag in the switchyard.xml), my service now runs and seems to work correctly.

Greets, Michel
Getting reusable API to "discover" the DB connection it needs at runtime

- From: "jdonnici" <jdonnici@xxxxxxxxxxxxx>
- Date: Fri, 22 Sep 2006 11:11:00 -0600

We're working on an application that has a 'common' project that hosts a variety of 'general desktop app' APIs - user preferences, the non-UI code the app's services use, etc. As part of that logic, it makes use of another new project which hosts the 'licensing' system. This licensing project contains an ORM wrapper around the licensing database, as well as the licensing/permissions APIs and logic.

In this app, the database connections will come from the 'common' project (because they're specified by the user's preferences). However, the licensing project also needs that connection so that its implementations of the licensing APIs can access the current licensing database. We don't want the licensing project to deal with WHERE it goes for a connection, because other apps down the road will use this same licensing library. For example, this app uses a preference setting while a web app would specify these in a config file.

So the question is how to design it so that the licensing system gets the connection details it needs at runtime, using logic that will work down the road for other types of apps.

We've considered having the licensing system provide an IConnectionProvider interface. Any app that's going to use the licensing system would provide an implementation of that:

class DesktopConnectionProvider : IConnectionProvider
{
    public SqlConnection GetConnection() { ... }
}

Then the code in the licensing system would need some place to go to get a connection:

public class ConnectionManager
{
    public static IConnectionProvider ConnectionProvider;
}

At startup, the app would "initialize" or "register" its connection provider implementation with the licensing system:

Licensing.ConnectionManager.ConnectionProvider = new DesktopConnectionProvider();

I know this would get the job done and work just fine. However, I'm not sure I care for the need to initialize or register something explicitly at startup. I'm wondering if there's a more elegant approach that would give the licensing system more "runtime discoverability" of the connection details it will need.

I'm curious what thoughts or ideas others might have. Thanks.
Data Formats Supported by the PyNIO module

This page provides more detailed information about each of the PyNIO supported data formats. The common interface to all the file formats supported by PyNIO is discussed in the PyNIO reference document and will (mostly) not be repeated here. This document focuses on the individual features of each format, their differences, and what PyNIO does to translate them in a more or less uniform manner. The most detailed discussion involves the GRIB format, because of all the supported formats, it has required the most work to recast into a NetCDF-like model.

The data formats currently supported by PyNIO are these:

- NetCDF - Network Common Data Format (.nc, .cdf, or .netcdf extensions)
- HDF - Hierarchical Data Format (version 4) - Scientific Data Sets (SDS) only (.hdf or .hd extensions)
- HDF-EOS - Hierarchical Data Format Earth Observing System (version 2) - GRID and SWATH only (.hdfeos, .he2, or .he4 extensions)
- CCM - Community Climate Model History Tape Format (.ccm extension)
- GRIB - Gridded Binary (version 1) or General Regularly-distributed Information in Binary Form (version 2) (.grb, .grib, .grb1, .grib1, .grb2, or .grib2 extensions) (GRIB2 support available in version 1.2.0 or later.)

NetCDF - Network Common Data Format

Online documentation for NetCDF is available from Unidata. PyNIO offers read and write access to existing NetCDF files as well as the ability to create NetCDF files from scratch. Almost all features of NetCDF's classic model (versions 3 and earlier) are supported by PyNIO, because it was created using the NetCDF 3 data model as a pattern.

As of version 1.2.0, PyNIO offers beta-level support for the NetCDF 4 "classic" model. This model constrains the interface to the constructs provided by NetCDF 3 and earlier. However, the underlying file format, like that of all NetCDF 4 files, is HDF 5. Files written in this format can take advantage of the built-in file compression available in HDF 5, and limitations on file and variable size are for practical purposes eliminated. Existing files in this format are recognized automatically and handled transparently. To create a file with this format, set the Format option to "NetCDF4Classic". You can turn on compression using the CompressionLevel option. More information about the options is given below.

Standard NetCDF 4 files are NOT currently supported. If a file contains any NetCDF4-only features, you can expect core dumps if you open it in read-only mode and possible file corruption if you open an existing file in write mode.
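Before getting into the options, here is a minimal sketch of the common interface; the file and variable names are hypothetical, and any of the supported extensions would work the same way:

import Nio

f = Nio.open_file("sample.nc", "r")   # format is inferred from the extension
print(f.dimensions)                   # dictionary of dimension names and sizes
print(f.variables.keys())             # names of the variables in the file
data = f.variables["T"][:]            # read a variable into a NumPy array
f.close()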
NetCDF file format options

- CompressionLevel (available in version 1.2.0 or later) - Specify the level of data compression as an integer in the range 0 through 9. Increasing values indicate greater compression. Compression is lossless. There are tradeoffs between the time spent compressing the file versus the amount of compression achieved. Informal tests show that a compression level 9 file is only a few percent smaller than a compression level 5 file, but it requires 4 or 5 times the amount of time to create. (This option is ignored unless the Format option is set to NetCDF4Classic.)

- Format - This option has an effect only for files opened in "create" mode. It currently has four valid values, two of which are synonyms. The default value, "Classic", indicates that a standard NetCDF file should be created. Standard NetCDF files are more limited with respect to file size. Assuming the underlying file system has support for large files, the total size can exceed 2 GB, but there are severe restrictions regarding the number of large variables and the order in which they are written. In general, because it is more universal, the "classic" format is recommended if the total file size will be less than 2 GB. Specifying either "LargeFile" or "64BitOffset" results in the creation of a NetCDF file with support for larger variables and a theoretically much larger total size (about 9.22e+18 bytes). Each fixed-size variable, or each 'record' (element of the first dimension) of a variable with an unlimited dimension, can have a size of up to 4 GB. Assuming the underlying file system has support for large files, PyNIO reads NetCDF files in either the classic or the 64-bit offset format. For more detailed information, see the NetCDF documentation on large file support. In version 1.2.0 or later, you can specify "NetCDF4Classic" to create a file using the NetCDF 4 classic model format. Use the CompressionLevel option to enable compression. The HDF 5 format also removes virtually all restrictions on file and individual variable size. PyNIO version 1.2.0 provides beta-level support for this format because NetCDF 4 and the release of HDF 5 that it depends on are both still in the beta-testing phase of development. It should probably not be used for mission-critical file creation, and it is not yet available on every system that PyNIO runs on.

- HeaderReserveSpace - This option has an effect only for files opened for writing. It reserves extra space within the header of a NetCDF file. Its value is an integer that specifies the number of bytes to reserve in addition to the bytes used for the currently defined dimensions, variables, and attributes. This option can improve performance when it is likely that new dimensions, variables, or attributes will be added to an already large file.

- MissingToFillValue - If set to its default value, True, this option causes a "virtual" _FillValue attribute to be created for any variable that has the attribute missing_value but not _FillValue. The purpose is to more gracefully handle files that use the COARDS-compliant missing_value instead of _FillValue to indicate missing data. Note that if a variable in a file has both a missing_value and a _FillValue, or if it has neither, the option does nothing. The virtual _FillValue attribute is not actually part of the NetCDF file, but only appears to be from within PyNIO. However, if the file is opened for writing and you assign to the attribute, it becomes an actual attribute.

- PreFill - This option has an effect only when a file is opened for writing. It is logical-valued with a default value of True. If set False, PyNIO alters the standard behavior of the NetCDF library such that variable element locations in the file are not "pre-filled" with the missing (fill) value associated with the variable. This can noticeably improve performance when writing large datasets. However, if you set this option False, you are responsible for ensuring that all the elements of the variables you have defined are assigned a valid value.

- SafeMode - This logical-valued option may be set for any NetCDF file. Its default value is False, meaning that PyNIO only closes a NetCDF file when the close is invoked. If set to True, PyNIO closes the file after each operation it performs, including defining a dimension or variable, adding or modifying an attribute, or reading or writing data from any variable. This helps ensure the file's integrity for writable files if the close method does not get called for some reason. However, it may result in loss of performance, particularly when adding new variables, dimensions, or attributes to files that already have large variables defined. This is because each time a new element is defined, all existing data in the file must be moved to make room for the metadata of the new element in the header. One way to mitigate the performance loss is to use the HeaderReserveSpace option when first creating the file to make room in the header for subsequently defined NetCDF elements.
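Putting these options together, here is a hedged sketch of creating a compressed NetCDF 4 classic file; the option names come from the list above, while the file, dimension, and variable names are made up:

import Nio
import numpy as np

opt = Nio.options()
opt.Format = "NetCDF4Classic"    # HDF 5-based classic-model file
opt.CompressionLevel = 5         # lossless compression, 0 through 9

f = Nio.open_file("compressed.nc", "c", options=opt)
f.create_dimension("time", 10)
v = f.create_variable("T", "f", ("time",))   # 'f' = 32-bit float
v[:] = np.arange(10, dtype="float32")
f.close()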
Data model differences

While PyNIO has support for a string data type, NetCDF 3 does not. However, PyNIO maps NetCDF attributes of type character into Python strings for convenience. Likewise, you can set the value of a NetCDF character attribute using a Python string.

HDF - Hierarchical Data Format (version 4) - Scientific Data Sets (SDS) only

Online documentation for HDF is available from the HDF Group. PyNIO's HDF interface understands a subset of the content available in HDF4-formatted files. PyNIO can read and write data that uses the SDS (Scientific Data Set) interface. The HDF format allows variable, attribute, and dimension names with spaces and other non-alphanumeric characters, but for reasons of compatibility with other software, PyNIO's underlying NIO library replaces spaces and non-alphanumeric characters with the underscore '_' character. To compensate for this possible loss of information, PyNIO provides an attribute for each variable, called hdf_name, that contains the name exactly as it appears in the HDF file. This attribute is redundant in cases where the actual HDF name contains only alphanumeric or underscore characters. PyNIO has a read-only ability to understand HDF Vgroups. There is no access to the HDF VDATA interface.

HDFEOS - Hierarchical Data Format Earth Observing System (version 2) - GRID and SWATH only

Online documentation for HDFEOS is available from the HDF-EOS web site. PyNIO provides read-only access for SWATH and GRID data groups in HDFEOS files. POINT data groups are currently ignored. As with all PyNIO's supported formats, HDFEOS files are read into file variables that use NetCDF-like conventions. Note that since HDFEOS2 files are a type of HDF4 file, it is possible, by appending a valid HDF4 suffix to the name as specified to PyNIO's open_file method, to use the HDF4 interface to open HDFEOS2 files. This view of the file sometimes gives useful information not obtainable through the HDFEOS interface. On the other hand, because HDFEOS files often use the more generic '.hdf' suffix in their names, it is easy to mistakenly use the HDF4 interface when the HDFEOS interface would lead more directly to the relevant information and data. In this case you need to append an HDFEOS-specific suffix to read the data using PyNIO's interface to the HDFEOS library. Non-alphanumeric characters in HDFEOS variable names are replaced by the '_' character when listed from PyNIO and are referenced from PyNIO in this fashion. HDFEOS files use groups to specify the specific SWATH or GRID that a variable belongs to. In the HDFEOS2 interface PyNIO appends the SWATH or GRID name, preceded by an underscore, to all variable names that belong to the group, in order to ensure that each variable name is unique within the namespace of the NioFile instance variable.
As of version 1.2.0 or later, PyNIO's HDFEOS interface supplies an attribute called hdfeos_name that contains the actual variable name as present in the file. Also as of version 1.2.0 or later, PyNIO provides access to the Geolocation variables associated with SWATH data groups. These variables have 1 or 2 dimensions and serve to locate the data variables in time and space. PyNIO also provides supplementary coordinate variables for GRID data that are calculated on the fly using the GCTP (General Cartographic Transformation Package) library that is a required component of the HDFEOS interface. These supplementary variables can be distinguished from true HDFEOS variables by their lack of the hdfeos_name attribute. Since only a few of the possible projected GRID types were available for testing, the coordinate values contained in these supplementary variables are not yet considered to be fully reliable. Users are encouraged to report cases where the coordinate values do not seem to be correct.

CCM - Community Climate Model History Tape Format

PyNIO does not support IEEE CCM files due to lack of documentation. It is possible to use the public domain tool called "ccm2nc" (available on almost all SCD computers; "man ccm2nc") to convert these files to NetCDF.

GRIB (GRIB2 support in version 1.2.0 or later)

Online documentation is available for both GRIB1 and GRIB2. PyNIO provides read-only support for data in GRIB1 and GRIB2 formats. To open a file, you only need to know that the file is GRIB. PyNIO figures out which version of GRIB is in the file and processes the file accordingly.

PyNIO's support for GRIB is an evolving process. As an ever more diverse set of GRIB files have been encountered, PyNIO has been improved to handle many more features of the GRIB format. However, since GRIB has many features that are obscure enough that they have never been encountered in practice by the NIO library developers, there are still some aspects of GRIB that PyNIO does not handle properly. Generally, the NIO library developers try to support features that appear in GRIB files that users are actually using. The best way to help improve the GRIB-decoding capabilities of PyNIO is to call attention to files that are important to your work but that PyNIO does not seem to interpret correctly. If you have problems reading a particular set of GRIB files, please contact Mary Haley. At some point PyNIO may provide the ability to write GRIB but, for now, the NIO library is read-only for this format. Before looking at PyNIO's treatment of each version individually, it is worth looking at the common features. These arise partly from basic similarities in the formats, but also significantly from PyNIO's unifying data model.

Dimensions in GRIB

Beyond the grid dimensions themselves, PyNIO recognizes the following dimension types for GRIB variables:

- ensemble or probability (version 1.2.0 or later)
- initial_time
- forecast_time
- level

For the probability dimension, the coordinate describes the quantity whose probability is measured; note that the variable itself contains percentage values indicating the likelihood of the quantity at each coordinate value. Note that, currently, PyNIO allows for only one of the ensemble or probability dimensions for any single variable. If this becomes a problem, it may change with future releases.

- initial_time - The initial_time coordinate is expressed in a CF-compliant form as the number of hours since Jan 1, 1800. However, the NioOption InitialTimeCoordinateType allows you to change the type of this coordinate to human-readable strings representing the date and time. But regardless of the type of the initial_time coordinate variable, PyNIO always supplies three auxiliary variables related to the initial time.
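To see how these dimension types show up in practice, here is a small hedged sketch; the file name is made up, and the exact variable and dimension names printed below are only illustrative, since PyNIO generates them from the contents of each particular file:

import Nio

f = Nio.open_file("forecast.grb")
for name, v in f.variables.items():
    # Each GRIB variable reports its dimension names, which may include
    # initial_time, forecast_time, level, and grid dimensions
    print(name, v.dimensions)
# A typical line of output might look something like:
# TMP_GDS0_ISBL ('initial_time0_hours', 'lv_ISBL2', 'g0_lat_3', 'g0_lon_4')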
PyNIO uses another scheme to represent GRIB hybrid level types. However, since hybrid levels are currently supported only for GRIB1, the particulars of this scheme are described below in the GRIB1 specific section.

GRIB Grids

PyNIO supports most of the same grid types in both versions of GRIB. Grids that are fully supported, with one exception, supply one- or two-dimensional coordinate variables that can be used to locate each point of the grid in latitude and longitude. For other grids, PyNIO still makes the data available as long as it can figure out the dimension sizes of the data. However, these grids provide no coordinate variables.

GRIB file format options

PyNIO provides the following options for GRIB files:

- DefaultNCEPPtable (ignored unless the file is in GRIB 1 format) - This option has two valid values: "Operational", the default, or "Reanalysis". It specifies whether to default to the use of the NCEP operational parameter table or the NCEP reanalysis parameter table. The option only applies in cases where PyNIO, on its own, cannot definitively determine which of these tables to use because of historical ambiguities in NCEP usage.

- InitialTimeCoordinateType - This string-valued option has two valid values: "Numeric", the default, or "String". Note that in PyNIO's representation of a GRIB file the initial time dimension is distinguished from the forecast time dimension, whose coordinate values are numerical offsets from a particular initial time. The default value results in initial time coordinates that are COARDS and CF compliant, with the time represented in units of hours since 1800-01-01. Setting the option to "String" results in human-readable time coordinates, but with the disadvantage that they are not compliant with standard conventions and are likely not to be understood by many processing and visualization software packages. Note that in either case both the string and numerical coordinates are available as variables -- the only difference is which is considered to be the coordinate dimension.

- SingleElementDimensions (available in version 1.2.0 or later) - This option allows the user to specify that variables with only a single initial time, forecast time, level, ensemble or probability value, usually handled as attributes, be treated as containing single element dimensions. It is a string-valued option whose default value "None" means that no single-element dimensions will appear in PyNIO's representation of the GRIB file. Conversely, if the option is given the value "All", then all possible dimensions will be created for each variable. Otherwise, the desired single element dimensions may be specified individually. The valid choices are "Initial_time", "Forecast_time", "Level", "Ensemble", and "Probability". Note that dimensions are not created if the variable does not have an actual value associated with the dimension type, regardless of the value given to this option. For example, variables that are not part of an ensemble forecast will never have an ensemble dimension, and variables whose level type (e.g. Tropopause) does not have a numerical value will never have a level dimension. In the case of level types, it may depend on who wrote the record: files written by some centers may give no value for certain level types where others may use a numerical value such as 0. The intent of this option is to make it easier to concatenate conforming variables from multiple files together.

- ThinnedGridInterpolation (default changed in version 1.2.0) - This string-valued option has two valid values: "Cubic", the default, or "Linear". It has an effect only for GRIB files that contain data on a thinned grid. The GRIB documentation refers to these grids as "quasi-regular". The option controls the interpolation performed in converting variable data on the grid to the standard rectangular form that is returned by PyNIO. Contrary to the default value, linear interpolation is forced in cases where the variable has a bit-mask used to omit some of the grid points. Bit-masked variables can be recognized by the fact that, as presented by PyNIO, the data contains values equal to the value of the _FillValue attribute.
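As with the NetCDF options, these are set through an options object before the file is opened. A hedged sketch, with a made-up file name:

import Nio

opt = Nio.options()
opt.InitialTimeCoordinateType = "String"   # human-readable initial times
opt.ThinnedGridInterpolation = "Linear"
opt.SingleElementDimensions = "All"        # keep single-element dims as dims

f = Nio.open_file("model_output.grb", "r", options=opt)
print(f)   # printing a NioFile summarizes its dimensions, variables, attributes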
GRIB1 Support Details

As of version 1.2.0, PyNIO uses the scheme below to assign names to GRIB1 variables.

GRIB1 data variable name encoding (Note: examples show intermediate steps in the formation of the name): if an entry matching the parameter table version and parameter number is found (either in a built-in or a user-supplied table), the abbreviation from that entry is used to begin the variable name. PyNIO outputs a warning message when a variable is given the unrecognized parameter prefix "VAR_".

PyNIO prior to version 1.2.0 may incorrectly specify the forecast_time units as hours when they are not. These same previous versions of PyNIO do not properly handle GRIB files with forecast_time units that vary for a single variable.

GRIB1 initial_time

Dimensions, coordinates, and associated auxiliary variables are named as described in the common section above, with the coordinate type controlled by the NioOption InitialTimeCoordinateType (version 1.2.0 or later).

GRIB2 Support Details

PyNIO uses the GRIB2 g2clib encoder/decoder library from NCEP to perform the low-level decoding of GRIB2 files. Since this library is limited to processing files that are less than 2 GB in size, PyNIO only supports reading GRIB2 files of 2 GB or less. PyNIO uses a single grid number that is sequentially incremented as new grids are encountered to label all the dimensions and coordinate variables belonging to a particular grid. PyNIO also uses the same naming scheme for all GRIB2 grid types.

Built-in GRIB1 parameter tables

- NMC Ocean Modeling Branch (Parameter table version 128)
- US Weather Service - NCEP (Parameter table version 129)
- US Weather Service - NCEP (Parameter table version 130)
- US Weather Service - NCEP (Parameter table version 131)
- ECMWF Parameter table version 180
- ECMWF Parameter table version 190
- ECMWF Parameter table version 200
- Offenbach (DWD) Parameter table version 2
- Offenbach (DWD) Parameter table version 201
- Offenbach (DWD) Parameter table version 202
- Offenbach (DWD) Parameter table version 203
- Offenbach (DWD) Parameter table version 204 (available in versions 1.2.0 or later)
- Offenbach (DWD) Parameter table version 205 (available in versions 1.2.0 or later)
- Offenbach (DWD) Parameter table version 206 (available in versions 1.2.0 or later)
- Offenbach (DWD) Parameter table version 207 (available in versions 1.2.0 or later)
- Brazilian Space Agency - INPE/CPTEC Parameter table version 254

User-defined GRIB1 parameter tables are also supported.
How do I tell Dovecot and Sendmail to use the same directory for user email?

Here is part of an error message that is reported in /var/log/maillog:

Apr 28 07:36:01 ip-50-62-164-110 dovecot: pop3(rrb): Error: chown(/home/rrb/mail/.imap
Apr 28 07:36:01 ip-50-62-164-110 dovecot: pop3(rrb): Error: mkdir(/home/rrb/mail/.imap

As you can see, it is complaining about group-level permissions and the group ID. How do I resolve this issue? I think this will take care of my needs. Is there a setting that I need to set which allows chroot?

Rick

namespace {
    separator = /
    prefix = "#mbox/"
    location = mbox:~/mail:INBOX=/var/mai
    inbox = yes
    hidden = yes
    list = no
}
namespace {
    separator = /
    prefix =
    location = maildir:~/Maildir
}
18.2.1 Problem

You want to write a script that gathers input from a user.

18.2.2 Solution

Create a fill-in form from within your script and send it to the user. The script can arrange to have itself invoked again to process the form's contents when the user submits it.

18.2.3 Discussion

Web forms are a convenient way to allow your visitors to submit information, for example, to provide search keywords, a completed survey result, or a response to a questionnaire. Forms are also beneficial for you as a developer because they provide a structured way to associate data values with names by which to refer to them.

A form begins and ends with <form> and </form> tags. Between those tags, you can place other HTML constructs, including special elements that become input fields in the page that the browser displays. The <form> tag that begins a form should include two attributes, action and method. The action attribute tells the browser what to do with the form when the user submits it. This will be the URL of the script that should be invoked to process the form's contents. The method attribute indicates to the browser what kind of HTTP request it should use to submit the form. The value will be either GET or POST, depending on the type of request you want the form submission to generate. The difference between these two request methods is discussed in Recipe 18.6; for now, we'll always use POST.

Most of the form-based web scripts shown in this chapter share a common behavior: each script generates the form when first invoked, and processes the form's contents when the user submits it. This approach isn't the only one you can adopt, of course. One alternative is to place a form in a static HTML page and have it point to the script that processes the form. Another is to have one script generate the form and a second script process it.

If a form-creating script wants to have itself invoked again when the user submits the form, it should determine its own pathname within the web server tree and use that value for the action attribute of the opening <form> tag. For example, if a script is installed as /cgi-bin/myscript in your web tree, the tag can be written like this:

<form action="/cgi-bin/myscript" method="POST">

Each API provides a way for a script to obtain its own pathname, so you don't have to hardwire the name into the script. That gives you greater latitude to install the script where you want.

In Perl scripts, the CGI.pm module provides three methods that are useful for creating <form> elements and constructing the action attribute. start_form( ) and end_form( ) generate the opening and closing form tags, and url( ) returns the script's own path. Using these methods, a script can generate a form like this:

print start_form (-action => url ( ), -method => "POST");
# ... generate form elements here ...
print end_form ( );

Actually, it's unnecessary to provide a method argument; if you omit it, start_form( ) supplies a default request method of POST.

In PHP, a simple way to get a script's pathname is to use the $PHP_SELF global variable:

print ("<form action=\"$PHP_SELF\" method=\"POST\">\n");
# ... generate form elements here ...
print ("</form>\n");

However, that won't work under some configurations of PHP, such as when the register_globals setting is disabled.[1] Another way to get the script path is to access the "PHP_SELF" member of the $HTTP_SERVER_VARS array or (as of PHP 4.1) the $_SERVER array. Unfortunately, checking several different sources of information is a lot of fooling around just to get the script pathname in a way that works reliably for different versions and configurations of PHP, so a utility routine to get the path is useful.
The following function, get_self_path( ), shows how to use $_SERVER if it's available and fall back to $HTTP_SERVER_VARS or $PHP_SELF otherwise. The function thus prefers the most recently introduced language features, but still works for scripts running under older versions of PHP:

[1] register_globals is discussed further in Recipe 18.6.

function get_self_path ( )
{
    global $HTTP_SERVER_VARS, $PHP_SELF;

    if (isset ($_SERVER["PHP_SELF"]))
        $val = $_SERVER["PHP_SELF"];
    else if (isset ($HTTP_SERVER_VARS["PHP_SELF"]))
        $val = $HTTP_SERVER_VARS["PHP_SELF"];
    else
        $val = $PHP_SELF;
    return ($val);
}

$HTTP_SERVER_VARS and $PHP_SELF are global variables, but must be declared as such explicitly using the global keyword if used in a non-global scope (such as within a function). $_SERVER is a "superglobal" array and is accessible in any scope without being declared as global. The get_self_path( ) function is part of the Cookbook_Webutils.php library file located in the lib directory of the recipes distribution. If you install that file in a directory that PHP searches when looking for include files, a script can obtain its own pathname and use it to generate a form as follows:

include "Cookbook_Webutils.php";
$self_path = get_self_path ( );
print ("<form action=\"$self_path\" method=\"POST\">\n");
# ... generate form elements here ...
print ("</form>\n");

Python scripts can get the script pathname by importing the os module and accessing the SCRIPT_NAME member of the os.environ object:

import os
print "<form action=\"%s\" method=\"POST\">" % os.environ["SCRIPT_NAME"]
# ... generate form elements here ...
print "</form>"

In JSP pages, the request path is available through the implicit request object that the JSP processor makes available. Use that object's getRequestURI( ) method as follows:

<form action="<%= request.getRequestURI () %>" method="POST">
<%-- ... generate form elements here ... --%>
</form>

18.2.4 See Also

The examples in this section all have an empty body between the opening and closing form tags. For a form to be useful, you'll need to create body elements that correspond to the types of information that you want to obtain from users. It's possible to hardwire these elements into a script, but Recipe 18.3 and Recipe 18.4 describe how MySQL can help you create the elements on the fly based on information stored in your database.
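Pulling the Python pieces together, here is a minimal self-contained sketch (not from the recipe itself) of a script that both displays the form and handles its submission, along the lines described above; the field name "keywords" is just an illustration:

#!/usr/bin/env python
# Minimal CGI sketch: shows a form on first visit, echoes input on submit
import os
import cgi

form = cgi.FieldStorage()

print "Content-Type: text/html"
print ""

if "keywords" in form:
    # The form was submitted; process its contents
    print "You searched for: %s" % cgi.escape(form.getfirst("keywords"))
else:
    # First invocation; display the form, pointing it back at this script
    print "<form action=\"%s\" method=\"POST\">" % os.environ["SCRIPT_NAME"]
    print "<input type=\"text\" name=\"keywords\">"
    print "<input type=\"submit\">"
    print "</form>"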
Applies to: Exchange Server 2010
Topic Last Modified: 2010-01-28

This topic provides information about the concept of disjoint namespaces and the supported scenarios for deploying Microsoft Exchange Server 2010 in a domain that has a disjoint namespace.

First, some background. Every computer that is on the Internet has a Domain Name System (DNS) name. This is also known as the machine name or host name. Every computer running the Microsoft Windows operating system with networking capabilities also has a NetBIOS name. A computer running Windows in an Active Directory directory service domain has both a DNS domain name and a NetBIOS domain name.

The DNS domain name consists of one or more subdomains separated by a dot (.) and is terminated by a top-level domain name. For example, in the DNS domain name corp.contoso.com, the subdomains are corp and contoso, and the top-level domain name is com. Typically, the NetBIOS domain name is the subdomain of the DNS domain name. For example, if the DNS domain name is contoso.com, the NetBIOS domain name is contoso. If the DNS domain name is corp.contoso.com, the NetBIOS domain name is corp.

A computer in an Active Directory domain also has a primary DNS suffix and can have additional DNS suffixes. By default, the primary DNS suffix is the same as the DNS domain name. For detailed steps about how to change the primary DNS suffix, see the procedures later in this topic. You define the DNS domain name and NetBIOS domain name of an Active Directory domain when you configure the first domain controller in the domain. For more information about configuring domain controllers, see Domain controller role: Configuring a domain controller.

The procedures in this topic describe how to view the following items on a computer that is running Windows Server 2008 or Windows Server 2003: the primary DNS suffix, the DNS domain name, and the NetBIOS domain name.

In most domain topologies, the primary DNS suffix of the computers in the domain is the same as the DNS domain name. In some cases, you may require these namespaces to be different. This is called a disjoint namespace. For example, a merger or acquisition may cause you to have a topology with a disjoint namespace. In addition, if DNS management in your company is split between administrators who manage Active Directory and administrators who manage networks, you may need to have a topology with a disjoint namespace.

In Microsoft Exchange 2010, there are three supported scenarios for deploying Exchange in a domain that has a disjoint namespace:

- The primary DNS suffix of the domain controller is not the same as the DNS domain name.
- The primary DNS suffix of a member computer running Exchange 2010 is not the same as the DNS domain name.
- The NetBIOS domain name of the domain controller is not the same as the DNS domain name.

These scenarios are detailed in the following sections.

In the first scenario, the primary DNS suffix of the domain controller isn't the same as the DNS domain name. The domain controller is disjoint in this scenario. Computers that are members of the domain, including Exchange servers and Microsoft Outlook client computers, can have a primary DNS suffix that either matches the primary DNS suffix of the domain controller or matches the DNS domain name.

In the second scenario, the primary DNS suffix of a member computer on which Exchange 2010 is installed isn't the same as the DNS domain name, even though the primary DNS suffix of the domain controller is the same as the DNS domain name. In this scenario, you have a domain controller that isn't disjoint and a member computer that is disjoint. Member computers that are running Outlook can have a primary DNS suffix that either matches the primary DNS suffix of the disjoint Exchange server or matches the DNS domain name.
In the third scenario, the NetBIOS domain name of the domain controller isn't the same as the DNS domain name of the same domain controller. (Figure: NetBIOS domain name does not match DNS domain name.)

To allow Exchange 2010 servers to access domain controllers that are disjoint, you must modify the msDS-AllowedDNSSuffixes Active Directory attribute on the domain object container. You must add both of the DNS suffixes to the attribute. For detailed steps about how to modify the attribute, see "The computer's primary DNS suffix does not match the FQDN of the domain where it resides".

In addition, to make sure that the DNS suffix search list contains all DNS namespaces that are deployed within the organization, you must configure the search list for each computer in the domain that is disjoint. The list of namespaces should include not only the primary DNS suffix of the domain controller and the DNS domain name, but also any additional namespaces for other servers with which Exchange may interoperate (such as monitoring servers or servers for third-party applications). You can do this by setting Group Policy for the domain. For detailed steps about how to configure the DNS suffix search list Group Policy, see "Configure the DNS Suffix Search List for a Disjoint Namespace".
Q: What is the meaning of this, or explain this: List<Object[]> list = query.list();
A: Hi Friend, it will return the query values in the form of a list. For more information, visit the Hibernate section.

Q: Can you please explain the HTTP request methods?
A: For HTTP servlets, the servlet container creates an HttpServletRequest object and passes it as an argument to the servlet's service methods (doGet, doPost, etc.).

Q: What is an outer join? Explain with examples.

Q: What is the difference between JSF, Servlet and JSP?

Q: Explain LDAP. Can anyone explain LDAP, and also JNDI, and what the relation between the two is?

Q: What is the use of taglib, and where exactly do we use it?
A: Hi Friend, it is used to create custom tags in JSP.

Q: EL in JSP - what version of JSP are you using? The default mode for JSP pages delivered using a Servlet 2.3 or earlier descriptor is to ignore EL; the default mode for JSP pages delivered with a Servlet 2.4 descriptor is to evaluate EL.

Q: What is a vector in Java? Explain with example.
A: The Vector is a collection of Objects that implements the AbstractList class. It automatically increases the list length.

Q: Explain ServletContext.
A: The ServletContext interface is a window for a servlet to view its environment. A servlet can use this interface to get information about its container.

Q: Explain WML.
A: WML stands for Wireless Markup Language. It is a simple language used to create applications for small wireless devices like mobile phones. WML is analogous to HTML.

Q: What is the difference between JSP and Servlet? What are the advantages and disadvantages of JSP and Servlet?

Q: JSP Actions - what are the JSP action tags? The jsp:include action works like a subroutine.

Q: HTML - to process a value in a JSP project I have to set the text box as non-editable. What attribute and value should be submitted for this?

Q: On servlet deployment - what is web.xml, and what is a servlet container? For a JSP there is a servlet, the page compiler, which first parses the .jsp.

Q: What are events? Explain event handling in Java.
A: Events are an integral part of the Java platform.

Q: Shorten logout problem - in the code I have used the session object in the project's servlets and JSPs for transferring info between the various servlets and JSPs, and made a logout servlet to which the user is referred. What have I done wrong on that logout servlet?

Q: What are constructors? Explain the different types of constructor with example.

Q: Servlets - can you please explain how to deploy the entire Student project in Tomcat/webapps?
A: To develop an application using servlets or JSP, make the directory structure given in the linked example.

Q: JSP Servlet update patient data - Hi Friend, I'm attaching my inserting-patient-data servlet as requested. I tried your posted code; it's not working in my case.

Q: What is the meaning of extending JSP?
A: A JSP can be made to extend our own servlet instead of the container-generated servlet by using <%@ page extends="..." %>.

Q: What can be the XML code which will describe a servlet and map an instance?

Q: JavaScript - what is the use of hidden fields in JavaScript? If a page has 3 links, the first link should be enabled and the others disabled.

Q: Explain ServletConfig with a programming example.
A: ServletConfig is a servlet configuration object used by a servlet container to pass information to a servlet during initialization.

Q: Servlet and JSP - how can I get a form question from a database and show the question in the form? And what is the best way to bind a user id with the form answers and form questions in a database? What I want to make is a survey system.

Q: Explain the struts.jar file - I am new to Java. I read that a jar file is a collection of Java files. For executing a Struts application, what are the necessary jar files, and what does "struts.jar" contain?

Q: Exception handling - the stack trace of the root cause is available in the Apache Tomcat/6.0.14 logs. Please correct the code and explain what I have done wrong.

Q: What is a Scope Object in JSP?

Q: What are the advantages of JSP and Servlets?

Q: Threads within a servlet - please explain the concept of a Java thread and its use with a servlet.

Q: Please provide: what is a servlet, how to run a servlet, and what are good books for understanding servlets?

Q: form.submit() - please tell me about the problem: what do you want to do?

Q: multipart/form-data - with this encoding type, what will be the contents of the encoded output? Explain with an example.

Q: Please explain @interface with an example. Here is the code snippet:

@Retention(RUNTIME)
@Target({ FIELD })
public @interface InjectProperty

Could you please explain what all of these mean and how they work? Another snippet mentions runnable, where Runnable is an interface; could you please explain what it does?

Q: Explain Linked List.
A: Linked List is one of the fundamental data structures. It consists of a sequence of nodes, each containing arbitrary data fields and pointing to the next node.

Q: 1] What is CGI in JSP? 2] Advantages and disadvantages of CGI? 3] What is a servlet? 4] What are threads? 5] What is JavaServer Pages technology?

Q: JSP get/post methods - what are the GET and POST methods in JSP, and when do I need to call these methods?

Q: Java servlet - I created a new servlet in Eclipse; it opens, but without including any code I am getting errors around response and request. What to do? Are any plugins required?
A: Hi Friend, put servlet-api.jar into the lib folder of the Apache Tomcat server.

Q: Hi, this is Kalai. Can you please explain the difference between sendRedirect and forward in JSP?

Q: Servlet redirecting - I have made a main page with user name and password; please explain step-wise:

public class Slogin extends HttpServlet implements Servlet { public void ...

Q: What is the difference between doGet() and doPost() in JSP?

Q: Explain the persistence class in Hibernate.
A: Persistence classes are simple POJO classes in an application. They work as the implementation of the business application.

Q: Difference between jsp:forward and sendRedirect - forward passes control to an HTML file, another JSP file, or a servlet; it is used to forward a client request to an HTML file, JSP file, or servlet.

Q: What is a tag library, and what is its use? Please give one example (database = MS Access).

Q: What is the 403 Forbidden error, and how can I remove this error, which is restricting my page from loading?

Q: What is a servlet?
A: Servlets are server-side components that provide a powerful mechanism for developing server-side applications. See the servlet examples, which illustrate this more clearly.

Q: JSP include versus forward - if different, then what are the differences?
A: Include inserts one JSP file into another without redirecting or forwarding. This is the servlet API equivalent to SSI includes.

Q: JSP error - Hello, my name is Sreedhar. I wrote a JSP application and it reports an error.
A: If you have a problem, then send me the source code and explain in detail.

Q: Servlets vs JSP - what is the main difference between servlets and JSP?
A: 1) In MVC, the JSP acts as the view and the servlet acts as a controller. 2) JSP pages contain a mixture of HTML, JavaScript, JSP elements, and JSP directives, while a servlet totally uses Java code.

Q: Java servlet problem - I have a servlet class that implements a listener, but in the attributeReplaced() method I want to perform a redirect to another servlet. Please help!
A: Hi Friend, please explain the problem in detail.

Q: What is JSP servlet hosting?
A: In the case of JSP servlet hosting, hosting companies provide a hosting environment where a client can host a Java-based web application on the server.

Q: JSP error - HTTP Status 404 - /jsp: the requested resource (/jsp/) is not available. The server could not find what was requested, or it was configured not to fulfill the request.

Q: This is my servlet program output; what is the solution?
A: You have to set the path to servlet.jar or servlet-api.jar, which is under the common directory.

Q: JSP tags - please explain the tags in JSP.
A: See the example "Use of the dot operator in EL".

Q: JSP tag - I am a basic JSP learner, so I can't understand the tag library. Please tell me what the tag library is and what it is used for.
A: A custom action is invoked by using a custom tag in a JSP page.

Q: JSP-EL - Dear Sir, you gave me that code (Home.html, a simple JSP application using EL: Hello ${vij.name}), but I am not able to run that code on my system.

Q: JSP declarative tags - I want to know the difference between declaring a variable in a declaration tag and in a scriptlet tag.
A: A declaration puts the variable into an instance variable in the servlet, whereas a scriptlet declares it within the generated service method.

Q: JSP - in one JSP page I have 2 select boxes. Whenever I select something in the first select box, I want to pass the selected value and show the returned values in the second box of the JSP page.

Q: J2EE - please provide example code for search options, or please tell me where I can get example code for search options.
A: Hi Friend, please explain which technologies you want to use: JSP or Servlets.

Q: Small query - how do I set a value in a textbox on the browser side which is retrieved from the database?
A: Hi Friend, please explain the problem in detail, including which technology you have used, e.g. JSP/Servlet/Java.

Q: XML - how do I write a web.xml which can access both a JSP page and a servlet file? Can you explain with an example?
A: Hi Friend, do you want the web.xml file? Please clarify.

Q: JSP - what are the other options to connect to the database from a JSP page except beans? I don't want to write the connection in each and every page. Is there any other way?
A: Hi Friend, use Hibernate.

Q: JSP code - I want to clear the text box fields after clicking the submit button (after the value gets stored). How can I do that?

Q: How do I create a discussion forum?
A: Hi, which technologies do you want to use? Please explain: (1) JSP (2) Servlets. What is your purpose, and which type do you want to make?

Q: JSP-SQL - please look at the error in "... order by entrydate". What is the data type of "entrydate"?

Q: JSP - please explain, with the help of an example, how to retrieve a value from the database using JSP.

Q: Servlets - thanks Deepak for your help, but I'm still confused. You had sent me a servlet for employee details; based on that, can you explain where everything goes?
A: To develop an application using servlets or JSP, make the directory structure given in the linked example.

Q: JSP/JavaScript - I want to retrieve a record and move it. Can anybody tell me what the problem with this code is?

function update(s) { alert(s); if (s==0) { alert("pr ...

Q: What is the full form of JSP?
A: JSP stands for JavaServer Pages. See the examples using <jsp:forward page="..."/> for more information.
The original cd_clint.dll is a part of Cydoor spyware. The source code in this node was included with the accompanying archive. When compiled, the DLL this source creates mimics Cydoor enough to fool Grokster v1.5 and probably other applications that require Cydoor to be installed in order to function.

Usage: Put the cd_clint.dll this creates in c:\windows\system after you have installed the afflicted program (replacing the existing file). You will probably need to do this again should you update any software that uses the DLL (as the fake one will be overwritten by a real one).

See also: cd_load.exe

#include <windows.h>
#pragma hdrstop
#include <condefs.h>

USERES("Project1.res");
//---------------------------------------------------------------------------
#pragma argsused

// This is the source of cexx.org's Cydoor-compatible spyware condom DLL,
// cd_clint.dll. This was created in Borland's C++ Builder; remember to
// remove Borland-isms for other compilers. Compile as a Win32 console
// DLL (no GUI stuff necessary).

// Credits:
// Thanks CYDOOR for providing their SDK on their anonymous FTP server(!)
// Thanks Wiz for letting me borrow his compiler
// Thanks Dave for betting me there was some ultra-bogus verification
// routine to keep people from doing this
// (ps. Dave, you owes me $5)

// Note from Mike Dombrowski ([email protected]):
// I found this source and modified it to work with the newest Cydoor
// API, with the addition of the DescWrite function.
// This works fine for iMesh and I've not tested it with any other programs.
// Remember, I'm _NOT_ the original author; he should receive all the credit.
// The URL lists the current API functions for Cydoor.
// The project file I included works nicely for Borland C++ Builder 5; in
// order to get it to compile you have to change the calling method to
// normal in project options.
// imp-exp-view.EXE is a fantastic little program that I found that shows
// the imports/exports of a DLL.

// Exported functions. Syntax to do this may differ depending on your
// compiler. (Authentic CD_CLINT.DLL has more exports, but nobody's
// supposed to know about them.)

// The declaration of the first export was garbled in this copy; ChannelLoad
// with a signature mirroring ChannelRead is assumed here.
void ChannelLoad(int AdwrCode, char* ChannelOut, int Resv1, int Resv2)
{
    // Nothing to do here.
}

void ChannelRead(int AdwrCode, char* ChannelIn, int Resv1, int Resv2)
{
    // Nothing to do here.
}

int ServiceShow(int AdwrCode, int LoctNum, int LoctIndx, HWND hWnd,
                int X, int Y, int LenX, int LenY, int Mode,
                void *General1, void *General2)
{
    // Return true to tell the host application the call succeeded.
    return 1;
}

int ServiceClose(int LoctIndx, HWND hWnd, void* General2)
{
    // Return true to tell the host application the call succeeded.
    return 1;
}

void DescWrite(int BitStart, int BitLen, int Val, int Resv1, int Resv2)
{
    // Nothing to do here either.
}

int WINAPI DllEntryPoint(HINSTANCE hinst, unsigned long reason, void*)
{
    return 1;
}

Need help? [email protected]
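The export-table check that imp-exp-view.EXE performs can also be scripted. A minimal sketch using the third-party pefile library, useful for comparing the fake DLL's exports against the original's; the DLL path is an assumption, and DIRECTORY_ENTRY_EXPORT is only present when the file actually exports symbols:

import pefile  # third-party: pip install pefile

# Path is an assumption; point it at the DLL you want to inspect.
pe = pefile.PE(r"c:\windows\system\cd_clint.dll")

# Walk the export directory and print ordinal/name pairs.
for exp in pe.DIRECTORY_ENTRY_EXPORT.symbols:
    name = exp.name.decode() if exp.name else "<by ordinal only>"
    print(exp.ordinal, name)

Running this against both the real and the stand-in DLL should show the same set of names if the stub is complete.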
Reg 2 Questions

Q: Which expense, both incurred and paid in the same year, can be claimed as an itemized deduction subject to the two-percent-of-adjusted-gross-income floor?
a. Employee's unreimbursed business car expense.
b. One-half of the self-employment tax.
c. Employee's unreimbursed moving expense.
d. Self-employed health insurance.
Explanation: Choice "a" is correct. Employee business expenses, including unreimbursed car expense, are deductible as itemized deductions subject to the 2% floor.

Q: The Browns borrowed $20,000, secured by their home, to pay their son's college tuition. At the time of the loan, the fair market value of their home was $400,000, and it was unencumbered by other debt. The interest on the loan qualifies as:
a. Deductible personal interest.
b. Deductible qualified residence interest.
c. Nondeductible interest.
d. Investment interest expense.
Explanation: Choice "b" is correct. Interest paid on a debt secured by a home mortgage is classified as deductible qualified residence interest. The Browns would be able to deduct the interest paid as an itemized deduction. The limit is $100,000 of mortgage interest, since the loan was not to buy, build, or improve the home.

Note: Insurance against loss of income is not payment for medical care and therefore is not deductible.

Q: Davis, a sole proprietor with no employees, has a Keogh profit-sharing plan to which he may contribute and deduct 25% of his annual earned income. For this purpose, "earned income" is defined as net self-employment earnings reduced by:
A: For Keogh plans, earned income is defined as net self-employment earnings reduced by the amount of the allowable Keogh deduction and one-half of the self-employment tax.

Q: Charitable contributions subject to the 50-percent limit that are not fully deductible in the year made may be:
a. Neither carried back nor carried forward.
b. Carried back two years or carried forward twenty years.
c. Carried forward five years.
d. Carried forward indefinitely until fully deducted.
A: Choice "c" is correct. Charitable contributions subject to the 50% limit that are not fully deductible in the year made may be carried forward five years.

Q: In Year 10, Farb, a cash-basis individual taxpayer, received an $8,000 invoice for personal property taxes. Believing the amount to be overstated by $5,000, Farb paid the invoiced amount under protest and immediately started legal action to recover the overstatement. In November, Year 11, the matter was resolved in Farb's favor, and he received a $5,000 refund. Farb itemizes his deductions on his tax returns. Which of the following statements is correct regarding the deductibility of the property taxes?
a. Farb should deduct $8,000 in his Year 10 income tax return and should report the $5,000 refund as income in his Year 11 income tax return.
A: Choice "a" is correct. Under the tax benefit rule, Farb should report the $5,000 refund as income in Year 11, since Farb itemizes deductions and would have received a tax benefit from deducting the $8,000 paid in Year 10.

Q: During the year, Barlow moved from Chicago to Miami to start a new job, incurring costs of $1,200 to move household goods and $2,500 in temporary living expenses. Barlow was not reimbursed for any of these expenses.
What amount should Barlow deduct as an itemized deduction for moving expenses?
a. $0
b. $2,700
c. $3,000
d. $3,700
Explanation: Choice "a" is correct. There is no itemized deduction for temporary living expenses, and the direct moving expenses (such as the costs to move the goods and the costs to move the taxpayer's family from the old to the new location) are deductible before adjusted gross income, not as an itemized deduction.

Q: In the current year, Drake, a disabled taxpayer, made the following home improvements:
- Pool installation, which qualified as a medical expense and increased the value of the home by $25,000: cost $100,000
- Widening doorways to accommodate Drake's wheelchair (the improvement did not increase the value of his home): cost $10,000
For regular income tax purposes, and without regard to the adjusted-gross-income percentage threshold limitation, what maximum amount would be allowable as a medical expense deduction in the current year?
a. $110,000
b. $85,000
c. $75,000
d. $10,000
Explanation: Choice "b" is correct. A capital expenditure for the improvement of a home qualifies as a medical expense if it is directly related to the prescribed medical care. However, it is deductible only to the extent that the expenditure exceeds the increase in value of the home. Thus, Drake may deduct only $75,000, the difference between the cost of the improvement ($100,000) and the increase in market value ($25,000) of the home. In addition, the full cost of home-related capital expenditures to enable a physically handicapped individual to live independently and productively qualifies as a medical expense. The widening of doorways qualifies as this type of expense, and therefore the entire $10,000 is deductible: $75,000 + $10,000 = $85,000.

Q: A calendar-year individual is eligible to contribute to a deductible IRA. The taxpayer obtained a six-month extension to file until October 15 but did not file the return until November 1. What is the latest date that an IRA contribution can be made in order to qualify as a deduction on the prior year's return?
a. October 15.
b. April 15.
c. August 15.
d. November 1.
Explanation: Choice "b" is correct. For IRAs, the adjustment is allowed for a year ONLY if the contribution is made by the due date of the tax return for individuals (April 15). The due date for filing the tax return under a filing extension is NOT allowed (i.e., filing extensions are NOT considered).

Note: The charitable contribution deduction for contributions of property is normally the lesser of the property's basis or the fair market value of the property on the date of the donation, i.e. the lesser of $14,000 or $25,000 in this question. However, contributions of appreciated property, as in this question, are deducted at fair market value. That deduction might be limited to 50% of AGI ($45,000), or 30% of AGI for long-term appreciated property ($27,000), but $25,000 is the maximum deduction in this case. The "lesser of" rule really applies to depreciated property and keeps a taxpayer from taking a fair-market-value deduction for such property.

Q: A family has two kids and spends $4,000 on each kid for child care. The father's wages are $60,000 and Mary's wages are $2,500. What amount of child and dependent care credit may the Woods claim on their joint tax return?
A: The maximum amount of expenses eligible for one dependent is $3,000. The credit is further limited to the lowest earned income of either spouse, which is Mary's $2,500.
Due to their combined income level they are in the 20% credit range, so the credit is 20% of $2,500, or $500.

Q: A taxpayer (Mills) has $70,000 in taxable income before personal exemptions in the current year. Mills had no tax preferences. His itemized deductions were as follows:
- State and local income taxes: $5,000
- Home mortgage interest on loan: $6,000
- Miscellaneous deductions: $2,000
What amount did Mills report as alternative minimum taxable income before the AMT exemption?
A: $77,000. State and local income taxes ($5,000) and miscellaneous deductions ($2,000) are added back for AMT, while qualified home mortgage interest is not: $70,000 + $5,000 + $2,000 = $77,000.

Q: How long can the AMT credit be carried forward?
A: Forward indefinitely.

Q: Robert had current-year adjusted gross income of $100,000 and potential itemized deductions as follows:
- Medical expenses before percentage limitations: $12,000
- State income taxes: $4,000
- Real estate taxes: $3,500
- Qualified housing and residence mortgage interest: $10,000
- Home equity mortgage interest: $4,500
- Charitable contributions (cash): $5,000
What are Robert's itemized deductions for AMT?
A:
- Medical expenses: $2,000
- Qualified housing and residence interest: $10,000
- Charitable contributions: $5,000
Total AMT deductions: $17,000

Q: A taxpayer had $16,000 in federal income taxes withheld and made no estimated payments for 2008. On April 15, 2009, the taxpayer timely filed an extension request to file her individual tax return and paid $300 of additional taxes. The taxpayer's tax liability was $16,500 for 2008. What amount would be subject to the penalty for the underpayment of estimated taxes?
A: Provided the taxes due after withholding were not over $1,000, there is no penalty for underpayment of estimated taxes. Note that there would be a failure-to-pay penalty on the $200 that was not paid until April 30, but that is a separate penalty.

Q: A taxpayer has AGI of $160,000 for 2008. In 2009, what amount must the taxpayer pay to avoid penalties for underpayment? (For the 2009 return, the answer can be stated as percentages.)
A: The lesser of:
1. 90% of the tax on the return for the current year, paid in four equal installments, or
2. 110% of the prior-year tax liability, paid in four equal installments.
The lesser of the two is called the safe harbor.

Q: A taxpayer filed timely on April 15 but accidentally omitted 30% of his income. How many years does the IRS have to assess?
A: For a 30% understatement of gross income (anything over 25%), the statute of limitations is six years.

Q: Home equity indebtedness is limited to what amount on a joint income tax return?
A: Choice "d" is correct. Home equity indebtedness is limited to $100,000 on a joint income tax return (or single return), but only $50,000 if married filing separately.

Q: An individual's losses on transactions entered into for personal purposes are deductible only if:
a. The losses qualify as casualty or theft losses.
b. The losses can be characterized as hobby losses.
c. The losses do not exceed $3,000 ($6,000 on a joint return).
d. No part of the transactions was entered into for profit.
A: Choice "a" is correct. An individual's losses on transactions entered into for personal purposes are deductible only if the losses qualify as casualty or theft losses. In addition, the individual must itemize deductions, and the loss must exceed 10% of AGI plus $500 per casualty.

Q: Jimet, an unmarried taxpayer, qualified to itemize deductions. Jimet's adjusted gross income was $30,000, and he made a $2,000 cash donation directly to a needy family. During the year, Jimet also donated stock, valued at $3,000, to his church. Jimet had purchased the stock four months earlier for $1,500. What was the maximum amount of the charitable contribution allowable as an itemized deduction on Jimet's current-year income tax return?
a. $0
b. $1,500
c. $2,000
d. $5,000
A: Choice "b" is correct: $1,500.
The deductible amount is the lower of:
- Stock at cost (short-term property): $1,500
- AGI limit (30% of $30,000): $9,000
Allowable contribution: $1,500. (The $2,000 cash given directly to a needy family is not a contribution to a qualified organization and is not deductible.)
Rule: Contributions of long-term property are generally deductible at fair market value at the date of the gift. Contributions of short-term property are generally deductible at the lower of cost or fair market value.

Q: If an individual paid income tax in the current year but did not file a current-year return, within what period must a claim for refund be filed?
A: Choice "a" is correct: two years from the date the tax was paid.

Q: An employee who has had social security tax withheld in an amount greater than the maximum for a particular year may claim:
b. Reimbursement of such excess from his employers, if that excess resulted from correct withholding by two or more employers.
c. The excess as a credit against income tax, if that excess resulted from correct withholding by two or more employers.
d. The excess as a credit against income tax, if that excess was withheld by one employer.
Explanation: Choice "c" is correct. An employee who has had social security tax withheld in an amount greater than the maximum for a particular year may claim the excess as a credit against income tax, if that excess resulted from correct withholding by two or more employers. Choice "a" is incorrect. The excess resulting from the correct withholding by two or more employers may only be claimed as a credit against income tax.

Q: Mrs. Vick's substantiated cash donation to the American Red Cross. Tax treatment:
A. Not deductible.
B. Deductible on Schedule A - Itemized Deductions, subject to a threshold of 7.5% of adjusted gross income.
C. Deductible on Schedule A - Itemized Deductions, subject to a threshold of 2% of adjusted gross income.
D. Deductible on page 1 of Form 1040 to arrive at adjusted gross income.
E. Deductible in full on Schedule A - Itemized Deductions.
F. Deductible on Schedule A - Itemized Deductions, subject to a threshold of 50% of adjusted gross income.
A: Choice "F" is correct. The substantiated cash donation to the American Red Cross would be deductible on Schedule A - Itemized Deductions, subject to the threshold of 50% of AGI.

Q: Determine whether the statement is true or false regarding the Vicks' Year 15 income tax return: the funeral expenses paid by Mr. Vick's estate are a Year 15 itemized deduction.
A: False. Funeral expenses are non-deductible on Form 1040.

Q: Determine whether the statement is true or false regarding the Vicks' Year 15 income tax return: the Vicks' income tax liability will be reduced by the credit for the elderly or disabled.
A: False. The Vicks' income tax liability will not be reduced by the credit for the elderly or disabled, since their AGI amount in excess of the threshold effectively eliminates the credit. (The calculation for the credit is complicated. In short, the credit is 15% of an adjusted "initial amount," which is $7,500 for married filing jointly and quickly goes to zero with earned income.)

Q: The Vicks paid alternative minimum tax in Year 14. True or false: the amount of alternative minimum tax that is attributable to "deferral adjustments and preferences" can be used to offset the alternative minimum tax in the following years.
A: False. The amount of alternative minimum tax that is attributable to "deferral adjustments and preferences" can be used to offset the regular tax liability in future years, not the alternative minimum tax.

Q: Green is self-employed as a human resources consultant and reports on the cash basis for income tax purposes. Listed below is one of Green's business or nonbusiness transactions, as well as possible tax treatments.
Select the appropriate tax treatment for Green's transaction.

Transaction: Qualifying contributions to a simplified employee pension plan.
Tax treatment: Qualifying contributions to a simplified employee pension (SEP) plan are fully deductible to arrive at AGI.

Transaction: In the current year, Oate paid $900 toward continuing education courses (not at a qualified higher-education institution and not qualified for any tax credits) and was not reimbursed by her employer.
Tax treatment: Employee unreimbursed business expenses are deductible as miscellaneous deductions subject to the 2% threshold on Schedule A.

Transaction: During the current year, Oate had investment interest expense that did not exceed her net investment income.
Tax treatment: Investment interest expense is deductible up to net investment income. Unused expense can be carried forward indefinitely.

Transaction: For the current year, Oate paid $1,500 to an unrelated baby-sitter to care for her child while she worked.
Tax treatment: Choice "G" is correct. Child care credits are allowable for payments made to care for children while both spouses work.
The Importance of Procedural Content Generation In Games (160 comments)

Gamasutra reports on a talk by Far Cry 2 developer Dominic Guay in which he discussed why procedural content generation is becoming more and more important as games get bigger and more complex. He also talks about some of the related difficulties, such as the amount of work required for the tools and the times when it's hard to retain control of the art direction. Quoting: "Initially, the team created a procedural sky rendering approach based on algorithms — which led to a totally unconvincing skybox that was clearly inferior to what a hand-authored skybox would be. 'We considered it to be a total failure,' he said. He explained that a great deal of focus must be put on the tools that surround the algorithms, to allow the systems to be properly harnessed. In the end, the game shipped with a revamped procedural sky system that ended up much more effective than the first attempt."

Absolutely (Score:5, Interesting)
If you've never made a game yourself, you'd be amazed at how much work it is to create content. Speaking as someone who has made a game (see Game! - The Witty Online RPG [wittyrpg.com]), I'll tell you it's way, way more work to create content than it is to create game engines or anything else code-related. Try looking at the credits for most any game created in the last 10 years and you'll probably find at least 5 content creators (artist, story editor, copy writer, map/model creator, etc.) for every 1 programmer, if not more. So, absolutely, procedural content generation is a huge boon if you do it correctly, much like the "social" aspect of Web 2.0 was a boon for the web, when done correctly. Of course, it's also tricky to do correctly, very few games are there at the moment, and it'll probably take a while until we have lots of good examples.

Re: (Score:2)
Ahhh, but you forget that it costs a lot less to hire a few decent artists than a few decent programmers.

Re: (Score:2, Insightful)
"Ahhh but you forget that it costs a lot less to hire a few decent artists than a few decent programmers."
Citation needed. My personal experience conflicts with your statement.

Re:Absolutely (Score:4, Interesting)
Trust me, art is expensive. A good procedural system has tweakable values and seeds, and near-instant results (great for prototyping), so an artist won't spend several days or weeks developing a snazzy model only to see their hard work on the cutting-room floor. For professional games, art is much more expensive than code.

Re:Absolutely (Score:4, Funny)
Sorry, this might take a bit longer, I'm still working on that Procedural Game Generator. (But I think EA beat me to it.)

Re: (Score:3, Funny)
"I'm still working on that Procedural Game Generator. (but I think EA beat me to it)"
s/2008/2009/g? Or I guess they have created a true procedural generator that backreferences last year and adds one, so they avoid the tedious yearly modifications.

Re:
...(That's 0.047 megabytes in today's terminology.) It was simply amazing to explore all that territory because it felt so realistic. It was not so fun when the pirates attacked you 5 against 1. ;-)

Elite
TFA says Elite had 8 galaxies of 256 planets each. I thought it was 8 galaxies of 1000 planets each. Whatever it was, it was a lot. And the Acorn Electron version had to fit it into only 32kB. And Frontier was even bigger. Well, it was only a single galaxy, but it had literally millions of stars, most of them orbited by several planets. And now they come with a lame story about sky.
Re:
That's not procedural content generation, it's procedural eye candy generation.

Re:
"It was not so fun when the pirates attacked you 5 against 1. ;-)"
You might as well just quit the game at that point. Get military lasers! Unfortunately the Acorn Electron...

Re: (Score:2)
I guess you've never seen Beat'Em and Eat'Em. Or Custer's Revenge.

Re: (Score:2)
At least the old games didn't move like a grandma stuck in a spider web. Modern games like Vice City take forever to finally "get to the point". (Half an hour later.) You mean I haven't reached the first mission yet?!?!?

Re: (Score:2)
Yup, Elite was pretty amazing. The star locations, names, and commodity prices at every location were all generated algorithmically. Locations aren't too hard - it's just a 2D grid with a decision function weighted towards 'no' to decide if there's a star there. Names, as I recall, were composed from a shortish list of phonemes in a random order (just shuffling the order so you didn't get the same name for two stars). Commodity prices were generated from a small number of attributes like political structure.

Re: (Score:2)
If anyone is interested, Nova has a pretty good documentary (Nova - Hunting the Hidden Dimension, aired on 2008/10/28) on the history and applications of fractals, which are often used for procedural content. For instance, they frequently use fractal algorithms for movie special effects. In the documentary they talk to a guy at ILM who explains how the lava fight scene between Anakin and Obi-Wan uses fractals to generate the splashes of lava seen in the background. They also discussed the first time fractals were used...

Re: (Score:2)
Or have there be a star at each intersection of the grid, but tweak the position with a function which takes the x and y coordinates as input (and the seed, if you want to be able to generate different galaxies). As long as you limit the tweaking to less than half the size of a grid unit, there will never be overlap, and it'll be random enough.

Re: (Score:3, Interesting)
EVE Online used a procedural world generator. Sometime midway through development, they realized they had to tear it all down so that the answer (seed) to life, the universe and everything would indeed be 42. Thus EVE was reborn, and it was... well, it's EVE.

Re:Absolutely (Score:5, Insightful)
"If you've never made a game yourself, you'd be amazed at how much work it is to create content."
And it's so much work because of the art, animation and effects, let's face this fact. Back in the 8-bit days the amount of work required was a lot but manageable; you could put a lot more content into a 2D game in a lot less time than you can in a modern 3D-rendered game, where the resolution is much higher and you need many more artists and designers just to get the most basic and mundane things done that existed in the 2D era. The same content that was easy in 2D takes vastly more time in the third dimension just to get anywhere near the artistic quality of a good team of 2D artists. But this is something companies brought on themselves with their technolust; as we've seen with Mega Man 9, there are still people out there that like games for more than just their looks.

Fractal Generation (Score:5, Interesting)
Ever look at a city from the sky? No, not from some 707 that jumps to 30,000 feet in a matter of seconds, but in a small plane? Google Earth is an approximation - but you lose the depth of everything, and all you see is rooftops. Go up in a small plane at 2,000 feet above a medium-sized city (100,000 and up) or bigger. As a private pilot, I get plenty of chances to do so. One of the things that never ceases to surprise me is just how... fractal most cities are. Houses are lined up in neat rows along streets that are usually either straight or follow some landmark, e.g. a river. Most towns (in California) have an older "downtown" that is always a grid with closely packed, multi-story buildings, alongside an "uptown" of widely spread out, grid-shaped buildings with large parking lots, surrounded by the 'burbs: older homes on wide, grid-shaped streets and newer homes on windy, curved streets that tend to roughly follow landmarks. New cities (built in the last 50 years or so) don't have a "downtown", just an "uptown", but they all have an uptown. Freeways mostly go between the downtown/uptown areas, and then spread out in a roughly bicycle-wheel shape, towards the nearest large community. Like I said, it's not a great substitute, but here's a stereotypical [google.com] California city. These basic development patterns hold true down to the very substance of the buildings themselves... Older buildings use lots of brick or wood; newer buildings tend to be stucco and wood-based plywood paneling. Larger new buildings tend to be steel and concrete; larger old buildings tend to be... brick. If you created a pattern based on the age of the parts of town, and then applied a fractal pattern based on age, you could probably come up with an extremely realistic-looking city with very little effort. Automatically, with roads that make sense (e.g. don't lead nowhere) and houses that look like real neighborhoods, etc. Combined with a bit of a "noise factor", the results would likely be indistinguishable from a real city. Heck, you might not even need to save the actual city - if the results are generated by a fractal math function, you'd just need to store a seed, an integer or some such, so that the city can be auto-generated on demand.
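The jittered-grid placement from the Elite sub-thread above, combined with the store-only-a-seed point from this comment, fits in a few lines. A minimal sketch, with purely illustrative constants and a hash standing in for a real noise function:

import hashlib

GRID = 16      # cells considered on a 16x16 grid (illustrative)
JITTER = 0.4   # max offset, under half a cell, so points never overlap
DENSITY = 0.3  # decision function weighted towards 'no'

def _h(seed, x, y, tag):
    # Deterministic per-cell value in [0, 1) derived from the seed,
    # so the same seed always regenerates the same map.
    digest = hashlib.sha256(f"{seed}:{x}:{y}:{tag}".encode()).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

def generate_map(seed):
    points = []
    for x in range(GRID):
        for y in range(GRID):
            if _h(seed, x, y, "exists") < DENSITY:  # mostly 'no'
                dx = (_h(seed, x, y, "dx") - 0.5) * 2 * JITTER
                dy = (_h(seed, x, y, "dy") - 0.5) * 2 * JITTER
                points.append((x + dx, y + dy))
    return points

print(len(generate_map(42)), "points")  # same seed, same map, nothing stored

The same skeleton works whether the points are stars in a galaxy or building lots in a city block; only the decision and jitter functions change.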
Re: (Score:3, Funny)
"...an extremely realistic-looking city with very little effort. Automatically, with roads that make sense (EG: don't lead nowhere)"
Roads that lead somewhere... hmmm... it wouldn't be much good for simulating Ireland, then.

Re: (Score:2)
"roads that lead somewhere...hmmm...it wouldn't be much good for simulating Ireland, then."
There's a purpose for that, though. It confuses the snakes.

Re: (Score:2)
Roundabouts aren't that common in Canada. It's been talked about for that particular intersection before, but it would probably cause more frustration than needed for a city that's never had a roundabout.

Re:Fractal Generation (Score:4, Interesting)
I hear what you're saying: we'll get to heavily auto-generated content sooner or later simply with advancements in math, science and technology. Your post reminds me of Midtown Madness [microsoft.com]. Midtown Madness had a large play area and you could drive around and do whatever you wanted, but it quickly got boring; we will still need developers to tune the experience and the content. I imagine auto-generated content will not be enough; there will always be tuning required, and always the need for artists and animators for art direction and such. Just generating content for content's sake is not what is going to make a game; you have to have interesting things to do and interesting characters that inhabit the world. There is one thing that makes games memorable: hitting all the notes. Some games get it all right (God of War comes to mind), but most others get a few right (mainly visuals) and everything else barely passable. I can only imagine how long it took to tune the game mechanics in a game like God of War: the settings, the camera angles, etc. It's not merely content generation, it's the experience that the developers create for the gamers themselves. Why should a gamer care about game x/y/z? What's the hook? What's the draw? The thing I loved about 8-bit games was that developers had to find the fun and expression of meaning within constraints, and not depend on mere flash to sell games.

Re:Fractal Generation (Score:5, Funny)
Automatic plot content? Let me help.

def generate_mmorpg_quest():
    from random import choice as c
    Compliment = c(['brave','noble','1337'])
    PC = c(['warrior','chevalier','hunk'])
    McGuffin = c(['baby','necklace','iPhone'])
    Lost = c(['stolen','dropped','forgotten'])
    Arena = c(['cave','forest','library'])
    Reward = c(['baby','necklace','iPhone'].remove(McGuffin))
    Enemy = c(['orcs','terrorists','street mimes'])
    Dialogue = """Oh, %s %s. My %s has been %s was %s in the %s!
    If you can bring it back safely, I fill grant you this %s.
    Be careful, I fear there may be %s!""" % (Compliment, PC,
        McGuffin, Lost, Arena, Reward, Enemy)
    return Dialogue

Real programmers will insist that a domain-specific language gets used, so the interpreter can abstract the context handling, but it's good enough for the first version...

Re: (Score:2)
That reminds me of the automatic map generation of the early Elite versions. The C64 version did not only generate the map procedurally, but also the background info for the worlds on the map. This led to such things as "Planet XXX is known for its edible arts graduates" ;-)

Re: (Score:2)
Rewards = ['baby','necklace','iPhone']
Rewards.remove(McGuffin)
Reward = c(Rewards)
There's a TypeError in the dialogue; I have changed the second line to: %s in the %s! If you can bring it back
Open source ftw.

Re: (Score:2)
Thanks, I saw the typo after I posted. I didn't catch the remove method... that's embarrassing. I guess I should have tested it before committing.
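Folding the replies' fixes back into the original, a runnable version of the generator might look like this; the shortened dialogue line follows the TypeError fix above, list.remove() is split out because it mutates in place and returns None, and the variable names are simply lowercased:

from random import choice as c

def generate_mmorpg_quest():
    compliment = c(['brave', 'noble', '1337'])
    pc = c(['warrior', 'chevalier', 'hunk'])
    mcguffin = c(['baby', 'necklace', 'iPhone'])
    lost = c(['stolen', 'dropped', 'forgotten'])
    arena = c(['cave', 'forest', 'library'])
    # Build the reward pool first, then remove the quest item from it,
    # as the reply suggests.
    rewards = ['baby', 'necklace', 'iPhone']
    rewards.remove(mcguffin)
    reward = c(rewards)
    enemy = c(['orcs', 'terrorists', 'street mimes'])
    # One placeholder per argument now: 7 slots, 7 values.
    return ("Oh, %s %s. My %s has been %s in the %s! "
            "If you can bring it back safely, I will grant you this %s. "
            "Be careful, I fear there may be %s!"
            % (compliment, pc, mcguffin, lost, arena, reward, enemy))

print(generate_mmorpg_quest())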
Re: (Score:2, Interesting)
Oh hell, don't remind me. In history class we had to learn how to identify the age of city areas by their road organization style. I don't think organizing the houses is the hard part in generating a city for a game; painting the individual houses is. Of course you can probably use procedural generation to increase the variety, but you'll still have to feed a large number of basic elements into the system to get a good range of looks instead of copy-paste buildings. The player won't see a large part of the city.

Useful as far as it goes (Score:2)
That's fine if you want to model a generic American city, but it's less useful if you want to set your story in a specific city (although maybe the broad strokes can be done manually and then the detail fractally) or in a European city, which, being so much older, tends to have pockets of non-gridded streets where villages or minor towns have been swallowed up by expansion. ISTM, anyway, that fractal generation of the models of the houses themselves would be far more valuable. Laying out roads is a relatively minor problem.

Re: (Score:2)
Yes, any European city with some history behind it looks "organic" rather than designed. Think about it: I'm living in a small city that was founded in 1190 and that for centuries had to withstand attacks from the Turks. This doesn't mean that you can't generate cities using some fractal procedures, though; it's just that depending on the setting, it might be more difficult.

Re:Fractal Generation (Score:5, Informative)
The guys at Introversion are already trying this: [youtube.com]

Re: (Score:2)
People can see patterns everywhere, but that doesn't mean it's easy to recreate the same effect just with a few basic maths equations. What you'd probably end up with is something that looks far too much like a fractal pattern, unless you added back in all the stuff that takes a lot of effort and looks real.

Starflight (Score:2)
The classic game Starflight used a fractal generation technique to populate the galaxy and create planet surfaces, as I recall. They fit an incredible amount of depth into a game that fit on two low-density 5.25" floppies, and that was part of the way they did it.

Re: (Score:3, Interesting)
"Ever look at a city from the sky? [...] One of the things that never ceases to surprise me is just how... fractal most cities are."
If this sort of thing holds your interest, you might want to spend a quarter of an hour on Ron Eglash's TED talk on African fractals [ted.com]. Just a thought.

Practical applications (Score:2)
You, sir, have just described my dream for Sim City 9000. It is a dream that will almost certainly never come from EA/Maxis (due in no small part to the new direction represented by SimCity Societies), but you lay out a very plausible methodology for procedural urban synthesis. Combine this organic growth with user-directed constraints, and you could have a very compelling simulation.

Christopher Alexander's Patterns (Score:2)

Re: (Score:2)
Here are some screenshots and info [introversion.co.uk].

A real city in North America, maybe (Score:2)
But most cities in the rest of the world aren't built on a grid system; they grew up at random and have completely random street patterns.

Re: (Score:1)
I don't think MM9 proves that there's a real market for 8-bit stuff, just for big name-brand stuff that was popular in the 8-bit era being made in that style. A dev team that's not working with an existing, popular 8-bit IP would have trouble selling an 8-bit-like game, simply because no one wants that when there's no nostalgia associated with it. Of course, from what I see, even one-man dev teams can handle 32-bit 2D graphics just fine.

Re: (Score:3, Informative)
You're partly right and partly wrong. Originally, 3D was actually much better in terms of time requirements. It was much easier to create a 3D model and animate it than it was to draw sprites and every single animation for them frame by frame, particularly if you dared to have your character/whatever "turn around". This problem was exaggerated for games like Desert Strike and Command & Conquer, where things weren't straight 2D but were on a slight angle, so you couldn't get away with simply rotating sprites.

Re: (Score:3, Informative)
I disagree on the point of 2D reaching its limit. There have been plenty of technologies developed for 3D graphics that make significant improvements for 2D games as well. If you haven't seen it before, check out the game Odin Sphere. This is one game that took 3D technologies and used them to great effect in a 2D environment. Odin Sphere used polygons on a two-dimensional plane to stretch and distort the sprites' textures, allowing the game to have more dynamic motion; it then used that technology...

Re: (Score:2)
I'm talking about the type of game, and not just merely the process of creating art. I'm talking about overall content creation for 2D games, not the process of creating 2D art and textures for a 3D game (which still requires more work than a 2D game). Things have also advanced considerably since then in terms of hardware horsepower and graphical resolution, thereby increasing demands in both domains. Consider the level of detail of old 8-bit and 16-bit games; you don't need anywhere near the texture detail...

Re: (Score:1)
"I'll tell you it's way way more work to create content than it is to create game engines or anything else code related."
I once programmed a simple MUD game from scratch, as part of a school project. I only bothered to add about 15 locations to the game. The creativity is more tedious than the coding. Wow.

Re: (Score:2)
> I only bothered to add about 15 locations
Yay, mine had 9, but had support for alternate realities, teleports, ground of varying height (hills, etc.) and big open spaces ("you can see a tall tower on the north-east" -- created procedurally). My sister was supposed to add more content, but I lost interest in the project before it seriously took off the ground.

Re:Absolutely (Score:4, Interesting)
"If you've never made a game yourself, you'd be amazed at how much work it is to create content."
Oh yes! In two or three working nights*, you can go from no OpenGL programming experience to having a working, near-complete version of Tetris [and you could probably get the scoring and acceleration in there if you had coded things the right way in your first try].
*Unless you wait 15-30 minutes for your computer to boot and shut down ;)
But blocks each in a single color, against a black background and some dark gray well walls... nuh-uh. You need a background texture, you need textures for all the pieces. You need to consider having multiple background textures, with and without drop hints, and with different kinds of drop hints. If you want wells of more than one size, you need a background texture, plus some way of generating drop-hint textures from some abstract description and sprites combined to form the real well texture. And maybe a nice low-alpha image of the current block at its fast-dropped position. I'm doing this in 3D. Really it's 2D, but with a slightly tilted view so one can see the undersides of the blocks. Should the undersides have different textures? How should the camera control work: fixed, auto or manual? Do I need to make special overside textures as well? If you want to turn it into a TetriNET client, you need special textures (maybe overlaid textures) to indicate all the special weapon blocks. And I need to consider what the background of the menu should be... some sort of demo mode, perhaps? Tetris Holding LLC has a sound trademark on using the song Korobeiniki in a video game. Should I find copyright-expired versions of some of the other songs used as background music for Tetris? Should I record some myself? Should I cop out and just run ~/.mytetris/{pre,post}hook from a wrapper script, such that the user can easily load tetris.m3u, but on their own and not as a part of the game? And that's just for frigging Tetris, one of the simplest games imaginable.
Re: (Score:2)
"[Plenty of decisions] And that's just for frigging tetris, one of the simplest games imaginable."
Tell me about it [pineight.com].

Re: (Score:2)
Yeah, one of the interesting things about procedural content generation is that it moves the emphasis from quantity of staff working on a project to quality. While it requires fewer lines of code, unless a programmer is at a certain level (PhD-capable, I'd say), they simply can't make a genuine contribution to the project. While not needing quite the same level, the other positions in the project also need higher levels of intellectual capability compared to their peers, simply to be able to work with and take advantage of such systems.

Re: (Score:2)
"Of course, it's also tricky to do correctly, very few games are there at the moment"
Come on, it's not that difficult to rotate the palette of the monsters. Or you can just give the same monsters a load more hit points as you progress from normal to nightmare and hell. ;) Another interesting idea is to create combinatorial mechanics. For instance, in One Must Fall, you can choose between 10 pilots and 10 robots, giving you 100 unique fighting styles for the cost of 20 pieces of artwork. And there's of course the possibility of player-generated content in the style of Spore. I imagine it...

Re: (Score:2)
Those same tricks, if overused, can make the game lifeless and boring. Palette swapping and combinatorial mechanics are OK, but you have to make sure that things remain fresh for the player.

Re: (Score:1)
Procedural content requires someone to have a grasp of both programming and art; those skills are usually mutually exclusive (different brain wiring), so finding someone who can work with procedural generation and make sure it looks good is going to be hard and expensive.

Re: (Score:2)
I'm both a (hobbyist) programmer and a (hobbyist) artist (I play guitar, draw comics, etc.). Programming was always a kind of art to me, because coming up with a good solution to a problem often needs a lot of creative thinking. Copyright law applies to both programs and other works of art. Once a girl (who was a cello player) asked me why I play guitar. I was very surprised by this question... I never really thought about it before. "Because I like it?"

Re: (Score:1)
Personally, I don't find procedural content to be a savior here. It's most promising for automatic generation of large universes (see Elite, Elite Frontier, ...)

Re: (Score:2)
You are aware of DarkTree [darksim.com], right?

Re: (Score:2)
You have _clearly_ not used the program, you are _clearly_ misinterpreting the user interface screenshot, and you _clearly_ don't work as a 3D texture artist (neither do I, but I know a few of them, who all consider procedural texture generators like DarkTree and Genetica valuable tools). A small clue: just because something is on the left side of the screen in the user interface do...

Re: (Score:2)
It's already being done too, in simpler forms, to add modulated detail to textures etc., with fragment shaders essentially becoming part texture generators. There are different balance points on how much you want to be pregenerated by the CPU and cached, how much of it is always recalculated on the fly, and how much is simply streamed from the...

Re: (Score:2)
What game project? I trained in art until I realized I could make good money mucking with computers (and enjoyed it almost as much), but I'd like to get back into some of the more arty stuff.

Re: (Score:3, Insightful)
Man, I can't wait for "Average Mid-Sized City: The RPG!" Procedural content is alright for some things, but where's the gameplay in it? People loved Grand Theft Auto in large part due to the amount of care that went into each area, details most players would never notice, such as the mural in an out-of-the-way subway station. Also, the little differences in the various boroughs, from the design to the locals, that made it feel authentic. You could generate a thousand square miles of procedural city, but it will make...

Re: (Score:2)
Procedural isn't synonymous with homogenous and flat. You can seed different attitudes & parameters for different areas. Rather than imagining a world comprised of a single function, it'd be closer to a world of aggregated functions, where each area takes stronger or weaker impact from the various functions. Each "function" just describes a particular flavor of terrain.

Left4Dead (Score:2)
Left 4 Dead (4-player zombie survival co-op, released yesterday by Valve) deserves a mention. While not exactly procedural "content", it is perhaps one of the first examples of "procedural gameplay" (at least in a modern shooter). For those who don't know, Valve built an algorithmic "pacing engine" into L4D called the Director, which has complete control of what enemies and items you encounter. What this means is that every playthrough is completely different and you never know what to expect. It's exhilarating...

Re: (Score:2, Flamebait)
You got paid for that copy already. Get over yourself. I'll keep buying used DVDs and games, just like I buy used cars. Maybe if you charged less than $50 for 4-5 hours of gameplay, I'd consider buying new.

Quantity Vs Quality (Score:2, Interesting)

Re: (Score:3, Interesting)
[barrys-rig...eviews.com] Also, Introversion is working on something: [youtube.com]

Re: (Score:1)
Yeah, and every single "piece" of a tree is exactly the same as the rest and moves at the same time and with the same speed as the rest. The end result is pretty disconcerting and not a little ugly. Procedural content has a strong tendency right now to also be ugly content that sticks out like a sore thumb compared to anything with even the remotest degree of human involvement.

Re: (Score:2, Interesting)
woo

Handcrafting is a procedure, too (Score:3, Insightful)
Handcrafting a scenery is just another form of procedural generation, at the core. The crafter follows his or her own heuristics and combines them with specific content models and elements as a source, while remaining within the technical constraints projected for the end result. How do you make, say, an RTS map? You start by stating your goals: there will be N starting bases, the landscape will include M ridges so that the length of the path between each base is balanced, and each will have access to pretty much...

Re: (Score:2)
While the RTS map layout could probably be easily generated, it takes a human to make it look good. Computers tend to generate very unrealistic-looking maps.

Re: (Score:3, Interesting)
I haven't played Oblivion, but as far as I know the terrain is all heightmap-based. I think a large part of the boringness of the terrain is a direct result of that, and not so much of the procedural nature itself. A heightmap doesn't allow sudden changes in elevation, and neither does it allow overhangs, so all the basic terrain looks pretty smooth and uninteresting, and also pretty much all the same. Gothic 2 is one of the few games I have seen with real 3D terrain: you have overhangs, cliffs, valleys and mountains.

Re: (Score:1)
[infinity-universe.com]

Re: (Score:2)
A ten-year W.I.P. isn't a good example of procedural content generation in games. Yes, it's a game, but if no one gets to play it then it's kinda pointless.

Re:Quantity Vs Quality (Score:4, Interesting)
It's generally a three-stage process. You have something like a Perlin noise function as input. Then you have a mapping function, which turns it into something in your problem space. Then you have a weighting function, which makes some things more likely than others (often the mapping and weighting functions are combined). The hard bit to get right is the weighting function. A Perlin noise function and a mapping function will give you a universe randomly selected from the set of all possible universes. Making universes which look good and are fun to play in more likely to appear is the tricky bit. This requires a lot of fine tuning. Often it's just a matter of tweaking a few constants when you have the basic algorithm implemented, but it can be really time-consuming to do.

Re: (Score:2)
Using a single noise input and a single function is what I blame for procedural generation getting a bad name: it makes terrain incredibly homogenous and bland in character. Real land is made up of faults and upswells and erosion and gullies. Even quarter-way-decent procedural generation needs the ability to mix and match different factors and features, otherwise you arrive at the same bland failure we've had for a dozen years.
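The three-stage split described above (noise input, mapping function, weighting function) can be sketched end to end. In this minimal sketch, a seeded random.random() stands in for a real Perlin noise source, and the spectral classes and rarity table are purely illustrative:

import random

def noise(rng):
    # Stage 1: noise input. A stand-in for Perlin noise; any source of
    # values in [0, 1) illustrates the pipeline.
    return rng.random()

def mapping(n):
    # Stage 2: map the raw value into the problem space, here a star's
    # spectral class.
    classes = ["O", "B", "A", "F", "G", "K", "M"]
    return classes[int(n * len(classes))]

def weighting(star_class, rng):
    # Stage 3: bias the outcomes. Dim M-class stars are common; bright
    # O-class stars are rare, so most O-class draws get rejected.
    rarity = {"O": 0.05, "B": 0.1, "A": 0.3, "F": 0.5,
              "G": 0.7, "K": 0.9, "M": 1.0}
    return rng.random() < rarity[star_class]

def universe(seed, size=20):
    rng = random.Random(seed)  # seeded, so the universe is reproducible
    stars = []
    while len(stars) < size:
        candidate = mapping(noise(rng))
        if weighting(candidate, rng):
            stars.append(candidate)
    return stars

print(universe(42))

Tweaking the constants in the rarity table is exactly the fine tuning the comment describes; the structure stays fixed while the feel of the output changes.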
Spell it out for me please (Score:1)
So, just for a joke, pretend that I'm an idiot for a second: this procedural generation is about generating landscapes or textures using mathematical algorithms or something? So it's like Terragen [wikipedia.org]? And they do more than landscapes... they generate buildings and clumps of grass and trees? And the textures on dirt have random mess on them to make them less uniform and more believable? Something like that? Are there screenshots of this?

Re: (Score:3, Interesting)
"...they generate buildings and clumps of grass and trees? And the textures on dirt have random mess on them to make less uniform and more believable? Something like that? (..) Are there screenshots of this?"
Just for kicks: a nice example came out of the demoscene a few years back: .kkrieger [theprodukkt.com]. I'd call it a 'proof of concept' 3D shooter. Nothing challenging, just a few levels you can easily walk through. Nothing exceptional on the graphics side. Runs on Windows like so many games. But: a true HW-accelerated 3D shooter. Has enemies that jump / try to hurt you if you let 'em. A few big spaces to walk around in and admire the artwork on the walls. And all of this packed in an amazing 97,280 bytes! Hell, each screen...

Re: (Score:2)
That's not the kind of procedural generation that was talked about. kkrieger uses hand-designed assets but stores them in a format that takes only a tiny amount of space; the kind we're talking about gets some input parameters and generates a huge variety of assets to populate a world with. The goal is to reduce workload, not filesize.

Re: (Score:2)
"The goal is to reduce workload, not filesize."
Tell that to Xbox 360 developers who bitch about having one-seventh the storage of a PS3 disc.

Re: (Score:2)
Yeah, pretty much. Here's [infinity-universe.com] a project that's attempting to create an entire universe in a similar fashion. But don't think it's restricted to just landscapes and textures. Any content can potentially be generated procedurally: buildings, creatures, music, vehicles, whatever.

Re: (Score:1, Offtopic)
Dictionary.com: For: by reason of; because of.

Procedural? (Score:2, Informative)
Sounds like by "procedural" you mean "algorithmic". I guess the algorithm might be defined procedurally, but that's not really what is discussed here.

Re: (Score:3, Informative)
Yes, the actual term is "procedural", as in procedural textures [wikipedia.org] and procedural content generation [wikipedia.org]. No, this is not a discussion of procedural programming [wikipedia.org] vs. object-oriented vs. functional, etc.

Re:
One of the coolest procedurally-generated demonstrations I have seen is .kkrieger [theprodukkt.com], which is a first-person shooter whose content is almost entirely procedurally generated. The effect...

Re:
In fact, there's a wiki [wikidot.com] for this stuff.

Re: (Score:2)
"That's what I said. It's done by procedural programming = writing procedures."
The term has absolutely nothing to do with procedural programming vs. OOP. That would be an implementation issue that is far down in the weeds. The term "procedure" here is meant much more generically, to mean simply "not done by hand". As an example, you could do procedural content generation using only OOP. The point that the parent was making is that it doesn't have to be procedural; there are other ways of specifying algorithms.

Re: (Score:2)
As in: the procedure we usually follow is that we get a graphics designer to make some sketches on paper, and if we like them, we give them to the 3D modelling people? Algorithmic is a much better term, I think.

Re: (Score:2)
I would agree, but for disambiguation purposes I was trying to at least make clear to the confused that the term has nothing to do with the programming style used to generate the content. Which seems to be a point of great confusion.

Cloud rendering (Score:1)
I seem to remember somewhere in the commentary for Final Fantasy: The Spirits Within that they also spent a lot of effort on creating a computer-generated sky, but it looked unconvincing. Eventually they gave up and did the clouds by hand.

Procedural Muscle (Score:2)
If you really want to see the extreme of what procedural generation can do, check out this 3D demo [pouet.net] of a tunnel fly-through written in 256 bytes (YES, THAT'S BYTES, not kilobytes)!!! LS

Procedural Generation: The latest thing (Score:3, Funny)

World simulation and crafting items (Score:2)
I find this aspect particularly interesting, because it promises more variety in games where you can build things, like many MMORPGs. You could experiment with various materials and get believable results, maybe at a level of realism where it actually has educational value. This could be done today at a simplified level, where the materials you use affect the stats of the items, plus maybe textures that are swapped in depending on the material. But with serious simulation (maybe finite element analysis of how...

Procedural Generation wiki (Score:2, Interesting)

Procedural worlds from the creator of Bryce (Score:2, Interesting)
Take a look at this [uisoftware.com]. Those are completely procedural worlds from the creator of Bryce. It's based on ArtMatic, which can be used to generate fantastic textures, and then there is a ray tracer that can use ray casting to create the landscape.
The latest move of Eric Wenger was to use that to generate procedural cities and the result to me is fantastic.
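The three-stage pipeline described in the "Quantity Vs Quality" comment above is easy to demonstrate. Below is a minimal sketch in Python; simple value noise stands in for true Perlin noise, and every function name and constant is an illustrative assumption rather than code from any real engine:

    import random

    random.seed(42)
    # Stage 1 input: a 17x17 lattice of random values ("value noise",
    # a cheap stand-in for the Perlin noise the comment mentions).
    lattice = [[random.random() for _ in range(17)] for _ in range(17)]

    def noise(x, y):
        """Bilinearly interpolated value noise in [0, 1]."""
        xi, yi = int(x) % 16, int(y) % 16
        xf, yf = x - int(x), y - int(y)
        u = xf * xf * (3 - 2 * xf)  # smoothstep weights
        v = yf * yf * (3 - 2 * yf)
        top = lattice[yi][xi] + u * (lattice[yi][xi + 1] - lattice[yi][xi])
        bot = lattice[yi + 1][xi] + u * (lattice[yi + 1][xi + 1] - lattice[yi + 1][xi])
        return top + v * (bot - top)

    def mapping(n):
        """Stage 2: map raw noise into the problem space (a height in metres)."""
        return n * 100.0

    def weighting(h):
        """Stage 3: bias the distribution toward 'interesting' terrain.
        The exponent flattens plains and sharpens peaks; it is exactly the
        kind of hand-tuned constant the comment says takes the most time."""
        return 100.0 * (h / 100.0) ** 2.5

    heightmap = [[weighting(mapping(noise(x * 0.3, y * 0.3)))
                  for x in range(32)] for y in range(32)]
    print("sample row:", [round(h, 1) for h in heightmap[0][:6]])

Changing only weighting() reshapes the whole world without touching the noise or mapping stages, which is why that stage absorbs most of the tuning effort.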
The Flask Mega-Tutorial, Part V: User Logins

This is the fifth chapter of the series. In the previous chapter we created our database and learned how to populate it with users and posts, but we haven't hooked any of that into our app yet. Two chapters ago we saw how to create web forms, and we ended with a fully implemented login form.

In this article we are going to build on what we learned about web forms and databases and write our user login system. At the end of this tutorial our little application will register new users and log them in and out.

To follow this chapter along you need to have the microblog app as we left it at the end of the previous chapter. Please make sure the app is installed and running.

Configuration

As in previous chapters, we start by configuring the Flask extensions that we will use. For the login system we will use two extensions, Flask-Login and Flask-OpenID. Flask-Login will handle our users' logged-in state, while Flask-OpenID will provide authentication. These extensions are configured as follows (file app/__init__.py):

    import os
    from flask.ext.login import LoginManager
    from flask.ext.openid import OpenID
    from config import basedir

    lm = LoginManager()
    lm.init_app(app)
    oid = OpenID(app, os.path.join(basedir, 'tmp'))

The Flask-OpenID extension requires a path to a temp folder where files can be stored. For this we provide the location of our tmp folder.

Python 3 Compatibility

Unfortunately version 1.2.1 of Flask-OpenID (the current official version) does not work well with Python 3. Check what version you have by running the following command:

    $ flask/bin/pip freeze

If you have a version newer than 1.2.1 then the problem is likely resolved, but if you have 1.2.1 and are following this tutorial on Python 3 then you have to install the development version from GitHub:

    $ flask/bin/pip uninstall flask-openid
    $ flask/bin/pip install git+git://github.com/mitsuhiko/flask-openid.git

Note that you need to have git installed for this to work.

Revisiting our User model

The Flask-Login extension expects certain methods to be implemented in our User class. Outside of these methods there are no requirements for how the class has to be implemented. Below is our Flask-Login friendly User class (file app/models.py):

    class User(db.Model):
        id = db.Column(db.Integer, primary_key=True)
        nickname = db.Column(db.String(64), index=True, unique=True)
        email = db.Column(db.String(120), index=True, unique=True)
        posts = db.relationship('Post', backref='author', lazy='dynamic')

        def is_authenticated(self):
            return True

        def is_active(self):
            return True

        def is_anonymous(self):
            return False

        def get_id(self):
            try:
                return unicode(self.id)  # python 2
            except NameError:
                return str(self.id)  # python 3

        def __repr__(self):
            return '<User %r>' % (self.nickname)

The is_authenticated method has a somewhat misleading name. In general this method should just return True unless the object represents a user that should not be allowed to authenticate for some reason. The is_active method should return True for users unless they are inactive, for example because they have been banned. The is_anonymous method should return True only for fake users that are not supposed to log in to the system. Finally, the get_id method should return a unique identifier for the user, in unicode format. We use the unique id generated by the database layer for this. Note that due to the differences in unicode handling between Python 2 and 3 we have to provide two alternative versions of this method.
User loader callback

Now we are ready to start implementing the login system using the Flask-Login and Flask-OpenID extensions. First, we have to write a function that loads a user from the database. This function will be used by Flask-Login (file app/views.py):

    @lm.user_loader
    def load_user(id):
        return User.query.get(int(id))

Note how this function is registered with Flask-Login through the lm.user_loader decorator. Also remember that user ids in Flask-Login are always unicode strings, so a conversion to an integer is necessary before we can send the id to Flask-SQLAlchemy.

The login view function

Next let's update our login view function (file app/views.py):

    from flask import render_template, flash, redirect, session, url_for, request, g
    from flask.ext.login import login_user, logout_user, current_user, login_required
    from app import app, db, lm, oid
    from forms import LoginForm
    from models import User

    @app.route('/login', methods=['GET', 'POST'])
    @oid.loginhandler
    def login():
        if g.user is not None and g.user.is_authenticated():
            return redirect(url_for('index'))
        form = LoginForm()
        if form.validate_on_submit():
            session['remember_me'] = form.remember_me.data
            return oid.try_login(form.openid.data, ask_for=['nickname', 'email'])
        return render_template('login.html',
                               title='Sign In',
                               form=form,
                               providers=app.config['OPENID_PROVIDERS'])

Notice that we have imported several new modules, some of which we will use later.

The changes from our previous version are very small. We have added a new decorator to our view function. The oid.loginhandler decorator tells Flask-OpenID that this is our login view function.

At the top of the function body we check if g.user is set to an authenticated user, and in that case we redirect to the index page. The idea here is that if there is a logged in user already we will not do a second login on top.

The g global is set up by Flask as a place to store and share data during the life of a request. As I'm sure you guessed by now, we will be storing the logged in user here.

The url_for function that we used in the redirect call is defined by Flask as a clean way to obtain the URL for a given view function. If you want to redirect to the index page you may very well use redirect('/index'), but there are very good reasons to let Flask build URLs for you.

The code that runs when we get data back from the login form is also new. Here we do two things. First we store the value of the remember_me boolean in the Flask session, not to be confused with the db.session from Flask-SQLAlchemy. We've seen that the flask.g object stores and shares data through the life of a request. The flask.session object provides a much more complex service along those lines. Once data is stored in the session object it will be available during that request and any future requests made by the same client. Data remains in the session until explicitly removed. To be able to do this, Flask keeps a different session container for each client of our application.

The oid.try_login call in the following line is the call that triggers the user authentication through Flask-OpenID. The function takes two arguments, the openid given by the user in the web form and a list of data items that we want from the OpenID provider. Since we defined our User class to include nickname and email, those are the items we will ask for.

The OpenID authentication happens asynchronously. Flask-OpenID will call a function that is registered with the oid.after_login decorator if the authentication is successful. If the authentication fails the user will be taken back to the login page.
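To make the flask.g versus flask.session distinction above concrete, here is a tiny standalone sketch (not part of microblog; the route names and the secret key are invented for illustration). Data put in g disappears when the request ends, while data put in session comes back on the same client's next visit:

    from flask import Flask, g, session

    app = Flask(__name__)
    app.secret_key = 'change-me'  # sessions are signed, so a key is required

    @app.route('/remember')
    def remember():
        g.note = 'lives only for this request'    # gone when the request ends
        session['note'] = 'kept for this client'  # survives across requests
        return 'stored'

    @app.route('/recall')
    def recall():
        # A later request from the same client still sees the session value;
        # the g attribute set in /remember no longer exists here.
        return session.get('note', 'nothing stored yet')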
The Flask-OpenID login callback

Here is our implementation of the after_login function (file app/views.py):

    @oid.after_login
    def after_login(resp):
        if resp.email is None or resp.email == "":
            flash('Invalid login. Please try again.')
            return redirect(url_for('login'))
        user = User.query.filter_by(email=resp.email).first()
        if user is None:
            nickname = resp.nickname
            if nickname is None or nickname == "":
                nickname = resp.email.split('@')[0]
            user = User(nickname=nickname, email=resp.email)
            db.session.add(user)
            db.session.commit()
        remember_me = False
        if 'remember_me' in session:
            remember_me = session['remember_me']
            session.pop('remember_me', None)
        login_user(user, remember=remember_me)
        return redirect(request.args.get('next') or url_for('index'))

The resp argument passed to the after_login function contains information returned by the OpenID provider.

The first if statement is just for validation. We require a valid email, so if an email was not provided we cannot log the user in.

Next, we search our database for the email provided. If the email is not found we consider this a new user, so we add a new user to our database, pretty much as we learned in the previous chapter. Note that we handle the case of a missing nickname, since some OpenID providers may not have that information.

After that we load the remember_me value from the Flask session, if it is available; this is the boolean that we stored in the login view function. Then we call Flask-Login's login_user function, to register this as a valid login. Finally, in the last line we redirect to the next page, or the index page if a next page was not provided in the request.

The concept of the next page is simple. Let's say you navigate to a page that requires you to be logged in, but you aren't just yet. In Flask-Login you can protect views against non logged in users by adding the login_required decorator. If the user tries to access one of the affected URLs then he or she will be redirected to the login page automatically. Flask-Login will store the original URL as the next page, and it is up to us to return the user to this page once the login process is completed.

For this to work Flask-Login needs to know what view logs users in. We can configure this in the app's module initializer (file app/__init__.py):

    lm = LoginManager()
    lm.init_app(app)
    lm.login_view = 'login'

The g.user global

If you were paying attention, you will remember that in the login view function we check g.user to determine if a user is already logged in. To implement this we will use the before_request event from Flask. Any function that is decorated with before_request will run before the view function each time a request is received. So this is the right place to set up our g.user variable (file app/views.py):

    @app.before_request
    def before_request():
        g.user = current_user

This is all it takes. The current_user global is set by Flask-Login, so we just put a copy in the g object to have better access to it. With this, all requests will have access to the logged in user, even inside templates.

The index view

In a previous chapter we left our index view function using fake objects, because at the time we did not have users or posts in our system. Well, we have users now, so let's hook that up:

    @app.route('/')
    @app.route('/index')
    @login_required
    def index():
        user = g.user
        posts = [
            {
                'author': {'nickname': 'John'},
                'body': 'Beautiful day in Portland!'
            },
            {
                'author': {'nickname': 'Susan'},
                'body': 'The Avengers movie was so cool!'
            }
        ]
        return render_template('index.html',
                               title='Home',
                               user=user,
                               posts=posts)

There are only two changes to this function. First, we have added the login_required decorator. This will ensure that the page is only seen by logged in users. The other change is that we pass g.user down to the template, instead of the fake object we used in the past.

This is a good time to run the application. When you navigate to the application's root URL, instead of the index page you will get the login page. Keep in mind that to log in with OpenID you have to use the OpenID URL from your provider. You can use one of the OpenID provider links below the URL text field to generate the correct URL for you.

As part of the login process you will be redirected to your provider's web site, where you will authenticate and authorize the sharing of some information with our application (just the email and nickname that we requested; no passwords or other personal information will be exposed).

Once the login is complete you will be taken to the index page, this time as a logged in user. Feel free to try the remember_me checkbox. With this option enabled you can close and reopen your web browser and will continue to be logged in.

Logging out

We have implemented the log in, now it's time to add the log out.

The view function for logging out is extremely simple (file app/views.py):

    @app.route('/logout')
    def logout():
        logout_user()
        return redirect(url_for('index'))

But we are also missing a link to log out in the template. We are going to put this link in the top navigation bar, which is in the base layout (file app/templates/base.html):

    <html>
      <head>
        {% if title %}
        <title>{{ title }} - microblog</title>
        {% else %}
        <title>microblog</title>
        {% endif %}
      </head>
      <body>
        <div>Microblog: <a href="{{ url_for('index') }}">Home</a>
        {% if g.user.is_authenticated() %}
        | <a href="{{ url_for('logout') }}">Logout</a>
        {% endif %}
        </div>
        <hr>
        {% with messages = get_flashed_messages() %}
        {% if messages %}
        <ul>
        {% for message in messages %}
        <li>{{ message }}</li>
        {% endfor %}
        </ul>
        {% endif %}
        {% endwith %}
        {% block content %}{% endblock %}
      </body>
    </html>

Note how easy it is to do this. We just needed to check if we have a valid user set in g.user, and if we do we add the logout link. We have also used the opportunity to use url_for in our template.

Final words

We now have a fully functioning user login system. In the next chapter we will be creating the user profile page and will be displaying user avatars on them.

In the meantime, here is the updated application code including all the changes in this article: Download microblog-0.5.zip.

See you next time!

Miguel

#1 buxur said: Hey Miguel, sorry to rush, but when are you going to post the next chapter? Thx

#2 Miguel Grinberg said: buxur, I need a few more days to complete it. I was hoping I would get into a rhythm of one article per month, but I'm a bit behind.

#3 JonoB said: There are a few mistakes on this page: 1. It's flask.ext, not flaskext. 2. You did not define a login_view on the LoginManager in __init__.py, as follows: lm.login_view = 'login'

#4 Miguel Grinberg said: @JonoB: thanks for your detailed review. Regarding #1, I confirmed that at least in my installation (as I described in the first tutorial post) I have login.py inside site-packages/flaskext, not site-packages/flask/ext. Regarding #2 you are absolutely correct. I have the change in the code, but forgot to mention it. This is now corrected.

#5 drew said: Greetings again Miguel, first and foremost thanks so much for providing this resource.
I was hoping you might have insight as to why I get this error:

    /microblog/app/views.py", line 2, in <module>
        from flaskext.login import login_user, logout_user, current_user, login_required
    ImportError: No module named login

Secondly, I'm new to web design and have a great interest in Flask. Are you aware of any resources which might better help me understand how to get off the ground?

#6 Miguel Grinberg said: @drew: the error likely means that the flask-login module isn't installed, or that it isn't installed in the right place. As far as learning resources, there isn't much out there that is specific to Flask, unfortunately. Since Flask is actually a pretty thin layer on top of regular Python, learning about general Python programming is very helpful, as is learning about the HTTP protocol. Good luck!

#7 Jaco said: Hello there. Firstly let me concur: thank you for a highly informative, well written and accurate tutorial. Really enjoying finding my way through it. I also got the "No module named login" import error. Had a look and found the following: under site-packages there is now a flask_login module. Changed the code to "from flask_login import LoginManager" and it worked. Looking forward to the next installment. Thank you.

#8 Miguel Grinberg said: @Jaco: for some reason the flask-login sources got installed in a different place for you than for me. I have a login.py file inside "site-packages/flaskext", and this seems to agree with the Flask-Login documentation. In any case, I don't think it matters much; as long as you can import the module, the rest should work in the same way. Thanks!

#9 jaco_ said: Hello again Miguel. All was going great up to this point. I checked the file flask_login and it contains the docs from the flask-login extension as well as the LoginManager class. I'm running on Ubuntu 12.04 btw. However, adding the view (with decorator) @lm.user_loader I get NameError: 'lm' is not defined. The error occurs in the last line of __init__.py when importing views. Not sure where to look now?

#10 jaco said: Hello Miguel. Doing some research I found this link; right at the bottom there is a discussion on the Extension Import Transition that you might find handy. It explains the problem I encountered. Still trying to figure out the rest though. Thanks again for the efforts and informative tutorials.

#11 jaco said: Hello again. Apologies for all the comments; happy if you want to remove some. This refers to my error above, "'lm' is not defined. Error occurs in last line of __init__.py when importing views." I had the debug server up and running and it crashed as soon as I entered the @lm.user_loader (because I had not added the import of lm at the top yet). Maybe you could just 'add' that import where you show that view? Anyway, looking forward to the next installments. Thank you.

#12 Miguel Grinberg said: @jaco: do you have a "from app import lm" at the top of views.py?

#13 Siros said: Thank you. A little notice here: flaskext.login and flaskext.openid do not work any more. It has to be flask.ext.login and flask.ext.openid. Thanks again for the detailed tut.

#14 pod said: For Flask 0.8 or later:

    from flaskext.login import LoginManager
    from flaskext.openid import OpenID

change to:

    from flask.ext.login import LoginManager
    from flask.ext.openid import OpenID

#15 Andrey said: After the last listing you say: "We have also used the opportunity to use url_for in our template." Actually, you haven't. At least in the snippet on this page, as the zip archive contains the correct version.
Also, as was already said, in the latest versions of Flask the extension import system has been changed. These import problems are because you and your readers are using different versions of Flask and other modules. They will be eliminated if you require readers to download the same module versions as you have. This is very easy: just run "pip freeze > requirements.txt" and publish the resulting file. Your users can recreate the exact environment with "pip install -r requirements.txt". You could publish the requirements file in the first step of your tutorial. Many thanks for this tutorial series!

#16 Miguel Grinberg said: Andrey, thanks for pointing out the url_for problem, I have corrected the template. As for the module import problems, I don't think requiring a specific version of Flask and its extensions is the way to go; it does not make sense to me to require people to run old software just because that's what works for me. If you go read the documentation for the most recent versions of some of these extensions, they still indicate flaskext is their root namespace. I know this is going to change at some point given that flaskext has been deprecated, but at least up until a couple of weeks ago all the most recent versions of the extensions I'm using in my project worked with the code as I have it published (on Windows, the platform I spend most of my time on; I guess I should check the others as well, but I haven't). I routinely upgrade my Flask virtual environment and try to keep things working on the latest stuff. Thanks for your comment.

#17 Edwin said: I'm getting an "ImportError: cannot import name lm". Can anyone help me with that? Thanks.

#18 Miguel Grinberg said: @Edwin: did you read all the comments in this article regarding the location of flask-login?

#19 Edwin said: Ok, sorry, just realized comment #12 was there... that was the issue. Thanks! Excellent tutorial by the way. Congrats.

#20 Adam said: I'm having the same problem as Jaco. Although I have successfully run...

    pip install flask-login

All three of the following produce the "lm is not defined" error:

    from flask_login import LoginManager
    from flaskext.login import LoginManager
    from flask.ext.login import LoginManager

despite having the same code in my __init__ and views.

#21 Miguel Grinberg said: @Adam: are you sure you are running run.py with the virtualenv's Python interpreter? The most likely cause for your problem is that you installed flask-login in one virtualenv but are running the application under another, or the global one.

#22 Don said: I'm at User Logins and so far this is one of the best tutorials I've ever done. I mean that. You took the time to do this right and it's great! Thank you.

#23 Steven Elliott said: setup_app() is deprecated. Instead:

    lm = LoginManager()
    lm.init_app(app)
    oid = OpenID(app, os.path.join(basedir, 'tmp'))

#24 Peter said: Hi Miguel. Awesome blog, but I get an error when I try to use OpenID to log in:

    AttributeError: 'NoneType' object has no attribute 'split'
    nickname = resp.email.split('@')[0]

Looks like OpenID is not returning an email address, which is a bit of an issue. Any suggestions?

#25 joshua said: Good job! There are a few mistakes on this page:

1. from flaskext.login import LoginManager / from flaskext.openid import OpenID
2. from flaskext.login import login_user, logout_user, current_user, login_required
NAME
     setreuid -- set real and effective user IDs

LIBRARY
     Standard C Library (libc, -lc)

SYNOPSIS
     #include <unistd.h>

     int setreuid(uid_t ruid, uid_t euid);

DESCRIPTION
     The real and effective user IDs of the current process are set according
     to the arguments. If ruid or euid is -1, the current uid is filled in by
     the system. If the real user ID is changed (i.e. ruid is not -1) or the
     effective user ID is changed to something other than the real user ID,
     then the saved user ID will be set to the effective user ID.

     The setreuid() system call has been used to swap the real and effective
     user IDs in set-user-ID programs to temporarily relinquish the
     set-user-ID value. This purpose is now better served by the use of the
     seteuid(2) system call. When setting the real and effective user IDs to
     the same value, the standard setuid() system call is preferred.

RETURN VALUES
     The setreuid() function returns the value 0 if successful; otherwise the
     value -1 is returned and the global variable errno is set to indicate
     the error.

ERRORS
     [EPERM]  The current process is not the super-user and a change other
              than changing the effective user-id to the real user-id was
              specified.

SEE ALSO
     getuid(2), issetugid(2), seteuid(2), setuid(2)

HISTORY
     The setreuid() system call appeared in 4.2BSD.
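The page has no EXAMPLES section, so here is a minimal sketch of the swap-and-restore idiom described above, written with Python's os module, which wraps the same system call. It only has a visible effect when run from a set-user-ID or super-user process; for an unprivileged user both IDs are equal and the calls are harmless no-ops:

    import os

    ruid, euid = os.getuid(), os.geteuid()
    print("real =", ruid, "effective =", euid)

    # Historical setreuid() idiom: swap the IDs to temporarily give up the
    # set-user-ID privilege (seteuid(2) is the preferred way today).
    os.setreuid(euid, ruid)
    # ... do work with reduced privileges ...
    os.setreuid(ruid, euid)   # swap back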
Tcl Programming/expr

Overview

Arithmetic and logical operations (plus some string comparisons) are in Tcl concentrated in the expr command. It takes one or more arguments, evaluates them as an expression, and returns the result. The language of the expr command (also used in condition arguments of the if, for, while commands) is basically equivalent to C's expressions, with infix operators and functions. Unlike C, references to variables have to be done with $var. Examples:

    set a [expr {($b + sin($c))/2.}]
    if {$a > $b && $b > $c} {puts "ordered"}
    for {set i 10} {$i >= 0} {incr i -1} {puts $i...} ;# countdown

The difference between Tcl syntax and expr syntax can be contrasted like this:

    [f $x $y]  ;# Tcl: embedded command
    f($x,$y)   ;# expr: function call, comma between arguments

In another contrast to Tcl syntax, whitespace between "words" is optional (but still recommended for better readability :) And string constants must always be quoted (or braced if you wish):

    if {$keyword eq "foo"} ...

Then again, Tcl commands can always be embedded into expressions, in square brackets as usual:

    proc max {x y} {expr {$x>$y? $x: $y}}
    expr {[max $a $b] + [max $c $d]}

In expressions with numbers of mixed types, integers are coerced to doubles:

    % expr 1+2.
    3.0

It is important to know that division of two integers is done as integer division:

    % expr 1/2
    0

You get the (probably expected) floating-point division if at least one operand is double:

    % expr 1/2.
    0.5

If you want to evaluate a string input by the user, but always use floating-point division, just transform it, before the call to expr, by replacing "/" with "*1./" (multiply with floating-point 1. before every division):

    expr [string map {/ *1./} $input]

Brace your expressions

In most cases it is safer and more efficient to pass a single braced argument to expr. Exceptions are:

- no variables or embedded commands to substitute
- operators or whole expressions to substitute

The reason is that the Tcl parser parses unbraced expressions, while expr parses the result again. This may let malicious code exploits succeed:

    % set e {[file delete -force *]}
    % expr $e   ;# will delete all files and directories
    % expr {$e} ;# will just return the string value of e

That braced expressions evaluate much faster than unbraced ones can be easily tested:

    % proc unbraced x {expr $x*$x}
    % proc braced x {expr {$x*$x}}
    % time {unbraced 42} 1000
    197 microseconds per iteration
    % time {braced 42} 1000
    34 microseconds per iteration

The precision of the string representation of floating-point numbers is also controlled by the tcl_precision variable. The following example returns nonzero because the second term was clipped to 12 digits in making the string representation:

    % expr 1./3-[expr 1./3]
    3.33288951992e-013

while this braced expression works more like expected:

    % expr {1./3-[expr 1./3]}
    0.0

Operators

Arithmetic, bitwise and logical operators are like in C, as is the conditional operator found in other languages (notably C):

- c?a:b -- if c is true, evaluate a, else b

The conditional operator can be used for compact functional code (note that the following example requires Tcl 8.5 so that fac() can be called inside its own definition):

    % proc tcl::mathfunc::fac x {expr {$x<2? 1 : $x*fac($x-1)}}
    % expr fac(5)
    120

Comparison operators

If operands on both sides are numeric, these operators compare them as numbers. Otherwise, string comparison is done.
They return a truth value, 0 (false) or 1 (true):

- == equal
- != not equal
- > greater than
- >= greater or equal than
- < less than
- <= less or equal than

As truth values are integers, you can use them as such for further computing, as the sign function demonstrates:

    proc sgn x {expr {($x>0) - ($x<0)}}
    % sgn 42
    1
    % sgn -42
    -1
    % sgn 0
    0

String operators

The following operators work on the string representation of their operands:

- eq string-equal
- ne not string-equal

Examples of how "equal" and "string equal" differ:

    % expr {1 == 1.0}
    1
    % expr {1 eq 1.0}
    0

List operators

From Tcl 8.5, the following operators are also available:

- a in b - 1 if a is a member of list b, else 0
- a ni b - 1 if a is not a member of list b, else 0

Before 8.5, it's easy to write an equivalent function:

    proc in {list el} {expr {[lsearch -exact $list $el]>=0}}

Usage example:

    if [in $keys $key] ...

which you can rewrite, once 8.5 is available wherever your work is to run, with

    if {$key in $keys} ...

Functions

The following functions are built-in:

- abs(x) - absolute value
- acos(x) - arc cosine. acos(-1) = 3.14159265359 (Pi)
- asin(x) - arc sine
- atan(x) - arc tangent
- atan2(y,x) - arc tangent of y/x
- ceil(x) - next-highest integral value
- cos(x) - cosine
- cosh(x) - hyperbolic cosine
- double(x) - convert to floating-point number
- exp(x) - e to the x-th power. exp(1) = 2.71828182846 (Euler number, e)
- floor(x) - next-lower integral value
- fmod(x,y) - floating-point modulo
- hypot(y,x) - hypotenuse (sqrt($y*$y+$x*$x), but at higher precision)
- int(x) - convert to integer (32-bit)
- log(x) - logarithm to base e
- log10(x) - logarithm to base 10
- pow(x,y) - x to the y-th power
- rand() - random number > 0.0 and < 1.0
- round(x) - round a number to nearest integral value
- sin(x) - sine
- sinh(x) - hyperbolic sine
- sqrt(x) - square root
- srand(x) - initialize random number generation with seed x
- tan(x) - tangent
- tanh(x) - hyperbolic tangent
- wide(x) - convert to wide (64-bit) integer

Find out which functions are available with info functions:

    % info functions
    round wide sqrt sin double log10 atan hypot rand abs acos atan2 srand sinh floor log int tanh tan asin ceil cos cosh exp pow fmod

Exporting expr functionalities

If you don't want to write [expr {$x+5}] every time you need a little calculation, you can easily export operators as Tcl commands:

    foreach op {+ - * / %} {proc $op {a b} "expr {\$a $op \$b}"}

After that, you can call these operators like in LISP:

    % + 6 7
    13
    % * 6 7
    42

Of course, one can refine this by allowing variable arguments at least for + and *, or the single-argument case for -:

    proc - {a {b ""}} {expr {$b eq ""? -$a: $a-$b}}

Similarly, expr functions can be exposed (note that $x must be escaped so it is substituted when the proc runs, not when it is defined):

    foreach f {sin cos tan sqrt} {proc $f x "expr {$f(\$x)}"}

In Tcl 8.5, the operators can be called as commands in the ::tcl::mathop namespace:

    % tcl::mathop::+ 6 7
    13

You can import them into the current namespace, for shorthand math commands:

    % namespace import ::tcl::mathop::*
    % + 3 4 ;# way shorter than [expr {3 + 4}]
    7
    % * 6 7
    42

User-defined functions

From Tcl 8.5, you can provide procs in the ::tcl::mathfunc namespace, which can then be used inside expr expressions:

    % proc tcl::mathfunc::fac x {expr {$x < 2? 1: $x * fac($x-1)}}
    % expr fac(5)
    120

This is especially useful for recursive functions, or functions whose arguments need some expr calculations:

    % proc ::tcl::mathfunc::fib n {expr {$n<2? 1: fib($n-2)+fib($n-1)}}
    % expr fib(6)
    13