Dataset columns (name, Arrow type, and observed range or distinct-value count):

| column | type | values |
| --- | --- | --- |
| id | int64 | 2 – 42.1M |
| by | large_string | lengths 2 – 15 |
| time | timestamp[us] | |
| title | large_string | lengths 0 – 198 |
| text | large_string | lengths 0 – 27.4k |
| url | large_string | lengths 0 – 6.6k |
| score | int64 | -1 – 6.02k |
| descendants | int64 | -1 – 7.29k |
| kids | large_list | |
| deleted | large_list | |
| dead | bool | 1 class |
| scraping_error | large_string | 25 distinct values |
| scraped_title | large_string | lengths 1 – 59.3k |
| scraped_published_at | large_string | lengths 4 – 66 |
| scraped_byline | large_string | lengths 1 – 757 |
| scraped_body | large_string | lengths 1 – 50k |
| scraped_at | timestamp[us] | |
| scraped_language | large_string | 58 distinct values |
| split | large_string | 1 value |
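Rows with this schema can be filtered with ordinary Python before looking at the `scraped_*` columns. A minimal sketch using a tiny in-memory sample that mirrors the schema (ids, authors, and flags are taken from the rows shown below; the filter logic is illustrative, not part of the dataset):

```python
# Tiny in-memory sample mirroring the schema above; most fields of these
# rows are null (None), as in the dataset itself.
rows = [
    {"id": 42048835, "by": "LearningToWalk", "title": None,
     "score": 1, "dead": True, "deleted": None},
    {"id": 42048922, "by": "Vadim_samokhin",
     "title": "Loneliness Epidemy in South Korea",
     "score": 1, "dead": None, "deleted": None},
    {"id": 42048972, "by": None, "title": None,
     "score": None, "dead": None, "deleted": ["true"]},
]

# A typical first filter: keep items that are neither dead nor deleted
# and that carry a title.
live = [
    row for row in rows
    if row["dead"] is None and row["deleted"] is None
    and row["title"] is not None
]
print([row["id"] for row in live])  # [42048922]
```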
Sample rows (all from the `train` split; empty cells are null):

| id | by | time | title | url | score | descendants | kids | notes |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 42,048,835 | LearningToWalk | 2024-11-05T05:30:17 | | | 1 | | [42048836] | dead |
| 42,048,843 | RahulBodana | 2024-11-05T05:32:37 | | | 1 | | | dead |
| 42,048,880 | laurex | 2024-11-05T05:42:40 | | | 9 | | [42048930] | dead |
| 42,048,883 | NaOH | 2024-11-05T05:43:19 | | | 1 | | | dead |
| 42,048,890 | ChenFeng123 | 2024-11-05T05:44:21 | | | 1 | | | dead |
| 42,048,922 | Vadim_samokhin | 2024-11-05T05:51:43 | Loneliness Epidemy in South Korea | https://www.cnn.com/2024/10/24/asia/south-korea-loneliness-deaths-intl-hnk/index.html | 1 | 0 | | |
| 42,048,925 | volume988 | 2024-11-05T05:52:15 | | | 1 | | | dead |
| 42,048,941 | handfuloflight | 2024-11-05T05:56:36 | Duckie AI | https://www.duckie.ai/ | 1 | 2 | [42050006] | |
| 42,048,949 | geox | 2024-11-05T05:59:16 | Man arrested for attempting to destroy Nashville power site | https://www.justice.gov/usao-mdtn/pr/man-arrested-and-charged-attempting-use-weapon-mass-destruction-and-destroy-energy | 15 | 19 | [42049211, 42050683] | |
| 42,048,969 | jslpc | 2024-11-05T06:05:34 | Toward a Practical Perceptual Video Quality Metric (2016) | https://netflixtechblog.com/toward-a-practical-perceptual-video-quality-metric-653f208b9652 | 1 | 0 | | |
| 42,048,972 | | 2024-11-05T06:07:36 | | | | | | deleted |
| 42,048,981 | tesamoll | 2024-11-05T06:10:22 | Ask HN: How can I improve my chess game? | | 3 | 3 | [42049071, 42049436, 42050640] | text: "I want to improve my chess game. Can you recommend me some resources (websites, books...)" |
| 42,048,983 | cmaruz | 2024-11-05T06:10:38 | The Alternative to Performance Reviews for Software Engineers (2023) | https://betterprogramming.pub/the-alternative-to-performance-reviews-for-software-engineers-7b6d1c9537dd | 1 | 0 | | scraping_error set; scraped fields below |

For row 42,048,983, `scraping_error` reads: "Failed after 3 attempts. Last error: Quota exceeded for quota metric 'Generate Content API requests per minute' and limit 'GenerateContent request limit per minute for a region' of service 'generativelanguage.googleapis.com' for consumer 'project_number:854396441450'."
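The `scraping_error` string for this row ("Failed after 3 attempts. Last error: ...") suggests the scraper wrapped each fetch in a bounded retry loop. A minimal sketch of that pattern; the function name, attempt count, and backoff schedule are assumptions for illustration, not taken from the dataset's actual pipeline:

```python
import time

def fetch_with_retries(fetch, url, attempts=3, base_delay=1.0):
    """Call fetch(url) up to `attempts` times with exponential backoff.

    Returns the first successful result; otherwise raises an error in the
    style of the scraping_error column ("Failed after N attempts. ...").
    """
    last_error = None
    for attempt in range(attempts):
        try:
            return fetch(url)
        except Exception as exc:  # e.g. a quota / rate-limit error
            last_error = exc
            time.sleep(base_delay * 2 ** attempt)
    raise RuntimeError(f"Failed after {attempts} attempts. Last error: {last_error}")

# Usage with a stub that always hits a quota error:
def always_quota(url):
    raise RuntimeError("Quota exceeded for quota metric "
                       "'Generate Content API requests per minute'")

try:
    fetch_with_retries(always_quota, "https://example.com", base_delay=0.0)
except RuntimeError as exc:
    message = str(exc)
print(message)
```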
The scraped article for row 42,048,983 (`scraped_title`, `scraped_published_at`, `scraped_byline`):

**The Alternative to Performance Reviews for Software Engineers**
*Mario Caropreso, published 2023-02-09T12:59:11.435Z*
*Learning and development reviews*

**Introduction**

With the increasing reliance on software in every aspect of our lives, applying management techniques from traditional industries to software engineering teams has gained momentum.

One practice that has recently seen widespread adoption is the use of performance measurements. Despite the promises of such methodologies, many leaders still struggle to measure the performance of their engineering teams: at first, adopting such a system seems to bring a performance improvement, but it soon becomes clear that these improvements don't translate into increased customer value.

One particular use of performance measurements is to assess and evaluate the performance of individual engineers. Many engineers are familiar with the concept of performance reviews, but in case you want to refresh your knowledge, Gergely Orosz has published an excellent piece on how the performance review process works at large tech companies.

Performance reviews are so widespread that a world without them seems inconceivable. Nonetheless, many people are familiar with, or have experienced, the shortcomings of traditional performance measurement systems, yet they bear with them, either because no alternative seems to exist or because they think things will get better if the current system is improved. But as with anything in business and management, performance reviews are just a tool introduced to solve a problem.
We should look for alternatives when the tool is not merely unhelpful but harmful. In this article, I would like to show that:

- In software engineering, performance measurement systems are naturally bound to introduce dysfunctions, because performance cannot be observed across all critical dimensions
- Any organisation that uses measurement-driven performance assessments is at greater risk of destroying customer value
- An alternative, optimal system can be built on intrinsic motivation

**Why Do We Measure Performance?**

Companies that use performance assessments usually do so to incentivise engineers to spend more effort on achieving the company's goals. The basic assumption of this model is that if the increased effort can be directed towards predetermined targets, more value will be produced.

Companies introduce explicit measures of performance to assess engineers objectively, reduce bias, and increase consistency. In a measurement-based performance assessment system, the performance of a software engineer is assessed against a finite set of dimensions, each with its own set of measurements. Based on the output of those measurements, the engineer is given a rank, which typically corresponds to a bonus.

While this looks like a good system on paper (the more value you produce, the better the reward), the reality is that such a system introduces distortions that can subvert its own goal: the organisation ends up performing worse than it would without the system in place. Before looking into why this happens, it is worth summarising the typical dysfunctions we observe when performance measurement systems are introduced.

**The Dysfunctions of Measuring Performance**

Anyone who has been part of a company with measurement-based performance assessments will recognise the following dysfunctions:

- undermining teamwork
- attention to individual performance rather than system performance
- focus on quantity over quality
- outdated performance standards
- inaccuracy of measurements
- a play-it-safe mindset
- constant dissatisfaction
- limited pride in the work

Let's look at each of them.

**Undermining teamwork.** When assessing the performance of an individual contributor, we need to extract the individual's specific contributions from the team's total outcome, while making sure we don't attribute to them something to which they did not contribute. This is easier said than done, for two reasons:

1. It assumes all effort is observable and that the manager has perfect knowledge of which factors contribute to which outcomes. In practice, much of the production activity is mental, or happens in the interactions between people, both of which are difficult to observe and measure. Moreover, it is usually difficult to be certain about the link between inputs (such as effort) and outputs (such as outcomes).
2. The number of people contributing to an outcome is bigger than we think. In many cases, when someone makes an extraordinary contribution to a project, they neglect other aspects of the job that someone else picks up. How do we evaluate the second person's contribution? For example, imagine that contributor A needs to focus on a project, and coworker B picks up A's on-call duties so A can focus. A's project becomes very successful, but B's effort made it possible. How much should B be rewarded?

Since performance assessments are based on individual performance, individuals often face a choice between doing what is best for their own salary or rewards and attending to the team's needs. In most cases, the needs of the team get sacrificed.

**Attention to individual performance rather than system performance.** The system in which work happens plays a bigger role in an individual contributor's performance than the individual themselves. To improve their own performance, the individual contributor has three options:

a. Improve the system
b. Make the numbers look better
c. Game the system

Improving the system is the best option from the customer's point of view, since it improves not only the individual's performance but everybody else's too. But it is also a daunting task, often requiring cross-functional collaboration and work across several layers of management. The other two options are cheaper, and while they deliver no value to the customer (on the contrary, they can destroy value), they can result in a better performance assessment. It is easy to see why, most of the time, people go with one of the last two options.

**Focus on quantity more than quality.** Quantity and quality are usually two of the most common aspects of work that companies try to measure.
But while quantity is easy to measure (there is always something you can count at any stage of the production process), quality is harder to assess: in knowledge work, the production activity is largely mental and hence difficult to observe. This difference pushes people to maximise their effort on the dimension that is easily measured and to reduce their focus on the dimension that isn't.

**Outdated performance standards.** To assess individuals, an organisation must first set the standards against which people will be evaluated. These standards can be set explicitly (a document describes what the employee is expected to achieve by the end of the performance cycle) or implicitly (individuals learn what the company expects by watching what gets rewarded). In both cases, the standards are deeply rooted in the past: they describe what the company thinks it will need in the upcoming cycle, or what has been rewarded before. As such, they limit individuals' ability to react to opportunities and do what is best for the customer when new circumstances arise. Conditions change constantly in today's business climate, and the ability to keep up with changing conditions is a competitive advantage.

**Inaccuracy of measurements.** In classical control theory, one can design a controller that monitors a process, compares it with a set point, and, by applying system inputs, drives the system to a desired state with some degree of optimality. It is tempting to apply the same theory to an engineering team, but one small difference makes it impossible to guarantee the accuracy of the measurement system: when we apply measurements to a system made of people, the components behave in a self-interested way.

In other words, people react to a measurement system when they know about it. The purpose of a measurement system is to close the gap between measured performance and a desired target; if the people under measurement know this, it is rational for them to subvert the system so that it measures no gap. In complex systems like product development, individuals control the flow of information, and it is easy to conceal measures that would put their own or their team's performance under the spotlight. It is also easy to make the numbers look better without any corresponding increase in customer value.

**Play-it-safe mindset.** If people's pay depends on meeting certain standards, it follows that people will try to set those standards so that they are achievable within the system they work in and with their current capabilities. There is plenty of rhetoric about ambitious goals ("if you are achieving more than 70% of your OKRs, you have not been ambitious enough"), but in truth it is very difficult to ascertain how ambitious a goal really is, and the people setting goals can always depict them as more challenging than they are. The cumulative result is that the whole organisation starts targeting easy, achievable goals, and hard targets that would likely result in a miss are seldom set.

**Constant dissatisfaction.** Because of human nature, most people who go through performance assessments end up dissatisfied with the system. Consider, for example, the following scenarios:

- You are rated in the top 50% of the company, but someone else is rated in the top 20%
- You are rated in the top 20%, but someone else is rated in the top 10%
- You are rated in the top 20%, but last year you were rated in the top 5%

When this happens, people can react in ways that destroy value for the organisation. They can become cynical and distrust the system, or they can start doubting themselves and lowering their own expectations. Even the people with the best ratings are not immune: they might develop imposter syndrome, attributing their good results to luck and fearing that their "true" incompetence will be revealed at some point.

**Limiting pride in the work.** When people are measured and rewarded by an external system, control over performance moves from the individual contributor to the organisation. The individual is no longer working to their own standards and expectations but to someone else's. This, in turn, creates a separation between individual contributors and their work: it is not theirs anymore; it is not work crafted according to their skills and experience.

Having described the most common dysfunctions, we can now look at why they arise.

**Why Dysfunctions Arise**

Robert D. Austin has developed a useful model of how motivational measurement affects performance. For a deep dive into his theory, I suggest reading his book "Measuring and Managing Performance in Organizations". In Austin's model, the customer and the company would like employees to allocate their effort in a way that maximises customer value at a cost that allows the company to make a profit.
The employee has a limited capacity for effort, which must be spent across several activities; for simplicity, assume two. The customer derives some value from each combination of effort spent on the two activities. If we connect all the combinations that deliver the same value, we obtain a set of "same-value curves". Similarly, connecting all the points at which the employee spends the same total effort gives a set of "effort capacity" lines.

The point at which an effort capacity line is tangent to a same-value curve is the preferred allocation: the effort distribution that maximises customer value at that level of effort. Connecting all the preferred allocations gives the "best-mix path": the set of allocations that, for any given level of effort, maximises the return to the customer.

*[Figure: example of a best-mix path based on effort capacity and customer value]*

The goal of an incentive system is to increase the level of effort spent by the employee and to align the effort distribution with customer value. When a company adopts a performance measurement system, its goal is to implement an incentive scheme called Full Supervision: every dimension critical to the employee's performance is measured, so the employee knows exactly how to allocate their effort, because that combination yields the highest reward. Full Supervision is attractive, but it is not always possible; in practice it is much rarer than one would think.

When a critical dimension of performance can't be measured, a measurement-based assessment system will cause the individual to optimise their effort along the dimensions that can be measured, to the detriment of those that can't. This situation is called Partial Supervision. Since only some dimensions are observable, the employee is free to decide how much effort to spend on the non-measurable dimensions while maximising their reward on the measured ones. The resulting allocation is not optimal, since the employee can set a target for the non-measurable dimensions that provides less value to the customer. Dysfunctions arise when a company believes it is applying full supervision but is in fact in a partial supervision situation.

*[Figure: partial-supervision scenario]*

In the figure, the horizontal axis represents a measured activity and the vertical axis an unmeasured one. Initially, the employee might allocate effort on the best-mix path (point P1). Under the pressure of the measurement system, the employee starts to spend more time on Activity 1. At first this is still advantageous for the company, as the employee increases their total effort in both activities (P2), but at some point the employee realises they can get a better reward by focusing only on Activity 1 (P3). The company still gets some value from the employee's work, but not as much as it could. Left unchecked, the pressure of the measurement system can lead to a situation where the employee optimises entirely for their own reward (P4) and delivers no value. The employee's choices (the "choice path") are no longer aligned with the best-mix path.

Many people have seen this happen. Say a company adopts a performance assessment system in which engineers are evaluated on lines of code (LOC) written, number of tasks closed, and production incidents caused.
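The partial-supervision distortion can be made concrete with a small numerical sketch. The value and reward functions below are my own illustrative assumptions, not Austin's: customer value is concave in both effort dimensions, but the reward formula sees only the measurable LOC-style output, so maximising the reward abandons the unmeasured work entirely.

```python
import math

EFFORT_BUDGET = 10.0  # total effort the engineer can spend

def customer_value(measured, unmeasured):
    # Concave in both dimensions: the customer needs a mix of measurable
    # output (code, closed tasks) and unmeasurable work (mentoring, quality).
    return math.sqrt(measured) + math.sqrt(unmeasured)

def reward(measured, unmeasured):
    # Partial supervision: the bonus formula sees only the measured dimension.
    return measured

def best_split(objective, steps=1000):
    # Grid-search how the fixed effort budget is divided between dimensions.
    splits = ((EFFORT_BUDGET * i / steps, EFFORT_BUDGET * (steps - i) / steps)
              for i in range(steps + 1))
    return max(splits, key=lambda s: objective(*s))

value_split = best_split(customer_value)  # even mix: (5.0, 5.0)
reward_split = best_split(reward)         # all measured: (10.0, 0.0)

# Chasing the reward destroys customer value relative to the best mix.
print(value_split, customer_value(*value_split))
print(reward_split, customer_value(*reward_split))
```

Under these assumptions, the value-maximising split spends effort evenly, while the reward-maximising split puts everything into the measured dimension and delivers strictly less customer value.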
Everybody who has been doing this job for a couple of years knows that these are hardly the right things to measure, but they are among the easiest. Suppose the company recognises the importance of other activities, such as time spent helping other engineers ramp up, but chooses not to measure them because the cost would be too high. Under these assumptions, the model predicts that engineers will keep increasing their effort on the measured dimensions and spend less on the unmeasured ones. By the end of the performance cycle, we can expect that:

- individuals will write lots of lines of code, with no guarantee of their quality or of whether those lines are needed
- a lot of tasks will be created and closed
- there will be a tendency to hide production incidents for fear of hurting the individual's metrics
- spending time helping others will be discouraged

If the current way we do performance reviews introduces such distortions, how can we do better?

**Moving Beyond Measurement-Based Performance Reviews**

To answer this question, we need to remind ourselves of the true purpose of a performance assessment system: to align the effort spent by employees in a direction that achieves the goals of the organisation, which in turn translates to improving one of the key measures of organisational performance (most of the time, profit).

Full Supervision and Partial Supervision are two control mechanisms an organisation can implement, depending on its ability to observe performance in all or only some of the critical dimensions. Dysfunctions arise when we believe we have achieved Full Supervision but, because some aspects of performance are not easily measured, have only achieved Partial Supervision. If Full Supervision is not achievable, we must ask a simple question: what happens if we remove all measurement? This scenario is called No Supervision.

In a No-Supervision scenario, the employee decides how to spend their effort based on their total effort capacity and the mix that optimises customer value. If we assume that:

1. the employee knows what the customer wants; that is, the value the customer places on each allocation of effort is knowable, and
2. the employee gains utility from satisfying the customer's needs,

then it can be shown that the resulting effort allocation lies on the best-mix path: for a given level of effort, the choice delivers maximum value to the customer.

*[Figure: no-supervision scenario]*

Initially, the employee spends some effort and benefits from satisfying the customer's needs (point P0). If the utility gained exceeds the disutility of spending the effort, the employee accepts the move as good. Shortly after, the employee recognises that, at the same level of effort, they can achieve more customer value by rebalancing their allocation (point P1). The employee can keep increasing their effort, and the utility gained from satisfying the customer, until they reach a point (P5) at which further effort would create more disutility than utility. This is the optimal allocation in the No-Supervision scenario.

Comparing these results with partial supervision, it is easy to see that no supervision achieves the same results as full supervision, and actually produces better results than partial supervision.
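Austin's comparison between the regimes can be sketched numerically. The functional forms below are my own illustrative assumptions, not from Austin's book: the employee chooses both total effort and its split, trading utility from customer value against a quadratic disutility of effort; under partial supervision, the payoff counts only the measured dimension.

```python
import math

def customer_value(measured, unmeasured):
    # Concave value: the customer needs a mix of both effort dimensions.
    return math.sqrt(measured) + math.sqrt(unmeasured)

def effort_cost(total):
    # Quadratic disutility: each extra unit of effort hurts more.
    return 0.05 * total ** 2

def optimise(payoff, max_effort=20.0, steps=200):
    # Jointly choose total effort and its split to maximise payoff - cost.
    candidates = ((max_effort * i / steps, max_effort * j / steps)
                  for i in range(steps + 1)
                  for j in range(steps + 1)
                  if i + j <= steps)
    return max(candidates, key=lambda s: payoff(*s) - effort_cost(s[0] + s[1]))

# No supervision: the employee's utility is customer value itself (point P5).
no_sup = optimise(customer_value)
# Partial supervision: the payoff sees only the measured dimension.
part_sup = optimise(lambda measured, unmeasured: math.sqrt(measured))

# No supervision keeps both dimensions alive and delivers more customer value.
print(no_sup, customer_value(*no_sup))
print(part_sup, customer_value(*part_sup))
```

Under these assumed forms, the partially supervised optimum puts zero effort into the unmeasured dimension, while the unsupervised optimum spreads effort across both and reaches a higher customer value for the effort spent.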
As the employee increases their effort capacity, they reach a higher same-value line under no supervision than under partial supervision.

*[Figure: comparing no supervision and partial supervision]*

It is now clear that we can build an alternative to measurement-driven approaches on the No-Supervision scenario. But before proceeding, we must look again at the model's assumptions.

**The assumptions underlying the no-supervision scenario**

The model is predicated on two assumptions:

1. The employee knows what the customer wants; that is, the value the customer places on each allocation of effort is knowable
2. The employee gains utility from satisfying the customer's needs

In a system based on No Supervision, the employee is motivated by the desire to do a good job, to learn and grow as an individual, and to do something that matters. This is intrinsic motivation: motivation driven by internal rewards. By contrast, the supervised systems discussed above rely on extrinsic motivation, which occurs when external influences drive an individual: employees are rewarded for achieving and exceeding targets of organisational performance.

This means it is possible to build a high-performing organisation with no measurements at all, if we build an environment where employees are motivated by the desire to do a good job and know what a good job looks like from the customer's point of view. How can we build such an environment, and what do performance reviews look like in it?

**Building an environment that fosters intrinsic motivation**

There are many excellent books on how to build such a system. My top three suggestions are:

- "Turn the Ship Around!" by L. David Marquet
- "Drive: The Surprising Truth About What Motivates Us" by Daniel H. Pink
- "Empowered" by Marty Cagan

The key principles of building a system based on intrinsic motivation are:

- Hire for intrinsic motivation
- Focus on excellence
- Develop customer empathy
- Promote trust

**Hire for intrinsic motivation.** Not everybody is motivated by the desire to do a good job, grow in the process, and work well with others, and that is fine. But for this system to succeed, the people you hire must be driven by intrinsic motivation. If your interview process already has a behavioural step, you can add questions that probe for examples from the candidate's experience of:

- achieving results through collaboration with other people on the team
- recognising gaps in their own performance and making improvements
- taking action based on feedback
- going the extra mile to make a customer happy

Based on the answers, you can assess whether the candidate is internally or externally motivated, and only move forward candidates you believe will succeed in this environment.

**Focus on excellence.** The team should be proud of what they deliver and feel encouraged to aim higher. To achieve this, create a culture in which the team continually pursues excellence. A team can manifest excellence in many ways; what matters is that every time you see it, you recognise and celebrate it.

**Develop customer empathy.** To align the individual contributor's intrinsic motivation with what the customer needs, engineers must be connected with customers, understand their needs and wants, and know the impact of their decisions on customer satisfaction. One way to do this is by fostering customer empathy. Many techniques can help; some of my favourites are the Follow Me Home technique developed by Intuit and Everyone on Support by 37signals.

**Promote trust.**
Trust is something of a buzzword, and especially in the current economic climate it is difficult to foster a trusting relationship between companies and employees. From my point of view, trust means respect. You can demonstrate respect and build trust in many ways, and you need to pay attention to all of these key moments to ensure you are increasing trust rather than diminishing it. For example, one way to show trust is to accept that people will make mistakes from time to time, and to convey that this is OK as long as we learn from them.

**The Alternative to Performance Reviews: Learning and Development Reviews**

So far, we have described how to create an environment based on intrinsic motivation, but we haven't addressed the role performance reviews play in it. If you want to build a performance management system based on intrinsic motivation, you can replace traditional performance reviews with a learning and development review.

A learning and development review focuses on helping the individual on their path to mastery. It is not tied to any incentives or bonuses; it provides feedback to help the individual improve and grow. What does it look like?

Let's take a step back and look at product development as a learning problem. A product's job is to satisfy a customer's needs in a way that works for the business. Through the lens of continuous learning, achieving this objective requires answering two fundamental questions:

1. How well are we serving the needs of the customer?
2. Which capabilities do we need to build or improve, at the individual and system levels, to keep serving those needs?

We can redesign performance reviews around these two questions, with the addition of a third:

3. How well are we cooperating among ourselves in serving the needs of the customer?

Following these principles, a learning and development review can be structured around the following sections:

- Summary of impact: the main contributions the person has made in the past period
- Feedback related to core competencies: actionable, specific examples tied to specific competencies or a rubric
- Contributions to the success of others: how well the individual has contributed to the success of others in the company
- Top strengths: the top three strengths the individual has demonstrated in the past period
- Areas of growth: the top three areas where the individual can improve and do better
- Looking forward: what the individual's priorities should be for continuing to grow and develop as an engineer

If you are looking for a template, I have put together one here that you can use as a starting point.

**What should you watch out for when implementing a system based on intrinsic motivation?**

As with traditional measurement-based performance reviews, implementing a system based on intrinsic motivation is not free or without risks. Building an environment where individual contributors feel driven by their skills and experience takes a long time; destroying it can take surprisingly little. This is because such an environment is built on trust. Every interaction between employees and the organisation contributes to that trust, but interactions are not created equal. Initially, it takes a while for a group of people to reach the state of psychological safety that allows trust to be established.
Once that state is reached, the team keeps looking for signals that confirm that they are in an environment where they feel safe to take risks and be vulnerable with each other.As long as the signals keep coming, the team will be in a state of trusting each other. But even a single misstep by a management team member might cause the team to take many steps back. This happens because of the negativity bias, which is the propensity of humans to learn from and use negative information far more than positive information.Building an environment based on intrinsic motivation requires a high bar for anyone occupying a leadership position, so it is particularly susceptible to the effects of ineffective leadership and people management.Common QuestionsQ: How do you do X?One of the common questions that people ask when considering moving beyond performance reviews is how the company would achieve X without them, where X can be pay increases, bonuses, promotions, and performance management. The fact that X can take so many values points to the fact that performance reviews are a tool that is overloaded: many different outcomes are driven through it. The generic answer to this question is that you can still do all these things even without doing performance reviews.Q: How do you manage performance?One of the biggest fear of moving beyond performance reviews is how a company would manage the performance of employees. But in reality, performance reviews are not the best place to address performance issues: addressing performance issues during performance reviews is delegating or postponing a manager’s job.The fact that something can’t be measured does not mean it can’t be managed. Managers are still accountable for managing the performance of the individuals they support, helping them when there are gaps, and taking action when they can’t be addressed. 
In this new model, performance management becomes a continuous alignment process between the manager and the individual.

Q: How do you pay bonuses?

Many companies that do performance reviews tie bonuses to achieving a specific performance rating: for example, someone meeting expectations gets the standard bonus for their level, while someone exceeding them gets a 15% increase on the standard bonus. Since learning and development reviews don’t assign ratings, how can a company keep paying bonuses?

The short answer is that you don’t need to pay bonuses. Bonuses are part of a system that relies on extrinsic motivation; if you build your system on intrinsic motivation, bonuses can actually harm motivation, because you are paying employees a reward for something they could have done out of internal motivation alone. If you are not paying bonuses, there are two things you can do instead:

- Modify your base compensation to match what your peer group is doing in terms of base salary + target bonus. Basically, you consider the bonus already part of the base salary.
- Provide incentives that are not linked to individual performance but are the same for every employee. For example:
  - You can have a profit-sharing mechanism where employees get a share of profits based on tenure
  - You can provide an Employee Stock Purchase Plan where employees can buy shares in the company at a discount

One of the common objections to any incentive system based on tenure is that it encourages people to stick with a company just to reap the benefits of the reward. This behavior is called coasting.

I see two main problems when someone designs a system to discourage coasting:

- You are basically abdicating your responsibility as a manager. Instead of fostering an environment where people can be driven by their internal motivation, and doing performance management when goals are no longer aligned, we create an environment where we make it difficult for people to stay around. Some companies, for example, are known to engineer their compensation package so that there is a total compensation cliff after three or four years, which forces people to leave unless they have earned a promotion.
- You are optimizing your system for the small number of people who might coast instead of for the majority who want to come to work to do a good job.

There is also another aspect to consider: promoting tenure can actually increase the quality of knowledge work. Software engineering takes time: developing domain knowledge, building healthy teams, and understanding the customers’ needs.

Conclusion

Companies that adopt a high-performance culture mindset have traditionally employed a pay-for-performance approach to management, which works under the belief that more performance can be obtained from employees by rewarding them for extra effort.

In software engineering, this approach will backfire and instead deteriorate performance. This happens because many dimensions of a software engineer’s job are not observable, and introducing a performance measurement system will force engineers to optimise for the dimensions which can be easily measured, to the detriment of the overall delivery of customer value.

The alternative is to move away from measurement-based performance approaches and build a workplace where employees’ intrinsic motivation can prevail. In such an environment, performance reviews are replaced by learning and development reviews, which focus on helping the employee build the strengths the company requires to be successful.
2024-11-08T14:23:37
null
train
42,049,002
sangeeth96
2024-11-05T06:14:27
Custom domains with HTTPS for your localhost servers on macOS
null
https://blog.sangeeth.dev/posts/custom-domains-with-https-for-your-localhost-servers-on-macos/
2
0
null
null
null
no_error
Custom domains with HTTPS for your localhost servers on macOS
2024-11-04T23:55:47+05:30
null
If you’ve ever worked on multiple web projects locally, you might be familiar with the pain when it comes to serving them over localhost addresses: assigning and remembering those damn port numbers. If it’s just one project, you can use a port like 3000 and call it a day. But if you’re switching between projects or running them in parallel, you’ve got to start giving them unique ports and remembering them when typing in the address bar. And what about being able to use https:// with those localhost addresses, and getting that to work without scary warning pages? More pain.

What if you could simplify all of this and just make it work? What if you could type https://react.test and have it point to your Vite server? Or https://api.test point to your Node server? Or https://preprod.project.test and have it point to the pre-prod server running locally? Buckle up.

We need two tools for this. First up: Caddy. It’s one of my favorite pieces of software and I use it to host and proxy everything under sangeeth.dev, including this very blog. I was drawn to Caddy many years ago by the simplicity of the configuration file and the promise of an extremely low-config HTTPS setup out of the box. While you can use Caddy to host public sites, you can also use it for your local projects, to serve them over HTTPS as well as to avoid mucking around with port numbers.

For those unfamiliar, Caddy is a web server as well as a reverse proxy written in Go, much like Apache or Nginx, which you might have worked with. But there’s a lot more you can do with it and, if that’s not enough, you can even extend it with community-made modules. Check out the excellent beginner-friendly docs to learn more.

Next, we need to run a local DNS server which tells our programs, including the web browser, to go to 127.0.0.1 whenever they need to talk to https://<something>.test, much like how google.com resolves to a Google IP address when we type it out in the address bar. 
We’ll use dnsmasq for this which, like Caddy, does a lot more than what I described. Let’s see how we can use these tools on macOS to achieve our end goal.

Installing dnsmasq

You can install dnsmasq and Caddy in a number of different ways, but it is convenient to use Homebrew, which I assume most macOS users have installed. If you don’t, go to brew.sh and come back once you’re done; it takes a minute tops.

With brew installed, enter the following command to install dnsmasq:

    brew install dnsmasq

Configuring local DNS

Let’s configure dnsmasq to reroute all requests for *.test to localhost. To do this on macOS, we can add a file inside /etc/resolver to configure a nameserver for the .test domain:

    sudo mkdir -p /etc/resolver
    echo "nameserver 127.0.0.1" | sudo tee /etc/resolver/test

(Note that a plain `sudo echo "..." > /etc/resolver/test` would fail, because the redirection is performed by your non-root shell; piping through `sudo tee` writes the file with the right privileges.)

Next, we’ll configure dnsmasq to answer with 127.0.0.1 for any DNS request ending with .test, which includes subdomains. Open the file $HOMEBREW_PREFIX/etc/dnsmasq.conf in an editor of your choice using sudo. I’m using vim, so I’ll enter the following command in my terminal:

    sudo vim $HOMEBREW_PREFIX/etc/dnsmasq.conf

Note: If you’re seeing an empty file when you open the above path in your editor, check that you have Homebrew installed and configured in your shell correctly by running echo $HOMEBREW_PREFIX, which will output /opt/homebrew or /usr/local/homebrew. If it doesn’t output anything, you probably should check the docs for troubleshooting.

You’ll see a lot of commented-out text in this file. Go to the end of the file and append the following line:

    address=/.test/127.0.0.1

With that, we’ve told both macOS and dnsmasq to point to 127.0.0.1 for all requests to *.test. The only thing left is to configure dnsmasq to run in the background and on boot:

    sudo brew services start dnsmasq

Note: It is necessary to start the dnsmasq service as root, hence the sudo. 
Installing Caddy

Enter the following commands to install Caddy and configure it to start in the background and on boot:

    brew install caddy
    brew services start caddy

Note: You don’t need sudo here.

You can verify that Caddy has been started by running brew services:

    ❯ brew services
    Name    Status  User     File
    bind    none
    caddy   started sangeeth ~/Library/LaunchAgents/homebrew.mxcl.caddy.plist
    unbound none

Trusting CA certificates

To get HTTPS to work without our browsers throwing that error message with scary red icons, we need to trust the certificates that Caddy automatically generates. Caddy generates its own Certificate Authority (CA), which you can think of like the motor vehicles department of a country that issues driver’s licenses. Except in our case, Caddy is that department (or the authority), and it issues TLS certificates, the equivalent of a driver’s license, which allow for communication over https:// much like how one can legally drive on highways with a valid driver’s license.

Except this new authority we set up is alien to macOS. So macOS will throw a tantrum, since we haven’t told it about this new thing. That makes sense, because otherwise anyone could create their own authority and it would be like the wild west. So, we need to tell macOS that we know this authority and that we can trust the licenses, or certificates, that it hands out.

Run the following command to trust the CA root certificate that’s generated by Caddy:

    security add-trusted-cert \
      -r trustRoot \
      -k ~/Library/Keychains/login.keychain-db \
      $HOMEBREW_PREFIX/var/lib/caddy/pki/authorities/local/root.crt

You might be asked for your password or fingerprint. If the command runs successfully, it won’t print anything, which means we can proceed to configuring Caddy.

Note: The sudo version of the above command with the -d flag also works, but it adds the certificate to the System keychain for all users. I like to limit privileges wherever possible. 
Configuring sites

Let’s create a hello world site first to test that everything is working. Run the following commands to create a folder in your home directory and an index.html file inside it that says “Hello world”:

    mkdir ~/hello-caddy
    echo "<h1>Hello world</h1>" > ~/hello-caddy/index.html

Caddy uses a file called Caddyfile for configuration. If you installed Caddy using Homebrew, we can create our Caddyfile at $HOMEBREW_PREFIX/etc, where it will be picked up whenever Caddy boots up:

    touch $HOMEBREW_PREFIX/etc/Caddyfile

Open this file in an editor of your choice, enter the following, and save the file:

    {
        local_certs
    }

    hello.test {
        root {$HOME}/hello-caddy
        file_server browse
    }

We define sites in our Caddyfile like the hello.test line above. You can define multiple sites one below another. Inside the curly braces, you define directives, which tell Caddy what to do. In this case, we’re telling Caddy to use the root directive to set the working directory of this site to ~/hello-caddy, and then to serve the files inside it using the file_server directive. That’s how easy it is to serve static websites with Caddy.

The block without a site at the top is for common Caddy configuration that affects Caddy as a whole. The local_certs line tells Caddy to use only the locally generated CA for issuing TLS certificates for all sites, which is what we need.

To ensure we didn’t make any typos, we can run the following command to validate the Caddyfile:

    caddy validate --config $HOMEBREW_PREFIX/etc/Caddyfile

You can ignore the INFO and WARN messages, but you’ll need to check your Caddyfile if you’re seeing ERRORs. Once validated, run the following command to restart Caddy and load the changes:

    brew services restart caddy

Open your browser and visit https://hello.test. If you didn’t make any typos or mistakes in the preceding steps, you’ll see a web page with “Hello world” displayed without any warnings, errors or scary red icons. 
Most modern browsers consider websites with HTTPS as normal these days and thus don’t go for the green flair or padlock icons. Here’s how mine looks on Safari, Chrome and Firefox.

Redirecting to localhost servers

More often, we’re running some kind of development server and not testing static websites. And that’s fine, since Caddy can also behave like a reverse proxy. To see this in action, we’ll configure Caddy to redirect all requests to a local Vite dev server. You can follow along with any other server though; even a simple python http.server would do, but that’s not as dramatic.

Let’s scaffold a new React project and run it:

    npm create vite@latest caddy-react -- --template react
    cd caddy-react
    npm install
    npm run dev

In my case, this starts a dev server on port 5173. I want to create a new site react.test that will point to localhost:5173. For that, we’ll append the following to our Caddyfile:

    react.test {
        reverse_proxy localhost:5173
    }

Save it and restart Caddy from a separate terminal using brew services restart caddy like we did before. Now, visit https://react.test and you should see the default vite-react web page.

Voila! You can repeat this for any projects you’re working on: add them as new sites to your Caddyfile once, and have them accessible over {name}.test thereafter. My recommendation is to add a few entries for the ports associated with common types of project, like I did for Vite above, and then add new entries as you work on new projects. 
Here’s my local Caddyfile as an example:

    {
        local_certs
    }

    #
    # Vite / React
    #
    vite.test, react.test {
        reverse_proxy localhost:5173
    }

    #
    # Next
    #
    next.test {
        reverse_proxy localhost:3000
    }

    #
    # Hugo site: blog.sangeeth.dev
    #
    dev.blog.test {
        reverse_proxy localhost:1313
    }

    staging.blog.test {
        root {$HOME}/dev/gitea/blog.sangeeth.dev/public
        file_server browse
    }

Some alternatives I explored

When writing this, I was hoping to discuss the mDNS services macOS ships with, which allow for *.local domains that work across devices in the same local network (at least, Apple devices), but it didn’t turn out to be that easy. First, you can’t have subdomains like bar.foo.local, which is a mild inconvenience. Next, you’d need to add each site both to your Caddyfile and to the mDNS service via a command, which is the bigger pain. I wanted to add it in one place and be done with it, which is not something this system affords out of the box at present. There’s a paid software called LocalCan which simplifies and provides this functionality in a neat GUI that you might want to check out. It also has other tricks up its sleeve.

Another option was using the .localhost label, which is something almost globally defined to resolve to 127.0.0.1. This works, and I could have foo.localhost defined in Caddy and it would work without needing any additional macOS configuration like we need for the .local and .test domains above. The problem is that this isn’t guaranteed to work in Safari at the moment and is an open WebKit issue. It certainly didn’t work for me when I tried, which made me look into the dnsmasq approach.

I’m not sure if there are simpler (and free) solutions out there which achieve similar end results, but if you know of any, let me know in the comments.
2024-11-08T09:14:06
en
train
42,049,018
luu
2024-11-05T06:19:10
Statistical challenges and misreadings of literature create unreplicable science [pdf]
null
https://stat.columbia.edu/~gelman/research/unpublished/healing.pdf
3
2
[ 42049185 ]
null
null
null
null
null
null
null
null
null
train
42,049,020
ibobev
2024-11-05T06:19:16
Compilers, Interpreters and Formal Languages
null
https://pikuma.com/courses/create-a-programming-language-compiler
5
1
[ 42049193, 42050680 ]
null
null
null
null
null
null
null
null
null
train
42,049,030
djaygour
2024-11-05T06:20:24
KuwarPay- Buy anything from social media
Hey fellow fintech enthusiasts and entrepreneurs,<p>I created something exciting that will revolutionize the commerce industry. Introducing KuwarPay, our bold attempt to simplify transactions and let you shop directly from your social media posts.<p>About KuwarPay<p>KuwarPay is a user-friendly payment solution designed for individuals and small and medium businesses. I made an MVP; here's how it's going to work:<p>1. A customer finds your post and clicks on the link.<p>2. They're redirected to a checkout page, where they fill in their details and click 'Buy Now'.<p>3. Next, they're redirected to PhonePe to complete the payment using their preferred method.<p>4. After payment, the customer is redirected to a feedback page.<p>5. Once payment is confirmed, you receive an order confirmation email with all order details, enabling you to fulfill the order.<p>This is Version 1 of KuwarPay.<p>Taking it to the Next Level<p>I am exploring innovative features to enhance the customer experience. Our next goal: enabling customers to shop directly from social media posts!<p>Imagine:<p>- Shopping seamlessly from Instagram, Facebook and Twitter posts - Exclusive discounts and offers - Simplified checkout with KuwarPay<p>I want YOUR unfiltered feedback. Tell me:<p>1. What am I doing right? 2. What am I doing wrong? 3. How can I make KuwarPay unstoppable?<p>Share your thoughts!
null
2
0
null
null
null
null
null
null
null
null
null
null
train
42,049,042
ilyasforefront
2024-11-05T06:23:22
Nimbus Suite – All-in-One Tool for Selling and Managing Cybersecurity Services
null
https://nimbussuite.com/
1
1
[ 42049043 ]
null
null
null
null
null
null
null
null
null
train
42,049,046
mikhael
2024-11-05T06:24:54
Japan launches first wooden satellite into space
null
https://timesofindia.indiatimes.com/world/rest-of-world/japan-launches-worlds-first-wooden-satellite-into-space/articleshow/114963352.cms
6
0
null
null
null
null
null
null
null
null
null
null
train
42,049,055
bkyan
2024-11-05T06:26:27
A Friendly Introduction to Container Queries
null
https://www.joshwcomeau.com/css/container-queries-introduction/
2
1
[ 42049159 ]
null
null
null
null
null
null
null
null
null
train
42,049,073
paulpauper
2024-11-05T06:29:58
Speaking Ill of the Dead (2006)
null
https://freakonomics.com/2006/11/speaking-ill-of-the-dead/
1
0
null
null
null
null
null
null
null
null
null
null
train
42,049,093
FinMursk
2024-11-05T06:33:25
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,049,119
r-brown
2024-11-05T06:39:00
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,049,144
walterbell
2024-11-05T06:45:03
U.S. laptop, smartphone, and game console prices could soar after the election
null
https://arstechnica.com/tech-policy/2024/11/laptop-smartphone-and-game-console-prices-could-soar-after-the-election/
1
1
[ 42049598 ]
null
null
no_error
Laptop, smartphone, and game console prices could soar after the election
2024-11-04T11:00:19+00:00
Ashley Belanger
"At that point, it's prohibitive" to do business with China, Brzytwa told Ars, suggesting that Trump's proposed tariffs are about "blocking imports," not safeguarding American tech.

How soon would tech prices increase?

It's unclear how quickly prices would rise if Trump or Harris expanded tariffs. Lovely told Ars that "it's really up to the manufacturers, how fast they pass through the prices." She has spoken to manufacturers using subcontractors in China who "say they're in no position to move their business" "quickly" to "someplace else."

Those manufacturers would have a difficult choice to make. They could "raise prices immediately" and "send a very clear signal to their customers" that "this is because of the tariffs," Lovely said. Or they could keep prices low while scaling back business that could hamper US innovation, as the CTA has repeatedly warned.

"I think I would just say, 'Hey everybody, you elected this guy, here's the tariff,'" Lovely said. "But some might decide that that's not the best thing."

In particular, some companies may be reluctant to raise prices because they can't afford triggering a shift in consumer habits, Lovely suggested. "Demand is not infinitely elastic," Lovely told Ars. "People will say, 'I can use my cell phone a little longer than every three years' or whatever."

Tech industry strategist and founder of Tirias Research, Jim McGregor, told Ars that if Trump is elected and the tariffs are implemented, impacts could be felt within a few months. At a conference this month, Trump's proposed China tariffs were a hot topic, and one tech company CEO told McGregor that it's the global economic X-factor that he's "most worried about," McGregor told Ars.

On top of worrying about what tariffs may come, tech companies are still maneuvering in response to Biden's most recently added tariffs, analysts noted. 
In May, McGregor warned in Forbes that Americans will likely soon be feeling the crunch from those tariffs, estimating that in the "short term," some tariffs "will drive up prices to consumers, especially for consumer electronics, due particularly to the tariffs on chips, batteries, and steel/aluminum."

Staring down November 5, it appears that most tech companies can't avoid confronting the hard truth that US protectionist trade policies increasingly isolating China are already financially burdening American consumers and companies, and that more costs and price hikes are likely coming. "It just doesn't look good," Brzytwa told Ars.
2024-11-08T13:51:23
en
train
42,049,146
leizhan
2024-11-05T06:45:06
null
null
null
1
null
[ 42049147 ]
null
true
null
null
null
null
null
null
null
train
42,049,153
soheilpro
2024-11-05T06:45:29
The Surprising Truth About Pixels and Accessibility
null
https://www.joshwcomeau.com/css/surprising-truth-about-pixels-and-accessibility/
1
0
null
null
null
no_error
The Surprising Truth About Pixels and Accessibility: should I use pixels or rems? • Josh W. Comeau
null
Josh W. Comeau
Introduction

Should I use pixels or ems/rems?! This is a question I hear a lot. Often with a dollop of anxiety or frustration behind the words. 😅

It's an emotionally-charged question because there are a lot of conflicting opinions out there, and it can be overwhelming. Maybe you've heard that rems are better for accessibility. Or maybe you've heard that the problem is fixed and pixels are fine?

The truth is, if you want to build the most-accessible product possible, you need to use both pixels and ems/rems. It's not an either/or situation. There are circumstances where rems are more accessible, and other circumstances where pixels are more accessible.

So, here's what we're going to do in this tutorial:

- We'll briefly cover how each unit works, to make sure we're all building on the same solid foundation.
- We'll look at what the accessibility considerations are, and how each unit can affect these considerations.
- We'll build a mental model we can use to help us decide which unit to use in any scenario.
- I'll share my favourite tips and tricks for converting between units.

By the end, you'll be able to use your intuition to figure out which unit to use in any scenario. 😄

Unit summaries

Pixels

The most popular unit for anything size-related is the px unit, short for “pixel”:

.box {
  width: 1000px;
  margin-top: 32px;
  padding: 8px;
}

In theory, 1px is equal to a single dot in a computer monitor or phone screen. They're the least-abstract unit we have in CSS, the closest "to the metal". As a result, they tend to feel pretty intuitive.

Ems

The em unit is an interesting fellow. It's a relative unit, based on the element's calculated font size. Fiddle with these sliders to see what I mean.

Essentially, em is a ratio. If our paragraph has a bottom margin of 1.5em, we're saying that it should be 1.5x the font size. This allows us to “anchor” one value to another, so that they scale proportionally. 
Here's a silly example. Each word in the following sentence uses a smaller em value, giving the impression of a sentence fading into the distance. Try tweaking the paragraph's font-size, and notice how everything “zooms in”.

Rems

It's old news now, but there was a time when the rem unit was a shiny new addition to the CSS language. It was introduced because there's a common frustrating issue with the em unit: it compounds.

For example, consider the following snippet:

<style>
  main {
    font-size: 1.125em;
  }
  article {
    font-size: 0.9em;
  }
  p.intro {
    font-size: 1.25em;
  }
</style>

<main>
  <article>
    <p class="intro">
      What size is this text?
    </p>
  </article>
</main>

How large, in pixels, is that .intro paragraph font? To figure it out, we have to multiply each ratio. The root font size is 16px by default, and so the equation is 16 × 1.125 × 0.9 × 1.25. The answer is 20.25 pixels. What? Why??

This happens because font size is inheritable. The paragraph has a font size of 1.25em, which means “1.25x the current font size”. But what is the current font size? Well, it gets inherited from the parent: 0.9em. And so it's 1.25x the parent, which is 0.9x its parent, which is 1.125x its parent.

Essentially, we need to multiply every em value in the tree until we either hit a "fixed" value (using pixels), or we make it all the way to the top of the tree. This is exactly as gnarly as it sounds. 😬

To solve this problem, the CSS language designers created the rem unit. It stands for “Root EM”. The rem unit is like the em unit, except it's always a multiple of the font size on the root node, the <html> element. It ignores any inherited font sizes, and always calculates based on the top-level node.

Documents have a default font size of 16px, which means that 1rem has a “native” value of 16px. This value, however, is user-configurable! 
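To make the compounding concrete, here's a short Python sketch of mine (not from the article) that multiplies the ratios down the tree the way the browser does:

```python
# Sketch: resolve a compounded em font size by multiplying the root font
# size (in px) by each inherited em ratio in turn.

def compounded_px(root_px, *em_ratios):
    size = root_px
    for ratio in em_ratios:
        size *= ratio
    return size

# The .intro paragraph from the snippet above:
# 16px root × 1.125 (main) × 0.9 (article) × 1.25 (p.intro) ≈ 20.25px
print(compounded_px(16, 1.125, 0.9, 1.25))
```

The point of the sketch is that every ancestor's em value participates in the product, which is exactly why em sizes are so hard to reason about at a glance.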
We can re-define the value of 1rem by changing the font-size on the root node. We can do this, but we shouldn't. In order to understand why, we need to talk about accessibility.

Accessibility considerations

The main accessibility consideration when it comes to pixel-vs-em/rem is vision. We want people with limited vision to be able to comfortably read the sentences and paragraphs on our websites and web applications.

There are a few ways that folks with limited vision can increase the size of text. One method is to use the browser's zoom functionality. The standard keyboard shortcut for this is ⌘ + on MacOS, ctrl + on Windows/Linux. I'll call this method zooming in this tutorial.

The Web Content Accessibility Guidelines (WCAG) state that in order to be accessible, a site should be usable at 200% zoom. I've heard from accessibility advocates that this number is really a minimum, and that many folks with vision disorders often crank much higher than that.

Finally, there's another method, one that fewer developers know about. We can also increase the default font size in our browser settings. I'll call this method font scaling in this tutorial.

Font scaling works by re-defining the “baseline” font size, the default font size that all relative units will be based on (rem, em, %). Remember earlier, when we said that 1rem was equal to 16px? That's only true if the user hasn't touched their default font size! If they boost their default font size to 32px, each rem will now be 32px instead of 16. Essentially, you can think of font scaling as changing the definition of 1rem.

Here's where we hit our first accessibility snag. When we use a pixel value for a font-size on the page, it will no longer be affected by the user's chosen default font size. 
This is why we should use relative units like rem and em for text size. It gives the user the ability to redefine their value, to suit their needs.

Now, the picture isn't as bleak as it used to be, thanks to browser zooming. When the user zooms in or out, everything gets bigger. It essentially applies a multiple to every unit, including pixels. It affects everything except viewport units (like vw and vh). This has been the case for many years now, across all major browsers.

So, if users can always zoom to increase their font size, do we really need to worry about supporting font scaling as well? Isn't one option good enough?

The problem is that zoom is really intended to be used on a site-by-site basis. Someone might have to manually tinker and fuss with the zoom every time they visit a new site. Wouldn't it be better if they could set a baseline font size, one that is large enough for them to read comfortably, and have that size be universally respected?

(Let's also keep in mind that not everyone can trigger a keyboard shortcut easily. A few years ago, I suffered a nerve injury that left me unable to use a keyboard. I interacted with the computer using dictation and eye-tracking. Suddenly, each “keystroke” became a lot more taxing!)

As a general rule, we should give the user as much control as possible, and we should never disable or block their settings from working. For this reason, it's very important to use a relative unit like rem for typography.

Strategic unit deployment

Alright, so you might be thinking: if the rem unit is respected by both zooming and font-scaling, shouldn't I always use rem values? Why would I ever use pixels?

Well, let's see what happens when we use rem values for padding. Remember that rem values scale with the user's default font size. This is a good thing when it comes to typography. Is it a good thing when it comes to other stuff, though? 
Do I actually want everything to scale with font size?

There's an implicit trade-off when it comes to text size. The larger the text is, the fewer characters can fit on each line. When the user cranks up the text by 250%, we can only fit a few words per line. When we use rem values for horizontal padding, though, we amplify this negative side-effect! We're reducing the amount of usable space, further restricting how many words can fit on each line. This is bad, because paragraphs with only a few words per line are unpleasant to read.

Similarly, how about border widths? It doesn't really make sense for a border to become thicker as the user scales up their preferred text size, does it?

This is why we want to use these units strategically. When picking between pixels and rems, here's the question you should be asking yourself:

Should this value scale up as the user increases their browser's default font size?

This question is the root of the mental model I use. If the value should increase with the default font size, I use rem. Otherwise, I use px.

That said, the answer to this question isn't always obvious. Let's look at some examples.

Media queries

Should we use pixels or rems for our media query values?

/* Should we do this: */
@media (min-width: 800px) {
}

/* …Or this: */
@media (min-width: 50rem) {
}

It's probably not obvious what the distinction is here, so let's break it down. Suppose a user sets their default text size to 32px, double the standard text size. This means that 50rem will now be equal to 1600px instead of 800px. By sliding the breakpoint up like this, the user will see the mobile layout until their window is at least 1600px wide. If they're on a laptop, it's very likely they'll see the mobile layout instead of the desktop layout.

At first, I thought this seemed like a bad thing. They're not actually a mobile user, so why would we show them the mobile layout?? 
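The breakpoint arithmetic above is worth seeing in one place. Here's a small Python sketch of mine (the helper name is hypothetical, not part of CSS or this article) that resolves a rem-based breakpoint against a user-chosen default font size:

```python
# Hypothetical helper: a rem value is just a multiple of the user's
# default font size, so a rem breakpoint shifts when that default changes.

def breakpoint_px(rem_value, default_font_px=16):
    return rem_value * default_font_px

print(breakpoint_px(50))      # the 50rem breakpoint at the standard 16px default: 800
print(breakpoint_px(50, 32))  # the same breakpoint when the default is doubled: 1600
```

A pixel-based breakpoint, by contrast, would stay at 800px no matter what default the user picked.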
I've come to realize, however, that we usually do want to use rems for media queries. Let's look at a real-world example. On my course platform, I have a left-hand navigation list, with the content shown on the right: On smaller screens, I want to maximize the amount of space for the content, and so the navigation becomes toggleable: Let's see what happens when the user visits with a 32px default font size, using both pixels and rem media queries: Pixel Media QueryRem Media Query The left-hand navigation uses a rem-based width, in order to prevent longer lesson names from line-wrapping if the user cranks up their default font size. When we use a pixel-based media query, this means that the sidebar takes up most of the window, on smaller laptop screens! When we use a rem-based media query, however, we drop back down to the “mobile” layout. As a result, the content becomes much more readable, and the experience is much improved. We're so used to thinking of media queries in terms of mobile/tablet/desktop, but I think it's more helpful to think in terms of available space. A mobile user has less available space than a desktop user, and so we design layouts that are optimized for that amount of space. Similarly, when someone cranks up their default font size, they reduce the amount of available space, and so they should probably receive the same optimizations. And so, I recommend using rems for media queries. It means that users who crank up their default font size will see the “mobile” view even on a desktop computer, but this is generally a better user experience than a super-crowded desktop layout with huge text. Link to this headingVertical margins Let's look at another scenario. Vertical margins: Lorem ipsum dolor sit amet, consectetur adipiscing elit.Lorem Ipsum is simply dummy text of the printing and typesetting industry. 
Vertical margins on text (assuming we're working in a horizontally-written language like English) are typically used to improve its readability. We add extra space between paragraphs so that we can quickly tell where one paragraph ends and the next one begins. (Interestingly, the web is somewhat unique in terms of paragraph spacing. Print media, like books, indent the first line of each new paragraph, without adding any additional vertical space between paragraphs.)

This space has a “functional” purpose when it comes to text. We aren't using it aesthetically. For these reasons, I think it does make sense to scale these margins with the user's chosen root font size.

Widths and heights

Alright, let's consider one more scenario: a button with a fixed width. We know that the button's font-size should be set in rems… but what about its width? There's a really interesting trade-off here:

- If we set the width to be 240px, the button won't grow with font size, leading to line-wrapping and a taller button.
- If we set the width to be 15rem, the button will grow wider along with the font size.

Which approach is best? Well, it depends on the circumstances! In most cases, I think it makes more sense to use rems. Multi-line buttons look a bit funny to me, and by using rems, we preserve the button's proportions / aspect ratio. In some cases, though, pixels might be the better option: maybe if you have a very specific layout in mind, and vertical space is more plentiful than horizontal space.
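To make the trade-off concrete, here's a minimal sketch of both options. The class names are illustrative, not from a real codebase:

```css
/* Option 1: fixed pixel width. The button stays 240px wide
   regardless of font size, so larger text wraps onto multiple
   lines and the button grows taller instead. */
.button-px {
  font-size: 1rem;
  width: 240px;
}

/* Option 2: rem-based width. The button grows wider along with
   the user's default font size, preserving its proportions. */
.button-rem {
  font-size: 1rem;
  width: 15rem; /* equals 240px at the default 16px root size */
}
```

Both are valid; the choice depends on whether horizontal space or a stable silhouette matters more in the layout.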
Test your intuition

Alright, so we've learned that rem values should be used when we want to scale a value with the user's default font size. What if it isn't obvious which option is best, though? Like with the button width?

The best thing to do in these cases is to test it. Change your browser's default font size to 32px or 48px, and see how your app feels. Try using pixels, and then try using rems. Which option produces the best user experience, the most readable content? Over time, you'll develop a stronger and stronger intuition, as you see for yourself what to do in specific circumstances.

Not sure how to change your browser's default font size? A quick search for your browser's documentation should turn it up!

Quick tricks vs. mental models

I have a philosophy when it comes to learning: it's better to build an intuition than it is to rely on rote practice and memorization.

This blog post could have been a quick list of rules: “Use pixels for X, use rems for Y”. But how useful would it actually have been? The truth is, the real world is messy and complicated. No set of rules can possibly be comprehensive enough to cover every possible scenario. Even after writing CSS for 15 years, I still find myself facing novel layout challenges all the time!

When we focus on building an intuition, we don't need to memorize rules. We can rely on our mental model to come up with the right answer. It's wayyy more practical.

And yet, most of us learn from “quick tricks”. We pick up an interesting nugget on Twitter. We memorize a lil’ snippet to center a div or apply a flexible grid. And, inevitably, we hit snags where the snippet doesn't work as we expect, and we have no idea why. I think this is why so many developers dislike writing CSS.
We have a patchy mental model, and those holes make the language feel brittle and unpredictable, like a house of cards that is always on the verge of collapse. When we focus on building an intuition, on learning how CSS really works, the language becomes a joy to use.

I used to find CSS frustrating, but now, it's one of my favourite parts of web development. I love writing CSS. I wanted to share this joy, and so I quit my job and spent a year building a comprehensive self-paced online course. It's called CSS for JavaScript Developers.

This course takes the approach we've used in this tutorial and applies it to the entire CSS language. Using interactive demos and live-editable code snippets, we explore how the language works, and how you can build an intuition you can use to implement any layout, not just the ones we cover explicitly.

I built a custom course platform from scratch, using the same technology stack as my blog. But it's so much more: it includes tons of bite-sized videos, exercises, real-world-inspired projects, and even a handful of mini-games. ✨

It's specifically built for JavaScript developers, folks who use a component-based framework like React or Vue. In addition to core language concepts, we also explore things like how to build a component library from scratch. If you're sick of not understanding how CSS works, this course is for you. 💖 Learn more here: https://css-for-js.dev/

Bonus: Rem quality of life

Alright, so as we've seen, there are plenty of cases where we need to use rem values for best results. Unfortunately, this unit can often be pretty frustrating to work with. It's not easy to do the conversion math in our heads.
And we wind up with a lot of decimals:

14px → 0.875rem
15px → 0.9375rem
16px → 1rem
17px → 1.0625rem
18px → 1.125rem
19px → 1.1875rem
20px → 1.25rem
21px → 1.3125rem

Before you go memorize this list, let's look at some of the things we can do to improve the experience of working with rems.

The 62.5% trick

Let's start with one of the most common options I've seen shared online. Here's what it looks like:

html {
  font-size: 62.5%;
}
p {
  /* Equivalent to 18px */
  font-size: 1.8rem;
}
h3 {
  /* Equivalent to 21px */
  font-size: 2.1rem;
}

The idea is that we're scaling down the root font size so that each rem unit is equal to 10px instead of 16px. People like this solution because the math becomes way easier: to get the rem equivalent of 18px, you move the decimal (1.8rem) instead of having to divide 18 by 16 (1.125rem).

But, honestly, I don't recommend this approach, for a couple of reasons. First, it can break compatibility with third-party packages. If you use a tooltip library that uses rem-based font sizes, text in those tooltips is going to be 37.5% smaller than it should be! Similarly, it can break browser extensions the end user has. There's a baseline assumption on the web that 1rem will produce readable text, and I don't wanna mess with that assumption.

Also, there are significant migration challenges to this approach. There's no reasonable way to “incrementally adopt” it. (Well, you could, but then the definition of 1rem wouldn't be consistent across the site/app, which sounds like a nightmare.) You'll need to update every declaration that uses rem units across the app. Plus, you'll need to convince all your teammates that it's worth the trouble. Logistically, I'm not sure how realistic it is for most teams.

Let's look at some alternative options.

Calculated values

The calc CSS function can be used to translate pixel values to rems:

p {
  /* Produces 1.125rem. Equivalent to 18px */
  font-size: calc(18rem / 16);
}
h3 {
  /* Produces 1.3125rem. Equivalent to 21px */
  font-size: calc(21rem / 16);
}
h2 {
  /* Produces 1.5rem. Equivalent to 24px */
  font-size: calc(24rem / 16);
}
h1 {
  /* Produces 2rem. Equivalent to 32px */
  font-size: calc(32rem / 16);
}

(Thanks to Twitter user Cahnory for improving on my original idea!)

Pretty cool, right? We can do the math right there inside the CSS declaration, and calc will spit out the correct answer. This is a viable approach, but it's a bit of a mouthful. It's a lot of typing every time you want to use a rem value. Let's look at one more approach.

Leveraging CSS variables

This is my favourite option. Here's what it looks like:

html {
  --14px: 0.875rem;
  --15px: 0.9375rem;
  --16px: 1rem;
  --17px: 1.0625rem;
  --18px: 1.125rem;
  --19px: 1.1875rem;
  --20px: 1.25rem;
  --21px: 1.3125rem;
}
h1 {
  font-size: var(--21px);
}

We can do all the calculations once, and use CSS variables to store those options. When we need to use them, it's almost as easy as typing pixel values, but fully accessible! ✨

It's a bit unconventional to start CSS variables with a number like this, but it's compliant with the spec, and appears to work across all major browsers. If you use a design system with a spacing scale, we can use this same trick:

html {
  --font-size-xs: 0.75rem;
  --font-size-sm: 0.875rem;
  --font-size-md: 1rem;
  --font-size-lg: 1.125rem;
  --font-size-xl: 1.3125rem;
  --font-size-2xl: 1.5rem;
  --font-size-3xl: 2.625rem;
  --font-size-4xl: 4rem;
}

CSS variables are absolutely delightful. We explore a bunch of cool things we can do with them in CSS for JavaScript Developers!

Ultimately, all of these methods will work. I certainly have my preferences, but the important thing is the end user experience. As long as they can adjust the size of all text on the page, you're doing it right. 💯

Last updated on October 13th, 2024
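One small footnote on those last two techniques: they can also be combined, letting calc do the px-to-rem conversion once inside each variable definition so nobody has to hand-compute the decimals. This is my own synthesis rather than something from the article itself:

```css
html {
  /* calc performs the division; the variable names the result. */
  --14px: calc(14rem / 16);
  --18px: calc(18rem / 16);
  --21px: calc(21rem / 16);
}

h1 {
  font-size: var(--21px); /* resolves to 1.3125rem, ie 21px at the default root size */
}
```

The trade-off is one extra calc() per definition in exchange for never touching a decimal again.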
Boeing's machinists strike is over but the troubled aerospace giant still faces many challenges
November 5, 2024
By David Koenig, Lindsey Wasson, Hannah Schoenbaum and Cathy Bussewitz
SEATTLE (AP) — Factory workers at Boeing have voted to accept a contract offer and end their strike after more than seven weeks, clearing the way for the company to restart idled Pacific Northwest assembly lines. But the strike was just one of many challenges the troubled U.S. aerospace giant faces as it works to return to profitability and regain public confidence.

Boeing's 33,000 striking machinists disbanded their picket lines late Monday after leaders of the International Association of Machinists and Aerospace Workers district in Seattle said 59% of union members who cast ballots agreed to approve the company's fourth formal offer, which included a 38% wage increase over four years.

Union machinists assemble the 737 Max, Boeing's bestselling airliner, along with the 777 or “triple-seven” jet and the 767 cargo plane at factories in Renton and Everett, Washington. Resuming production will allow Boeing to generate much-needed cash, which it has been bleeding. “Even for a company the size of Boeing, it is a life-threatening problem,” said Gautam Mukunda, lecturer at the Yale School of Management.

The union said its workers can return to work as soon as Wednesday or as late as Nov. 12. Boeing CEO Kelly Ortberg has said it might take “a couple of weeks” to resume production in part because some workers might need retraining.

As the machinists get back to work, management will have to address a host of other problems. The company needs to get on better financial footing. But while doing so, it also needs to prioritize the quality of its workmanship and its relationships with employees and suppliers, analysts said. Boeing has been managing itself to meet short-term profit goals and “squeezing every stakeholder, squeezing every employee, every supplier to the point of failure in order to maximize their short-term financial performance,” Mukunda said. “That is bad enough if you run a clothing company. It is unacceptable when you are building the most complex mass-produced machines human beings have ever built.”

Above all, Boeing needs to produce more planes. When workers are back and production resumes, the company will be producing about 30 737s a month, and “they must get that number over 50. They have to do it. And the people who are going to do that are the workers on the factory floor,” Mukunda said.

Another challenge will be getting the company's fragile supply chain running again, said Cai von Rumohr, an aviation analyst at financial services firm TD Cowen. Suppliers that were working ahead of Boeing's schedule when the strike began may have had to lay workers off or finance operations on their own. “There are lots of nasty questions in terms of complexities that go into revamping the supply chain,” he said.

One way Boeing could generate cash would be to sell companies that don't fit directly in the business, such as flight information provider Jeppesen Sanderson, which it bought in 2000 for $1.5 billion, von Rumohr said. “They'd lose some earnings but they'd get a lot of cash to reduce their debt,” he added. “They really need to get to a more stable position where they have a solid credit rating.”

Ortberg acknowledged the challenges ahead in a message to employees after they voted to end the walkout. “There is much work ahead to return to the excellence that made Boeing an iconic company,” he said.

The average annual pay of Boeing machinists is currently $75,608 and eventually will rise to $119,309 under the new contract, according to the company. The union said the compounded value of the promised pay raise would amount to an increase of more than 43% over the life of the agreement.

Reactions were mixed even among union members who voted to accept the contract. Although she voted “yes,” Seattle-based calibration specialist Eep Bolaño said the outcome was “most certainly not a victory.” Bolaño said she and her fellow workers made a wise but infuriating choice to accept the offer. “We were threatened by a company that was crippled, dying, bleeding on the ground, and us as one of the biggest unions in the country couldn't even extract two-thirds of our demands from them. This is humiliating,” she said.

For other workers like William Gardiner, a lab lead in calibration services, the revised offer was a cause for celebration. “I'm extremely pumped over this vote,” said Gardiner, who has worked for Boeing for 13 years. “We didn't fix everything — that's OK. Overall, it's a very positive contract.”

Along with the wage increase, the new contract gives each worker a $12,000 ratification bonus and retains a performance bonus the company wanted to eliminate. President Joe Biden congratulated the machinists and Boeing for coming to an agreement that he said supports fairness in the workplace and improves workers' ability to retire with dignity. The contract, he said, is important for Boeing's future as “a critical part of America's aerospace sector.”

A continuing strike would have plunged Boeing into further financial peril and uncertainty. Last month, Ortberg announced plans to lay off about 17,000 people and a stock sale to prevent the company's credit rating from being cut to junk status. The labor standoff — the first strike by Boeing machinists since an eight-week walkout in 2008 — was the latest setback in a volatile year for the aerospace giant.

Boeing came under several federal investigations this year after a door plug blew off a 737 Max plane during an Alaska Airlines flight in January. Federal regulators put limits on Boeing airplane production that they said would last until they felt confident about manufacturing safety at the company. The door-plug incident renewed concerns about the safety of the 737 Max. Two of the planes had crashed less than five months apart in 2018 and 2019, killing 346 people. The CEO at the time, whose efforts to fix the company failed, announced in March that he would step down. In July, Boeing agreed to plead guilty to conspiracy to commit fraud for deceiving regulators who approved the 737 Max.

___

Koenig reported from Dallas, Schoenbaum from Salt Lake City and Bussewitz from New York.
The USE Method
By Brendan Gregg
[Image: Boeing 707 Emergency Checklist (1969)]

The Utilization Saturation and Errors (USE) Method is a methodology for analyzing the performance of any system. It directs the construction of a checklist, which for server analysis can be used for quickly identifying resource bottlenecks or errors. It begins by posing questions, and then seeks answers, instead of beginning with given metrics (partial answers) and trying to work backwards.

The resulting USE Method-derived checklists for different operating systems are listed on the left navigation panel (Linux, Solaris, etc). You can customize these for your environment, adding additional tools that your site uses. There is also the Rosetta Stone of Performance Checklists, automatically generated from some of these. Performance monitoring products can make the USE method easier to follow by providing its metrics via an easy-to-use interface.

Intro

A serious performance issue arises, and you suspect it's caused by the server. What do you check first? Getting started can be the hardest part. I developed the USE Method to teach others how to solve common performance issues quickly, without overlooking important areas. Like an emergency checklist in a flight manual, it is intended to be simple, straightforward, complete, and fast. The USE Method has been used successfully countless times in different enterprise environments, classroom environments (as a learning tool), and more recently in cloud computing environments.

The USE Method is based on three metric types and a strategy for approaching a complex system. I find it solves about 80% of server issues with 5% of the effort, and, as I will demonstrate, it can be applied to systems other than servers. It should be thought of as a tool, one that is part of a larger toolbox of methodologies. There are many problem types it doesn't solve, which will require other methods and longer time spans.
Summary

The USE Method can be summarized as: For every resource, check utilization, saturation, and errors.

It's intended to be used early in a performance investigation, to identify systemic bottlenecks.

Terminology definitions:

- resource: all physical server functional components (CPUs, disks, busses, ...) [1]
- utilization: the average time that the resource was busy servicing work [2]
- saturation: the degree to which the resource has extra work which it can't service, often queued
- errors: the count of error events

[1] It can be useful to consider some software resources as well, or software imposed limits (resource controls), and see which metrics are possible.
[2] There is another definition where utilization describes the proportion of a resource that is used, and so 100% utilization means no more work can be accepted, unlike with the "busy" definition above.

The metrics are usually expressed in the following terms:

- utilization: as a percent over a time interval. eg, "one disk is running at 90% utilization".
- saturation: as a queue length. eg, "the CPUs have an average run queue length of four".
- errors: scalar counts. eg, "this network interface has had fifty late collisions".

Errors should be investigated because they can degrade performance, and may not be immediately noticed when the failure mode is recoverable. This includes operations that fail and are retried, and devices from a pool of redundant devices that fail.

Does Low Utilization Mean No Saturation?

A burst of high utilization can cause saturation and performance issues, even though utilization is low when averaged over a long interval. This may be counter-intuitive! I had an example where a customer had problems with CPU saturation (latency) even though their monitoring tools showed CPU utilization was never higher than 80%. The monitoring tool was reporting five minute averages, during which CPU utilization hit 100% for seconds at a time.
Resource List

To begin with, you need a list of resources to iterate through. Here is a generic list for servers:

- CPUs: sockets, cores, hardware threads (virtual CPUs)
- Memory: capacity
- Network interfaces
- Storage devices: I/O, capacity
- Controllers: storage, network cards
- Interconnects: CPUs, memory, I/O

Some components are two types of resources: storage devices are a service request resource (I/O) and also a capacity resource (population). Both types can become a system bottleneck. Request resources can be defined as queueing systems, which can queue and then service requests.

Some physical components have been left out, such as hardware caches (eg, MMU TLB/TSB, CPU). The USE Method is most effective for resources that suffer performance degradation under high utilization or saturation, leading to a bottleneck. Caches improve performance under high utilization. Cache hit rates and other performance attributes can be checked after the USE Method, once systemic bottlenecks have been ruled out. If you are unsure whether to include a resource, include it, then see how well the metrics work.

Functional Block Diagram

Another way to iterate over resources is to find or draw a Functional Block Diagram for the system. These also show relationships, which can be very useful when looking for bottlenecks in the flow of data. Here is an example from the Sun Fire V480 Guide (page 82).

I love these diagrams, although they can be hard to come by. Hardware engineers can be the best resource – the people who actually build the things. Or you can try drawing your own.

While determining utilization for the various busses, annotate each bus on the functional diagram with its maximum bandwidth. This results in a diagram where systemic bottlenecks may be identified before a single measurement has been taken. (This is a useful exercise during hardware product design, when physical components can be changed.)

Interconnects

CPU, memory and I/O interconnects are often overlooked.
Fortunately, they aren't commonly the system bottleneck. Unfortunately, if they are, it can be difficult to do much about (maybe you can upgrade the main board, or reduce load: eg, "zero copy" projects lighten memory bus load). With the USE Method, at least you become aware of what you weren't considering: interconnect performance. See Analyzing the HyperTransport for an example of an interconnect issue which I identified with the USE Method.

Metrics

Given the list of resources, consider the metric types: utilization, saturation and errors. Here are some examples. In the table below, think about each resource and metric type, and see if you can fill in the blanks. The answers are described in generic Unix/Linux terms (you can be more specific):

resource           | type        | metric
CPU                | utilization | CPU utilization (either per-CPU or a system-wide average)
CPU                | saturation  | run-queue length or scheduler latency
Memory capacity    | utilization | available free memory (system-wide)
Memory capacity    | saturation  | anonymous paging or thread swapping (maybe "page scanning" too)
Network interface  | utilization | RX/TX throughput / max bandwidth
Storage device I/O | utilization | device busy percent
Storage device I/O | saturation  | wait queue length
Storage device I/O | errors      | device errors ("soft", "hard", ...)

I've left off timing: these metrics are either averages per interval or counts. I've also left off how to fetch them: for your custom checklist, include which OS tool or monitoring software to use, and which statistic to read. For those metrics that aren't available, write "?". You will end up with a checklist that is easy and quick to follow, and is as complete as possible for your system.
Harder Metrics

Now for some harder combinations (again, try to think about these first!):

resource            | type        | metric
CPU                 | errors      | eg, correctable CPU cache ECC events or faulted CPUs (if the OS+HW supports that)
Memory capacity     | errors      | eg, failed malloc()s (although this is usually due to virtual memory exhaustion, not physical)
Network             | saturation  | saturation related NIC or OS events; eg "dropped", "overruns"
Storage controller  | utilization | depends on the controller; it may have a max IOPS or throughput that can be checked vs current activity
CPU interconnect    | utilization | per port throughput / max bandwidth (CPU performance counters)
Memory interconnect | saturation  | memory stall cycles, high CPI (CPU performance counters)
I/O interconnect    | utilization | bus throughput / max bandwidth (performance counters may exist on your HW; eg, Intel "uncore" events)

These typically get harder to measure, depending on the OS, and I often have to write my own software to do them (eg, the "amd64htcpu" script from Analyzing the HyperTransport). Repeat for all combinations, and include instructions for fetching each metric. You'll end up with a list of about thirty metrics, some of which can't be measured, and some of which are tricky to measure. Fortunately, the most common issues are usually found with the easy ones (eg, CPU saturation, memory capacity saturation, network interface utilization, disk utilization), which can be checked first. See the top of this page for the example checklists for Linux, Solaris, Mac OS X, FreeBSD, etc.

In Practice

Reading metrics for every combination on your OS can be very time consuming, especially once you start working through bus and interconnect metrics. You may only have time to check a subset: CPUs, memory capacity, storage capacity, storage device I/O, network interfaces. This is better than it sounds! The USE Method has made you aware of what you didn't check: what were once unknown-unknowns are now known-unknowns.
And for that time when it's vital for your company to root cause a performance issue, you already have a to-do list of known extra work that can be performed for more thorough analysis, completing the USE Method for when it's really needed. It's hoped that the subset of metrics that are easy to check grows over time, as more metrics are added to OSes to make the USE Method easier. Performance monitoring software can also help, adding USE method wizards to do the work for you.

Software Resources

Some software resources can be considered in a similar way. This usually applies to smaller components of software, not entire applications. For example:

- mutex locks: utilization may be defined as the time the lock was held; saturation by those threads queued waiting on the lock.
- thread pools: utilization may be defined as the time threads were busy processing work; saturation by the number of requests waiting to be serviced by the thread pool.
- process/thread capacity: the system may have a limited number of processes or threads, the current usage of which may be defined as utilization; waiting on allocation may be saturation; and errors are when the allocation failed (eg, "cannot fork").
- file descriptor capacity: similar to the above, but for file descriptors.

Don't sweat this type. If the metrics work well, use them; otherwise software can be left to other methodologies (eg, latency).

Suggested Interpretations

The USE Method helps you identify which metrics to use. After learning how to read them from the operating system, your next task is to interpret their current values. For some, interpretation may be obvious (and well documented). Others, not so obvious, and may depend on workload requirements or expectations. The following are some general suggestions for interpreting metric types:

- Utilization: 100% utilization is usually a sign of a bottleneck (check saturation and its effect to confirm). High utilization (eg, beyond 70%) can begin to be a problem for a couple of reasons:
  - When utilization is measured over a relatively long time period (multiple seconds or minutes), a total utilization of, say, 70% can hide short bursts of 100% utilization.
  - Some system resources, such as hard disks, cannot be interrupted during an operation, even for higher-priority work. Once their utilization is over 70%, queueing delays can become more frequent and noticeable. Compare this to CPUs, which can be interrupted ("preempted") at almost any moment.
- Saturation: any degree of saturation can be a problem (non-zero). This may be measured as the length of a wait queue, or time spent waiting on the queue.
- Errors: non-zero error counters are worth investigating, especially if they are still increasing while performance is poor.

It's easy to interpret the negative case: low utilization, no saturation, no errors. This is more useful than it sounds, since narrowing down the scope of an investigation can quickly bring focus to the problem area.

Cloud Computing

In a cloud computing environment, software resource controls may be in place to limit or throttle tenants who are sharing one system. This can include hypervisor or container (cgroup) limits for memory, CPU, network, and storage I/O. External hardware may also impose limits, such as for network throughput. Each of these resource limits can be examined with the USE Method, similar to examining the physical resources. For example, in our environment "memory capacity utilization" can be the tenant's memory usage vs its memory cap, and "memory capacity saturation" can be seen by anonymous paging activity, even though the traditional Unix page scanner may be idle.

Strategy

The USE Method is pictured as a flowchart below. Note that errors can be checked before utilization and saturation, as a minor optimization (they are usually quicker and easier to interpret).

The USE Method identifies problems which are likely to be system bottlenecks.
Unfortunately, systems can be suffering more than one performance problem, and so the first one you find may be a problem but not the problem. Each discovery can be investigated using further methodologies, before continuing the USE Method as needed to iterate over more resources. Strategies for further analysis include workload characterization and drill-down analysis. After completing these (if needed), you should have evidence for whether the corrective action is to adjust the load applied or to tune the resource itself.

Apollo

I said earlier that the USE Method could be applied beyond servers. Looking for a fun example, I thought of a system in which I have no expertise at all, and no idea where to start: the Apollo Lunar Module guidance system. The USE Method provides a simple procedure to try.

The first step is to find a list of resources, or better still, a functional block diagram. I found the following in the "Lunar Module - LM10 Through LM14 Familiarization Manual" (1969):

Some of these components may not exhibit utilization or saturation characteristics. After iterating through them, this can be redrawn to only include relevant components. (I'd also include more: the "erasable storage" section of memory, the "core set area" and "vac area" registers.)

I'll start with the Apollo guidance computer (AGC) itself. For each metric, I browsed various LM docs to see what might make sense:

- AGC utilization: This could be defined as the number of CPU cycles doing jobs (not the "DUMMY JOB") divided by the clock rate (2.048 MHz). This metric appears to have been well understood at the time.
- AGC saturation: This could be defined as the number of jobs in the "core set area", which are seven sets of registers to store program state. These allow a job to be suspended (by the "EXECUTIVE" program - what we'd call a "kernel" these days) if an interrupt for a higher priority job arrives.
Once exhausted, this moves from a saturation state to an error state, and the AGC reports a 1202 "EXECUTIVE OVERFLOW-NO CORE SETS" alarm.

- AGC errors: Many alarms are defined. Apart from 1202, there is also a 1203 alarm "WAITLIST OVERFLOW-TOO MANY TASKS", which is a performance issue of a different type: too many timed tasks are being processed before returning to normal job scheduling. As with 1202, it could be useful to define a saturation metric that was the length of the WAITLIST, so that saturation can be measured before the overflow and error occurs.

Some of these details may be familiar to space enthusiasts: 1201 ("NO VAC AREAS") and 1202 alarms famously occurred during the Apollo 11 descent. ("VAC" is short for "vector accumulator", extra storage for jobs that process vector quantities; I think Wikipedia's description as "vacant" may be incorrect.)

Given Apollo 11's 1201 alarm, analysis can continue using other methodologies, such as workload characterization. The workload is mostly applied via interrupts, many of which can be seen in the functional diagram. This includes the rendezvous radar, used to track the Command Module, which was interrupting the AGC with work even though the LM was performing descent. This is an example of finding unnecessary work (or low priority work; some updates from the radar may have been desirable so that the LM AGC could immediately calculate an abort trajectory and CM rendezvous if needed).

As a harder example, I'll examine the rendezvous radar as a resource. Errors are the easiest to identify. There are three types: "DATA NO GOOD", "NO TRACK", and "SHAFT- AND TRUNNION-AXIS ERROR" signals. Utilization is harder: one type may be utilization of the drive motors - defined as the time they were busy responding to angle commands (seen in the functional diagram via the "COUPLING DATA UNIT"). I'll need to read the LM docs more to see if there are saturation characteristics either with the drive motors or with the returned radar data.
In a short amount of time, using this methodology, I've gone from having no idea where to start, to having specific metrics to look for and research.

Other Methodologies

While the USE Method may find 80% of server issues, latency-based methodologies (eg, Method R) can approach finding 100% of all issues. However, these can take much more time if you are unfamiliar with software internals. They may be more suited for database administrators or application developers, who already have this familiarity. The USE Method is more suited for junior or senior system administrators, whose responsibility and expertise includes the operating system (OS) and hardware. It can also be employed by these other staff when a quick check of system health is desired.

Tools Method

For comparison with the USE Method, I'll describe a tools-based approach (I'll call this "Tools Method"):

1. List available performance tools (optionally install or purchase more).
2. For each tool, list useful metrics it provides.
3. For each metric, list possible interpretation rules.

The result of this is a prescriptive checklist showing which tool to run, which metrics to read, and how to interpret them. While this can be fairly effective, one problem is that it relies exclusively on available (or known) tools, which can provide an incomplete view of the system. The user is also unaware that they have an incomplete view - and so the problem will remain.

The USE Method, instead, iterates over the system resources to create a complete list of questions to ask, then searches for tools to answer them. A more complete view is constructed, and unknown areas are documented and their existence known ("known unknowns"). Based on USE, a similar checklist can be developed showing which tool to run (where available), which metric to read, and how to interpret it.

Another problem can be when iterating through a large number of tools distracts from the goal - to find bottlenecks.
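Such a USE-based checklist can be kept as plain data and expanded into questions mechanically. A minimal sketch follows; the tool names and interpretation hints are common Linux examples chosen for illustration, not part of the original method.

```python
# resource -> metric type -> (example tool, interpretation hint)
USE_CHECKLIST = {
    "CPU": {
        "utilization": ("vmstat 1", "us + sy; sustained 100% suggests a bottleneck"),
        "saturation":  ("vmstat 1", "run-queue length beyond CPU count"),
        "errors":      ("dmesg",    "eg, machine check exceptions"),
    },
    "Disk": {
        "utilization": ("iostat -x 1", "%util; beyond 70% queueing may grow"),
        "saturation":  ("iostat -x 1", "average queue size above 1"),
        "errors":      ("smartctl",    "non-zero error counters"),
    },
}

def questions(checklist):
    """Expand the checklist into the complete list of questions to ask."""
    return [f"{resource}: check {metric} with {tool}"
            for resource, metrics in checklist.items()
            for metric, (tool, _hint) in metrics.items()]
```

Iterating resources first (rows) and only then looking for tools is what distinguishes this from the Tools Method: empty cells become documented "known unknowns" instead of silent gaps.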
The USE Method provides a strategy to find bottlenecks and errors efficiently, even with an unwieldy number of available tools and metrics.

Conclusion

The USE Method is a simple strategy you can use to perform a complete check of system health, identifying common bottlenecks and errors. It can be deployed early in the investigation and quickly identify problem areas, which can then be studied in more detail with other methodologies, if need be. The strength of USE is its speed and visibility: by considering all resources, you are unlikely to overlook any issues. It will, however, only find certain types of issues – bottlenecks and errors – and should be considered as one tool in a larger toolbox.

I explained the USE Method on this page and provided generic examples of metrics. See the example checklists in the left navigation pane for specific operating systems, where tools and metrics to apply the USE Method are suggested. Also see the complementary thread-based methodology, the TSA Method.

Acknowledgments

- "Optimizing Oracle Performance" by Cary Millsap and Jeff Holt (2003) describes Method R (and other methodologies), which reminded me recently that I should write this methodology down.
- The groups at Sun Microsystems, including PAE and ISV, who helped apply the USE Method (before it was named) to the storage appliance series. We drew ASCII functional block diagrams annotated with metric names and bus speeds – these were harder to construct than you would think (we should have asked the hardware teams for help sooner).
- My students from performance classes several years ago, to whom I taught this methodology and who provided feedback.
- The Virtual AGC project, which became a fun distraction as I read through their document library, hosted by ibiblio.org.
In particular were the LMA790-2 "Lunar Module LM-10 Through LM-14 Vehicle Familiarization Manual" (page 48 has the functional block diagram), and the "Apollo Guidance and Navigation Lunar Module Student Study Guide", which has a good explanation of the EXECUTIVE program including flow charts. (These docs are 109 and 9 Mbytes in size.)

- Deirdré Straughan for edits and feedback, improving my explanations.

The image at the top of this post is from a Boeing 707 flight manual, printed in 1969. This is (of course) obsolete and shouldn't be used for reference. If you click it you get the full page.

Updates

USE Method updates:

- It was published in ACMQ as Thinking Methodically about Performance (2012).
- It was also published in Communications of the ACM as Thinking Methodically about Performance (2013).
- I presented it in the FISL13 talk The USE Method (2012).
- I spoke about it at Oaktable World 2012: video, PDF.
- I included it in the USENIX LISA `12 talk Performance Analysis Methodology.
- It is covered in my book on Systems Performance, published by Prentice Hall (2013).

More updates (Apr 2014):

- LuceraHQ are implementing USE Method metrics on SmartOS for performance monitoring of their high performance financial cloud.
- I spoke about the USE Method for OS X at MacIT 2014 (slides).

More updates (Aug 2017):

- Heinrich Hartmann posted System Monitoring with the USE Dashboard, showing how a USE method dashboard has been implemented by the Circonus monitoring product.
2024-11-08T21:04:13
null
train
42,049,221
rcarmo
2024-11-05T06:59:52
Raspberry Pi gets 240p Composite Video
null
https://lakka.tv/articles/2024/05/02/rpi-composite/
1
0
[ 42050673 ]
null
null
no_error
Raspberry Pi gets 240p Composite Video
null
null
Posted on 2024-05-02 by vudiq. Thumbnail by MegaManGB.

A special community version of Lakka has been released! This version is aimed at Raspberry Pi SBCs, namely the Pi 3, Pi 4 and Pi 5. Out of the box these builds work with the composite video and analog audio outputs of these SBCs. In the case of the Pi 5, some soldering and a USB to 3.5 mm jack soundcard/dongle are required.

Release summary

This version of Lakka is set up to use analog video and audio outputs, so no changes after flashing the image are required. Two different versions of Raspberry Pi kernels have been used: the Raspberry Pi 5 uses kernel 6.6.y, but the Pi 3 and Pi 4 use kernel 5.10.y due to broken analog video output in kernel 6.6.y. RetroArch is updated to the commit tagged 1.18.0 and all libretro cores are the same version as the latest v5.0 release. Mesa is at version 24.0.6.

Due to issues (hiccups and stutters) on the current release (Lakka-v5.x) and development (devel) branches, this community version is based on the Lakka-v4.x branch, where these issues are not present.

The community is grateful to all Raspberry Pi engineers, and especially to njhollinghurst, who contributed the latest updates to the driver for the RP1 VEC of the Pi 5 and provided detailed explanations and help in the Raspberry Pi forums. Many thanks also to MegaManGB for hours of testing.

Currently only images with an NTSC preset are available, but we will update this article with instructions for PAL users. You can download the images from the links below:

- Raspberry Pi 3
- Raspberry Pi 4
- Raspberry Pi 5

Once updated images are released with community fixes and updates, use the built-in online updater to download new updates.

Remember to configure each core individually to have an integer vertical height, which varies per core (like 224 or 240 for NES). And mainly use a tvout shader to horizontally even out / blur pixels and alleviate composite artifacting.
Update May 10th, 2024 / May 15th, 2024

We now include a shader specially crafted for CRT televisions using composite input. Also, most of the libretro cores are now preconfigured to run at the correct aspect ratio. Here is a small demonstration from MegaManGB:

Wi-Fi configuration

You can edit wifi-config.txt to set up Wi-Fi network access after flashing the image and before the first boot. Just uncomment the lines SSID= and PSK= and add your network name (SSID) and password (PSK). It is recommended to use standard characters for both; try to avoid asterisks, semicolons, backslashes, spaces and quotes - single and double. You can still use RetroArch to connect to a Wi-Fi network as usual; this only helps to set up the Wi-Fi network in advance.

PAL users

If you are using a PAL TV, you will need to edit cmdline.txt and retroarch-overrides.txt in the FAT32 partition. Below are instructions for individual devices. These instructions are also included in the files as comments. In the case of retroarch-overrides.txt you just need to uncomment those lines. In cmdline.txt the very first line has to be modified according to the instructions. Please keep in mind that no line breaks are allowed between the arguments - all arguments must be on a single line.

RPi3 and RPi4:

In cmdline.txt change

video=Composite-1:720x480@60e vc4.tv_norm=NTSC-J

to

video=Composite-1:720x576@50e

In retroarch-overrides.txt add / uncomment the following lines:

video_autoswitch_refresh_rate = "3"
video_refresh_rate = "50.08"

If you already installed Lakka (i.e. the first boot with partition resizing is already done), modify these options:

- Settings → Video → Output → change Vertical Refresh rate to 50.080 using left/right D-PAD on your gamepad or use the on-screen keyboard
- change Automatic Refresh Rate Switch to OFF

Then save the settings (not needed if saving of settings on exit is enabled - default behavior) and restart RetroArch.
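For the Pi 3 and Pi 4, the cmdline.txt edit described above can be done with sed. The sketch below works on a sample file created in the current directory so it can run anywhere; on a real card you would edit cmdline.txt on the FAT32 partition instead, keeping all arguments on one line.

```shell
# Create a sample cmdline.txt like the NTSC preset (contents are illustrative).
printf 'console=tty0 video=Composite-1:720x480@60e vc4.tv_norm=NTSC-J quiet\n' > cmdline.txt
cp cmdline.txt cmdline.txt.bak   # keep a backup before editing
# Replace the NTSC video arguments with the PAL mode, leaving the rest of the line intact.
sed -i 's|video=Composite-1:720x480@60e vc4.tv_norm=NTSC-J|video=Composite-1:720x576@50e|' cmdline.txt
cat cmdline.txt
```

Using `|` as the sed delimiter avoids having to escape the `/` characters that appear in some of the mode arguments.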
RPi5:

In cmdline.txt change

video=Composite-1:720x480,tv_mode=NTSC-J drm_rp1_vec.cmode=27000:721/16/64/58,480/6/6/34

to

video=Composite-1:720x576,tv_mode=PAL drm_rp1_vec.cmode=27000,721/12/64/68,576/4/6/38

In retroarch-overrides.txt add / uncomment the following line:

video_fullscreen_y = "576"

It is very important to change the Y resolution before the first boot, otherwise RetroArch will not be able to start normally. If you happen to be in this situation, you have to edit the configuration file in /storage/.config/retroarch/retroarch.cfg (via ssh) or \\lakka-ip\Configfiles\retroarch\retroarch.cfg (via network share) and change the value of the key video_fullscreen_y to 576. After saving the file, RetroArch should pick up the updated file and start normally.

Known issues

As this is a community release, there is no official support, but feel free to join the community on Discord (channel #lakkatv) using the invite link in the footer!

Final notes

If you want to show your support for further development of the Libretro projects and ecosystem, you can learn more here. Happy retro-gaming!
2024-11-07T13:23:33
en
train
42,049,227
paulcarroty
2024-11-05T07:00:50
Russia Suspected of Plotting to Send Incendiary Devices on U.S.-Bound Planes
null
https://www.wsj.com/world/russia-plot-us-planes-incendiary-devices-de3b8c0a
16
1
[ 42049254, 42050671 ]
null
null
null
null
null
null
null
null
null
train
42,049,257
Rezentic
2024-11-05T07:05:03
Show HN: Solve Complex Math Problems with OCR Scan and Snip
null
https://tutorupai.com/
3
3
[ 42049258, 42049981 ]
null
null
null
null
null
null
null
null
null
train
42,049,263
wmstack
2024-11-05T07:05:49
Google Claims World First as AI Finds 0-Day Security Vulnerability
null
https://www.forbes.com/sites/daveywinder/2024/11/04/google-claims-world-first-as-ai-finds-0-day-security-vulnerability/
3
2
[ 42050588, 42049389, 42050952 ]
null
null
null
null
null
null
null
null
null
train
42,049,275
2pk03
2024-11-05T07:08:31
Show HN: Apache Wayang supports now Kafka
null
https://github.com/apache/incubator-wayang
2
0
null
null
null
null
null
null
null
null
null
null
train
42,049,276
rcarmo
2024-11-05T07:08:37
Fedora 41 Releases Today with Many Shiny New Features
null
https://www.phoronix.com/news/Fedora-41-Download
1
0
[ 42050670 ]
null
null
null
null
null
null
null
null
null
train
42,049,285
precommunicator
2024-11-05T07:10:32
MetroDreamin' – Build the Public Transit System of Your Dreams
null
https://metrodreamin.com/explore
3
0
null
null
null
null
null
null
null
null
null
null
train
42,049,292
tosh
2024-11-05T07:11:26
null
null
null
1
null
[ 42050668 ]
null
true
null
null
null
null
null
null
null
train
42,049,294
marban
2024-11-05T07:11:56
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,049,296
rcarmo
2024-11-05T07:12:27
M5Stack 3.2 Tops LLM Module
null
https://www.hackster.io/news/m5stack-adds-large-language-model-support-to-its-offerings-with-the-3-2-tops-llm-module-f0a4e061f0de
1
0
null
null
null
null
null
null
null
null
null
null
train
42,049,302
mfiguiere
2024-11-05T07:13:47
Hetzner Server Comparison
null
https://achromatic.dev/blog/hetzner-server-comparison
3
0
[ 42050666 ]
null
null
no_error
Hetzner Server Comparison
null
Achromatic
Cloud

Virtual servers that provide CPU resources within a shared physical host server. Prices do not include VAT.

| Category | Server | CPU model | SCore Multithread | RAM | Storage | Monthly price | SCore / M. price | RAM / M. price | Storage / M. price |
|---|---|---|---|---|---|---|---|---|---|
| Dedicated vCPU | CCX13 | AMD Milan EPYC™ 7003 or AMD Genoa EPYC™ 9654 | 3075 | 8 GB | SSD 80 GB | € 12.49 | 246 | 0.64 | 6.41 |
| | CCX23 | | 5386 | 16 GB | SSD 160 GB | € 24.49 | 220 | 0.65 | 6.53 |
| | CCX33 | | 12274 | 32 GB | SSD 240 GB | € 48.49 | 253 | 0.66 | 4.95 |
| | CCX43 | | 24552 | 64 GB | SSD 360 GB | € 96.49 | 255 | 0.66 | 3.73 |
| | CCX53 | | 49227 | 128 GB | SSD 600 GB | € 192.49 | 256 | 0.66 | 3.12 |
| | CCX63 | | 73650 | 192 GB | SSD 960 GB | € 288.49 | 255 | 0.67 | 3.33 |
| Shared vCPU Ampere | CAX11 | Ampere® Altra® | 3608 | 4 GB | SSD 40 GB | € 3.79 | 950 | 1.05 | 10.55 |
| | CAX21 | | 7236 | 8 GB | SSD 80 GB | € 6.49 | 1115 | 1.23 | 12.33 |
| | CAX31 | | 14456 | 16 GB | SSD 160 GB | € 12.49 | 1156 | 1.28 | 12.81 |
| | CAX41 | | 29118 | 32 GB | SSD 320 GB | € 24.49 | 1189 | 1.31 | 13.07 |
| Shared vCPU AMD | CPX11 | AMD EPYC™ 7002 | 1525 | 2 GB | SSD 40 GB | € 4.35 | 351 | 0.46 | 9.20 |
| | CPX21 | | 7981 | 4 GB | SSD 80 GB | € 7.55 | 1057 | 0.53 | 10.60 |
| | CPX31 | | 10668 | 8 GB | SSD 160 GB | € 13.60 | 785 | 0.59 | 11.76 |
| | CPX41 | | 6192 | 16 GB | SSD 240 GB | € 25.20 | 246 | 0.63 | 9.52 |
| | CPX51 | | 12463 | 32 GB | SSD 360 GB | € 54.90 | 227 | 0.58 | 6.56 |
| Shared vCPU Intel | CX22 | Intel® Xeon® Gold | 1995 | 4 GB | SSD 40 GB | € 3.79 | 526 | 1.05 | 10.55 |
| | CX32 | | 3462 | 8 GB | SSD 80 GB | € 6.80 | 509 | 1.18 | 11.76 |
| | CX42 | | 7506 | 16 GB | SSD 160 GB | € 16.40 | 458 | 0.98 | 9.76 |
| | CX52 | | 12200 | 32 GB | SSD 320 GB | € 32.40 | 377 | 0.99 | 9.88 |

Dedicated vCPU

Hetzner's Dedicated vCPU options offer good value for projects needing consistent, high-performance CPU access. Dedicated vCPUs allocate exclusive CPU resources to each instance, while Shared vCPU means performance can vary based on the demand from other users on the same server. Steady and reliable performance is crucial for workloads like production applications, large databases and performance-sensitive tasks.

Shared vCPU

The Ampere-based CAX series emerges as the undisputed value champion. The value scores consistently outperform both AMD EPYC and Intel Xeon alternatives. What's particularly remarkable is how the CAX line maintains its cost-efficiency as you scale up, which is rare in cloud computing where higher tiers typically come with diminishing returns.
When comparing with other CPU architectures, the contrast becomes stark. The closest competitor, the AMD EPYC-powered CPX21, reaches slightly lower value scores, while Intel Xeon options trail even further behind. This data clearly demonstrates why the Ampere-based CAX series has become the go-to choice for cost-conscious businesses seeking reliable cloud performance.

Dedicated

Physical servers exclusively reserved for one user. Prices do not include VAT. Lowest options have been chosen for RAM and Storage.

| Model | CPU | PassMark Multi-thread | PassMark Single-thread | RAM | Storage | Price / month | PassMark Multi-thread / M. price | PassMark Single-thread / M. price | RAM / M. price | Storage / M. price |
|---|---|---|---|---|---|---|---|---|---|---|
| AX41-NVMe | AMD Ryzen™ 5 3600 | 17748 | 2567 | 64 GB | 2 x 512 GB NVMe SSD | €42.80 | 415 | 60 | 1.50 | 23.92 |
| EX44 | Intel® Core™ i5-13500 | 31857 | 3884 | 64 GB | 2 x 512 GB NVMe SSD | €44.00 | 724 | 88 | 1.45 | 23.27 |
| AX42 | AMD Ryzen™ 7 PRO 8700GE | 27882 | 3632 | 64 GB | 2 x 512 GB NVMe SSD | €49.00 | 569 | 74 | 1.31 | 20.90 |
| AX52 | AMD Ryzen™ 7 7700 | 34622 | 4063 | 64 GB | 2 x 1 TB NVMe SSD | €64.00 | 542 | 63 | 1.00 | 32.00 |
| EX101 | Intel® Core™ i9-13900 | 47017 | 4327 | 64 GB | 2 x 1.92 TB NVMe SSD Datacenter Edition | €89.00 | 528 | 49 | 0.72 | 43.15 |
| AX102 | AMD Ryzen™ 9 7950X3D | 62502 | 4149 | 128 GB | 2 x 1.92 TB NVMe SSD Datacenter Edition | €109.00 | 573 | 38 | 1.17 | 35.23 |
| SX65 | AMD Ryzen™ 7 3700X | 22542 | 2660 | 64 GB | 4 x 22 TB HDD, 2 x 1 TB SSD | starting €104.00 | 217 | 26 | 0.62 | 433.04 |
| EX130-R | Intel® Xeon® Gold 5412U | 52557 | 3119 | 256 GB | 2 x 1.92 TB NVMe SSD Datacenter Edition | €134.00 | 393 | 23 | 1.91 | 28.66 |
| EX130-S | Intel® Xeon® Gold 5412U | 52557 | 3119 | 128 GB | 2 x 3.84 TB NVMe SSD Datacenter Edition | €139.00 | 378 | 22 | 0.92 | 55.32 |
| RX170 | Ampere® Altra® Q80-30 | 44425 | 882 | 128 GB | 2 x 960 GB NVMe SSD Datacenter Edition | €179.00 | 248 | 5 | 0.72 | 10.73 |
| GEX44 | Intel® Core™ i5-13500 | 31857 | 3884 | 64 GB | 2 x 1.92 TB NVMe SSD (Gen3) | starting €184.00 | 173 | 21 | 0.35 | 20.87 |
| AX162-R | AMD EPYC™ 9454P | 95918 | 2993 | 256 GB | 2 x 1.92 TB NVMe SSD Datacenter Edition | starting €199.00 | 482 | 15 | 1.29 | 19.30 |
| AX162-S | AMD EPYC™ 9454P | 95918 | 2993 | 128 GB | 2 x 3.84 TB NVMe SSD Datacenter Edition (Gen 4) | starting €199.00 | 482 | 15 | 0.64 | 38.59 |
| SX135 | AMD Ryzen™ 9 3900 | 30796 | 2623 | 128 GB | 2 x 1.92 TB NVMe SSD Datacenter Edition, 8 x 22 TB SATA Enterprise Hard Drive | €209.00 | 147 | 13 | 0.61 | 200.87 |
| RX220 | Ampere® Altra® Q80-30 | 44425 | 882 | 128 GB | 2 x 3.84 TB NVMe SSD Datacenter Edition | €229.00 | 194 | 4 | 0.56 | 33.54 |
| SX295 | AMD EPYC™ 7502P | 50818 | 2017 | 256 GB | 2 x 7.68 TB NVMe SSD Datacenter Edition, 14 x 22 TB SATA Enterprise Hard Drive | €399.00 | 127 | 5 | 0.64 | 423.58 |
| GEX130 | Intel® Xeon® Gold 5412U | 52557 | 3119 | 128 GB | 2 x 1.92 TB NVMe SSD Datacenter Edition | €838.00 | 63 | 4 | 0.15 | 4.58 |

The standout performer is the EX44, equipped with an Intel Core i5-13500. At €44.00 monthly it offers a compelling combination of 31857 PassMark points in multi-thread performance, a strong single-thread score of 3884, and comes with 64 GB RAM and dual 512 GB NVMe SSDs. Following closely is the AX42, powered by AMD's Ryzen 7 PRO 8700GE. At €49.00 per month it delivers 27882 PassMark points and matches the EX44's storage and RAM specifications. The AX52, featuring the Ryzen 7 7700, maintains strong value at €64.00 monthly, offering enhanced performance with 34622 PassMark points.

What's particularly interesting is how the higher-end models maintain relatively good value despite their premium positioning. The AX162 series features the powerful AMD EPYC 9454P with 95918 PassMark points at €199.00 monthly - impressive for a server of this caliber.

With extremely high storage value, the SX65 is excellent for storage-heavy applications despite a lower PassMark score.

Server Auction

This comparison intentionally excludes servers listed on Hetzner's Server Auction platform, which employs a descending-price (Dutch) auction mechanism. In this model, server prices progressively decrease over time, optimizing cost-efficiency for users willing to trade instant availability for price variability. Seasonal patterns such as holidays or promotions have historically impacted Hetzner's pricing. My personal recommendation is to look out for an i5-12500 or i5-13500 (EX44) at a price point around 36€ / month during the Christmas week.
Recommendations

- For startups / small projects: Start with the CAX series (CAX11 to CAX41), as they offer the highest value scores and very competitive pricing.
- For medium businesses: Consider the EX44 or AX42, which provide an excellent balance of performance and cost.
- For high-performance needs: The AX162 series provides enterprise-grade performance, offering excellent value despite the higher price point.
- For storage-intensive applications: The SX series, particularly the SX295 with up to 14 x 22 TB drives, delivers massive storage capacity at a premium value, while the more budget-friendly SX65 offers a solid alternative with 4 x 22 TB HDDs and 2 x 1 TB SSDs.
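The value scores used throughout this comparison are simple ratios: PassMark multi-thread points per euro of monthly price. A sketch of the calculation, using three figures taken from the dedicated table above:

```python
def value_score(passmark_multi, monthly_price_eur):
    """PassMark multi-thread points bought per euro of monthly price."""
    return passmark_multi / monthly_price_eur

servers = {            # model: (PassMark multi-thread, monthly price in EUR)
    "EX44":    (31857, 44.00),
    "AX42":    (27882, 49.00),
    "AX162-R": (95918, 199.00),
}
ranked = sorted(servers, key=lambda s: value_score(*servers[s]), reverse=True)
```

Ranking by this ratio reproduces the table's ordering: the EX44 comes out ahead of the AX42 and the much larger AX162-R, which is why a mid-range box can be the "standout performer" despite far lower absolute throughput.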
2024-11-08T08:19:02
en
train
42,049,304
CozyThemes
2024-11-05T07:14:15
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,049,314
JumpCrisscross
2024-11-05T07:16:10
Boeing Union Votes to End Strike
null
https://www.wsj.com/business/boeing-union-votes-to-end-strike-1b6cb89d
3
0
[ 42050949 ]
null
null
null
null
null
null
null
null
null
train
42,049,323
ilonamosh
2024-11-05T07:17:51
Exploratory Testing
null
https://testomat.io/blog/top-exploratory-testing-tools-to-watch/
1
1
[ 42049324 ]
null
null
null
null
null
null
null
null
null
train
42,049,351
janandonly
2024-11-05T07:23:40
Eighth Wave of Nostr Grants
null
https://opensats.org/blog/8th-wave-nostr-grants
1
0
null
null
null
null
null
null
null
null
null
null
train
42,049,382
izwasm
2024-11-05T07:29:12
Show HN: Todoit – Todo list gnome shell extension
Hi everyone, I'm the author of todoit. I was recently looking for a good todo list gnome extension, but most of the ones I found were old or had some bugs and did not work.

So I decided to create mine.
https://github.com/wassimbj/todoit-gnome
1
0
null
null
null
Failed after 3 attempts. Last error: Quota exceeded for quota metric 'Generate Content API requests per minute' and limit 'GenerateContent request limit per minute for a region' of service 'generativelanguage.googleapis.com' for consumer 'project_number:854396441450'.
GitHub - wassimbj/todoit-gnome: Gnome Todo list extension
null
wassimbj
Todoit Gnome Extension

Demo

Features

- Add/remove/copy tasks
- Toggle tasks progress
- Toggle with a shortcut (Alt+Shift+Space)
- Focus on a specific task (coming soon)
- Drag and Drop (coming soon)

Install

All you need to use todoit is the zip file.

Baby steps

1) Install gnome extensions manager

But before that, make sure you have gnome extensions prefs installed. Search for extensions in your search bar; if it's not found, install it with the command below.

sudo apt install gnome-shell-extension-prefs

2) Download the zip file

Download [email protected]

3) Extract and enable

Unzip the extension inside the ~/.local/share/gnome-shell/extensions folder. Make sure to create the extensions folder if it doesn't exist.

Now open the extensions manager and enable the Todo List extension. You can run gnome-shell-extension-prefs to open it, or open it directly from the apps list (Extensions).

The extracted zip file structure should look like this:

[email protected]/
├── extension.js
├── LICENCE
├── manager.js
├── metadata.json
├── schemas
│   ├── gschemas.compiled
│   └── org.gnome.shell.extensions.todoit.gschema.xml
├── stylesheet.css
└── utils.js

Credits

The very first icon is from flaticon.
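The extract-and-enable steps above can be scripted. This is only a sketch: the zip filename is taken from the release above, and the unzip line is left as a comment since it assumes the downloaded file is present in the current directory.

```shell
# GNOME Shell looks for extensions under this directory; create it if missing.
EXT_DIR="$HOME/.local/share/gnome-shell/extensions"
mkdir -p "$EXT_DIR"
# With the release zip downloaded, extract it into a folder named after the
# extension UUID, eg:
#   unzip [email protected] -d "$EXT_DIR/[email protected]"
# then enable it from the Extensions app (gnome-shell-extension-prefs).
echo "$EXT_DIR"
```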
2024-11-08T02:38:11
null
train
42,049,391
salerkim
2024-11-05T07:30:41
null
null
null
1
null
[ 42049393 ]
null
true
null
null
null
null
null
null
null
train
42,049,414
danboarder
2024-11-05T07:38:00
Multi-Agent Orchestrator framework for handling complex AI conversations
null
https://github.com/awslabs/multi-agent-orchestrator
1
0
null
null
null
null
null
null
null
null
null
null
train
42,049,418
gnabgib
2024-11-05T07:38:17
Study reveals blood sugar control is a key factor in slowing brain aging
null
https://www.bgu.ac.il/en/news-and-articles/blood-sugar-control-is-key-factor-in-slowing-brain-aging/
147
143
[ 42050258, 42050113, 42050057, 42049891, 42050044, 42050211, 42050203, 42050235, 42059109, 42050219, 42049949, 42050177, 42051552, 42049976, 42050196, 42050327, 42049997, 42050180, 42050324, 42049947, 42049919, 42050166, 42049975 ]
null
null
null
null
null
null
null
null
null
train
42,049,422
null
2024-11-05T07:39:24
null
null
null
null
null
null
[ "true" ]
null
null
null
null
null
null
null
null
train
42,049,429
swolpers
2024-11-05T07:40:21
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,049,446
null
2024-11-05T07:44:48
null
null
null
null
null
[ 42049447 ]
[ "true" ]
null
null
null
null
null
null
null
null
train
42,049,448
janmikael
2024-11-05T07:44:55
null
null
null
1
null
[ 42049449 ]
null
true
null
null
null
null
null
null
null
train
42,049,451
jessezhang
2024-11-05T07:46:18
OpenAI partners with Decagon to make customer support agents
null
https://openai.com/index/decagon/
3
0
[ 42050842 ]
null
null
null
null
null
null
null
null
null
train
42,049,453
JumpCrisscross
2024-11-05T07:46:23
Nuclear-Power Companies Hit by U.S. Regulator's Rejection of Amazon-Talen Deal
null
https://www.wsj.com/tech/nuclear-power-companies-hit-by-u-s-regulators-rejection-of-amazon-talen-deal-aaae4a59
2
0
null
null
null
null
null
null
null
null
null
null
train
42,049,470
purple-leafy
2024-11-05T07:48:50
Ask HN: I built a popular salary transparency tool – how else can I help?
As per title - I built a popular salary transparency tool (~6000 daily users) for my country's main job site.

I made this because I was pissed off one day with employers who underpay and negotiate down employees' salaries.

What else can I build to push back against the enshittification of the world, and help the common woman/man?

I hate the way employers treat employees.

I also hate the corruption of the government and government workers, and hate politics.

I hate the media and social media, regarding surveillance and manipulation.

I hate mortgages.

So, any ideas for things we can build to help the common man/woman?
null
3
3
[ 42049643, 42056519 ]
null
null
null
null
null
null
null
null
null
train
42,049,482
gnabgib
2024-11-05T07:51:12
Despite output, Generative AI doesn't have a coherent understanding of the world
null
https://news.mit.edu/2024/generative-ai-lacks-coherent-world-understanding-1105
4
0
null
null
null
null
null
null
null
null
null
null
train
42,049,488
epaga
2024-11-05T07:51:41
Final Silver Bulletin 2024 presidential election forecast: 50.015% Harris
null
https://www.natesilver.net/p/nate-silver-2024-president-election-polls-model
5
1
[ 42049946, 42050830 ]
null
null
null
null
null
null
null
null
null
train
42,049,489
1WayInterview
2024-11-05T07:52:03
null
null
null
1
null
[ 42049490 ]
null
true
null
null
null
null
null
null
null
train
42,049,491
MrBuddyCasino
2024-11-05T07:52:18
American Elections Are Unfair
null
https://www.cremieux.xyz/p/american-elections-are-unfair
3
0
[ 42050658 ]
null
null
null
null
null
null
null
null
null
train
42,049,508
henriy
2024-11-05T07:54:54
null
null
null
1
null
null
null
true
null
null
null
null
null
null
null
train
42,049,521
xkriva11
2024-11-05T07:57:11
Smalltalk: Conceptual Integrity in Action
null
https://medium.com/@jolisper/smalltalk-conceptual-integrity-in-action-06686ae8f3bf
2
0
[ 42050657 ]
null
null
null
null
null
null
null
null
null
train
42,049,566
aa_is_op
2024-11-05T08:07:10
Combining Machine Learning and Homomorphic Encryption in the Apple Ecosystem
null
https://machinelearning.apple.com/research/homomorphic-encryption
1
0
null
null
null
no_error
Combining Machine Learning and Homomorphic Encryption in the Apple Ecosystem
null
null
At Apple, we believe privacy is a fundamental human right. Our work to protect user privacy is informed by a set of privacy principles, and one of those principles is to prioritize using on-device processing. By performing computations locally on a user’s device, we help minimize the amount of data that is shared with Apple or other entities. Of course, a user may request on-device experiences powered by machine learning (ML) that can be enriched by looking up global knowledge hosted on servers. To uphold our commitment to privacy while delivering these experiences, we have implemented a combination of technologies to help ensure these server lookups are private, efficient, and scalable. One of the key technologies we use to do this is homomorphic encryption (HE), a form of cryptography that enables computation on encrypted data (see Figure 1). HE is designed so that a client device encrypts a query before sending it to a server, and the server operates on the encrypted query and generates an encrypted response, which the client then decrypts. The server does not decrypt the original request or even have access to the decryption key, so HE is designed to keep the client query private throughout the process. At Apple, we use HE in conjunction with other privacy-preserving technologies to enable a variety of features, including private database lookups and ML. We also use a number of optimizations and techniques to balance the computational overhead of HE with the latency and efficiency demands of production applications at scale. In this article, we’re sharing an overview of how we use HE along with technologies like private information retrieval (PIR) and private nearest neighbor search (PNNS), as well as a detailed look at how we combine these and other privacy-preserving techniques in production to power Enhanced Visual Search for Photos while protecting user privacy (see Figure 2). 
Introducing HE into the Apple ecosystem provides the privacy protections that make it possible for us to enrich on-device experiences with private server look-ups. To make it easier for the developer community to similarly adopt HE for their own applications, we have open-sourced swift-homomorphic-encryption, an HE library. See this post for more information.

Apple's Implementation of Homomorphic Encryption

Our implementation of HE needs to allow operations common to ML workflows to run efficiently at scale, while achieving an extremely high level of security. We have implemented the Brakerski-Fan-Vercauteren (BFV) HE scheme, which supports homomorphic operations that are well suited for computation (such as dot products or cosine similarity) on embedding vectors that are common to ML workflows. We use BFV parameters that achieve post-quantum 128-bit security, meaning they provide strong security against both classical and potential future quantum attacks (previously explained in this post).

HE excels in settings where a client needs to look up information on a server while keeping the lookup computation encrypted. We first show how HE alone enables privacy-preserving server lookup for exact matches with private information retrieval (PIR), and then we describe how it can serve more complex applications with ML when combining approximate matches with private nearest neighbor search (PNNS).

Private Information Retrieval (PIR)

A number of use-cases require a device to privately retrieve an exact match to a query from a server database, such as retrieving the appropriate business logo and information to display with a received email (a feature coming to the Mail app in iOS 18 later this year), providing caller ID information on an incoming phone call, or checking if a URL has been classified as adult content (as is done when a parent has set content restrictions for their child's iPhone or iPad) (see Figure 2C).
To protect privacy, the relevant information should be retrieved without revealing the query itself: in these cases, the business that emailed the user, the phone number that called the user, or the URL that is being checked.

Figure 2A: Features using Private Information Retrieval - Caller ID.
Figure 2B: Features using Private Information Retrieval - Mail.
Figure 2C: Features using Private Information Retrieval - Website filtering.

For these workflows, we use private information retrieval (PIR), a form of private keyword-value database lookup. With this process, a client has a private keyword and seeks to retrieve the associated value from a server, without downloading the entire database. To keep the keyword private, the client encrypts its keyword before sending it to the server. The server performs HE computation between the incoming ciphertext and its database, and sends the resulting encrypted value back to the requesting device, which decrypts it to learn the value associated with the keyword. Throughout this process, the server does not learn the client’s private keyword or the retrieved result, as it operates on the client’s ciphertext.

For example, in the case of web content filtering, the URL is encrypted and sent to the server. The server performs encrypted computation on the ciphertext with URLs in its database, the output of which is also a ciphertext. This encrypted result is sent down to the device, where it is decrypted to identify if the website should be blocked as per the parental restriction controls.

Private Nearest Neighbor Search (PNNS)

For use cases that require an approximate match, we use Apple's private nearest neighbor search (PNNS), an efficient private database retrieval process for approximate matching on vector embeddings, described in the paper Scalable Private Search with Wally.
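The core PIR computation can be sketched in the clear: the client builds a one-hot selection vector over the database rows, and the server returns the dot product of that vector with the value column. In deployment the selection vector is encrypted (e.g., under BFV), so the server performs the same combination homomorphically without learning which entry was selected. The database contents and keyword below are made up purely for illustration.

```python
# Plaintext simulation of a PIR keyword-value lookup.
# In the real protocol the selection vector would be encrypted, and the
# server's weighted combination would run over ciphertexts.

database = {
    "example.com":  "allowed",
    "badsite.test": "blocked",   # hypothetical entries
    "news.test":    "allowed",
}
keys = list(database)

def make_query(keyword):
    # One-hot vector: 1 at the position of the client's keyword, 0 elsewhere.
    return [1 if k == keyword else 0 for k in keys]

def server_lookup(selection):
    # The server iterates over every row, weighting each by the (normally
    # encrypted) selector, so it cannot tell which row was actually requested.
    values = [database[k] for k in keys]
    return "".join(v for v, s in zip(values, selection) if s)

query = make_query("badsite.test")   # the client encrypts this in the real system
assert server_lookup(query) == "blocked"
```

The key point is that the server's work is the same regardless of which entry the client wanted; only the (encrypted) selector differs.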
With PNNS, the client encrypts a vector embedding and sends the resulting ciphertext as a query to the server. The server performs HE computation to conduct a nearest neighbor search and sends the resulting encrypted values back to the requesting device, which decrypts them to learn the nearest neighbor to its query embedding. As with PIR, throughout this process the server does not learn the client’s private embedding or the retrieved results, as it operates on the client’s ciphertext.

By using techniques like PIR and PNNS in combination with HE and other technologies, we are able to build on-device experiences that leverage information from large server-side databases, while protecting user privacy.

Implementing These Techniques in Production

Enhanced Visual Search for Photos, which allows a user to search their photo library for specific locations, like landmarks and points of interest, is an illustrative example of a useful feature powered by combining ML with HE and private server lookups. Using PNNS, a user’s device privately queries a global index of popular landmarks and points of interest maintained by Apple to find approximate matches for places depicted in their photo library. Users can configure this feature on their device under Settings → Photos → Enhanced Visual Search.

The process starts with an on-device ML model that analyzes a given photo to determine if there is a “region of interest” (ROI) that may contain a landmark. If the model detects an ROI in the “landmark” domain, a vector embedding is calculated for that region of the image. The dimension and precision of the embedding affect the size of the encrypted request sent to the server, the HE computation demands, and the response size, so to meet the latency and cost requirements of large-scale production services, the embedding is quantized to 8-bit precision before being encrypted.
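The quantization and similarity search described above can be simulated in the clear. The sketch below normalizes an embedding, quantizes it to 8-bit precision, and ranks database entries by dot product (which, on unit vectors, orders the same as cosine similarity); in the deployed system the dot products are computed homomorphically on the encrypted query. The 4-dimensional embeddings and landmark names are invented for illustration, and the real quantization scheme is not public.

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def quantize_int8(v):
    # Map each coordinate of a unit vector into [-127, 127]; this mirrors the
    # idea of reducing precision to 8 bits to shrink the encrypted request.
    return [max(-127, min(127, round(x * 127))) for x in v]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Hypothetical 4-dim embeddings; production embeddings are far larger.
query = quantize_int8(normalize([0.9, 0.1, 0.0, 0.4]))
shard = {
    "Golden Gate Bridge": quantize_int8(normalize([0.8, 0.2, 0.1, 0.5])),
    "Eiffel Tower":       quantize_int8(normalize([0.1, 0.9, 0.3, 0.0])),
}

# The server would compute these dot products homomorphically on the
# encrypted query; here we run the same arithmetic in plaintext.
scores = {name: dot(query, emb) for name, emb in shard.items()}
best = max(scores, key=scores.get)
assert best == "Golden Gate Bridge"
```

Because dot products are exactly the operations BFV supports efficiently, this plaintext pipeline maps directly onto the homomorphic one.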
The server database to which the client will send its request is divided into disjoint subdivisions, or shards, of embedding clusters. This helps reduce the computational overhead and increase the efficiency of the query, because the server can focus the HE computation on just the relevant portion of the database. A precomputed cluster codebook containing the centroids for the cluster shards is available on the user’s device. This enables the client to locally run a similarity search to identify the closest shard for the embedding, which is added to the encrypted query and sent to the server.

Identifying the database shard relevant to the query could reveal sensitive information about the query itself, so we use differential privacy (DP) together with an OHTTP relay, operated by a third party, which serves as an anonymization network that hides the device's source IP address before the request ever reaches the Apple server infrastructure. With DP, the client issues fake queries alongside its real ones, so the server cannot tell which are genuine. The queries are also routed through the anonymization network to ensure the server can’t link multiple requests to the same client. When running PNNS for Enhanced Visual Search, our system ensures strong privacy parameters for each user's photo library, namely (ε, δ)-differential privacy with ε = 0.8 and δ = 10⁻⁶. For more details, see Scalable Private Search with Wally.

The fleet of servers that handle these queries leverages Apple’s existing ML infrastructure, including a vector database of global landmark image embeddings, expressed as an inverted index. The server identifies the relevant shard based on the index in the client query and uses HE to compute the embedding similarity in this encrypted space. The encrypted scores and a set of corresponding metadata (such as landmark names) for candidate landmarks are then returned to the client.
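The client-side steps above, picking the nearest shard from a local centroid codebook and padding the real query with decoys, can be sketched as follows. The codebook contents, shard names, and decoy count are all hypothetical; the deployed system derives the number and distribution of fake queries from its (ε, δ) analysis in the Wally paper, which this toy does not attempt to reproduce.

```python
import random

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Hypothetical on-device codebook: one centroid per server-side shard.
codebook = {
    "shard-bridges":   [0.9, 0.1, 0.1],
    "shard-towers":    [0.1, 0.9, 0.2],
    "shard-mountains": [0.2, 0.1, 0.9],
}

def nearest_shard(embedding):
    # Local similarity search against the precomputed centroids; only the
    # winning shard id accompanies the encrypted query to the server.
    return max(codebook, key=lambda s: dot(codebook[s], embedding))

def queries_with_decoys(real_embedding, n_fake=2):
    # Fake queries are mixed in so the server, which also cannot see the
    # client's IP thanks to the OHTTP relay, cannot tell which lookups are
    # genuine. (The number of decoys here is arbitrary, not the real budget.)
    fakes = [[random.random() for _ in range(3)] for _ in range(n_fake)]
    batch = [real_embedding, *fakes]
    random.shuffle(batch)
    return [(nearest_shard(e), e) for e in batch]  # each sent via the relay

assert nearest_shard([0.8, 0.2, 0.0]) == "shard-bridges"
assert len(queries_with_decoys([0.8, 0.2, 0.0])) == 3
```

Note the division of labor: the shard id is computed locally so the server never sees the raw embedding, and the decoys plus relay together break the link between a query and a particular user.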
To optimize the efficiency of server-client communications, all similarity scores are merged into one ciphertext of a specified response size. The client decrypts the reply to its PNNS query, which may contain multiple candidate landmarks. A specialized, lightweight on-device reranking model then predicts the best candidate by using high-level multimodal feature descriptors, including visual similarity scores; locally stored geo-signals; popularity; and index coverage of landmarks (to debias candidate overweighting). When the model has identified the match, the photo’s local metadata is updated with the landmark label, and the user can easily find the photo when searching their device for the landmark’s name.

Conclusion

As shown in this article, Apple is using HE to uphold our commitment to protecting user privacy while building on-device experiences enriched with information privately looked up from server databases. By implementing HE with a combination of privacy-preserving technologies like PIR and PNNS, on-device and server-side ML models, and other privacy-preserving techniques, we are able to deliver features like Enhanced Visual Search without revealing to the server any information about a user’s on-device content and activity. Introducing HE to the Apple ecosystem has been central to enabling this, and can also help to provide valuable global knowledge to inform on-device ML models while preserving user privacy. With the recently open-sourced library swift-homomorphic-encryption, developers can now similarly build on-device experiences that leverage server-side data while protecting user privacy.

Related readings and updates

This paper presents Wally, a private search system that supports efficient semantic and keyword search queries against large databases. When sufficiently many clients are making queries, Wally’s performance is significantly better than previous systems.
In previous private search systems, for each client query, the server must perform at least one expensive cryptographic operation per database entry. As a result, performance degraded…

Understanding how people use their devices often helps in improving the user experience. However, accessing the data that provides such insights (for example, what users type on their keyboards and the websites they visit) can compromise user privacy. We develop a system architecture that enables learning at scale by leveraging local differential privacy, combined with existing privacy best practices. We design efficient and scalable local differentially private algorithms and provide rigorous analyses to demonstrate the tradeoffs among utility, privacy, server computation, and device bandwidth. Understanding the balance among these factors leads us to a successful practical deployment using local differential privacy. This deployment scales to hundreds of millions of users across a variety of use cases, such as identifying popular emojis, popular health data types, and media playback preferences in Safari. We provide additional details about our system in the full version.