id (int64, 2 to 42.1M) | by (large_string, 2 to 15 chars) | time (timestamp[us]) | title (large_string, 0 to 198 chars) | text (large_string, 0 to 27.4k chars) | url (large_string, 0 to 6.6k chars) | score (int64, -1 to 6.02k) | descendants (int64, -1 to 7.29k) | kids (large list) | deleted (large list) | dead (bool, 1 class) | scraping_error (large_string, 25 classes) | scraped_title (large_string, 1 to 59.3k chars) | scraped_published_at (large_string, 4 to 66 chars) | scraped_byline (large_string, 1 to 757 chars) | scraped_body (large_string, 1 to 50k chars) | scraped_at (timestamp[us]) | scraped_language (large_string, 58 classes) | split (large_string, 1 class)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
42,019,599 | tosh | 2024-11-01T17:49:05 | iPad Mini Review: The Third Place | null | https://www.macstories.net/stories/ipad-mini-review-the-third-place/ | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,019,608 | gmays | 2024-11-01T17:50:01 | Human brain can process certain sentences in 'blink of an eye' | null | https://www.theguardian.com/science/2024/oct/23/human-brain-can-process-certain-sentences-in-blink-of-an-eye-says-study | 5 | 2 | [
42019738
] | null | null | null | null | null | null | null | null | null | train |
42,019,610 | thunderbong | 2024-11-01T17:50:04 | Open Source Licenses and Lego Blocks | null | https://kftray.app/blog/posts/9-oss-legos | 1 | 1 | [
42020030
] | null | null | null | null | null | null | null | null | null | train |
42,019,624 | euvin | 2024-11-01T17:50:53 | Local58.tv | null | https://local58.tv | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,019,632 | saikatsg | 2024-11-01T17:51:32 | The horrors of software bugs [video] | null | https://www.youtube.com/watch?v=Iq_r7IcNmUk | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,019,643 | nickwritesit | 2024-11-01T17:53:15 | Toward a Phenomenology of the Phone | null | https://www.newcartographies.com/p/out-of-the-landscape-into-the-portrait | 2 | 1 | [
42020117
] | null | null | null | null | null | null | null | null | null | train |
42,019,649 | tosh | 2024-11-01T17:53:50 | Claris | null | https://en.wikipedia.org/wiki/Claris | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,019,661 | zahlman | 2024-11-01T17:54:31 | Tim Peters has returned on the Python Discourse forum | null | https://discuss.python.org/t/three-month-suspension-for-a-core-developer/60250?page=2 | 50 | 13 | [
42021765,
42021368,
42026735,
42033224
] | null | null | null | null | null | null | null | null | null | train |
42,019,672 | f1shy | 2024-11-01T17:55:12 | Bird wings inspire new approach to flight safety | null | https://engineering.princeton.edu/news/2024/10/28/bird-wings-inspire-new-approach-flight-safety | 14 | 3 | [
42051366,
42051251,
42051388
] | null | null | null | null | null | null | null | null | null | train |
42,019,674 | PaulHoule | 2024-11-01T17:55:23 | Netflix Shutters AAA Game Studio It Built with Former Blizzard, Bungie Devs | null | https://kotaku.com/netflix-team-blue-bungie-blizzard-sony-closed-1851678075 | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,019,694 | brandonb | 2024-11-01T17:57:39 | How Apple Watch estimates VO2Max within 1.2 ml/kg/min without a treadmill test | null | https://www.empirical.health/blog/apple-watch-cardio-fitness-accuracy-vo2max/ | 89 | 57 | [
42050295,
42050621,
42050290,
42050388,
42050556,
42050697,
42050301,
42050534,
42050889,
42050369,
42050617,
42050661,
42050778
] | null | null | null | null | null | null | null | null | null | train |
42,019,696 | bookofjoe | 2024-11-01T17:58:06 | Five Centuries of Board Games | null | http://bibliodyssey.blogspot.com/2008/11/board-games.html | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,019,713 | AbhilashK26 | 2024-11-01T17:59:54 | Guide to Getting Slack Webhooks | null | https://pinggy.io/blog/how_to_get_slack_webhook/ | 8 | 0 | [
42019714
] | null | null | null | null | null | null | null | null | null | train |
42,019,730 | apichar | 2024-11-01T18:01:09 | Localize your Supabase database with AI translation right from the dashboard [video] | null | https://www.youtube.com/watch?v=loOJxuwgn2g | 1 | 0 | null | null | null | no_article | null | null | null | null | 2024-11-08T06:17:12 | null | train |
42,019,731 | tosh | 2024-11-01T18:01:10 | Nintendo Releases a Music App | null | https://www.macstories.net/news/nintendo-releases-a-music-app/ | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,019,749 | LonnieMc | 2024-11-01T18:02:35 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,019,753 | whodis12 | 2024-11-01T18:02:46 | One pager simple AF local OpenAI client | null | https://github.com/ammarasmro/LocalOpenAIChat | 1 | 1 | [
42019754
] | null | null | null | null | null | null | null | null | null | train |
42,019,772 | todsacerdoti | 2024-11-01T18:04:38 | Conditional class names using DOM attributes as state | null | https://www.simeongriggs.dev/tailwindcss-conditional-class-names-the-right-way | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,019,779 | LaSombra | 2024-11-01T18:05:17 | 'We were just trying to get it to work': The failure that started the internet | null | https://www.bbc.com/future/article/20241028-the-failure-that-started-the-internet | 1 | 0 | null | null | null | missing_parsing | 'We were just trying to get it to work': The failure that started the internet | 2024-10-29T13:00:00.000Z | Scott Nover | Emmanuel LaFontThe first message sent over Arpanet was an inauspicious start to what would grow into the internet (Credit: Emmanuel LaFont)On 29 October 1969, two scientists established a connection between computers some 350 miles away and started typing a message. Halfway through, it crashed. They sat down with the BBC 55 years later.At the height of the Cold War, Charley Kline and Bill Duvall were two bright-eyed engineers on the front lines of one of technology's most ambitious experiments. Kline, a 21-year-old graduate student at the University of California, Los Angeles (UCLA), and Duvall, a 29-year-old systems programmer at Stanford Research Institute (SRI), were working on a system called Arpanet, short for the Advanced Research Projects Agency Network. Funded by the US Department of Defense, the project aimed to create a network that could directly share data without relying on telephone lines. Instead, this system used a method of data delivery called "packet switching" that would later form the basis for the modern internet.It was the first test of a technology that would change almost every facet of human life. But before it could work, you had to log in.Kline sat at his keyboard between the lime-green walls of UCLA's Boelter Hall Room 3420, prepared to connect with Duvall, who was working a computer halfway across the state of California. But Kline didn't even make it all the way through the word "L-O-G-I-N" before Duvall told him over the phone that his system crashed. Thanks to that error, the first "message" that Kline sent Duvall on that autumn day in 1969 was simply the letters "L-O".Courtesy of Charley KlineCharley Kline, pictured in the center smiling at the camera, was the first to send a message over the internet (Credit: Courtesy of Charley Kline)They got their connection up and running about an hour later after some tweaks, and that initial crash was just a blip in an otherwise monumental achievement. But neither man realised the significance of the moment. "I certainly didn't at that time," Kline says. "We were just trying to get it to work."The BBC spoke to Kline and Duvall for the 55th anniversary of the occasion. Half a century later, the internet has shrunk the whole world down to a small black box that fits in your pocket, one that dominates our attention and touches the furthest reaches of lived experience. But it all started with two men, experiencing just how frustrating it is when you can't get online for the very first time.This interview has been edited for clarity and length.Can you describe the computers that enabled Arpanet? Were these massive, noisy machines?Kline: They were small computers – by standards of that time – about the size of a refrigerator. They were somewhat noisy from the cooling fans, but quiet compared with the sounds from all the fans in our Sigma 7 computer. There were lights on the front that would blink, switches that could control the IMP [Interface Message Processor], and a paper tape reader that could be used to load the software.Duvall: They were in a rack big enough to hold a complete set of sound equipment for a large show today. 
And they were thousands if not millions or billions of times less powerful than the processor in an Apple Watch. These were the old days!Take us inside that moment when you started typing L-O.Kline: Unlike websites and other systems today, when you connected a terminal to the SRI system nothing happened until you typed something. If you wanted to run a programme, you first needed to login – by typing the word "login" – and the system would ask for your user name and password.As I typed a character on my terminal – a teletype model 33 – it would get sent from my terminal to the programme I wrote for the SDS Sigma 7 computer. That programme would take the character, format it into a message and send it to the Interface Message Processor. When it was received by SRI's system, [it] would treat [the message] as if it came from a local terminal and would process it. It would "echo" the character [replicate it on the terminal]. In this case, Bill's code would take that character and format it into a message and send it to the IMP to go back to UCLA. When I received it, I would print it on my terminal.UCLAThe Interface Message Processor (IMP) functioned as the internet’s first router (Credit: UCLA)I was on the phone with Bill when we tried this. I told him I typed the letter L. He told me he had received the letter L and echoed it back. I told him that it printed. Then I typed the letter O. Again, it worked fine. I typed the letter G. Bill told me his system had crashed, and he would call me back.Duvall: The UCLA system did not anticipate that it would receive G-I-N after Charlie had typed L-O, so it sent an error message to the SRI computer. I don't recall exactly what the message was, but what happened next was due to the fact that the network connection was much faster than anything seen before.The normal connection speed was 10 characters per second whereas the Arpanet could transmit characters at up to 5,000 characters per second. The result of this message being sent from UCLA to the SRI computer flooded the input buffer which only expected 10 characters per second. It was like filling a glass with a fire hose. I quickly discovered what had happened, changed the buffer size and rebuilt the system, which took about an hour.Emmanuel LaFontThe first message sent over what would become the internet consisted of just two letters - L and O (Credit: Emmanuel LaFont)Did you realise this could be a historic moment?Kline: No, I certainly didn't at that time.Duvall: Not really. It was another step forward in the larger context of the work we were doing at SRI which we did believe would have a large impact.When Samuel Morse sent the first telegraph message in 1844, he had an eye for drama, tapping out "What hath God wrought" on a line from Washington, DC to Baltimore, Maryland, US. If you could go back, would you have typed something more memorable?Kline: Of course, if I had realised its importance. But we were just trying to get it to work.Duvall: No. This was the first test of a very complicated system with a lot of moving parts. To have something this complex work in the very first test was dramatic in and of itself.What was the atmosphere like when the message was sent?Duvall: We were each alone in our respective computer laboratories at night. We were both happy to have had such a successful first test as the culmination of a lot of work. 
I went to a local "watering hole" and had a burger and a beer.Kline: I was happy that it worked and went home to get some sleep.What did you expect Arpanet to become?Duvall: I saw the work we were doing at SRI as a critical part of a larger vision, that of information workers connected to each other and sharing problems, observations, documents and solutions. What we did not see was the commercial adoption nor did we anticipate the phenomenon of social media and the associated disinformation plague. Although, it should be noted, that in [SRI computer scientist] Douglas Engelbart's 1962 treatise describing the overall vision, he notes that the capabilities we were creating would trigger profound change in our society, and it would be necessary to simultaneously use and adapt the tools we were creating to address the problems which would arise from their use in society.They were thousands if not millions or billions of times less powerful than the processor in an Apple Watch. These were the old days! - Bill DuvallWhat aspects of the internet today remind you of Arpanet?Duvall: Referring to the larger vision which was being created in Engelbart's group (the mouse, full screen editing, links, etc.), the internet today is a logical evolution of those ideas enhanced, of course, by the contributions of many bright and innovative people and organisations.Kline: The ability to use resources from others. That's what we do when we use a website. We are using the facilities of the website and its programmes, features, etc. And, of course, email.The Arpanet pretty much created the concept of routing and multiple paths from one site to another. That got reliability in case a communication line failed. It also allowed increases in communication speeds by using multiple paths simultaneously. Those concepts have carried over to the internet.Courtesy of UCLAToday, the site of the first internet transmission at UCLA’s Boetler Hally Room 3420 functions as a monument to technology history (Credit: Courtesy of UCLA)As we developed the communications protocols for the Arpanet, we discovered problems, redesigned and improved the protocols and learned many lessons that carried over to the Internet. TCP/IP [the basic standard for internet connection] was developed both to interconnect networks, in particular the Arpanet with other networks, and also to improve performance, reliability and more.How do you feel about this anniversary?Kline: That's a mix. Personally, I feel it is important, but a little overblown. The Arpanet and what sprang from it are very important. This particular anniversary to me is just one of many events. I find somewhat more important than this particular anniversary were the decisions by Arpa to build the Network and continue to support its development.Duvall: It's nice to remember the origin of something like the internet, but the most important thing is the enormous amount of work that has been done since that time to turn it into what is a major part of societies worldwide.The modern web is dominated not by government or academic researchers, but by some of the largest companies in the world. How do you feel about what the internet has become? What are you most concerned about?Kline: We use it in our daily lives, and it is very important. It's hard to imagine ever not having it again. One of the benefits of it being so open and not controlled by a government is that new ideas can get developed, such as online shopping, banking, video streaming, news sites, social media, and more. 
But because it has become so important to our lives it is a target for malicious activity.We hear constantly about how things have been compromised. There is a tremendous loss of privacy. And the big companies (Google, Meta, Amazon and internet service providers such as Comcast and AT&T) have too much power in my opinion. But I am not sure of the right remedy.Courtesy of UCLABy December 1969, Arpanet connected a few computer hubs dotting the US, compared to the estimated 50 billion nodes that make up the modern internet (Credit: Courtesy of UCLA)Duvall: I think that there is great danger in the domination by any single entity. We have seen the power of disinformation in directing policy and elections. We have also seen the power of companies in influencing the direction of social norms and the formation of adults and young adults.Kline: One of my biggest fears has been about the spread of false information. How many times have you heard someone say, "I saw it on the internet". It was always possible to spread false information, but it would cost money to send out mailers, put up a billboard or take out a TV ad. Now it is cheap and easy. And as it reaches millions of people, it gets repeated and treated as fact.Another fear is that as more and more critical systems have moved onto the internet it becomes easier to cause a serious disruption if those systems are taken down or compromised. For example, not only communications systems but banking, utilities, transportation, etc.Duvall: It has great power but, not heeding Engelbart's warning in 1962, we have not effectively used the power of the internet to manage the social impact.Are there any lessons from your time at Arpanet that could make it a better place for everyone?Kline: While the openness of the internet allows experimentation and new uses, the lack of control can lead to compromises. Arpa kept some control of the Arpanet. That way they could make sure that everything worked, make decisions about which protocols were required, deal with issues such as site names and other issues.While Icann [the Internet Corporation for Assigned Names and Numbers] still manages some of that, there have been international disagreements about how to move forward and whether the US has too much control, etc. But we still need some controls to keep the network functional. Also, since the Arpanet was relatively small, we could experiment with major changes in design, protocols, and more. That would be extremely hard now.Duvall: We are standing on the edge of a precipice with AI and the reflexive access it has to everyone who graces the internet. The internet had explosive growth and development – some of it socially damaging – in the early days. AI now stands at that threshold, and is inseparable from the internet. And it is not unreasonable to call AI an existential threat. The time to recognise the dangers as well as the promise is now.--For timely, trusted tech news from global correspondents to your inbox, sign up to the Tech Decoded newsletter, while The Essential List delivers a handpicked selection of features and insights twice a week.For more science, technology, environment and health stories from the BBC, follow us on Facebook, Xand Instagram. | 2024-11-08T20:49:33 | null | train |
42,019,782 | sandwichsphinx | 2024-11-01T18:05:31 | Ubuntu Hoping to Remove Qt 5 Before Ubuntu 26.04 LTS | null | https://www.phoronix.com/news/Ubuntu-Hopes-Removing-Qt-5 | 86 | 95 | [
42020935,
42020162,
42021094,
42020565,
42020633,
42020241,
42021024,
42022318,
42021490,
42021868,
42020963,
42020363
] | null | null | no_error | Ubuntu Hoping To Remove Qt 5 Before Ubuntu 26.04 LTS | null | Written by Michael Larabel in Ubuntu on 1 November 2024 at 06:24 AM EDT. 51 Comments |
Ubuntu developer Simon Quigley laid out a plan for moving Ubuntu packages from Qt 5 to Qt 6, in the hope that by the time of the Ubuntu 26.04 LTS cycle in early 2026 the older version of this graphical toolkit can be removed.
The hope is that for the next major Long Term Support (LTS) release, Qt 5 can be removed from the Ubuntu archive so that only Qt 6 ships. This is similar to Ubuntu's prior phasing out of the Qt 4 toolkit, and getting Qt 5 out by Ubuntu 26.04 LTS would avoid another long maintenance period.
Shipping Ubuntu 26.04 LTS without Qt 5 is simply a goal at this point. The hope is that developers will help the upstream projects transition from Qt 5 to Qt 6, which is less of a burden than the move from Qt 4 to Qt 5 was. There is also the Qt5Compat library to help Qt 5 apps make the move to the Qt 6 libraries.
The plan for removing Qt 5 from Ubuntu before Ubuntu 26.04 LTS was laid out on Ubuntu Discourse.
Among the early concerns with the plan is that KDE Plasma has a hard dependency on VLC via the Phono-GStreamer package, but VLC won't be ported to Qt 6 until the VLC 4.0 release. Currently there is no timeline for VLC 4.0 shipping. Ubuntu Touch developers working on the Lomiri (former Unity 8) code also have raised doubts if they will be able to migrate to Qt 6 in time for Ubuntu 26.04. In any event we'll see over the next year if enough progress is made for Ubuntu 26.04 LTS to ship with the Qt 5 packages removed. | 2024-11-08T01:41:28 | en | train |
42,019,790 | jazmichaelking | 2024-11-01T18:06:01 | Mastodon's Account Recommendations Explained | null | https://connect.iftas.org/news/connect/mastodon-discovery/ | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,019,793 | danimg | 2024-11-01T18:06:08 | null | null | null | 1 | null | [
42019794
] | null | true | null | null | null | null | null | null | null | train |
42,019,820 | mrdeboulay | 2024-11-01T18:08:51 | Looking for a Great Platform to Host Free Interactive Book | null | https://medium.com/@msg_3707/im-going-to-document-my-journey-to-bring-a-passion-project-to-life-7ec57b4e53a3 | 2 | 1 | [
42019821
] | null | null | no_error | Kabukimono - @mrdeboulay - Medium | 2024-11-01T18:03:14.162Z | @mrdeboulay | Logline: In U.S.-occupied 1950s Japan, a Japanese American armorer for the yakuza — scarred by his WWII internment — is pulled into a deadly game of loyalty and betrayal by a rogue CIA agent. Inspired by John le Carré’s A Perfect Spy.The idea for Kabukimono came from a fascination I developed in college with a lesser-known part of history: the U.S. occupation of Japan after WWII. We hear a lot about the end of the war, the bombings of Hiroshima and Nagasaki, and the aftermath in the U.S., but what about the people left behind in Japan?During the occupation, American forces were everywhere. The CIA was working with unexpected allies — the yakuza, Japan’s organized crime syndicate — to fight off Communist influence. The U.S., supposedly bringing democracy and freedom to Japan, was cutting deals with criminals to maintain control.Structuring Kabukimono as a memoir allowed for a deeply reflective, non-linear narrative that moves back and forth in time. Kenji’s memories of Pym and his own moments of conflict unfold like fragments, each scene peeling back a new layer of his struggle with identity, loyalty, and anger. As Kenji recounts his journey, we see the evolution of a man who begins as a disillusioned craftsman and ends as a quiet resistor, carrying forward Pym’s legacy of defiance.Each flashback becomes a piece of a larger puzzle, revealing how Kenji’s path intertwined with Pym’s and how he arrived at a place where rebellion felt necessary.The protagonist, Kenji, is a Japanese American who returns to Japan after WWII with the intent to find a sense of belonging, only to confront the reality of occupation and foreign control. Kenji’s experience parallels my own personal journey — returning to the Caribbean as a Black American and finding myself both connected to and alienated from a culture that was simultaneously my heritage and something foreign. This sense of rootlessness, of being between worlds, is central to Kenji’s story.Tonally, I want to capture the introspective, methodical atmosphere of The American starring George Clooney — a film that takes its time, showing the life of a man who operates on the edges of society, using his skills in solitude. This inspired the main character’s occupation: he’s an armorer for the yakuza. His role as gunsmith defines him; he’s a craftsman — a meticulous, disciplined figure whose work is a reflection of his inner turmoil and restrained fury.During the occupation, American forces were everywhere. The CIA was working with unexpected allies — the yakuza, Japan’s organized crime syndicate — to fight off Communist influence. The U.S., supposedly bringing democracy and freedom to Japan, was cutting deals with criminals to maintain control.When creating Kabukimono, the initial spark of inspiration came from the very word itself. “Kabukimono” (傾奇者) refers to a group of samurai and rōnin during Japan’s Edo period who defied social norms. Known for their flamboyant clothing, outlandish behavior, and rebellious attitude, kabukimono were the original outcasts — warriors who stood apart from society, often unsettling those around them. They were unpredictable, sometimes violent, and unapologetically individualistic in a culture that valued conformity and restraint. 
In Japanese, “kabuku” means “to tilt” or “to deviate,” and kabukimono literally means “those who lean” — a fitting name for characters who live on the edges of society, challenging the norms that define it.Inspiration from John le CarreAt the heart of Kabukimono is an homage to John le Carré’s A Perfect Spy, a novel that digs deep into the moral ambiguities of espionage and the fractured identity of a British spy. In le Carré’s original, Magnus Pym is an MI6 agent grappling with conflicting loyalties during the Cold War, specifically in Czechoslovakia. In Kabukimono, the character of Pym has been reimagined as a CIA operative stationed in occupied Japan — a setting that brings new complexities and historically relevant stakes to the narrative.By making Pym an American intelligence officer in Japan, we transplant the intrigue of Cold War Europe to the complex, morally ambiguous landscape of post-war Asia. The CIA’s involvement in suppressing communism through alliances with the yakuza mirrors MI6’s Cold War entanglements in Europe, but with distinctly American overtones of interventionism. The dynamics of occupation, control, and ideological conflict provide fertile ground for exploring the internal conflicts that would lead someone like Pym to question everything he stands for.While Pym in Kabukimono is based on the same character arc as in le Carré’s novel, he serves as a catalyst rather than the main character. Most of Pym’s struggles happen offscreen, hinted at through conversations, letters, and memories from the protagonist’s perspective. This narrative shift allows the focus to remain on Kenji, the Japanese American armorer, while Pym’s ideological transformation profoundly influences the plot and Kenji’s own journey.Pym’s quiet descent into disillusionment drives Kenji’s story forward, inspiring him to confront his own inner conflicts. As Pym unravels, he leaves behind a trail of small but powerful rebellions — signs that he is beginning to doubt his role as an agent of American control. For Kenji, witnessing Pym’s disillusionment validates his own feelings of anger and betrayal, sparking his decision to push back against forces that have marginalized him his entire life.A key character in the story is a Black American soldier who’s stationed in Japan. Many Black soldiers found themselves marginalized in the military, and in Japan, some started trading wartime ammunition on the black market to enrich themselves. This character is really fun to write. Isaac Ford thrives in Tokyo’s black market, trading American ammunition and weapons in a world where allegiance is flexible. Isaac is a pragmatist, valuing survival above ideals, and serves as a counterpoint to Kenji’s emerging sense of purpose. Through Isaac, Kenji learns that loyalty is often a liability in their world.I’m Documenting the ProcessThis project won’t just be a film; it’ll be a journey that you can experience along the way. The “living book” platform will serve as both a blog and an interactive portfolio, where you can follow along with the story’s progress, scene by scene and challenge by challenge. Here’s what you can expect:Story Development and Visuals: I’ll share storyboards, character designs, and concept art, so you can watch how the visual style and tone of Kabukimono evolve. 
Each storyboard will be tied to specific scenes, showing you the progression from initial sketch to final vision.Writing & Research Notes: You’ll be able to see drafts of scenes, snippets of dialogue, and even notes from the research that grounds the story’s historical context. A “Dive Deeper” section will link out to articles, documentaries, and historical references that inform Kabukimono, allowing you to explore the backdrop of post-war Japan and the complex relationship between the U.S. and Japan.Collaborative Process: Through this platform, you can get a sense of what goes into collaborating with writers, actors, and cultural consultants. You’ll see the ways we weave cultural and comedic nuances into the story and how these elements bring layers of authenticity and levity to Kabukimono.Audience Interaction: I want to hear from you! As I post updates, you’ll have the chance to leave comments, ask questions, and share insights. Think of this as an open conversation; your input will help shape the platform, making it a living, breathing part of the creative process.FinancingBeyond the story and characters, there’s another vital part of Kabukimono: bringing it to the screen. My goal is to document the financing and packaging strategy — something rarely shown to audiences. This kind of transparency is particularly valuable for aspiring filmmakers and film enthusiasts interested in the nitty-gritty of international production.With a budget estimated around $15 million, funding Kabukimono will require navigating subsidies, financing structures, and international sales. Here’s how I’ll break it down:Budgeting & Cost Breakdown: You’ll see how costs are estimated for a mid-budget film, with breakdowns for casting, crew, special effects, locations, and post-production. This gives you a realistic look at the financial planning behind a project of this scale.International Sales Strategy: One critical aspect of financing Kabukimono involves pre-selling distribution rights in international markets. By securing distribution deals in advance, especially in markets with strong demand for thrillers with Japanese themes, we can offset production costs. On the platform, I’ll break down the process of identifying target territories, approaching distributors, and understanding how these pre-sales contribute to the overall financing package. You’ll get insights into how international sales impact the scope and scale of a film’s budget — and why certain territories are prioritized over others in a film’s financing strategy.The financing process is rarely smooth, especially for a project that bridges cultures, languages, and film industries. I want to be candid about the obstacles we encounter — whether it’s unexpected costs, changes in international policies, or shifting market interests — and how we adapt to keep Kabukimono on track. These updates will provide a real-time look at the challenges involved in creating a film that requires authenticity, historical accuracy, and high production value, all while working within a mid-budget framework.By tracking the financing journey of Kabukimono step-by-step, this platform will serve as an invaluable resource for anyone curious about the logistics of filmmaking. Too often, the challenges of funding international projects and mid-budget films remain opaque. 
Through this real-time documentation, you’ll get a detailed, transparent look at the layers of planning, strategy, and creativity required to bring a complex, character-driven story to the screen.As the platform evolves, it will stand as a comprehensive record of Kabukimono’s creation — a model for how independent filmmakers can navigate the world of financing and production with transparency. My hope is that this approach not only demystifies the process but also inspires a new generation of filmmakers to document and share their own journeys.With this “living book,” you’ll be part of Kabukimono every step of the way, from concept to release. You’ll watch the story grow, see the artwork and research come to life, witness the financial planning unfold, and follow the creative struggles and victories in real time. Together, we’ll bring Kabukimono from an idea into a cinematic experience.When will I finish Kabukimono?To do Kabukimono justice, I knew I’d need to collaborate with a Japanese or Japanese American writer who could add depth, authenticity, and subtle cultural details I might overlook. As a Japanese American character wrestling with themes of identity and colonialism, Kenji’s story needs the voices and experiences that only someone intimately familiar with these cultural layers can bring. I’m looking for a co-writer with a comedic edge to help balance the tone; thrillers can get dark and heavy, and a sense of humor can bring in unexpected levity and make the characters feel more real. | 2024-11-08T15:21:43 | en | train |
42,019,890 | Geekette | 2024-11-01T18:13:04 | I Made a Wholesome OnlyFans to Try to Make Ends Meet | null | https://www.wired.com/story/i-made-a-wholesome-onlyfans-to-try-to-make-ends-meet/ | 3 | 0 | null | null | null | no_error | I Made a Wholesome OnlyFans to Try to Make Ends Meet | 2024-10-19T07:30:00.000-04:00 | Andrew Rummer | As I leave my house on an overcast Tuesday morning to walk the dog, I’m accosted by a neighbor who cheerily calls down the street: “I hear you have an OnlyFans now!” I start to wonder if I’ve made a terrible mistake.OnlyFans has—how shall I put it—a reputation. Like many online platforms, it matches content creators with their audience. But OnlyFans is primarily known for one type of content: sex.When friends and acquaintances hear I—a 43-year-old father of two—have set up an OnlyFans account, they are intrigued. When I explain I’m only posting content that’s nonsexual and very much safe for work, their next question is “Why?” In their minds, it’s clear that “having an OnlyFans” means doing sexy stuff on the internet, for money.OnlyFans, a UK-based outfit that raked in $658 million in pretax profit last year, wants to shake this image. For every university student raising cash by sharing nudes, there’s a wholesome housewife uploading DIY tips or an up-and-coming musician posting his latest tracks, at least if you go by the accounts highlighted on the company’s blog.“Everyone’s doing a dance on the rest of social media, where it’s like, ‘Hey, you’re not supposed to show people your penis here and you’re not supposed to say crazy, wild shit,’” John Hastings, a 39-year-old Canadian comic, tells me via phone from his home in Los Angeles. On OnlyFans however, he still has people who slide into his DMs just to say “I want to see your feet, I'm not here for jokes.”Like all the safe-for-work creators I speak to, Hastings has a presence on many social networks, from Instagram to X to YouTube. The audience on OnlyFans will usually be smaller than on other sites, but followers are often more engaged and—importantly—must have a bank account linked to their profile, ready to be prized open.“It is a different world, for sure, compared to the people who are on my other social media platforms,” says Dudley Alexander, an R&B artist who releases music under the moniker Nevrmind.Alexander, 33, joined OnlyFans in 2019, before the site’s profile surged as the Covid-19 pandemic pushed many previously IRL activities online. As such, he’s a pioneer of the safe-for-work OnlyFans scene and has amassed more than 67,000 likes on his page. (OnlyFans only displays a user’s like count publicly; the follower count, which is usually higher, is hidden.)Most of those people are there for his music, but, like Hastings, he’s had some fans cross the line into asking for sexual content. “There are people who try to get me to offer other types of content and stuff like that,” he says.Alexander isn’t opposed to showing the occasional rippling bicep or taut pectoral but declines to go the full monty. “I do more of the R&B look, where it’s still tasteful but it’s not completely nude,” he explains.For the uninitiated, the OnlyFans homepage has a simple design, with lots of white space, sans-serif text in black and blue, and a few embedded videos. 
These videos feature young men or women (usually women) working on DIY projects or making recipes—they just tend to wear less clothing or show more cleavage than you might expect on a site’s front door.Venture beyond the homepage, however, and you can find some seriously X-rated content. OnlyFans declines to break out how the $5.3 billion it funneled to creators last year was split between sexual and nonsexual content. “We don’t categorize our creators into SFW/NSFW. OnlyFans is all over 18 so we don’t need to,” says an external spokesperson speaking for the company.But I want my own tiny share of those billions—and I’m prepared to risk public ridicule to get them. So, one Wednesday in late September, having packed the kids safely off to school, I set up an account of my very own.After being verified on the platform, I decide my debut will be a one-minute video simply introducing myself. I immediately bump up against OnlyFans’ discoverability problem.Whether you’re on OnlyFans for numismatics or nudes, finding the content you want is hard. The site’s search functionality is severely limited, allowing you to search the posts of only people you already follow. There’s also no algorithmically driven feed to surface posts you might like. Follow 10 comedians on Instagram and the app will be sure to push you more jokes. Follow 10 comedians on OnlyFans and you’ll still have no idea how to find an 11th.OnlyFans tells me the lack of proper search functionality is a deliberate safety feature, “so fans don’t stumble across content they don’t want to see,” says the spokesperson. Several third-party sites, with names like OnlyFinder and NosyFan, have stepped in to fill the vacuum—for those who very much do want to see.If people are going to find my OnlyFans page, I have to do what everyone else does: Promote it elsewhere. So I take a deep breath and write posts for my few thousand followers on X and LinkedIn.On usually sober LinkedIn, the vibe is riotous. Reactions include “I shudder to think what you’re posting there,” “let me know when the NSFW one launches pls,” and “you’ve got to give the people what they want, Andrew.” My post on X mainly gets responses from bot accounts called things like United Babes, suggesting we follow each other’s OnlyFans pages. The topic even comes up at my regular five-a-side football kickabout.The result of all this public attention? One measly follower on OnlyFans. To win over some more, I embrace the imperfect and upload a home-recorded video of me indulging in one of my favorite pastimes: reading a classic novel (Three Men in a Boat by Jerome K. Jerome). I also post a wind-blown update from the reservoir where I sometimes go sailing.Top content like this sees my follower count surge to a mighty five. Time to slam down the paywall and rake in some cold, hard cash. Time for my first locked post on OnlyFans.Asking subscribers to pay for nontitillating content poses an awkward question: What can I possibly offer that’s worth any money? As a 20-year media veteran, I decide to post some tips for how PR professionals can best get their message across to journalists. Gold, surely?Unfortunately, despite promoting the video across my X and LinkedIn accounts once again, I find no one willing to pay the $5 to view it.I’m starting to get tired with the platform’s interface, which makes casual browsing hard. 
A major difference between OnlyFans and other social platforms is that creators’ posts—even those that are thoroughly safe for work—are all locked unless you register and subscribe to that page. While you can browse most Instagram, YouTube, or TikTok posts without logging in or following an account, OnlyFans exists within its own walled garden.Despite these limitations, OnlyFans’ creator accounts increased by 29 percent last year to 4.1 million, according to its parent company’s latest financial filing. Fan accounts grew by 28 percent to 305 million. (The company doesn’t say how many of these accounts are active.) More than 300 million users and I can’t find even one willing to pay for my content. They can’t all be there for porn, can they?For creators, a major advantage of OnlyFans over Instagram or TikTok is the way direct payments from fans are baked in from the start. Many rival apps expect creators to post content for free, with the only reward coming from growing online clout as their follower counts climb. OnlyFans provides a smaller pool of users, but one with verified payment details who are just a click away from emptying their wallets.Another draw for creators on OnlyFans is the flexibility it gives over how they charge. Want to paywall your entire profile? Pick a price between $4.99 and $50 a month. Want to add fees to unlock specific posts? Slap on your own price tag, up to $100. Want to make fans pay to message you? All these options and more are easy to set up.“All your subscribers are invested in you. They’re interested in you, your lifestyle,” says Liam O’Neill, a 33-year-old professional golfer who has amassed 2,300 likes on his OnlyFans profile after some 18 months on the site. He also has a sponsorship deal with OnlyFans to display the company’s logo on his golf bag at tournaments. “It’s much more personable. I can easily reply to people on OnlyFans DM, whereas on Instagram it can be a bit more diluted.”O’Neill keeps his main feed free, but charges followers for suggestions on how to improve their golf swing via DMs. Alexander, the musician, provides a menu of paid options on his profile, like “$35 for me to sing you a personalized Happy Birthday message.” He experimented with paywalling his content behind a $4.99 monthly subscription, but finds fans prefer more piecemeal payments.OnlyFans takes a 20 percent cut from all these payments, accumulating a handy $1.3 billion in revenue last year. Only 41 percent of that was from recurring subscriptions, lending weight to Alexander’s personal experience. In its accounts, the company reports having just 42 staff members, leading to a jaw-dropping $31 million in revenue per employee.OnlyFans’ commission is low compared with some rivals. YouTube, for example, takes a 45 percent cut of advertising revenue generated from its longer videos, and only for creators with a sufficiently large following. The cut is even higher for posts on YouTube’s Shorts platform.Back-of-the-envelope calculations indicate that the average OnlyFans creator account netted some $1,300 last year, although research suggests the vast majority of income flows to the top 10 percent of accounts, leaving very little for those at the bottom of the pack. Many (sexual) accounts insist they make millions of dollars a year, often posting screenshots on social media to back up their claims.By contrast, most of the SFW OnlyFans creators I talked to for this article were coy about their income from the site. 
Nate Craig, a 47-year-old comedian based in Los Angeles, agreed to share his numbers, but they’re hardly inspiring: He’s made less than $100 from fans in the year or two he’s been on the platform.Craig isn’t really on OnlyFans for the occasional $5 tip, however. Like many SFW performers, the platform paid him to join. A producer working for OnlyFans approached him with a tempting offer: OnlyFans would film one of his sets and pay him a “good” sum for his trouble (Craig declined to specify how much) on the condition that the stand-up would share the OnlyFans-watermarked video widely across his other social media—and agree to post regularly on his OnlyFans page.“They were pretty straightforward about it. They were like, ‘We want to open up our site to other types of content creators,’” the comedian tells me on a video call, while bouncing his infant son on his knee. “They didn’t say this, but it was clear they wanted to expand their brand.”Despite OnlyFans’ best efforts to diversify into more SFW terrain, it still has one key problem: Not many people who want that kind of content want to see it on OnlyFans.After I promote my paywalled OnlyFans post on LinkedIn, one PR professional, the target market for my advice, replies, “I’m super curious about the contents of the video but also do not want to give OF my personal data and sign up for an account.” Another respondent says he would be “worried about people seeing OnlyFans on my card statement,” while a third, a former newsroom colleague, seems disgusted with the whole enterprise.“I want to read your thoughts on PR and journalism, but joining OnlyFans is a strict no-no,” he gripes. “I think your potential subscribers will be afraid they will be seen as consumers of the other thing that this platform offers and who only use you as a human shield.”Ouch.So what have I gained from my OnlyFans adventure? After a couple of weeks, I’m sad to report I have earned a grand total of zero dollars. I didn’t even get anyone sliding into my DMs to propose something inappropriate. But I did get something fun to talk about the next time I bump into the neighbors.Update: 10/23/2024 10:50 AM EDT: Statements attributed to a spokesperson speaking on behalf of OnlyFans have been clarified. | 2024-11-08T02:36:36 | en | train |
42,019,891 | gitroom | 2024-11-01T18:13:10 | Show HN: Convert any website to a LinkedIn carousel | null | https://linkedincarouselgenerator.com | 4 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,019,920 | karlschlosshax | 2024-11-01T18:15:10 | You can now try Microsoft's more modern Windows Hello UI | null | https://www.theverge.com/2024/11/1/24285558/microsoft-windows-hello-ui-passkeys-beta-testing | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,019,921 | TheAnkurTyagi | 2024-11-01T18:15:14 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,019,925 | deephire | 2024-11-01T18:15:33 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,019,961 | tosh | 2024-11-01T18:18:30 | Systemwide security flaw has been hiding in macOS for 2 decades | null | https://twitter.com/stephancasas/status/1852295601519829014 | 3 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,019,970 | neo_123 | 2024-11-01T18:19:23 | How to Easily Share ComfyUI Online | null | https://pinggy.io/blog/how_to_easily_share_comfyui_online/ | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,019,974 | atlasunshrugged | 2024-11-01T18:19:35 | AI for Startups | null | https://a16z.com/ai-for-startups/ | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,019,988 | mostlystatic | 2024-11-01T18:21:09 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,020,016 | nhatcher | 2024-11-01T18:23:57 | Math's 'Bunkbed Conjecture' Has Been Debunked | null | https://www.quantamagazine.org/maths-bunkbed-conjecture-has-been-debunked-20241101/ | 25 | 3 | [
42051318,
42051256,
42051262,
42051312
] | null | null | null | null | null | null | null | null | null | train |
42,020,041 | icyou780 | 2024-11-01T18:26:35 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,020,067 | joebig | 2024-11-01T18:28:37 | Sarah Baartman | null | https://en.wikipedia.org/wiki/Sarah_Baartman | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,020,069 | mempko | 2024-11-01T18:28:39 | Fire and Fury Podcast Episode 22: Jeffrey Epstein and Donald Trump | null | https://podcasts.apple.com/us/podcast/episode-22-jeffrey-epstein-and-donald-trump/id1750757108?i=1000675243446 | 8 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,020,074 | keiran_cull | 2024-11-01T18:29:05 | Broken trust: How a Clearwater man took $100M from funds for disabled | null | https://www.tampabay.com/news/business/2024/08/02/center-special-needs-trust-govoni-bankruptcy/ | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,020,094 | robg | 2024-11-01T18:30:46 | null | null | null | 4 | null | [
42020787
] | null | true | null | null | null | null | null | null | null | train |
42,020,101 | siraben | 2024-11-01T18:31:47 | null | null | null | 6 | null | [
42020572
] | null | true | null | null | null | null | null | null | null | train |
42,020,106 | Tomte | 2024-11-01T18:32:34 | UFO50, the Only Game You'll Ever Need | null | https://bottomfeeder.substack.com/p/ufo50-the-only-game-youll-ever-need | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,020,141 | happyhalloween | 2024-11-01T18:35:25 | Interactive, visually appealing comparison of Trump and Harris on the issues | null | https://visualizing2024.org | 4 | 1 | [
42020142
] | null | null | null | null | null | null | null | null | null | train |
42,020,146 | datadeft | 2024-11-01T18:35:58 | Recraft V3 SVG, a text-to-image model with the ability to generate SVG images | null | https://replicate.com/recraft-ai/recraft-v3-svg | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,020,154 | PaulHoule | 2024-11-01T18:37:39 | BrainTransformers: SNN-LLM | null | https://arxiv.org/abs/2410.14687 | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,020,167 | noch | 2024-11-01T18:38:44 | AI for Startups | null | https://blogs.microsoft.com/on-the-issues/2024/11/01/ai-for-startups/ | 3 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,020,169 | rntn | 2024-11-01T18:39:03 | Facial Recognition That Tracks Suspicious Friendliness Coming to Stores Near You | null | https://gizmodo.com/facial-recognition-that-tracks-suspicious-friendliness-is-coming-to-a-store-near-you-2000519190 | 3 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,020,175 | GavinAnderegg | 2024-11-01T18:39:56 | Using WordPress Is Risky | null | https://anderegg.ca/2024/11/01/using-wordpress-is-risky | 8 | 3 | [
42020519
] | null | null | null | null | null | null | null | null | null | train |
42,020,177 | ndiddy | 2024-11-01T18:40:05 | Live Nation decision will force companies to rethink consumer arbitration rules | null | https://www.reuters.com/legal/litigation/column-live-nation-decision-will-force-companies-rethink-consumer-arbitration-2024-10-29/ | 3 | 0 | null | null | null | http_other_error | reuters.com | null | null | Please enable JS and disable any ad blocker | 2024-11-08T13:02:40 | null | train |
42,020,186 | gmays | 2024-11-01T18:40:30 | How Inland Waterways Work [video] | null | https://www.youtube.com/watch?v=Uqs-f862YaU | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,020,203 | jwarden | 2024-11-01T18:42:42 | Quadratic vs. Pairwise | null | https://blog.zaratan.world/p/quadratic-v-pairwise | 4 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,020,209 | doener | 2024-11-01T18:43:48 | US data broker declares bankruptcy after data leak | null | https://posteo.de/en/news/us-data-broker-declares-bankruptcy-after-data-leak | 3 | 1 | [
42020260
] | null | null | null | null | null | null | null | null | null | train |
42,020,212 | uday_singlr | 2024-11-01T18:44:02 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,020,221 | lapnect | 2024-11-01T18:44:28 | Essential Forms | null | https://www.abyme.net/revue/essentialforms/ | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,020,226 | gmays | 2024-11-01T18:44:55 | A new theory suggests mistakes are an essential part of being alive | null | https://aeon.co/essays/a-new-theory-suggests-mistakes-are-an-essential-part-of-being-alive | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,020,245 | gnabgib | 2024-11-01T18:46:59 | Revealing causal links in complex systems | null | https://news.mit.edu/2024/revealing-causal-links-complex-systems-1101 | 42 | 9 | [
42022480,
42026394,
42026964,
42026746
] | null | null | null | null | null | null | null | null | null | train |
42,020,247 | pgr0ss | 2024-11-01T18:47:01 | DuckDB over Pandas/Polars | null | https://www.pgrs.net/2024/11/01/duckdb-over-pandas-polars/ | 56 | 33 | [
42056827,
42056768,
42056698,
42058194,
42056943,
42060806,
42057064,
42056623
] | null | null | missing_parsing | DuckDB over Pandas/Polars | 2024-11-01T00:00:00-07:00 | Paul Gross |
November 1, 2024
Since my previous post on DuckDB (DuckDB as the New jq), I’ve been continuing to use and enjoy DuckDB.
Recently, I wanted to analyze and visualize some financial CSVs, including joining a few files together. I started out with Polars (which I understood to be a newer/better Pandas). However, as someone who doesn’t use it frequently, I found the syntax confusing and cumbersome.
For example, here is how I parsed a Transactions.csv and summed entries by Category for rows in 2024 (simplified example, code formatted with Black):
import datetime

import polars as pl

df = pl.read_csv("Transactions.csv")
df = (
    df.select("Date", "Category", "Amount")
    .with_columns(
        pl.col("Date").str.to_date("%m/%d/%Y"),
        pl.col("Amount")
        .map_elements(lambda amount: amount.replace("$", ""))
        .str.to_decimal(),
    )
    .filter(pl.col("Date") > datetime.date(2024, 1, 1))
    .group_by("Category")
    .agg(pl.col("Amount").sum())
)
print(df)
Things that tripped me up:
The syntax for selecting and transforming columns
Telling it how to parse the month/day/year column
Writing a lambda to strip out the $ (maybe there is a better way to do this?)
The mix of df. and pl. calls, such as calling df.group_by but passing in pl.col(...).sum(...) as the argument to the aggregation
I’m sure this is straightforward for someone who uses these tools frequently. However, that’s not me. I play around for a bit and then come back to it weeks or months later and have to relearn.
In contrast, I write SQL day in and day out, so I find it much easier. Once I switched to DuckDB, I could write much more familiar (to me) SQL, while still using python for the rest of the code:
import duckdb

results = duckdb.sql(
    """
    select
        Category,
        sum(replace(Amount, '$', '')::decimal) as Amount
    from read_csv('Transactions.csv')
    where Date > '2024-01-01'
    group by Category
    """
)
results.show()
Note that DuckDB automatically figured out how to parse the date column.
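If you're curious what types DuckDB actually inferred, you can ask it. A minimal sketch, assuming the same Transactions.csv as above (the expected types in the comments are what I'd anticipate for this data, not something verified here):

import duckdb

# Sketch: list the column names and types DuckDB inferred for the CSV.
duckdb.sql("describe select * from read_csv('Transactions.csv')").show()
# On data like this I'd expect Date to come back as DATE and Amount as VARCHAR,
# since the leading '$' keeps Amount from being sniffed as a number.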
And I can even join multiple CSVs together with SQL and add more complex WHERE conditions:
results = duckdb.sql(
    """
    select
        c.Group,
        sum(replace(t.Amount, '$', '')::decimal) as Amount
    from read_csv('Transactions.csv') t
    join read_csv('Categories.csv') c on c.Category = t.Category
    where t.Date > '2024-01-01'
      and c.Type in ('Income', 'Expense')
    group by c.Group
    """
)
results.show()
In summary, I find DuckDB powerful, easy, and fun to use.
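And it doesn't have to be an either/or choice: when I do want a dataframe again (say, for plotting), the DuckDB result can be handed straight back. A minimal sketch, assuming a reasonably recent duckdb Python package where query results expose .df() (pandas) and .pl() (Polars) converters, and that pandas/polars are installed:

import duckdb

# Sketch: pull a DuckDB query result back into a dataframe when needed.
rel = duckdb.sql("select Category, Amount from read_csv('Transactions.csv')")
pandas_df = rel.df()   # pandas DataFrame
polars_df = rel.pl()   # Polars DataFrame
print(polars_df.head())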
Update:
A Reddit comment showed me how to remove the map_elements:
pl.col("Amount").str.replace("\\$", "").str.to_decimal()
But I think the double use of .str is a good example of how this is complex to me as a casual user.
Update 2:
Another Reddit comment showed how to do “a shorter (no intermediary steps) and more efficient (scan) version”:
df = (
    pl.scan_csv("Transactions.csv")
    .filter(pl.col("Date").str.to_date("%m/%d/%Y") > datetime.date(2024, 1, 1))
    .group_by("Category")
    .agg(pl.col("Amount").str.replace("\\$", "").str.to_decimal().sum())
    .collect()
)
print(df)
Discussions:
There are some good discussions about this post, especially around the increased composability of Polars/Pandas vs SQL and better ways to write the Polars code:
https://lobste.rs/s/rlkltp/duckdb_over_pandas_polars
https://www.reddit.com/r/DuckDB/comments/1ghd6t4/duckdb_over_pandaspolars/
| 2024-11-08T21:01:48 | null | train |
42,020,253 | NoSEVDev | 2024-11-01T18:47:28 | The Elimination Strategy – Why More Makes Your SaaS Worth Less | null | https://slimsaas.com/blog/elimination-strategy/ | 34 | 31 | [
42020499,
42020464,
42020502,
42020682,
42021240,
42021271,
42020545,
42020475,
42020885,
42020593,
42020603,
42021561,
42021276
] | null | null | null | null | null | null | null | null | null | train |
42,020,271 | doener | 2024-11-01T18:49:42 | Kingdom Uncovered: Inside Saudi Arabia | null | https://www.itv.com/presscentre/ep1weekweek-44-2024-sat-26-oct-fri-01-nov/kingdom-uncovered-inside-saudi-arabia | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,020,281 | mmooss | 2024-11-01T18:50:34 | The Online Cultural and Historical Research Environment (Ochre) | null | https://digitalculture.uchicago.edu/platforms/ochre-overview/ | 1 | 0 | null | null | null | no_error | OCHRE Overview | Forum for Digital Culture | null | null | OCHRE Is Comprehensive, Scalable, and Sustainable
OCHRE is a comprehensive computational platform for all stages of research. The alternative is to employ an ad hoc collection of separate software tools for data management, statistics, image processing, geospatial mapping, online publication, and so on. But that approach requires the cumbersome transferring of data from one piece of software to another using intermediate file formats and chunks of code (scripts) that most researchers either do not know how to write or do not want to debug and maintain. The result is a series of time-consuming and error-prone tasks in which it is easy to lose track of the many pieces of information accumulated in a typical project. By contrast, OCHRE users have a comprehensive view of all their data in all stages of the project and an intuitive user interface with which to view, edit, analyze, and publish the data, without having to code their own scripts or manually transfer data from one piece of software to another.
OCHRE is comprehensive because the innovative semistructured graph database used on the back end of the platform is based on a foundational ontology (meta-ontology) that is universal in scope and can accommodate any number of project-specific or domain-specific ontologies while faithfully preserving each project’s own terminology. The database schema that implements this foundational ontology is highly scalable, being able to accommodate any number of research projects with high efficiency. The database runs on Tamino XML Server, an enterprise-class native-XML database management system (DBMS) that provides sophisticated indexing and fast querying of the data using XQuery.
OCHRE is not only comprehensive and scalable but also sustainable because it is institutionally supported and maintained by the University of Chicago’s Forum for Digital Culture (a unit of the Division of the Humanities) in collaboration with the University Library. The Digital Library Development Center of the University Library provides system administration and multi-generation data back-up for the OCHRE core database and for the OCHRE publication database and external resource server maintained by the University of Chicago. Other institutions may choose to host their own publication database and external resource server, although scholars everywhere are free to use the ones hosted by the University of Chicago.
OCHRE has been in operation since 2001. It has been thoroughly tested and enhanced over the years in close consultation with academic researchers in a wide range of fields in the humanities and social sciences, and in some of the natural sciences (e.g., paleontology, population genetics, and astronomy). OCHRE is currently being used by more than 100 projects in the United States, Canada, Europe, and the Middle East. As of June 2023 there were approximately 1,000 user accounts for researchers and their students to add data for their projects. The database contains approximately 10 million indexed items representing over 100 terabytes of data. This scale of usage has enabled rigorous real-world testing of the software for a wide range of use cases and has demonstrated the system’s sustainability over a long period of time. In the future, we expect to add many more projects and users.
OCHRE Is Open and Freely Accessible
In addition to being scalable and sustainable, OCHRE is “open” and accessible in the following ways:
Open Standards
The OCHRE platform is entirely based on non-proprietary “open” standards published by the World Wide Web Consortium (W3C), i.e., XML, XML Schema, XSLT, XQuery, HTML, RDF, and SPARQL, supplemented by JSON, which is an ISO standard, and IIIF, a set of image data standards published by a global consortium of research libraries.
Open Access
All the data that projects choose to publish from the OCHRE core database is available on an open-access basis via Web apps provided by the Forum for Digital Culture, with no paywall, subject to a Creative Commons license that requires non-commercial use with attribution to the creators of the data. Published data is made available in widely supported open-standard data formats (XML and JSON plus IIIF for image data).
Open Source
All the apps provided on the front end of the OCHRE platform are open source. These are JavaScript/HTML/CSS apps for viewing, analyzing, and annotating data published by projects from the core database on the back end of the platform. The back end itself contains a mixture of open-source software combined with proprietary software for which there is no good open-source alternative. This is the normal practice when building enterprise-class database systems with high scalability and high availability.
A word about open source: Open-source software is obviously desirable whenever it is available and sufficient for the task at hand, in order to minimize financial barriers to access that inhibit non-commercial academic use of the system by scholars and students. But very few people use only open-source software throughout the entire software stack. This is because open-source software that remains usable over the long term is not “free.” Someone has to be paid to maintain it and document it, thus open source alone does not ensure accessibility. A vast amount of open-source software ends up orphaned and unusable, as we have seen over and over again in digital humanities when a project’s funding runs out or its leaders retire, causing the website to go dark. This has led academic funding agencies to question whether it is the best use of their resources to pay for large numbers of boutique software applications that end up being unsustainable, whether open source or not.
In the case of OCHRE, the cost of licensing proprietary software from commercial vendors is borne by the University of Chicago for the benefit of its own faculty’s research. This benefit is extended to non-Chicago scholars at a minimal cost due to the economies of scale engendered by sharing a common platform with a single code base. Scholars use this shared platform free-of-charge, although they may be asked to pay a modest fee or apply for a grant to help cover the cost of data conversion, data hosting, and user training for their own projects. The permanent full-time staff of the Forum for Digital Culture maintain the OCHRE platform and facilitate its use over the long term.
OCHRE Embraces Multiple Ontologies
OCHRE was initially developed for use in archaeology and philology. These fields of research have widely shared empirical methods but they are characterized by a high degree of ontological diversity, such that similar phenomena are described in different ways by different researchers. In fact, ontological diversity is characteristic of almost all fields of study in the humanities and social sciences and is found in varying degrees even in the natural sciences. (An “ontology,” in the sense intended here, is a formal specification of the concepts and relations in a given domain of knowledge. A hierarchical taxonomy is a common, and relatively simple, kind of ontology. See the OCHRE Ontology page of this website for a more detailed discussion of ontologies and the philosophical questions they raise.)
For this reason, computational tools for working with the vast body of scholarly knowledge now expressed in digital form must cope with the reality that such knowledge has been recorded by many different people using divergent ontologies. Each ontology reflects the nomenclature and conceptual distinctions relevant to a particular research community, and perhaps also reflects the idiosyncratic views of an individual researcher. No single ontology, no matter how complex and ramified, will be suitable for all purposes. There is an endless array of conceptual possibilities depending on the subject matter and the questions being asked, not to mention the linguistic traditions and historically situated perspectives of the scholars involved.
It is important to remember that ontological diversity is not a problem in itself. Indeed, it is inherent in the practice of research because different ontologies reflect different interpretive frameworks and research agendas; they are not just the result of sloppy thinking or individual quirks and egotism. Ontological diversity is not a vice to be eliminated, in a misguided attempt to standardize human ways of knowing, but rather a defining virtue of critically minded communities of thought that are open to multiple perspectives.
The practices of digital knowledge representation that emerged in large governmental and business organizations suppress ontological diversity. This reflects the fact that these are hierarchical organizations with central semantic authorities that mandate standard ontologies to be used throughout the organization. Unfortunately, these diversity-suppressing digital practices have permeated academic research, even though most scholars lack a central semantic authority, especially in the humanities. But since the vast majority of software development is done within and for governmental and business organizations, it is not surprising that most designers of information storage and retrieval systems assume without question that ontological standards are necessary, even in academic settings. These standards are typically expressed as a single prescribed database schema for each predetermined class of structured data, or perhaps as a set of prescribed markup tags for natural-language texts of a given type.
Ontological prescriptions of this sort cause problems for researchers because they force them to adopt standard ontologies that employ terms and distinctions which may not be suitable for their own work. On the other hand, allowing people to use their own ontologies inhibits automated integration and comparison of data among research projects. For this reason, a mechanism for automated querying and comparison that spans multiple ontologies is required. What is needed is database software that does not suppress ontological diversity via forced standardization but instead embraces it, while also facilitating semantic data integration across ontological boundaries. Data integration can be achieved in the face of diversity by making it easy for scholars to create semantic mappings from one ontology to another — not through additional software coding, which is prohibitively expensive, but by letting them add thesaurus relationships between the taxonomic terms of different ontologies within an intuitive user interface, perhaps with the assistance of AI (deep learning) tools. The querying software can then use these thesaurus relationships to do automatic query expansion, retrieving semantically comparable information from many projects at once.
This is what OCHRE was designed to do. It was engineered from the outset to respect the deeply rooted practices of semantic autonomy in the humanities by directly modeling each project’s own terminology and conceptual distinctions, avoiding any attempt forcibly to standardize ontologies across multiple projects, while still permitting semantic mappings across projects for large-scale querying and analysis. OCHRE thereby upholds the hermeneutical principle that meaning depends on context. In our view, software for digital humanities should acknowledge this hermeneutical principle and support the demand of modern scholars for the freedom to describe phenomena of interest in the light of their own critical judgments without being forced to conform to someone else’s ontology due to its being inscribed in the very structure of the computer system.
OCHRE achieves this goal by means of a foundational ontology (meta-ontology) defined in terms of very basic conceptual categories such as space, time, agency, and discourse. This ontology is implemented in the logical schema of a semistructured graph database, which can thus model any local or project-specific ontology within a universal ontological structure. As a result, OCHRE is very flexible and customizable. It does not force researchers to conform to a predetermined recording system but lets them use their own terminologies and conceptual distinctions. And it does so while providing powerful mechanisms for ingesting and integrating existing data; for querying and analyzing data; and for publishing and archiving a project’s data in an open, standards-compliant fashion.
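As a purely illustrative sketch of the query-expansion idea described above (the terms, data structure, and code here are invented for exposition and are not OCHRE's internal representation):

# Hypothetical thesaurus mappings between two projects' taxonomic terms.
thesaurus = {
    "cooking pot": {"olla", "chytra"},
    "hearth": {"fire installation", "focus"},
}

def expand(term: str) -> set[str]:
    """Return the query term together with every term mapped to it."""
    expanded = {term}
    for canonical, equivalents in thesaurus.items():
        if term == canonical or term in equivalents:
            expanded |= {canonical, *equivalents}
    return expanded

# A query for one project's term automatically retrieves the other projects' terms too.
print(expand("olla"))   # {'olla', 'cooking pot', 'chytra'}

A real implementation would perform the expansion inside the query engine itself, but the principle is the same: follow the stored mappings, then query.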
OCHRE Uses Recursion to Model Space, Time, Language, and Taxonomy
Archaeologists and art historians study the material traces of human cultures. Philologists and literary critics study the historical development and interconnections of languages, literatures, and systems of writing. Linguists and philosophers study human linguistic capacities and the structure of language in general. All these disciplines exhibit, not just ontological diversity, but a high proportion of relatively unstructured and semistructured information in the form of qualitative descriptions and natural-language texts. This kind of information is best represented digitally by means of open-ended hierarchies (trees) of recursively nested entities rather than by means of rigid tables that have one row for each entity and a predetermined column for each property of the entities represented in the table.
Archaeology and philology also entail close attention to geographical and chronological variations in the phenomena being studied. And when dealing with the spatial and temporal relations among entities, researchers need mechanisms for representing not just absolute locations in space and time, in terms of numeric map coordinates and calendar dates, but also the relative placement of spatial objects or temporal events with respect to other spatial and temporal phenomena. This, too, is best accomplished computationally by representing spatial units of observation and temporal periods by means of open-ended hierarchies of recursively nested entities of the same kind — spatial or temporal, as the case may be — with the same structures at each level of the hierarchy regardless of scale, allowing the use of powerful recursive programming techniques to search and analyze the hierarchies.
Large collaborative projects in archaeology and philology have been test beds for OCHRE and provide examples of its use. But we have found that software methods developed to deal with the spatial, temporal, linguistic, and taxonomic complexity of archaeological and philological data are applicable to a much wider range of research. This is so because the OCHRE software is based on powerful conceptual abstractions expressed in an innovative database structure characterized by overlapping recursive hierarchies of highly atomized entities. OCHRE’s hierarchical and recursive data model can flexibly represent scholarly knowledge of all kinds without sacrificing the power of modern databases because it is implemented, not in an unconstrained web of knowledge that is difficult to search efficiently, as is common in simpler graph databases, but by means of well-indexed and atomized database items that conform to a predictable hierarchical schema, thus enabling semantically rich and efficient queries. Accordingly, OCHRE is now being used, not just in archaeology and philology, but in many other areas of the humanities and social sciences, and also in branches of the natural sciences where spatial, temporal, and taxonomic variation are key concerns, such as population genetics (comparing ancient and modern DNA), paleoclimatology, and other kinds of environmental research.
OCHRE Ingests and Manages All Kinds of Research Data
OCHRE supports a wide range of digital formats and data types: textual, numeric, visual, sonic, geospatial, etc. A project’s textual and numeric data are ingested into the OCHRE database, where they are atomized and manipulated as individual keyed-and-indexed database items. OCHRE can automatically import textual and numeric data stored in comma-separated value files (CSV), Excel spreadsheets (XLSX), Word documents (DOCX), and other XML documents (e.g., digital texts in the TEI-XML format).
In contrast, a project’s 2D images, 3D models, GIS mapping data, PDF files, and audio/video clips are not stored directly in the central OCHRE database but are catalogued in the central database as “external resources” to be fetched as needed from external HTTP or FTP servers. External resources are linked to keyed-and-indexed items in the central database via their URLs and are fetched as needed and displayed seamlessly together with a project’s textual and numeric data, which is stored internally in the database. Items in the database can be linked to specific locations within an image or other resource (e.g., a pixel region in a photograph or a page in multi-page document).
OCHRE uses ArcGIS Online and the ArcGIS Runtime SDK (embeddable software components) to provide a powerful mapping and spatial analysis capability that is tightly integrated with other data. This is especially important for archaeology but is necessary also for many other kinds of research. Spatial-containment relationships and chronological systems of temporal relationships (periods and sub-periods) can be represented in a way that makes it easy to work with both relative and absolute dates and lets users visualize temporal sequences via graphical timelines. More generally, relationships of all kinds — temporal, spatial, social, linguistic, etc. — can be modeled and visualized as node-arc network graphs and can be used in database queries that incorporate the extrinsic relationships among entities as well as their intrinsic properties.
OCHRE Has Advanced Capabilities for Textual Research
For textual projects, OCHRE has sophisticated capabilities for representing texts written in any language and writing system, modern or ancient. The epigraphic (physical) and discursive (linguistic) dimensions of a text are carefully distinguished in OCHRE as separate recursive hierarchies linked to one another by cross-hierarchy relations between epigraphic units (physical marks of inscription) and the discourse units (linguistic meanings) that a reader recognizes when reading the text. This distinction between the epigraphic and discursive dimensions of a text is necessary for many kinds of scholarly analysis but is muddled in the Text Encoding Initiative (TEI) encoding scheme, for example.
Moreover, in the OCHRE database the writing system itself is represented separately from texts that use it to avoid confusion between the ideal signs of a writing system as understood abstractly and the physical instantiations of the signs as they appear in particular texts. Writing systems are represented as sets of ideal signs modeled separately from the epigraphic hierarchies of texts in which each sign is instantiated by some allograph or other of it. This is necessary when dealing with ancient logosyllabic writing systems such as Mesopotamian cuneiform or Egyptian hieroglyphs, whose signs have many possible phonetic values and allographic variants, and it is quite useful even when dealing with alphabetic writing systems, which often have allographic variations of scholarly interest across the texts in which they are instantiated.
Finally, in addition to distinguishing scholarly analyses of the epigraphic hierarchy of a text from analyses of its discourse hierarchy, and also distinguishing the signs of a writing system from the epigraphic units in which these signs are instantiated, OCHRE represents the lexicon of each language or dialect as a separate set of ideal lexical units contained in dictionary lemmas. The lexical units of a language are instantiated by the discourse units of texts written in that language. A word-level discourse unit is normally linked to the epigraphic units that were read to produce it and also to the particular grammatical form of the word within a dictionary lemma.
This allows the software to compile automatically for each lemma all the grammatical forms of the word and all the orthographic and allographic variations in the spelling of each grammatical form of the word, together with textual citations of the use of each form in context generated automatically from the texts in which they appear. OCHRE can thus generate from its database a dictionary view that looks like an OED-style corpus-based dictionary, constructed dynamically from the underlying text editions with no error-prone duplication of information. Text editions are closely interwoven with dictionaries, on the one hand, and with analyses of writing systems, on the other, making it easy to explore computationally the entire web of connections of interest to philology.
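The separation just described can be sketched, very roughly, in a few lines of code; the class and field names below are invented for illustration and do not reflect OCHRE's actual schema:

from dataclasses import dataclass, field

@dataclass
class EpigraphicUnit:
    """A physical mark of inscription, instantiating an ideal sign of the writing system."""
    sign: str        # the ideal sign, e.g. a named cuneiform sign
    allograph: str   # the concrete graphic variant written in this text

@dataclass
class DiscourseUnit:
    """A word as read, belonging to the discourse hierarchy of the text."""
    surface_form: str
    read_from: list[EpigraphicUnit] = field(default_factory=list)  # cross-hierarchy links
    lemma: str = ""              # the dictionary lemma in the language's lexicon
    grammatical_form: str = ""   # the particular form of the word within that lemma

word = DiscourseUnit(
    surface_form="sharrum",
    read_from=[EpigraphicUnit("shar", "shar-1"), EpigraphicUnit("ru", "ru-2"), EpigraphicUnit("um", "um-1")],
    lemma="sharru",
    grammatical_form="nominative singular",
)

From many such linked units, a corpus-based dictionary view of the kind described above can be compiled simply by grouping discourse units by lemma and grammatical form.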
The value of the OCHRE data model for textual studies is illustrated by the multi-project Critical Editions for Digital Analysis and Research (CEDAR) initiative at the University of Chicago. The CEDAR projects are producing online critical editions of a wide range of culturally influential or “canonical” texts — ancient, medieval, and modern — written in diverse languages and writing systems and transmitted over long periods in multiple copies and translations. Textual variation in long-lasting textual traditions of this kind can be modeled computationally as a textual “space of possibilities” using OCHRE’s basic model of overlapping recursive hierarchies of entities with cross-hierarchy relations between entities in different hierarchies, which in this case are hierarchies of epigraphic units and discourse units.
In contrast, most software for digital humanities conflates the epigraphic and discursive dimensions of a text, which is usually represented by a single hierarchy of textual components, as in the Text Encoding Initiative (TEI) markup scheme. However, this yields an inadequate digital representation of the conceptual entities and relations that scholars employ when constructing critical editions. For more on this, see the 2014 article in Digital Humanities Quarterly entitled “Beyond Gutenberg: Transcending the Document Paradigm in Digital Humanities” by David Schloen and Sandra Schloen.
OCHRE Is Compatible with the Semantic Web and Linked Open Data
As was noted above, OCHRE is based on the open standards published by the World Wide Web Consortium (W3C), the organization responsible for the technical specifications of the Web itself and of the Semantic Web. OCHRE can expose and archive the data contained in its back-end database using the standard graph-data format of the Semantic Web, which is based on the W3C’s Resource Description Framework (RDF) and Web Ontology Language (OWL).
RDF represents knowledge in the form of subject-predicate-object “triples,” which constitute statements about entities. In terms of mathematical graph theory, a collection of RDF triple-statements is a labeled, directed graph. RDF triples can be queried using the W3C’s SPARQL querying language and can be easily imported into, or exported from, any graph database system that supports the Semantic Web standards.
RDF triples are well suited for the long-term archiving of OCHRE data in a standardized format that preserves all the conceptual distinctions and relationships projects have made when entering their data into the OCHRE database. RDF triples can be implemented in a number of different syntactical forms (e.g., in XML notation or Turtle notation) and do not depend on any particular software or operating system, so an RDF archive exported from the OCHRE database does not depend on the OCHRE software.
OCHRE can easily generate RDF triples in any notation that is desired because it stores data in a structurally identical way, as triple-statements about entities of interest, although in OCHRE these are called item-attribute-value triples rather than subject-predicate-object triples. OCHRE can also expose its data dynamically as SPARQL endpoints for other software to use. Thus OCHRE is fully compatible with the Semantic Web and the Linked Open Data approach that is based on the Semantic Web standards.
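For illustration, here is a minimal example of such triples using the open-source rdflib package (shown only as a convenient tool, not as part of OCHRE itself); the namespace, item identifier, and attribute names below are invented and are not OCHRE's published vocabulary:

from rdflib import Graph, Literal, Namespace

EX = Namespace("https://example.org/ochre/")   # hypothetical namespace
g = Graph()

item = EX["item/vessel-123"]
# OCHRE's item-attribute-value corresponds to RDF's subject-predicate-object.
g.add((item, EX["attribute/material"], Literal("ceramic")))
g.add((item, EX["attribute/period"], Literal("Iron Age II")))

print(g.serialize(format="turtle"))            # Turtle, one common RDF notation

# The same graph can be queried with SPARQL:
query = """
    SELECT ?value
    WHERE { ?item <https://example.org/ochre/attribute/material> ?value }
"""
for row in g.query(query):
    print(row.value)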
| 2024-11-08T02:43:28 | en | train |
42,020,284 | null | 2024-11-01T18:50:59 | null | null | null | null | null | null | [
"true"
] | null | null | null | null | null | null | null | null | train |
42,020,294 | metadigm | 2024-11-01T18:52:04 | Show HN: Termfu – A terminal debugger with custom layouts | Hi HN,<p>First "Show HN" post!<p>Termfu is a fast, multi-language TUI debugger that allows you to create and switch between custom layouts.<p>I couldn't find a terminal debugger that fulfilled my particular needs and desires, so I create one that does for the most part. On the UI spectrum, it's somewhere between GDB's TUI and Vimspector. It currently supports GDB and PDB. It uses customizable, single-key bindings, which can be documented on screen via their (t)itles. All window data is scrollable. I decided to have some fun with the layout creation process, which uses "key-binding ASCII art" to set the header command title order as well as the window positions and size ratios. For example, a configuration for a single layout would look something like this:<p><pre><code> >h
abc
dEfghi
>w
jnnnll
Mnnnll
onnnll
PPPqqq
</code></pre>
It's still pretty rough around the edges, but it has become a valuable tool for me, so I thought I'd go ahead and post it here. Please note that I am far from a C expert. Somewhere along the line, I got it in my head that a C project is a rite of passage, so here we are. If I hadn't had Gookin's Ncurses guide to get me started, I might already be dead. That being said, feedback is very much appreciated. | https://github.com/jvalcher/termfu | 4 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,020,302 | doener | 2024-11-01T18:53:10 | Facebook Took More Than $1M for Ads Sowing Election Lies | null | https://www.forbes.com/sites/emilybaker-white/2024/10/31/facebook-ads-election-misinformation/ | 14 | 5 | [
42021935,
42021166,
42021130,
42021231
] | null | null | null | null | null | null | null | null | null | train |
42,020,334 | mikece | 2024-11-01T18:56:02 | Wine 10.0 Release Plans Aim for Mid-January Release | null | https://www.phoronix.com/news/Wine-10.0-Release-Plans | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,020,335 | doener | 2024-11-01T18:56:12 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,020,338 | socialjulio | 2024-11-01T18:56:21 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,020,345 | hambandit | 2024-11-01T18:57:07 | Roaring Bitmap Compression | null | https://arxiv.org/abs/1603.06549 | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,020,349 | starfarer | 2024-11-01T18:57:27 | Hyper-Personalized AI Movie Trailer Generation with FakeMe (iOS) | null | https://www.fakemeapp.com/new-feature-announcement/ | 1 | 0 | null | null | null | no_error | Introducing Hyper-Personalized AI Movie Trailer Generation with FakeMe (iOS) – Now Live – FakeMe | null | null |
HomeIntroducing Hyper-Personalized AI Movie Trailer Generation with FakeMe (iOS) – Now Live
Introducing Hyper-Personalized AI Movie Trailer Generation with FakeMe (iOS) – Now Live
By the FakeMe Development Team
A scene from our “Roman War” theme demonstrating the capability to introduce the user as a main character.
Lights, camera, AI action! We’re thrilled to announce the latest breakthrough feature on our iOS app, FakeMe: Hyper-Personalized AI Movie Trailer Generation! This feature has been a labor of love and technical dedication, and after months of development, we’re ready to give you an exclusive look.
Imagine putting yourself in the spotlight of a professionally crafted movie trailer—all without stepping foot in a studio or hiring a production team. With a few simple steps, you can now create fully AI-generated movie trailers with everything from story, narration, music, and video generated based on your input.
Here’s a quick overview of what this feature can do, how it works, and what’s coming next for FakeMe.
How Does It Work?
Our AI-powered movie trailer generation process is designed for simplicity but packed with high-level tech. Here’s how you get started:
Upload Your Images: Provide five images of yourself. These images allow us to train a LORA (Low-Rank Adaptation), embedding your likeness into the generated visuals.
Select Your Theme: Want to see yourself as a superhero, a spy, or maybe a knight in an epic fantasy war? Choose your theme, and our custom AI system takes it from there. You can also write your own stories and custom-design your own theme; our LLM will take it from there and draft something visually unique for you.
Sit Back and Enjoy: Within moments, our AI crafts a trailer featuring you in the spotlight, building an immersive storyline around the selected theme.
The trailers are a powerful blend of AI technologies, brought together by a 90% open-source tech stack. We’ve fine-tuned each component for quality and consistency, all while keeping it accessible to our users.
A scene from our "Pirate" theme. The mood & scenery change dramatically.
Tech Behind the Magic
Here’s a quick breakdown of the tech stack we’re using to bring this experience to life:
Story: Crafted by Llama 3.1 70B, the latest iteration from Llama AI. It shapes a coherent and engaging storyline based on the chosen theme.
Images: Generated through Flux with custom LORA training, designed specifically to incorporate user likeness in high fidelity.
Narration: We’re using F5-TTS, a state-of-the-art text-to-speech system, to deliver a custom voice clone narration that aligns perfectly with the tone of the trailer.
Sound Effects: Created by FoleyCrafter for immersive and authentic audio that enhances the cinematic feel.
Video: The visuals are generated by CogVideoX, with KlingAI supplementing certain scenes to overcome some limitations in CogVideoX and ensure a fluid experience.
Trailer Consistency Guidance: Since we wanted control over the quality and consistency of the output, we give the LLM a predefined trailer template. Within the template we define the high-level structure of the AI trailer (high-level story, transitions, narrator start times, etc.). The LLM takes this as guidance for the custom AI trailer, adjusting the structure to the custom query. While this limits the creativity of the output in some ways, it allows us to better control the user outcome. At the moment we have created only one guidance template – but more will be added in the future, yielding even more creative output in storytelling and trailer generation.
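A simplified, made-up example of the kind of structure such a template can define (the field names here are illustrative only):

# Hypothetical trailer guidance structure; every field name is invented.
trailer_guidance = {
    "total_length_s": 120,
    "acts": [
        {"name": "setup",    "scenes": 3, "narration_start_s": 2,  "mood": "calm"},
        {"name": "conflict", "scenes": 4, "narration_start_s": 35, "mood": "tense"},
        {"name": "climax",   "scenes": 3, "narration_start_s": 80, "mood": "epic"},
    ],
    "transitions": "hard cuts, faster toward the end",
}
# The LLM fills in a concrete story and scene descriptions while keeping this structure.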
Overcoming Key Challenges
The biggest challenge we faced was keeping story consistency and visual harmony across multiple AI systems. Lighting, character consistency, and scene transitions posed major obstacles, so we developed a custom pipeline that manages these elements cohesively. This ensures your trailer not only looks good but feels like a professionally crafted video.
Another critical design decision was integrating a human input element to give users more control over the final product. We wanted the experience to feel customizable, yet seamless, so we made sure our pipeline has manual checkpoints to refine the output based on user feedback.
Two more scenes from our “Roman War” theme showing the lighting consistency from scene to scene.
Watch an Example Trailer
We’ve uploaded a sample trailer on YouTube to showcase our feature. Set against the theme of “Roman War” this 2-minute trailer gives a glimpse into what you can create with the new feature on FakeMe.
What’s Next for FakeMe?
Now that the feature is live, we’re excited to see the creative ways our users will bring it to life. We’re continuously refining the pipeline, especially in maintaining consistency across scenes, improving visual and auditory quality, and adding even more theme options. If there’s interest, we also plan to open-source the pipeline down the road, once it’s polished for a larger release.
One example is our first step toward styling the output. Here is a look at our "block" style, already released in the current app version. In the future you will be able to choose from a variety of trailer styles, including Anime, Cartoon, and more.
Our “Blocks” styled theme showcasing the user as a the main character.
Our goal is to make high-quality, AI-generated entertainment accessible to everyone. Whether you’re looking to create trailers for fun, as a gift, or even to inspire a larger creative project, FakeMe’s new feature is here to make that possible.
We Want to Hear From You!
Got feedback or suggestions? We’d love to hear your thoughts as we continue to improve and expand FakeMe’s capabilities. You can reach us via our contact page. Let us know what themes you’d love to see next or what features you’d find useful for personalizing your AI movie trailer experience.
Thank you for joining us on this journey of AI-powered creativity. We can’t wait to see what trailers you’ll create with FakeMe!
| 2024-11-07T07:19:18 | en | train |
42,020,350 | user0x1d | 2024-11-01T18:57:31 | Ask HN: Top podcast episodes of all time? | Andrew Huberman is compiling a list of the top 10 podcast episodes of all time. Naval @ Joe Rogan is 1st.<p>That is also probably number 1 for me (it was life changing.) The other Naval podcasts at The Knowledge Project, his "How to get rich" series would be there as well. But I'm confident the HN crowd will know about the other Naval episodes.<p>If not including Naval's, what are for you the top 10 podcast episodes of all time?<p>here's the original post: https://www.linkedin.com/posts/andrew-huberman_im-assembling-a-list-of-the-top-10-podcast-activity-7249422054246998016-wdL9?utm_source=share&utm_medium=member_desktop | null | 1 | 1 | [
42020468
] | null | null | null | null | null | null | null | null | null | train |
42,020,355 | gmays | 2024-11-01T18:57:44 | Millennials–The Unluckiest Generation–Became the Most Economically Divided | null | https://www.barrons.com/articles/millennials-generation-wealth-gap-economy-49bf2e3a?mod=1440 | 7 | 1 | [
42020558
] | null | null | null | null | null | null | null | null | null | train |
42,020,359 | mooreds | 2024-11-01T18:58:27 | Migrating In-Place from PostgreSQL to MySQL | null | https://engineeringblog.yelp.com/2024/10/migrating-from-postgres-to-mysql.html | 3 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,020,366 | null | 2024-11-01T18:59:09 | null | null | null | null | null | null | [
"true"
] | null | null | null | null | null | null | null | null | train |
42,020,386 | 0x3d-site | 2024-11-01T19:01:37 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,020,387 | zodwick | 2024-11-01T19:01:38 | Show HN: Open source Chrome Extension to Auto-Create Google Tasks/Calendar | hey hn,<p>I built lazel because i wanted to get more organized, but was too lazy to do it myself.
The extension is still under verification and not yet published, but i figured that it might be better to share as its fairly easy to set up locally and modify it to fit your needs.<p>Right now it only supports Goggle Tasks and Google Calender, but the idea is to integrate more services.<p>This is my fist open source project, so any suggestions are welcome :)<p>Cheers,
Anand | https://github.com/zodwick/lazel | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,020,395 | reesericci | 2024-11-01T19:02:10 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,020,398 | geox | 2024-11-01T19:02:40 | An Idaho health department isn't allowed to give Covid vaccines anymore | null | https://apnews.com/article/covid19-vaccine-public-health-idaho-76f1c29bf3f07a2c029175bf6c2180c4 | 15 | 2 | [
42020777
] | null | null | null | null | null | null | null | null | null | train |
42,020,406 | willemlaurentz | 2024-11-01T19:03:45 | Budget Android (€99) vs. Expensive iPhone (€1000) (2018) | null | https://willem.com/blog/2018-10-09_using-a-budget-android-as-main-smartphone/ | 61 | 50 | [
42024462,
42026860,
42025148,
42032733,
42026448,
42021128,
42025471,
42025737,
42020923
] | null | null | null | null | null | null | null | null | null | train |
42,020,443 | NominalNews | 2024-11-01T19:07:59 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,020,451 | hn_acker | 2024-11-01T19:08:43 | Study: 76% of U.S. Residents Want Government to Address Soaring Broadband Prices | null | https://www.techdirt.com/2024/11/01/study-76-of-u-s-residents-want-government-to-do-something-about-soaring-broadband-prices/ | 3 | 3 | [
42023111,
42020872,
42020452
] | null | null | null | null | null | null | null | null | null | train |
42,020,453 | azhenley | 2024-11-01T19:08:57 | Racket Syntax: The Great, the Good, and the Back to the Drawing Board [video] | null | https://www.youtube.com/watch?v=ZtTqRH1uwu4 | 2 | 0 | null | null | null | no_article | null | null | null | null | 2024-11-08T08:27:23 | null | train |
42,020,477 | ulrischa | 2024-11-01T19:11:13 | Control your smart home devices with the Gemini mobile app on Android | null | https://support.google.com/gemini/answer/15335456?hl=en | 1 | 0 | null | null | null | no_error | Control your smart home devices with the Gemini mobile app on Android | null | null | You can control smart home devices that your Google Account can access, including those added later, with the Google Home extension in the Gemini mobile app. This includes the following smart home devices:
Lights & power, like lights, outlets, and switches
Climate control, like air conditioning units, thermostats, heaters, and fans
Window coverings, like curtains, blinds, and shutters
Media devices, like TVs and speakers
Other smart devices, like washers, coffee makers, and vacuums
What you need
An Android phone or tablet. For now, the Google Home extension isn’t available on iPhone devices.
The Gemini app, including as your mobile assistant. Learn more about the Gemini mobile app.
The Google Home extension isn’t available in Gemini in Google Messages or the Gemini web app.
Access to Public Preview. For now, the Google Home extension is available in Public Preview only. Learn how to join Public Preview.
Gemini Apps Activity on. Extensions are only available when Gemini Apps Activity is on.
Important:
For now, the Google Home extension works with prompts in English only.
Extensions work in the same way for both spoken and typed prompts. On Android mobile devices, if "Hey Google" doesn’t work, check if “Hey Google” and Voice Match are set up.
Before you use the Google Home extension
Important: Home controls are for convenience only, not safety- or security-critical purposes. Don't rely on Gemini for requests that could result in injury or harm if they fail to start or stop.
The Google Home extension can’t perform some actions on security devices, like gates, cameras, locks, doors, and garage doors. For unsupported actions, the Gemini app gives you a link to the Google Home app where you can control those devices.
If you turn on the Google Home extension, your Gemini mobile app can:
Help you control and manage the same homes and devices in the Google Home app as your signed-in Google Account. This includes any homes and devices that are added later.
Access information about smart home devices that you may share with other household members in the Google Home app. Remember to keep household members in mind when you control these devices with the Google Home extension. Learn more about household members.
Connect Google Home to Gemini Apps
On your Android phone or tablet, open Gemini .
Make sure you’re signed in to the same account you use with Google Home.
Ask the Gemini app to perform an action on a smart home device, like turn on living room lights.
Tip: If the Gemini app doesn’t use the Google Home extension, you can include @Google Home in your prompt.
If you haven’t connected Gemini Apps to Google Home, you’ll get the option to connect it.
If you turn on the Google Home extension, you give Gemini Apps permission to access and control the same homes and devices as this account in the Google Home app. This includes any homes and devices added or shared with you later.
Follow the on-screen instructions.
Tip: You can also turn the Google Home extension on or off in your Extensions settings. Learn how to turn extensions on or off.
Examples
Set the dining room for a romantic date night
Set the AC to a good temperature for sleeping
Turn off the bedroom TV and lights
The sun is too bright in the living room (to close window coverings)
Help me clean up the kitchen (to start vacuum)
Control smart home devices
There are a lot of smart home devices that the Google Home extension can help you control. The devices listed below are just a few of the common ones that are supported.
Control lights & power
Lights
Turn on/off [light name].
Turn on/off all of the lights.
Turn my [room name] lights on/off.
Dim the [light name].
Dim the [room name] lights.
Brighten the [light name].
Set [light name] to 50%.
Turn [light name] green.
Switches or outlets
Turn on/off [outlet name].
Turn on/off [switch name].
Control home climate
Thermostats
Turn on or off
Turn on heating/cooling mode.
Turn on heat-cool mode.
Turn off thermostat.
Set or adjust the temperature
Set the heat to [temperature].
Set heat-cool to [temperature].
Set the air conditioning to [temperature].
Set the [room name] thermostat to [temperature].
Make it warmer/cooler.
Raise/lower the temp.
Raise/lower the temp by 2 degrees.
Switch heating or cooling modes
Turn on heating/cooling.
Set thermostat to cooling/heating.
Turn thermostat to heat-cool mode.
Fans, heaters & A/C units
Turn on/off [fan, heater, A/C, device name].
Increase the temperature on my heater
Increase/decrease the fan speed
Control window coverings
Open/close [curtain name]
Open/close [blinds name]
Open/close [shutters name]
Control media devices
Turn on/off [TV name]
Turn volume up/down on [TV name, speaker name]
Control other smart home devices
Other devices, like a vacuum, washer, coffee maker & more
Start [device name]
Stop [device name]
Vacuum the [room]
Turn on/off [device name]
What the Google Home extension can’t do
Complete security device actions that require a pin
Stream video feed from cameras
Execute Routines
While Routines aren’t supported with Gemini Extensions yet, some Routines functionality is supported in your Gemini mobile app with help from Google Assistant. Learn more about Routines powered by Google Assistant in your Gemini mobile app.
How extensions work in Gemini Apps
Gemini Apps only use extensions that are on in your Extensions settings. This includes extensions you turn on when you specify them in your prompt with an "@" mention.
Gemini Apps check for extensions that can help it generate a more helpful response. If a Gemini app finds an extension that can help, it automatically sends information from your conversation and other relevant information to that extension. For example, Gemini Apps will send your location data to Google Maps if you ask for coffee shops near you and the Google Maps extension is on.
Gemini Apps won’t access your personal content in other services without your permission. Some Gemini Extensions are designed to automatically work with tools, apps, and content on your device to help you seamlessly interact with it.
If you directly interact with another Google service in Gemini Apps, your activity might be saved by that other service. For example, if you watch a YouTube video in a Gemini app, YouTube may:
Collect your personal information.
Store and use that information according to YouTube’s terms of service.
Store your watch history in your YouTube History. Learn how to manage your YouTube watch history.
Gemini Apps can use extensions to help you connect with third-party apps and services. When they do, Gemini Apps share information with those apps and services to fulfill your requests. That information is then used by those third-party apps and services according to their own developers' privacy policies.
Learn more about how extensions work with your personal data.
Related resources
Gemini Apps Privacy Hub
Use extensions in Gemini Apps
Public Preview for Google Home app
| 2024-11-08T02:33:29 | en | train |
42,020,478 | ComputerGuru | 2024-11-01T19:11:20 | As datacenters strain the power grid, bills projected to rise for customers | null | https://www.washingtonpost.com/business/2024/11/01/ai-data-centers-electricity-bills-google-amazon/ | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,020,486 | MPLan | 2024-11-01T19:11:49 | A 22 percent increase in the German minimum wage: nothing crazy | null | https://paperswithcode.com/paper/a-22-percent-increase-in-the-german-minimum | 69 | 113 | [
42020501,
42020884,
42022009,
42021038,
42053866,
42023543,
42020953,
42021092
] | null | null | missing_parsing | Papers with Code - A 22 percent increase in the German minimum wage: nothing crazy! | null | 21 May 2024 |
We present the first empirical evidence on the 22 percent increase in the German minimum wage, implemented in 2022, raising it from Euro 9.82 to 10.45 in July and to Euro 12 in October. Leveraging the German Earnings Survey, a large and novel data source comprising around 8 million employee-level observations reported by employers each month, we apply a difference-in-difference-in-differences approach to analyze the policy's impact on hourly wages, monthly earnings, employment, and working hours. Our findings reveal significant positive effects on wages, affirming the policy's intended benefits for low-wage workers. Interestingly, we identify a negative effect on working hours, mainly driven by minijobbers. The hours effect results in an implied labor demand elasticity in terms of the employment volume of -0.17 which only partially offsets the monthly wage gains. We neither observe a negative effect on the individual's employment retention nor the regional employment levels.
PDF
Abstract
Code
No code implementations yet. Submit
your code now
Tasks
Datasets
Add Datasets
introduced or used in this paper
Results from the Paper
Submit
results from this paper
to get state-of-the-art GitHub badges and help the
community compare results to other papers.
Methods
No methods listed for this paper. Add
relevant methods here
| 2024-11-08T21:32:40 | null | train |
42,020,488 | ulrischa | 2024-11-01T19:11:58 | Bruce Lawson's personal site: For a better web 2 | null | https://brucelawson.co.uk/2024/for-a-better-web-2-john-ozbay-cryptee/ | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,020,505 | PaulHoule | 2024-11-01T19:13:57 | Farms study shows plastic mulch is contaminating agricultural fields | null | https://phys.org/news/2024-10-farms-plastic-mulch-contaminating-agricultural.html | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,020,511 | mikhael | 2024-11-01T19:14:52 | What if A.I. Is Actually Good for Hollywood? | null | https://www.nytimes.com/2024/11/01/magazine/ai-hollywood-movies-cgi.html | 2 | 0 | null | null | null | bot_blocked | nytimes.com | null | null | Please enable JS and disable any ad blocker | 2024-11-08T13:41:55 | null | train |
42,020,512 | sujithv28 | 2024-11-01T19:14:54 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,020,538 | ivewonyoung | 2024-11-01T19:17:56 | Half of Young Voters Say They've Lied about Which Candidates They Support | null | https://www.msn.com/en-us/news/politics/half-of-young-voters-say-they-ve-lied-about-which-candidates-they-support-new-poll-finds/ar-AA1thZks | 25 | 42 | [
42020794,
42020821,
42020727,
42020622,
42020914,
42020810,
42020675,
42022084,
42020969,
42020688,
42020843
] | null | null | no_article | null | null | null | null | 2024-11-08T15:50:16 | null | train |
42,020,541 | aguaviva | 2024-11-01T19:18:25 | Experience: I graduated from art school at the age of 90 | null | https://www.theguardian.com/lifeandstyle/2024/nov/01/experience-i-graduated-from-art-school-at-the-age-of-90 | 4 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,020,562 | leastangle | 2024-11-01T19:21:14 | Many companies want (partly) out of the cloud | null | https://www.heise.de/en/news/IDC-Many-companies-want-partly-out-of-the-cloud-10001934.html | 2 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,020,580 | Tomte | 2024-11-01T19:22:51 | What is your system for annotation, note-taking and synthesis? | null | https://acoup.blog/2024/11/01/referenda-ad-senatum-november-1-2024-ancient-weapons-lost-works-and-roman-spooky-stuff/ | 2 | 1 | [
42023137
] | null | null | null | null | null | null | null | null | null | train |
42,020,581 | doener | 2024-11-01T19:22:51 | First M4 Pro Benchmarks | null | https://browser.geekbench.com/v6/cpu/8588187 | 1 | 2 | [
42020584,
42021095
] | null | null | null | null | null | null | null | null | null | train |
42,020,586 | ryan-duve | 2024-11-01T19:23:19 | Emory University News Release – Lorem Ipsum | null | https://www.emory.edu/news/Releases/LoremIpsum.html | 1 | 0 | null | null | null | no_error | Emory University News Release - Lorem Ipsum | null | null |
Release date: Oct. 21, 2002
Contact: Jan Gleason, Associate Vice President, Public Affairs,
at 404-727-6219 or [email protected]
Asdf, Asdfasdf askljh asdf adfasdjfhasfl askdjfhas,
askldfj, asdlfkj asdklfjalsdkjf asldkfj kljlkjlkj
Askklj asdfasdf asdfasdf asdfasdf asdfasdf asdfasdf asdfasdf asdfasdfaf asdfasdf asdfasdf asdfasdfasf asdfasdfasdf asdfasdfafa asdfasdfasdfass. Askklj asdfasdf asdfasdf asdfasdf asdfasdf asdfasdf asdfasdf asdfasdfaf asdfasdf asdfasdf asdfasdfasf asdfasdfasdf asdfasdfafa asdfasdfasdfass.
| 2024-11-07T14:50:02 | en | train |
42,020,615 | todsacerdoti | 2024-11-01T19:26:44 | Development-Cycle in Cargo: 1.83 | null | https://blog.rust-lang.org/inside-rust/2024/10/31/this-development-cycle-in-cargo-1.83.html | 1 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,020,621 | steven-123 | 2024-11-01T19:27:45 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,020,624 | chaituprakash06 | 2024-11-01T19:28:07 | AI YC Application Reviewer | null | https://www.loom.com/share/c1d399de8bbe477894fe5344c8798732?sid=c525e0fb-5623-46de-90ce-5c6a5fb5a12f | 1 | 0 | [
42020625
] | null | null | null | null | null | null | null | null | null | train |
42,020,632 | michaelsbradley | 2024-11-01T19:28:59 | gptel: Mindblowing integration between Emacs and ChatGPT | null | https://www.blogbyben.com/2024/08/gptel-mindblowing-integration-between.html | 4 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,020,638 | tehnub | 2024-11-01T19:29:58 | Cameras and Lenses (2020) | null | https://ciechanow.ski/cameras-and-lenses/ | 1 | 0 | null | null | null | no_error | Cameras and Lenses – Bartosz Ciechanowski | null | Bartosz Ciechanowski |
December 7, 2020
Pictures have always been a meaningful part of the human experience. From the first cave drawings, to sketches and paintings, to modern photography, we’ve mastered the art of recording what we see.
Cameras and the lenses inside them may seem a little mystifying. In this blog post I’d like to explain not only how they work, but also how adjusting a few tunable parameters can produce fairly different results:
Over the course of this article we’ll build a simple camera from first principles. Our first steps will be very modest – we’ll simply try to take any picture. To do that we need to have a sensor capable of detecting and measuring light that shines onto it.
Recording Light
Before the dawn of the digital era, photographs were taken on a piece of film covered in crystals of silver halide. Those compounds are light-sensitive and when exposed to light they form a speck of metallic silver that can later be developed with further chemical processes.
For better or for worse, I’m not going to discuss analog devices – these days most cameras are digital. Before we continue the discussion relating to light we’ll use the classic trick of turning the illumination off. Don’t worry though, we’re not going to stay in darkness for too long.
The image sensor of a digital camera consists of a grid of photodetectors. A photodetector converts photons into electric current that can be measured – the more photons hitting the detector the higher the signal.
In the demonstration below you can observe how photons fall onto the arrangement of detectors represented by small squares. After some processing, the value read by each detector is converted to the brightness of the resulting image pixels which you can see on the right side. I’m also symbolically showing which photosite was hit with a short highlight. The slider below controls the flow of time:
The longer the time of collection of photons the more of them are hitting the detectors and the brighter the resulting pixels in the image. When we don’t gather enough photons the image is underexposed, but if we allow the photon collection to run for too long the image will be overexposed.
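If you prefer to see this tradeoff as code, here is a minimal Python sketch of a single photosite; the photon rates, the full-well capacity, and the exposure times are made-up illustrative numbers, not values from any real sensor.
    def pixel_value(photon_rate, exposure_time, full_well=1000):
        # Expected number of photons collected during the exposure.
        photons = photon_rate * exposure_time
        # The photosite saturates once its "well" is full - that is overexposure clipping.
        photons = min(photons, full_well)
        # Map the filled fraction of the well to an 8-bit brightness value.
        return round(255 * photons / full_well)

    # Illustrative photon rates (photons per millisecond) for a dark, mid, and bright patch.
    scene = {"shadow": 5, "midtone": 50, "highlight": 400}
    for exposure_ms in (1, 10, 100):
        print(exposure_ms, "ms:", {name: pixel_value(rate, exposure_ms) for name, rate in scene.items()})
With a 1 ms exposure the image comes out underexposed, while at 100 ms the midtone and the highlight both hit the top of the well and clip to pure white.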
While the photons have the “color” of their wavelength, the photodetectors don’t see that hue – they only measure the total intensity which results in a black and white image. To record the color information we need to separate the incoming photons into distinct groups. We can put tiny color filters on top of the detectors so that they will only accept, more or less, red, green, or blue light:
This color filter array can be arranged in many different formations. One of the simplest is a Bayer filter which uses one red, one blue, and two green filters arranged in a 2x2 grid:
A Bayer filter uses two green filters because light in the green part of the spectrum heavily correlates with perceived brightness. If we now repeat this pattern across the entire sensor we’re able to collect color information. For the next demo we will also double the resolution to an astonishing 1 kilopixel arranged in a 32x32 grid:
Note that the individual sensors themselves still only see the intensity, and not the color, but knowing the arrangement of the filters we can recreate the colored intensity of each sensor, as shown on the right side of the simulation.
The final step of obtaining a normal image is called demosaicing. During demosaicing we want to reconstruct the full color information by filling in the gaps in the captured RGB values. One of the simplest ways to do it is to just linearly interpolate the values between the existing neighbors. I’m not going to focus on the details of many other available demosaicing algorithms and I’ll just present the resulting image created by the process:
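As a rough illustration of that interpolation, here is a small Python sketch of a bilinear demosaic for an RGGB mosaic; it leans on NumPy, wraps around at the borders for brevity, and is not meant to mirror any real camera’s processing pipeline.
    import numpy as np

    def demosaic_bilinear(raw):
        """Fill in the missing color samples of an RGGB Bayer mosaic by averaging neighbors."""
        h, w = raw.shape
        y, x = np.mgrid[0:h, 0:w]
        masks = [
            (y % 2 == 0) & (x % 2 == 0),   # red filter sites
            (y % 2) != (x % 2),            # green filter sites
            (y % 2 == 1) & (x % 2 == 1),   # blue filter sites
        ]
        rgb = np.zeros((h, w, 3))
        for c, mask in enumerate(masks):
            known = np.where(mask, raw, 0.0)
            count = mask.astype(float)
            total = np.zeros((h, w))
            hits = np.zeros((h, w))
            # Sum the same-color samples in each 3x3 neighborhood (edges wrap around for brevity).
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    total += np.roll(np.roll(known, dy, axis=0), dx, axis=1)
                    hits += np.roll(np.roll(count, dy, axis=0), dx, axis=1)
            # Keep measured values where the filter matches, interpolate everywhere else.
            rgb[..., c] = np.where(mask, raw, total / np.maximum(hits, 1.0))
        return rgb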
Notice that yet again the overall brightness of the image depends on the length of time for which we let the photons through. That duration is known as shutter speed or exposure time. For most of this presentation I will ignore the time component and we will simply assume that the shutter speed has been set just right so that the image is well exposed.
The examples we’ve discussed so far were very convenient – we were surrounded by complete darkness with the photons neatly hitting the pixels to form a coherent image. Unfortunately, we can’t count on the photon paths to be as favorable in real environments, so let’s see how the sensor performs in more realistic scenarios.
Over the course of this article we will be taking pictures of this simple scene. The almost white background of this website is also a part of the scenery – it represents a bright overcast sky. You can drag around the demo to see it from other directions:
Let’s try to see what sort of picture would be taken by a sensor that is placed near the objects without any enclosure. I’ll also significantly increase the sensor’s resolution to make the pixels of the final image align with the pixels of your display. In the demonstration below the left side represents a view of the scene with the small greenish sensor present, while the right one shows the taken picture:
This is not a mistake. As you can see, the obtained image doesn’t really resemble anything. To understand why this happens let’s first look at the light radiated from the scene.
If you had a chance to explore how surfaces reflect light, you may recall that most matte surfaces scatter the incoming light in every direction. While I’m only showing a few examples, every point on every surface of this scene reflects the photons it receives from the whiteish background light source all around itself:
The red sphere ends up radiating red light, the green sphere radiates green light, and the gray checkerboard floor reflects white light of lesser intensity. Most importantly, however, the light emitted from the background is also visible to the sensor.
The problem with our current approach to taking pictures is that every pixel of the sensor is exposed to the entire environment. Light radiated from every point of the scene and the white background hits every point of the sensor. In the simulation below you can witness how light from different directions hits one point on the surface of the sensor:
Clearly, to obtain a discernible image we have to limit the range of directions that affect a given pixel on the sensor. With that in mind, let’s put the sensor in a box that has a small hole in it. The first slider controls the diameter of the hole, while the second one controls the distance between the opening and the sensor:
While not shown here, the inner sides of the walls are all black so that no light is reflected inside the box. I also put the sensor on the back wall so that the light from the hole shines onto it. We’ve just built a pinhole camera, let’s see how it performs. Observe what happens to the taken image as we tweak the diameter of the hole with the first slider, or change the distance between the opening and the sensor with the second one:
There are so many interesting things happening here! The most pronounced effect is that the image is inverted. To understand why this happens let’s look at the schematic view of the scene that shows the light rays radiated from the objects, going through the hole, and hitting the sensor:
As you can see the rays cross over in the hole and the formed image is a horizontal and a vertical reflection of the actual scene. Those two flips end up forming a 180° rotation. Since rotated images aren’t convenient to look at, all cameras automatically rotate the image for presentation and for the rest of this article I will do so as well.
When we change the distance between the hole and the sensor the viewing angle changes drastically. If we trace the rays falling on the corner pixels of the sensor we can see that they define the extent of the visible section of the scene:
Rays of light coming from outside of that shape still go through the pinhole, but they land outside of the sensor and aren’t recorded. As the hole moves further away from the sensor, the angle, and thus the field of view visible to the sensor gets smaller. We can see this in a top-down view of the camera:
Coincidentally, this diagram also helps us explain two other effects. Firstly, in the photograph the red sphere looks almost as big as the green one, even though the scene view shows the latter is much larger. However, both spheres end up occupying roughly the same span on the sensor and their size in the picture is similar. It’s also worth noting that the spheres seem to grow when the field of view gets narrower because their light covers a larger part of the sensor.
Secondly, notice that different pixels of the sensor have different distance and relative orientation to the hole. The pixels right in the center of the sensor see the pinhole straight on, but pixels positioned at an angle to the main axis see a distorted pinhole that is further away. The ellipse in the bottom right corner of the demonstration below shows how a pixel positioned at the blue point sees the pinhole:
This change in the visible area of the hole causes the darkening we see in the corners of the photograph. The value of the cosine of the angle I’ve marked with a yellow color is quite important as it contributes to the reduction of visible light in four different ways:
Two cosine factors from the increased distance to the hole (this is essentially the inverse square law)
A cosine factor from the side squeeze of the circular hole seen at an angle
A cosine factor from the relative tilt of the receptor
These four factors conspire together to reduce the illumination by a factor of cos⁴(α) in what is known as the cosine-fourth-power law, also described as natural vignetting.
Since we know the relative geometry of the camera and the opening we can correct for this effect by simply dividing by the falloff factor and from this point on I will make sure that the images don’t have darkened corners.
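A small Python sketch of that correction; the sensor coordinates and the hole distance below are arbitrary example values in millimeters.
    import math

    def vignetting_gain(px, py, hole_distance):
        """Factor by which to multiply a pixel's value to undo the cos^4 falloff.

        (px, py) is the pixel's offset from the center of the sensor,
        in the same units as hole_distance.
        """
        off_axis = math.hypot(px, py)
        alpha = math.atan2(off_axis, hole_distance)   # angle between the pixel and the pinhole axis
        return 1.0 / math.cos(alpha) ** 4             # inverse of the cosine-fourth-power law

    # Corner of a 36 x 24 mm sensor placed 20 mm behind the hole needs the biggest boost.
    print(round(vignetting_gain(0, 0, 20), 2))    # center pixel: 1.0, no correction needed
    print(round(vignetting_gain(18, 12, 20), 2))  # corner pixel: several times brighter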
The final effect we can observe is that when the hole gets smaller the image gets sharper. Let’s see how the light radiated from two points of the scene ends up going through the camera depending on the diameter of the pinhole:
We can already see that a larger hole ends up creating a bigger spread on the sensor. Let’s see this situation up close on a simple grid of detecting cells. Notice what happens to the size of the final circle hitting the sensor as the diameter of the hole changes:
When the hole is small enough rays from the source only manage to hit one pixel on the sensor. However, at larger radii the light spreads onto other pixels and a tiny point in the scene is no longer represented by a single pixel causing the image to no longer be sharp.
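The size of that spread follows from similar triangles: the cone of light from a point source is as wide as the hole when it passes through it and keeps widening behind it. A quick sketch, with all distances in millimeters and purely illustrative:
    def pinhole_blur_diameter(hole_diameter, object_distance, sensor_distance):
        """Diameter of the spot that a single scene point casts on the sensor."""
        # The cone's apex is at the source; it is hole_diameter wide at the hole
        # and grows in proportion to the total distance traveled.
        return hole_diameter * (object_distance + sensor_distance) / object_distance

    # A point 500 mm in front of the hole, sensor 20 mm behind it.
    for d in (0.1, 0.5, 2.0):   # hole diameters
        print(d, "mm hole ->", round(pinhole_blur_diameter(d, 500, 20), 3), "mm spot")
Once the spot is smaller than a single photosite the image looks sharp; any larger and the point smears across neighboring pixels.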
It’s worth pointing out that sharpness is ultimately arbitrary – it depends on the size at which the final image is seen, viewing conditions, and visual acuity of the observer. The same photograph that looks sharp on a postage stamp may in fact be very blurry when seen on a big display.
By reducing the size of the cone of light we can make sure that the source light affects a limited number of pixels. Here, however, lies the problem. The sensor we’ve been using so far has been an idealized detector capable of flawless adjustment of its sensitivity to the lighting conditions. If we instead were to fix the sensor sensitivity adjustment, the captured image would look more like this:
As the relative size of the hole visible to the pixels of the sensor gets smaller, be it due to reduced diameter or increased distance, fewer photons hit the surface and the image gets dimmer.
To increase the number of photons we capture we could extend the duration of collection, but increasing the exposure time comes with its own problems – if the photographed object moves or the camera isn’t held steady we risk introducing some motion blur.
Alternatively, we could increase the sensitivity of the sensor, which is described using the ISO rating. However, boosting the ISO may introduce a higher level of noise. Even with these problems solved, an image obtained through smaller and smaller holes would actually start getting blurry again due to diffraction effects of light.
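The bookkeeping behind these tradeoffs is simple enough to sketch: the collected signal scales with the area of the hole and with the exposure time, while the ISO setting only amplifies whatever was collected, noise included. The numbers below are arbitrary and the noise is deliberately left out.
    import math

    def relative_signal(hole_diameter, exposure_time, iso_gain=1.0):
        """Relative pixel signal, up to a constant scene-dependent factor."""
        hole_area = math.pi * (hole_diameter / 2) ** 2
        return hole_area * exposure_time * iso_gain

    # Halving the hole diameter cuts the light to a quarter; we can compensate with
    # a 4x longer exposure (risking motion blur) or a 4x ISO gain (risking noise).
    print(relative_signal(2.0, 10))
    print(relative_signal(1.0, 40))
    print(relative_signal(1.0, 10, iso_gain=4.0))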
If you recall how diffuse surfaces reflect light you may also realize how incredibly inefficient a pinhole camera is. A single point on the surface of an object radiates light into its surrounding hemisphere, however, the pinhole captures only a tiny portion of that light.
More importantly, however, a pinhole camera gives us minimal artistic control over which parts of the picture are blurry. In the demonstration below you can witness how changing which object is in focus heavily affects what is the primary target of attention of the photograph:
Let’s try to build an optical device that would solve both of these problems: we want to find a way to harness a bigger part of the energy radiated by the objects and also control what is blurry and how blurry it is. For the objects in the scene that are supposed to be sharp we want to collect a big chunk of their light and make it converge to the smallest possible point. In essence, we’re looking for an instrument that will do something like this:
We could then put the sensor at the focus point and obtain a sharp image. Naturally, the contraption we’ll try to create has to be transparent so that the light can pass through it and get to the sensor, so let’s begin the investigation by looking at a piece of glass.
Glass
In the demonstration below I put a red stick behind a pane of glass. You can adjust the thickness of this pane with the gray slider below:
When you look at the stick through the surface of a thick glass straight on, everything looks normal. However, as your viewing direction changes the stick seen through the glass seems out of place. The thicker the glass and the steeper the viewing angle the bigger the offset.
Let’s focus on one point on the surface of the stick and see how the rays of light radiated from its surface propagate through the subsection of the glass. The red slider controls the position of the source and the gray slider controls the thickness. You can drag the demo around to see it from different viewpoints:
For some reason the rays passing through glass at an angle are deflected off their paths. The change of direction happens whenever the ray enters or leaves the glass.
To understand why the light changes direction we have to peek under the covers of classical electromagnetism and talk a bit more about waves.
Waves
It’s impossible to talk about wave propagation without involving the time component, so the simulations in this section are animated – you can play and pause them by clicking or tapping on the button in their bottom left corner.
By default all animations are enabled, but if you find them distracting, or if you want to save power, you can globally pause all the following demonstrations.
Let’s begin by introducing the simplest sinusoidal wave:
A wave like this can be characterized by two components. Wavelength λ is the distance over which the shape of the wave repeats. Period T defines how much time a full cycle takes.
Frequency f is just the reciprocal of the period and it’s more commonly used – it defines how many waves per second have passed over some fixed point. Wavelength and frequency define the phase velocity vp which describes how quickly a point on a wave, e.g. a peak, moves:
vp = λ · f
The sinusoidal wave is the building block of a polarized electromagnetic plane wave. As the name implies electromagnetic radiation is an interplay of oscillations of electric field E and magnetic field B:
In an electromagnetic wave the magnetic field is tied to the electric field so I’m going to hide the former and just visualize the latter. Observe what happens to the electric component of the field as it passes through a block of glass. I need to note that dimensions of wavelengths are not to scale:
Notice that the wave remains continuous at the boundary and inside the glass the frequency of the passing wave remains constant. However, the wavelength and thus the phase velocity are reduced – you can see it clearly from the side.
The microscopic reason for the phase velocity change is quite complicated, but it can be quantified using the index of refraction n, which is the ratio of the speed of light c to the phase velocity vp of lightwave in that medium:
n = c / vp
The higher the index of refraction the slower light propagates through the medium. In the table below I’ve presented a few different indices of refraction for some materials:
vacuum: 1.00
air: 1.0003
water: 1.33
glass: 1.53
diamond: 2.43
Light traveling through air barely slows down, but in a diamond it’s over twice as slow. Now that we understand how index of refraction affects the wavelength in the glass, let’s see what happens when we change the direction of the incoming wave:
The wave in the glass has a shorter wavelength, but it still has to match the positions of its peaks and valleys across the boundary. As such, the direction of propagation must change to ensure that continuity.
I need to note that the previous two demonstrations presented a two-dimensional wave since that allowed me to show the sinusoidal component oscillating into the third dimension. In the real world lightwaves are three-dimensional and I can’t really visualize the sinusoidal component without using the fourth dimension, which has its own set of complications.
The alternative way of presenting waves is to use wavefronts. Wavefronts connect the points of the same phase of the wave, e.g. all the peaks or valleys. In two dimensions wavefronts are represented by lines:
In three dimensions the wavefronts are represented by surfaces. In the demonstration below a single source emits a spherical wave, points of the same phase in the wave are represented by the moving shells:
By drawing lines that are perpendicular to the surface of the wavefront we create the familiar rays. In this interpretation rays simply show the local direction of wave propagation which can be seen in this example of a section of a spherical 3D wave:
I will continue to use the ray analogy to quantify the change in direction of light passing through materials. The relation between the angle of incidence θ1 and angle of refraction θ2 can be formalized with the equation known as Snell’s law:
n1 · sin(θ1) = n2 · sin(θ2)
It describes how a ray of light changes direction relative to the surface normal on the border between two different media. Let’s see it in action:
When traveling from a less to more refractive material the ray bends towards the normal, but when the ray exits the object with higher index of refraction it bends away from the normal.
Notice that in some configurations the refracted ray completely disappears, however, this doesn’t paint a full picture because we’re currently completely ignoring reflections.
All transparent objects reflect some amount of light. You may have noticed that reflection on a surface of a calm lake or even on the other side of the glass demonstration at the beginning of the previous section. The intensity of that reflection depends on the index of refraction of the material and the angle of the incident ray. Here’s a more realistic demonstration of how light would get refracted and reflected between two media:
The relation between transmittance and reflectance is determined by Fresnel equations. Observe that the curious case of missing light that we saw previously no longer occurs – that light is actually reflected. The transition from partial reflection and refraction to the complete reflection is continuous, but near the end it’s very rapid and at some point the refraction completely disappears in the effect known as total internal reflection.
The critical angle at which the total internal reflection starts to happen depends on the indices of refraction of the boundary materials. Since that coefficient is low for air, but very high for diamond a proper cut of the faces makes diamonds very shiny.
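Snell’s law and the critical angle are easy to play with in a few lines of Python; the sketch below works in degrees and reuses the indices of refraction from the table above.
    import math

    def refraction_angle(n1, n2, incidence_deg):
        """Angle of the refracted ray, or None when total internal reflection occurs."""
        s = n1 / n2 * math.sin(math.radians(incidence_deg))   # Snell's law: n1*sin(a1) = n2*sin(a2)
        if abs(s) > 1.0:
            return None   # no real refraction angle exists: the light is totally reflected
        return math.degrees(math.asin(s))

    def critical_angle(n1, n2):
        """Incidence angle above which total internal reflection starts (requires n1 > n2)."""
        return math.degrees(math.asin(n2 / n1))

    print(refraction_angle(1.0003, 1.53, 45.0))   # air -> glass: bends towards the normal
    print(refraction_angle(1.53, 1.0003, 45.0))   # glass -> air at 45 degrees: past the critical angle
    print(critical_angle(2.43, 1.0003))           # diamond -> air: roughly 24 degrees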
While interesting on its own, reflection in glass isn’t very relevant to our discussion and for the rest of this article we’re not going to pay much attention to it. Instead, we’ll simply assume that the materials we’re using are covered with high quality anti-reflective coating.
Manipulating Rays
Let’s go back to the example that started the discussion of light and glass. When both sides of a piece of glass are parallel, the ray is shifted, but it still travels in the same direction. Observe what happens to the ray when we change the relative angle of the surfaces of the glass.
When we make two surfaces of the glass not parallel we gain the ability to change the direction of the rays. Recall, that we’re trying to make the rays hitting the optical device converge at a certain point. To do that we have to bend the rays in the upper part down and, conversely, bend the rays in the lower part up.
Let’s see what happens if we shape the glass to have different angles between its walls at different height. In the demonstration below you can control how many distinct segments a piece of glass is shaped to:
As the number of segments approaches infinity we end up with a continuous surface without any edges. If we look at the crossover point from the side you may notice that we’ve managed to converge the rays across one axis, but the top-down view reveals that we’re not done yet. To focus all the rays we need to replicate that smooth shape across all possible directions – we need rotational symmetry:
We’ve created a convex thin lens. This lens is idealized, in the later part of the article we’ll discuss how real lenses aren’t as perfect, but for now it will serve us very well. Let’s see what happens to the focus point when we change the position of the red source:
When the source is positioned very far away the incoming rays become parallel and after passing through lens they converge at a certain distance away from the center. That distance is known as focal length.
The previous demonstration also shows two more general distances: so which is the distance between the object, or source, and the lens, as well as si which is the distance between the image and the lens. These two values and the focal length f are related by the thin lens equation:
1/so + 1/si = 1/f
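Rearranging the thin lens equation for si makes it easy to check these relationships numerically; a minimal sketch with all distances in millimeters:
    def image_distance(focal_length, object_distance):
        """Solve 1/so + 1/si = 1/f for si (undefined when the object sits exactly at the focal point)."""
        return 1.0 / (1.0 / focal_length - 1.0 / object_distance)

    # A lens with a 50 mm focal length: the further the object, the closer si gets to f.
    for so in (100.0, 1000.0, 100000.0):
        print(so, "mm ->", round(image_distance(50.0, so), 2), "mm")
As so grows towards infinity, si approaches the focal length, which is the parallel-rays case described earlier.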
Focal length of a lens depends on both the index of refraction of the material from which the lens is made and its shape:
Now that we understand how a simple convex lens works we’re ready to mount it into the hole of our camera. We will still control the distance between the sensor and the lens, but instead of controlling the diameter of the lens we’ll instead control its focal length:
When you look at the lens from the side you may observe how the focal length change is tied to the shape of the lens. Let’s see how this new camera works in action:
Once again, a lot of things are going on here! Firstly, let’s try to understand how the image is formed in the first place. The demonstration below shows paths of rays from two separate points in the scene. After going through the lens they end up hitting the sensor:
Naturally, this process happens for every single point in the scene which creates the final image. Similarly to a pinhole a convex lens creates an inverted picture – I’m still correcting for this by showing you a rotated photograph.
Secondly, notice that the distance between the lens and the sensor still controls the field of view. As a reminder, the focal length of a lens simply defines the distance from the lens at which the rays coming from infinity converge. To achieve a sharp image, the sensor has to be placed at the location where the rays focus and that’s what’s causing the field of view to change.
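Treating the center of the thin lens like the pinhole from before, the angle of view follows from the sensor size and the lens-to-sensor distance; a small sketch with an example 36 mm wide sensor and illustrative distances:
    import math

    def angle_of_view(sensor_width, lens_to_sensor):
        """Horizontal angle of view in degrees for a given sensor width and image distance."""
        return math.degrees(2.0 * math.atan(sensor_width / (2.0 * lens_to_sensor)))

    for distance in (20.0, 50.0, 100.0):   # lens-to-sensor distances in mm
        print(distance, "mm ->", round(angle_of_view(36.0, distance), 1), "degrees")
Pushing the sensor further back narrows the angle, which is exactly the change of framing seen in the demonstrations.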
In the demonstration below I’ve visualized how rays from a very far object focus through a lens of adjustable focal length. Notice that to obtain a sharp image we must change the distance between the lens and the sensor, which in turn causes the field of view to change:
If we want to change the object on which a camera with a lens of a fixed focal length is focused, we have to move the image plane closer or further away from the lens which affects the angle of view. This effect is called focus breathing:
A lens with a fixed focal length like the one above is often called a prime lens, while lenses with adjustable focal length are called zoom lenses. While the lenses in our eyes do dynamically adjust their focal lengths by changing their shape, rigid glass can’t do that so zoom lenses use a system of multiple glass elements that change their relative position to achieve this effect.
In the simulation above notice the difference in sharpness between the red and green spheres. To understand why this happens let’s analyze the rays emitted from two points on the surface of the spheres. In the demonstration below the right side shows the light seen by the sensor just from the two marked points on the spheres:
The light from the point in focus converges to a point, while the light from an out-of-focus point spreads onto a circle. For larger objects the multitude of overlapping out-of-focus circles creates a smooth blur called bokeh. With tiny and bright light sources that circle itself is often visible – you may have seen effects like the one in the demonstration below in some photographs captured in darker environments:
Notice that the circular shape is visible for lights both in front of and behind the focused distance. As the object is positioned closer or further away from the lens the image plane “slices” the cone of light at a different location:
That circular spot is called a circle of confusion. While in many circumstances the blurriness of the background or the foreground looks very appealing, it would be very useful to control how much blur there is.
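The diameter of the circle of confusion can be worked out with two applications of the thin lens equation and one pair of similar triangles. The sketch below assumes a simple thin lens with the aperture given as a diameter; the focal length, aperture, and distances are arbitrary example values in millimeters.
    def image_distance(focal_length, object_distance):
        return 1.0 / (1.0 / focal_length - 1.0 / object_distance)

    def circle_of_confusion(focal_length, aperture_diameter, focus_distance, object_distance):
        """Blur spot diameter on the sensor for a point at object_distance
        when the lens is focused at focus_distance."""
        sensor_at = image_distance(focal_length, focus_distance)      # where the sensor was placed
        converges_at = image_distance(focal_length, object_distance)  # where this point's cone closes
        # The cone is aperture_diameter wide at the lens and zero wide where it converges;
        # the sensor slices it somewhere in between (or beyond), leaving a circle.
        return aperture_diameter * abs(converges_at - sensor_at) / converges_at

    # 50 mm lens with a 25 mm aperture, focused at 2 m.
    for d in (1000.0, 2000.0, 4000.0):
        print(d, "mm away ->", round(circle_of_confusion(50.0, 25.0, 2000.0, d), 3), "mm circle")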
Unfortunately, we don’t have total freedom here – we still want the primary photographed object to remain in focus so its light has to converge to a point. We just want to change the size of the circle of out-of-focus objects without moving the central point. We can accomplish that by changing the angle of the cone of light:
There are two methods we can use to modify that angle. Firstly, we can change the focal length of the lens – you may recall that with longer focal lengths the cone of light also gets longer. However, changing the focal length and keeping the primary object in focus requires moving the image plane which in turn changes how the picture is framed.
The alternative way of reducing the angle of the cone of light is to simply ignore some of the “outer” rays. We can achieve that by introducing a stop with a hole in the path of light:
This hole is called an aperture. In fact, even the hole in which the lens is mounted is an aperture of some sort, but what we’re introducing is an adjustable aperture:
Let’s try to see how an aperture affects the photographs taken with our camera:
In real camera lenses an adjustable aperture is often constructed from a set of overlapping blades that constitute an iris. The movement of those blades changes the size of the aperture:
The shape of the aperture also defines the shape of bokeh. This is the reason why bokeh sometimes has a polygonal shape – it’s simply the shape of the “cone” of light after passing through the blades of the aperture. Next time you watch a movie pay a close attention to the shape of out-of-focus highlights, they’re often polygonal:
As the aperture diameter decreases, larger and larger areas of the photographed scene remain sharp. The term depth of field is used to define the length of the region over which the objects are acceptably sharp. When describing the depth of field we’re trying to conceptually demarcate the two planes bounding that region and see how far apart they are from each other.
Let’s see the depth of field in action. The black slider controls the aperture, the blue slider controls the focal length, and the red slider changes the position of the object relative to the camera. The green dot shows the place of perfect focus, while the dark blue dots show the limits, or the depth, of positions between which the image of the red light source will be reasonably sharp, as shown by a single outlined pixel on the sensor:
Notice that the larger the diameter of aperture and the shorter the focal length the shorter the distance between the dark blue dots and thus the shallower the depth of field becomes. If you recall our discussion of sharpness this demonstration should make it easier to understand why reducing the angle of the cone increases the depth of field.
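Once we pick the largest circle of confusion we are still willing to call sharp, the near and far limits of the depth of field can be computed directly. The formulas below are the standard thin-lens approximations, and the 0.03 mm threshold is just a common rule of thumb for a full-frame sensor, not a value taken from this article.
    def depth_of_field(focal_length, f_number, focus_distance, max_coc=0.03):
        """Near and far limits of acceptable sharpness for a thin lens (all lengths in mm)."""
        f, n, s, c = focal_length, f_number, focus_distance, max_coc
        near = s * f * f / (f * f + n * c * (s - f))
        denominator = f * f - n * c * (s - f)
        far = s * f * f / denominator if denominator > 0 else float("inf")
        return near, far

    # A 50 mm lens focused at 3 m: stopping down from f/2 to f/8 stretches the sharp zone.
    for n in (2, 8):
        near, far = depth_of_field(50.0, n, 3000.0)
        print("f/%d: %.0f mm to %.0f mm" % (n, near, far))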
If you don’t have perfect vision you may have noticed that squinting your eyes makes you see things a little better. Your eyelids covering some part of your iris simply act as an aperture that decreases the angle of the cone of light falling into your eyes, making things slightly less blurry on your retina.
An interesting observation is that the aperture defines the diameter of the base of the captured cone of light that is emitted from the object. An aperture with twice the diameter captures roughly four times more light due to the increased solid angle. In practice, the actual size of the aperture as seen from the point of view of the scene, or the entrance pupil, depends on all the lenses in front of it as the shaped glass may scale the perceived size of the aperture.
On the other hand, when a lens is focused correctly, the focal length defines how large a source object is in the picture. By doubling the focal length we double the width and the height of the object on the sensor, thus increasing the area by a factor of four. The light from the source is more spread out and each individual pixel receives less light.
The total amount of light hitting each pixel is proportional to the ratio between the focal length f and the diameter of the entrance pupil D. This ratio is known as the f-number:
N = f / D
A lens with a focal length of 50 mm and the entrance pupil of 25 mm would have N equal to 2 and the f-number would be known as f/2. Since the amount of light getting to each pixel of the sensor increases with the diameter of the aperture and decreases with the focal length, the f-number controls the brightness of the projected image.
The f-number with which commercial lenses are marked usually defines the maximum aperture a lens can achieve and the smaller the f-number the more light the lens passes through. A bigger amount of incoming light allows for a shorter exposure time, so the smaller the f-number the faster the lens is. By reducing the size of the aperture we can modify the f-number with which a picture is taken.
The f-numbers are often multiples of 1.4 which is an approximation of √2. Scaling the diameter of an adjustable aperture by √2 scales its area by 2 which is a convenient factor to use. Increasing the f-number by a so-called stop halves the amount of received light. The demonstration below shows the relative sizes of the aperture through which light is being seen:
To maintain the overall brightness of the image when stopping down we’d have to either increase the exposure time or the sensitivity of the sensor.
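Here is that bookkeeping as a short sketch; the f-numbers are the conventional rounded sequence, so the ratios come out only approximately as powers of two.
    f_stops = [1.4, 2, 2.8, 4, 5.6, 8, 11, 16]
    for n in f_stops:
        light = (f_stops[0] / n) ** 2        # light per pixel relative to f/1.4
        exposure = 1.0 / light               # how much longer the shutter must stay open
        print("f/%-4s light x%.3f, exposure x%.1f" % (n, light, exposure))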
While aperture settings let us easily control the depth of field, that change comes at a cost. When the f-number increases and the aperture diameter gets smaller we effectively start approaching a pinhole camera with all its related complications.
In the final part of this article we will discuss the entire spectrum of another class of problems that we’ve been conveniently avoiding all this time.
Aberrations
In our examples so far we’ve been using a perfect idealized lens that did exactly what we want and in all the demonstrations I’ve relied on a certain simplification known as the paraxial approximation. However, the physical world is a bit more complicated.
The most common types of lenses are spherical lenses – their curved surfaces are sections of spheres of different radii. These types of lenses are easier to manufacture, however, they actually don’t perfectly converge the rays of incoming light. In the demonstration below you can observe how fuzzy the focus point is for various lens radii:
This imperfection is known as spherical aberration. This specific flaw can be corrected with aspheric lenses, but unfortunately there are other types of problems that may not be easily solved by a single lens. In general, for monochromatic light there are five primary types of aberrations: spherical aberration, coma, astigmatism, field curvature, and distortion.
We’re still not out of the woods even if we manage to minimize these problems. In normal environments light is very non-monochromatic and nature places another hurdle in the way of optical system design. Let’s quickly go back to the dark environment as we’ll be discussing a single beam of white light.
Observe what happens to that beam when it hits a piece of glass. You can make the sides non-parallel by using the slider:
What we perceive as white light is a combination of lights of different wavelengths. In fact, the index of refraction of materials depends on the wavelength of the light. This phenomenon, called dispersion, splits what seems to be a uniform beam of white light into a fan of color bands. The very same mechanism that we see here is also responsible for a rainbow.
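The wavelength dependence of the index of refraction is often approximated with Cauchy’s empirical equation n(λ) = A + B/λ². The coefficients below are rough values for a common crown glass and are only meant to show the trend:
    def cauchy_index(wavelength_um, a=1.5046, b=0.0042):
        """Approximate index of refraction of a crown glass; wavelength in micrometers."""
        return a + b / wavelength_um ** 2

    for name, wavelength in (("red", 0.65), ("green", 0.55), ("blue", 0.45)):
        print(name, round(cauchy_index(wavelength), 4))
Blue light sees a slightly higher index than red, so it bends more strongly, which is what splits the white beam into a fan of colors.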
In a lens this causes different wavelengths of light to focus at different offsets – the effect known as chromatic aberration. We can easily visualize the axial chromatic aberration even on a lens with spherical aberration fixed. I’ll only use red, green, and blue dispersed rays to make things less crowded, but remember that other colors of the spectrum are present in between. Using the slider you can control the amount of dispersion the lens material introduces:
Chromatic aberration may be corrected with an achromatic lens, usually in the form of a doublet with two different types of glass fused together.
To minimize the impact of the aberrations, camera lenses use more than one optical element on their pathways. In this article I’ve only shown you simple lens systems, but a high-end camera lens may consist of a lot of elements that were carefully designed to balance the optical performance, weight, and cost.
While we, in our world of computer simulations on this website, can maintain the illusion of simple and perfect systems devoid of aberrations, vignetting, and lens flares, real cameras and lenses have to deal with all these problems to make the final pictures look good.
Further Watching and Reading
Over on YouTube Filmmaker IQ channel has a lot of great content related to lenses and movie making. Two videos especially fitting here are The History and Science of Lenses and Focusing on Depth of Field and Lens Equivalents.
What Makes Cinema Lenses So Special!? on Potato Jet channel is a great interview with Art Adams from ARRI. The video goes over many interesting details of high-end cinema lens design, for example, how the lenses compensate for focus breathing, or how much attention is paid to the quality of bokeh.
For a deeper dive on bokeh itself Jakub Trávník’s On Bokeh is a great article on the subject. The author explains how aberrations may cause bokeh of non uniform intensity and shows many photographs of real cameras and lenses.
In this article I’ve mostly been using geometrical optics with some soft touches of electromagnetism. For a more modern look at the nature of light and its interaction with matter I recommend Richard Feynman’s QED: The Strange Theory of Light and Matter. The book is written in a very approachable style suited for general audience, but it still lets Feynman’s wits and brilliance shine right through.
Final Words
We’ve barely scratched the surface of optics and camera lens design, but even the most complex systems end up serving the same purpose: to tell light where to go. In some sense optical engineering is all about taming the nature of light.
The simple act of pressing the shutter button in a camera app on a smartphone or on the body of a high-end DSLR is effortless, but it’s at this moment when, through carefully guided rays hitting an array of photodetectors, we immortalize reality by painting with light.
| 2024-11-08T08:15:26 | en | train |
42,020,657 | Arefhfz123 | 2024-11-01T19:31:51 | null | null | null | 1 | null | null | null | true | null | null | null | null | null | null | null | train |
42,020,670 | zdw | 2024-11-01T19:33:35 | The Cult of Microsoft | null | https://www.wheresyoured.at/the-cult-of-microsoft/ | 81 | 61 | [
42021055,
42021214,
42021153,
42021167,
42021073,
42021319,
42021112,
42021363,
42021142,
42021394,
42021325,
42020881,
42021602,
42021365,
42021302,
42021074,
42021105,
42021187,
42021161,
42021844,
42021384,
42022227,
42021049,
42020970,
42021192,
42021146,
42020993,
42021234
] | null | null | null | null | null | null | null | null | null | train |
42,020,674 | Michelangelo11 | 2024-11-01T19:34:08 | Walking Phnom Penh | null | https://walkingtheworld.substack.com/p/walking-phnom-penh | 3 | 0 | null | null | null | null | null | null | null | null | null | null | train |
42,020,684 | tosh | 2024-11-01T19:35:27 | Supabase Partnership: Native Postgres Replication to ClickHouse | null | https://clickhouse.com/blog/supabase-partnership-native-postgres-replication-clickhouse-fdw | 3 | 0 | null | null | null | null | null | null | null | null | null | null | train |