This was written by someone who has never dealt with user requests. A typical user not only doesn’t know how to define requirements in a clear way, they also don’t understand the limitations of the technology, the side effects their changes can cause, or different aspects of usability, compatibility and accessibility.
Those are the abilities that limit who can contribute to projects, not coding skills.
So for example you want an adaptive rewind time. Is it on by default? Where is it in the settings? How does it interact with the current auto-rewind feature (can you enable both at the same time)? How do you name it so that a typical user knows what it does? It’s not that those are difficult questions to answer. It’s that you need to think about all of that before you start changing code other people will use. Typical users don’t have the knowledge or experience required to do it. And it gets way more complicated with bigger changes.
Yep, programming is fun but working as a programmer, not so much. For me writing software is a creative activity. It’s fun to come up with problems and find solutions to them. In my personal projects I decide what problem I want to solve, choose the technology I think will be fun to solve it in, and then come up with a solution I like.
At work you are usually handed a problem you don’t care about (we’re decommissioning X, you don’t have to know why, just change everything to use Y), the solution is described in detail by someone else, and you just have to turn it into code using a 5-10 year old stack.
Fortunately at my current job I mostly do projects without much technical oversight (proof-of-concept type projects) so I can choose how I want to do them. I dislike the company culture but I know that moving somewhere else would mean going back to boring coding again.
Yep, that’s correct. I had never heard about Z3 and I did it by reversing all the operations. My way takes a couple of seconds of computer time to solve, but it took me closer to 7h to figure it out. 1h is impressive.
There are actually two possible solutions because some bits are lost when generating numbers. Can Z3 account for lost bits? Did it come up with just one solution?
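From what I understand, Z3 would handle lost bits by model enumeration: you constrain only the output bits you actually observed and then keep asking for more models, and every satisfying assignment is a seed consistent with the observation. A minimal sketch in Python (assuming the z3-solver package and a made-up 32-bit xorshift step, not whatever generator the original puzzle used):

```python
from z3 import BitVec, BitVecVal, Extract, LShR, Solver, sat

# One step of a hypothetical 32-bit xorshift generator, written over Z3
# bit-vectors. Every operation here is invertible; the bits only get
# "lost" in the truncation below.
def xorshift32(x):
    x = x ^ (x << 13)
    x = x ^ LShR(x, 17)
    x = x ^ (x << 5)
    return x

seed = BitVec('seed', 32)
observed = BitVecVal(0x1234, 16)  # hypothetical: only 16 output bits survive

s = Solver()
# Constrain just the surviving top 16 bits of the output.
s.add(Extract(31, 16, xorshift32(seed)) == observed)

# Enumerate models: each one is a seed consistent with the observation,
# so Z3 reports *all* candidate solutions, not just one.
found = 0
while s.check() == sat and found < 5:  # show the first few
    m = s.model()
    print(m[seed])
    s.add(seed != m[seed])  # exclude this seed and search again
    found += 1
```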
I was fully expecting to see this: https://www.youtube.com/watch?v=4JkIs37a2JE
You’re mixing up AR and VR all the time. VR has a lot of entertainment potential that will be realized once the tech gets better and cheaper, probably fairly soon. For AR to be useful for normal users it will have to replace phones, not PCs. I can see people using it on the subway to browse Instagram, or while walking for navigation and answering calls. For that it will have to become super small and light, just like normal glasses. The Vision Pro is 600g plus a battery pack. We’re decades away from something that will be able to compete with phones.
Ok, I see how you could get confused and think we’re talking about some non-existent future product instead of the device this post is actually about. No problem, it happens.
When it comes to AR in general, Magic Leap was pushing it hard for a very long time, and after they released an actual device their value quickly dropped. AR for the general public is a gimmick: it doesn’t solve any problems, no one wants it. It has very interesting applications in some very specific fields and will definitely find its uses with professionals, but when it comes to your dream of looking at 15 4k screens while sitting on a toilet, most people are happy with just their phones.
Sure, as long as ‘all the virtual monitors you might ever want’ is exactly one monitor. You do know that the Vision Pro can only simulate one display when working with a Mac? We’re talking about a specific device, not some imaginary thing Apple will release 10 years from now. Jesus, Mac fanboys are just the worst…
You don’t know what effort is needed to update an app for the Vision Pro. For most apps it’s probably just ticking a checkbox in Xcode and releasing an update. What special features will you add to PCalc? It will just float in front of you like every other app. Do you need to write any special code to make it work on the Vision Pro?
the kinds of apps that would actively benefit from this technology and that the users actually want and will use.
Pre-installed apps optimized for Vision Pro:
App Store
Encounter Dinosaurs
Files
Freeform
Keynote
Mail
Messages
Mindfulness
Music
Notes
Photos
Safari
Settings
Tips
TV
Here’s a full list of third-party apps confirmed for VisionOS so far:
Disney+
Microsoft Excel
Microsoft Word
Microsoft Teams
Zoom
WebEx
Adobe Lightroom
Unity-based apps and games (titles TBC)
Sky Guide
Yeah, because when I use Safari, Notes and Word what I REALLY need is augmented reality.
It’s not 150 unique apps. The article says:
It’s not just Netflix, Spotify, and YouTube that don’t have apps for Apple’s Vision Pro at launch.(…) As of this weekend, the AR/VR device’s App Store has just 150+ apps that were updated for the Vision Pro explicitly
You can watch Netflix on the Vision Pro in a browser, but they didn’t create a dedicated app for it like they did, for example, for iOS. 150 other apps were updated to run on the device. We’re not talking about apps that run only on the Vision Pro, just apps that have a specific Vision Pro version. It’s as if, when Apple released the iPad, only 150 apps were tested, maybe slightly adapted, and marked in the App Store as iPad compatible.
150 is nothing. There are millions of apps in the App Store, all of them (or if not all, then most) could be updated to run on the Vision Pro, and the developers of only 150 bothered to do it. That’s a terrible result.
Yeah, I had the same issue. Sometimes it was the SD card, sometimes the network interface (not your case obviously), sometimes things connected to USB, sometimes it was running hot… I gave up and now I just run everything on an older Slimbook Zero. Yes, power consumption is higher (still pretty low) but so is stability.
Here’s what I think happened: we got used to a shitload of content and personal pages couldn’t keep up.
My first experience with the internet was a dial-up modem. It wasn’t cheap so we were basically counting minutes. In a short session I would check my email, download a new Winamp skin, open a link some friend sent me and maybe visit some chatroom. That’s it. Back then each page was a gem because content was super rare. For example, I could download all the Monty Python sketches. Where would you find them if not on some obscure website? They didn’t have them in the library.
Then broadband happened so you could spend hours online. People started forming small communities and curating content. bash.org and similar pages happened. We started getting used to opening a link daily and seeing new funny pics and memes.
Finally corporations realized that to keep people on a page it has to show something new every fucking second, and social media happened. Today we spend more time online than offline and refresh some pages every 15 minutes to see what’s new. Static personal pages can’t keep up. Yes, you can create a Melisandre fan page, paste a couple of pictures and start writing some fan fiction, but who will read it? 30 years ago if I had found such a website I would have saved every single pic to disk and put a link to the page on www.myhomepage.com/links, but today? It’s pointless. It’s all already on IMDB, one ddg search away. Personal pages are not the rare gems they used to be.
That’s where all the pages are…
Let’s face it, we lost the fun, early web a long time ago. It was all taken over by corporations, and when Mozilla dies (and that’s a when, not an if) they will finish locking it up, and the only way to browse it will be official, ad-filled tools. The best thing we can do is prepare ourselves for a world without the web (www?). We’ll still have apps and communicators, and of course we’ll still use websites at work, but the days of ‘browsing’ will soon (well, hopefully not very soon) be over.
I still don’t understand why PDAs are no longer a thing. Make a phone a bit thicker, add ports, a thumb keyboard… I remember being able to SSH to servers from my Zaurus and actually do things using the hardware keyboard. Or SSH to my N900 and install packages there. With Android I just lost interest. It doesn’t feel like a personal computer anymore.
My opinion is that kids only want to use phones because they see their parents use them all the time. If parents used phones only for calling, kids would not find them interesting. Of course giving up phones is super difficult, beyond what parents are willing to do. And of course I’m talking about small children, not adolescents.
Yes, that’s the whole point. You can substitute a computer program with a hash map and the results would be the same, but everyone generally agrees that a hash map is not intelligent. Defining exactly why it’s not intelligent is tricky though. It comes down to some very basic concepts that we understand intuitively but that are very hard to precisely define, like what it means to ‘know’ something or to ‘understand’ something. One famous example is a very good dictionary: let’s say some guy has a very good Chinese dictionary. A Chinese-speaking person can write a question down and give it to this guy. He will look up every symbol in the question, translate it to English, respond, and translate the response back to Chinese using the same dictionary. Does he ‘speak’ Chinese? He can communicate in Chinese but obviously he does not speak it. Does he ‘understand’ Chinese? Again, not really, he can just look up symbols in a dictionary. Specifying the exact reason why we would not say that he can ‘speak’ Chinese is difficult though. It’s the same with intelligence. We intuitively understand why a book is not intelligent, but saying exactly why is tricky.
No, a hash map is not intelligent. There’s no processing in a hash map. The input is not processed in any way; you directly use it to find the corresponding output. Think about it this way: if you take a hash map with all possible inputs and print it out, will the paper be intelligent? You can still use this paper to map each input to an output, it holds all the same information the hash map did, but obviously a mountain of paper is not intelligent. So you scan it back and store it in a computer. Did it suddenly become intelligent now? Of course not, it’s still just a static collection of information. Information is not intelligent.
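To make the substitution argument concrete, here’s a minimal sketch (the function and the domain are made up for illustration): over a finite domain, a function and a table precomputed from it are indistinguishable from the outside, but only one of them does any processing of its input.

```python
# A computed answer: the input is actually transformed.
def is_even(n: int) -> bool:
    return n % 2 == 0

# A stored answer: the same input/output behaviour, captured as pure data.
# Printing this dict out gives you the "mountain of paper" version.
lookup = {n: is_even(n) for n in range(1_000)}

# Indistinguishable from the outside over this domain...
assert all(is_even(n) == lookup[n] for n in range(1_000))
# ...yet lookup[n] does nothing with n beyond finding the matching key.
```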
No, an infinite hash map is still not intelligent, not even by the standards used in computer science. It’s not a one-layer network; it’s not a network at all. To talk about a network, nodes from layer 1 would have to connect to multiple nodes in layer 2, and the signal would have to be processed somehow. An extremely big one-layer neural network could be intelligent for all we know; in theory some consciousness could emerge from a sufficiently complex system like that. In a hash map there’s no processing though, no matter how big it is. You simply take element A and return element B mapped to it. The operation is always the same. Making the map bigger does not add complexity or knowledge, and does not alter how it processes inputs. A big hash map is just like a small hash map, only bigger.
It’s not that it’s not science. Different sciences simply define intelligence in different ways. In psychology it’s mostly the ability to solve problems by reasoning, so ‘human-like’ intelligence. They don’t care that computers can solve the same problems without reasoning (by brute force, for example) because they don’t study computers. In computer science it’s fuzzier, but it pretty much boils down to algorithms solving problems using some sort of insight rather than simple step-by-step instructions. The problem is that with general AI we’re trying to unify those definitions, and when you do that, both lose their meaning.
Can you put this in an npm package so I can use it in my project, please?