|
TheReverend posted:UI tests. (Sorry for the long delay, I just randomly decided to check the thread today.) If these are dynamically added elements, you need to set accessibilityElements on the cell and post a layout-changed notification, e.g., at the end of the listener where the UI is added: self.accessibilityElements = [your new UI]; UIAccessibilityPostNotification(UIAccessibilityLayoutChangedNotification, nil). That updates the list of elements that the UI test can access.
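In current Swift the same fix might look like this sketch; DynamicCell and didAddDynamicUI are made-up names, and UIAccessibility.post is the modern Swift spelling of the C notification call quoted above:

```swift
import UIKit

final class DynamicCell: UITableViewCell {
    // Hypothetical dynamically-added controls
    private var addedControls: [UIView] = []

    func didAddDynamicUI(_ views: [UIView]) {
        addedControls.append(contentsOf: views)
        // Rebuild the accessibility tree for this cell...
        accessibilityElements = addedControls
        // ...and tell the system (and any running UI test) the layout changed.
        UIAccessibility.post(notification: .layoutChanged, argument: nil)
    }
}
```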
|
# ? Feb 4, 2019 20:22 |
|
Edit: just a weird problem I had where Xcode showed two masters in my Source Control Navigator. Unsure how this happened, but it's now fixed. I moved the hidden ".git" folder outside my main project folder; I could then create a new repository within the Source Control Navigator, create a new remote, and the appropriate projects are visible in GitHub. Good Sphere fucked around with this message at 18:33 on Feb 13, 2019 |
# ? Feb 13, 2019 17:41 |
|
Remember all the posts I made about my troubles getting worse CPU performance on newer phones using live camera CIFilters? I'm sure you do. I have a simplified example project on GitHub if anyone wants to take a look to investigate why this is happening: https://github.com/PunchyBass/Live-Filter-test-project
|
# ? Feb 13, 2019 18:13 |
|
I just got a fun email from a user. He desperately needs an 'app re-skin' done and is looking for a developer. Of course, it's an easy copy of another existing app, with a new interface 'to avoid copyright infringement.' And he was willing to pay... 'several hundred dollars if necessary!'
|
# ? Feb 13, 2019 20:50 |
|
Starting a new high end car company. Just need a mechanic to look at a picture of a Bugatti Veyron and make one a little bit different to avoid copyright infringement. Will pay several hundred dollars if necessary!
|
# ? Feb 13, 2019 21:05 |
|
Good Sphere posted:Remember all the posts I made about my troubles getting worse CPU performance on newer phones using live camera CIFilters? I'm sure you do. I have a simplified example project on GitHub if anyone wants to take a look to investigate why this is happening: https://github.com/PunchyBass/Live-Filter-test-project I still don't understand what your expectations are or what the use case is. CPU% is meaningless on its own and apples-to-oranges across years of releases. You're carrying this pervasive assumption that it must be lower for the newer generation and you haven't been that direct with it. Do you need to do CPU-bound work in the idle spots? Measure that instead. Is there a "correct" CPU% that you absolutely must hit? Share it.
|
# ? Feb 13, 2019 23:59 |
|
I’ll take a look at your sample tonight or tomorrow, but for now, to follow up what JawnV6 said, look in Instruments and in Apple’s Metal debug tools to see where the CPU and GPU time is being spent. If your code’s CPU and GPU time is the same or lower, don’t worry about it. While it’s weird that a newer device is taking more CPU and GPU time to do the same task, Apple’s CI filter you’re running could be doing something differently on the newer device, etc.
|
# ? Feb 14, 2019 02:18 |
|
JawnV6 posted:I still don't understand what your expectations are or what the use case is. CPU% is meaningless on its own and apples-to-oranges across years of releases. You're carrying this pervasive assumption that it must be lower for the newer generation and you haven't been that direct with it. Do you need to do CPU-bound work in the idle spots? Measure that instead. Is there a "correct" CPU% that you absolutely must hit? Share it. I don't really have any expectations, but I don't think I'm out of line for being suspicious that a phone four generations newer is getting way worse performance, and it's worth investigating. If I don't figure it out soon, I'm not going to delay releasing it, but I think it could have a negative impact and scare people off from using it. It's more obvious than the CPU readout I'm getting: I can actually feel the XS get hot, and it drains the battery more quickly. I've looked at the idle time and decreased it with experimentation, but I don't know precisely the best way to divide work between the CPU and GPU so that idle time is minimal. I know it's best not to trade work back and forth between the CPU and GPU, and I'm suspicious that's still happening. I expect the CPU to be below 20%. Doc Block posted:I'll take a look at your sample tonight or tomorrow, but for now, to follow up what JawnV6 said, look in Instruments and in Apple's Metal debug tools to see where the CPU and GPU time is being spent. If your code's CPU and GPU time is the same or lower, don't worry about it. Thanks. The most CPU impact happens in MTKView's draw function when render is called. I've been afraid of the newer phones using CIFilters differently. In that case, it should be fixed or labeled as deprecated.
|
# ? Feb 14, 2019 17:23 |
|
I need to do a Hilbert transform on some data. A quick Google search turned up this answer on Stack Overflow; is this about as good as it gets, or is there a better version available? https://stackoverflow.com/a/21907439
|
# ? Feb 17, 2019 04:30 |
|
Does this have a name, besides Smart App Banner? I could've sworn it had a catchier, shorter name.
|
# ? Feb 19, 2019 21:38 |
|
Catchier than what I usually call it, “the lil app banner that you can put on websites that can do the associated domain thingy”.
|
# ? Feb 19, 2019 23:28 |
|
Yeah that's just the Universal Links app banner. Which is different from the meta tag driven Smart Banner.
|
# ? Feb 19, 2019 23:41 |
|
Doh004 posted:Yeah that's just the Universal Links app banner. Is it? I thought they were indistinguishable.
|
# ? Feb 20, 2019 00:46 |
|
Anyone have any good resources for learning audio programming (sequencing in particular) on Apple platforms? I'm currently using a combination of Core Audio, Core MIDI, and AVAudioEngine to do some relatively simple dynamic sequencing based on user input (i.e. the user writes out some music, then I create a MIDI sequence out of it that gets played back), and I've stumbled along for a while getting the basics working, but I keep hitting road blocks when I get to more involved functionality. For example, there are a lot of things that I want to trigger in sync with the music -- looping sections, automatically stopping playback when the end of the music is reached, flashing a UI component in time with metronome clicks, etc. I've set up custom MIDI events that I can respond to, which sort of work, but the timing is so out of sync that it's not really usable, and I don't know how to even begin debugging it. I want to spend some time properly learning about audio programming so I can have more control over things like that instead of just fumbling around, but good resources seem hard to come by. Apple's documentation is usually pretty sparse or seemingly out of date, and I haven't found anything that covers audio at a fundamental level -- it's all about using specific APIs and assumes some prior knowledge. The few audio-centric WWDC sessions I've seen are fairly high-level or mostly just cover API changes. Any links to books or online resources would be really appreciated 🙏 Non-Apple stuff is welcome if it seems helpful, though learning from the perspective of Apple's audio APIs would be nice.
|
# ? Feb 20, 2019 02:17 |
|
pokeyman posted:Is it? I thought they were indistinguishable. Si senor. Old smart banner that you can't deeplink from: Universal Links banner (Apple docs didn't come up in cursory search, stole an image of it):
|
# ? Feb 20, 2019 13:00 |
|
dizzywhip posted:Anyone have any good resources for learning audio programming (sequencing in particular) on Apple platforms? I'm currently using a combination of Core Audio, Core MIDI, and AVAudioEngine to do some relatively simple dynamic sequencing based on user input (i.e. the user writes out some music, then I create a MIDI sequence out of it that gets played back), and I've stumbled along for a while getting the basics working, but I keep hitting road blocks when I get to more involved functionality. For example, there are a lot of things that I want to trigger in sync with the music -- looping sections, automatically stopping playback when the end of the music is reached, flashing a UI component in time with metronome clicks, etc. I've set up custom MIDI events that I can respond to, which sort of work, but the timing is so out of sync that it's not really usable, and I don't know how to even begin debugging it. I develop synths on iOS. You do not want to learn CoreAudio as an approach to developing a sequencer, and AVAudio is totally useless (it's designed for incidental, non-timing essential sound). Rather, you should look at existing audio engines that can be loaded onto iOS. I use libpd. It allows you to run Pd (Pure Data) patches on iOS, and gives you hooks to communicate between the audio graph and the user interface. So the idea is that you design your audio / sequencer in Pd, then load it into an app, and control it from the UI side. There are tons of apps that use it, including Arpeggionome, which is a sequencing app. As for perfect synchronization, you'd want to look into adding Ableton Link into your app, and designing around that paradigm. It's sample-accurate time sync between iOS apps. Any other questions, just let me know.
|
# ? Feb 20, 2019 18:56 |
|
lord funk posted:I develop synths on iOS. You do not want to learn CoreAudio as an approach to developing a sequencer, and AVAudio is totally useless (it's designed for incidental, non-timing essential sound). Rather, you should look at existing audio engines that can be loaded onto iOS. Thanks for your help! Any particular reason to avoid Core Audio? Is it just too limited? I was hoping to stick to system APIs if possible, but if there's something third-party that works better, then maybe that's the way to go, especially in the short term. But longer term I was hoping to learn more about low-level audio programming in general so that I at least understand more about how these libraries work under the hood. In any case, I took a look at libpd and Ableton Link and they seem interesting, but I'm not sure they fit my use case. Correct me if I'm wrong, but it looks like PD has you create patches using a GUI that you can load up to play at runtime, but won't really help if I need to do the actual sequencing at runtime. And Ableton Link seems to be for synchronizing audio across devices, but I don't currently have any plans to involve multiple devices or even interface with any MIDI controllers -- everything is happening locally in one app, I just need to synchronize non-audio code with the audio right now. For now I'm gonna poke around and see if I can find other audio libraries that would be helpful and maybe try diving into the PD source code and see if I can learn anything there. I was looking at AudioKit a while back, but their sequencing APIs were a pretty thin and limited wrapper around Core Audio sequencing, so it wasn't very useful. Nice work on TC-11 by the way! I'm messing around with it and it's pretty sweet.
|
# ? Feb 21, 2019 03:01 |
|
dizzywhip posted:Thanks for your help! Any particular reason to avoid Core Audio? Is it just too limited? I was hoping to stick to system APIs if possible, but if there's something third-party that works better, then maybe that's the way to go, especially in the short term. But longer term I was hoping to learn more about low-level audio programming in general so that I at least understand more about how these libraries work under the hood. CoreAudio is really good. I'm biased but no other platform does nearly as good of a job. You can run live audio mixing through a mac and lots of systems do.
|
# ? Feb 22, 2019 08:23 |
|
Don't get me wrong - CoreAudio is pretty baller. I just played a concert where I had 8 iPads all running as DACs on a Mac, plus a MOTU 8 channel interface, all combined into a new 'Aggregate Device'. It was pretty sweet. My warning is that, like all things, you should consider using existing audio engines before writing your own. It's like using Unity to make a game instead of learning to draw triangles on screen yourself with OpenGL. The benefit of using Pd is that you *can* do low level audio programming by creating your own objects, and that all integrates nicely with Pd's audio callback. It's suuuuper nice to be able to create a great audio graph in an environment built for it (Pd), test it out on the desktop, then load it onto an iOS device and control it from UIKit. Pd patches run realtime -- you can create a sequencer patch in Pd, then as you run it on your iOS device you can alter everything from the sequence contents to the playback controls, etc.. TC-11 is built around this. Every touch controller fires a message into the Pd graph to change the parameters in realtime. CoreAudio is the backend that gets your audio render callback working; it's not where you get creative and build an app. I haven't looked at AudioKit, but that's the right idea! Between that or libpd or something else, you'll be much happier. dizzywhip posted:Nice work on TC-11 by the way! I'm messing around with it and it's pretty sweet. Thanks! 🍻
|
# ? Feb 22, 2019 20:21 |
|
Wrong thread maybe, but can anyone recommend a book on RxSwift? I'm old and stubborn and like books for this type of stuff. Also, what's the buzz: is this type of stuff just a fad, or do you think it'll catch on and become more widespread? SaTaMaS posted:(Sorry for the long delay, I just randomly decided to check the thread today) If these are dynamically added elements, you need to set accessibilityElements on the cell and post a layout changed notification. e.g., at the end of the listener where the UI is added: Sorry for my delay. They aren't, but this is something I didn't know, and we have a few VCs with programmatically added stuff, so this will be great when that time comes. Thanks! My UI testing mandate is still present, and somehow I've become the guy who ends up fixing all the tests when they break, which is awful but appreciated by management. Slowly but surely I'm realizing it's usually others breaking it and not a fault of XCTest.
|
# ? Feb 25, 2019 05:09 |
|
Don’t forget that way more poo poo is asynchronous than you might think when doing UI testing. You really need to leverage expectations. Also, don’t use implicit expectations (pseudocode): code:
Better to write all your test code like this: code:
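A hedged sketch of the two styles in XCTest; the element name, test class, and timeout are all invented:

```swift
import XCTest

final class CheckoutUITests: XCTestCase {
    func testDoneButtonAppears() {
        let app = XCUIApplication()
        app.launch()

        // Implicit style: tap immediately and hope the element is already there.
        // app.buttons["Done"].tap()

        // Explicit style: wait for the element before interacting with it.
        let done = app.buttons["Done"]
        XCTAssertTrue(done.waitForExistence(timeout: 5), "Done button never appeared")
        done.tap()
    }
}
```

waitForExistence blocks until the element shows up (or the timeout passes), which is what keeps asynchronous UI from flaking the test.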
|
# ? Feb 25, 2019 05:25 |
|
TheReverend posted:Also, what's the buzz: is [Rx] just a fad or do you think it'll catch on to be more widespread? I think there's a reasonable chance Rx patterns will catch on; they're relatively language-portable, and there are benefits. However, I've been in codebases that used RxSwift and it was a giant loving mess, and much worse than if they'd just gone with non-RxSwift ways of doing things. However however, the current project I'm on was greenfield with RxSwift in from the get-go (along with a library some of my coworkers wrote: RxSugar), and it's actually been really nice and useful. It saves a fair amount of Observer-pattern boilerplate code, and it helps reinforce that a lot of things are actually asynchronous under the hood, and forces us to contend with that. Like most tools, it can be used poorly and cause more trouble than it's worth. I still want to use it more to get a better sense of where that boundary is, but I can say it definitely has its useful parts. I have a sideproject-at-work sort of thing that I'll eventually make public on Github—conveniently, also showcasing how I like to do UI Tests—without worrying about NDA issues and such. I'm using RxSwift and RxSugar in it and liking it so far. It's just...taking a while to get to a releasable point, sorry.
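For a taste of the boilerplate savings, a common RxCocoa pattern looks roughly like this sketch; searchField and viewModel are hypothetical, and the debounce signature shown is the one in more recent RxSwift releases:

```swift
import RxSwift
import RxCocoa

let disposeBag = DisposeBag()

// One declarative chain replacing a delegate, a target/action, and manual state sync:
// debounce keystrokes, drop duplicates, and hand the query to the view model.
searchField.rx.text.orEmpty
    .debounce(.milliseconds(300), scheduler: MainScheduler.instance)
    .distinctUntilChanged()
    .subscribe(onNext: { query in
        viewModel.search(query)
    })
    .disposed(by: disposeBag)
```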
|
# ? Feb 26, 2019 06:04 |
|
Hot take: not only was it never a fad, it will never really hit an inflection point and gain significant adoption in the same way that React (for example) has. For apps, anyways. Good engineers establishing good conventions, and a product built with some foresight probably will get to experience the best of Rx, because they'd probably make any kind of principled system work in a logically consistent and understandable way. That's good, but personally, I think that Rx is a footgun: bad programmers or impedance mismatches with the systems you need to interact with can screw things up in difficult to debug ways. A system that is heavily loaded and needs async and requires chains of a variety of modular, interchangeable blocks is needed less often than people think, but as you start to distance the way the code is actually executed from the norm, debugging your assumptions about what should have happened on top of debugging the system itself gets super hard. Granted, a big reason I think so is because of a former coworker who grossly abused RxCocoa with inappropriate threading and side effects, but generally, you shouldn't need to pay the cost of a whole lot of hypercomplicated concurrency, and its upkeep afterwards. That's not to say you wouldn't benefit from learning and practicing a somewhat functional style of programming but I personally don't find Rx the best way to get there and I haven't seen it gain a meaningful amount of mindshare in the several years since it first appeared on my radar.
|
# ? Feb 26, 2019 07:34 |
|
Doctor w-rw-rw- posted:That's good, but personally, I think that Rx is a footgun: bad programmers or impedance mismatches with the systems you need to interact with can screw things up in difficult to debug ways. This matches my experience with it as well. I've been splitting the difference and using PromiseKit in projects that seem to warrant it, but I reckon whenever coroutines make it into Swift the Rx stuff is gonna get abandoned and projects that use it are going to become albatrosses.
|
# ? Feb 26, 2019 07:43 |
|
PromiseKit is good poo poo. As soon as I have two asynchronous operations happening either in serial or in parallel, it’s promise time. I do feel a little silly when it takes me a half hour and a whiteboard to figure out what ends up being like four lines of promises and method calls. But that’s less time and fewer calls than the equivalent would usually be without promises. The one downside is that promises can take some decent time to learn, and you’re not really rewarded with a new way of thinking like you might be if you learned e.g. reactive extensions.
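For flavor, a serial chain in PromiseKit reads roughly like this; fetchUser, fetchAvatar, and the UI bits are hypothetical Promise-returning helpers:

```swift
import PromiseKit

// Two dependent async calls in serial, then one UI update, with a single error path.
firstly {
    fetchUser(id: 42)
}.then { user in
    fetchAvatar(url: user.avatarURL)
}.done { image in
    self.imageView.image = image
}.catch { error in
    self.showError(error)
}

// Parallel is just as compact:
// when(fulfilled: fetchUser(id: 42), fetchSettings()).done { user, settings in ... }
```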
|
# ? Feb 26, 2019 13:53 |
|
lord funk posted:Don't get me wrong - CoreAudio is pretty baller. I just played a concert where I had 8 iPads all running as DACs on a Mac, plus a MOTU 8 channel interface, all combined into a new 'Aggregate Device'. It was pretty sweet. Alright, well I've mostly solved my immediate lag problems by dropping AVAudioEngine and moving deeper into Core Audio with AUGraph even though that API was supposed to be deprecated last year according to a WWDC session. I'll take a closer look at Pd in a bit and see if it'll work for me, though I'm still a little wary of moving away from system libraries. I wanna have as much control as possible for some projects I have in mind for the future, and I don't mind taking the time to build up my own audio components on top of the system.
|
# ? Mar 1, 2019 01:35 |
|
If I recall correctly, in Objective-C, methods must not begin with 'new', like [foo newBar]. But can they begin with 'news', like [foo newsBar], does anyone know?
|
# ? Mar 1, 2019 15:10 |
|
I have a super annoying ongoing issue that I have no idea how to solve. When using the front-facing camera to record a video in my app, and using UIActivityViewController to share, some apps receive the video upside down and others don't. Saving to my camera roll looks fine. Even saving to my camera roll and uploading from Facebook or Messenger appears fine. Two major ones that receive it upside down from UIActivityViewController are Facebook and Messenger. I've tried to compare all sorts of information from the resulting video file shared and saved, and they appear identical. I can't seem to locate where the problem is being caused, and the only thing I can think of is flipping the video as the user selects Facebook or Messenger. Is this even possible, and if it is, could it be slow if the video is large? Could I be doing something wrong when sharing the video which contaminates the video when sharing to some platforms? self.fileURL is the video location that it was recorded to using a temporary directory: code:
code:
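As a first diagnostic, a hedged sketch for inspecting the orientation metadata the recorded file actually carries; front-camera recordings usually have a non-identity preferredTransform, which some share targets apply and others ignore:

```swift
import AVFoundation

// Print the video track's size and transform so you can compare the file you
// shared against the one saved to the camera roll.
func logVideoOrientation(of fileURL: URL) {
    let asset = AVAsset(url: fileURL)
    guard let track = asset.tracks(withMediaType: .video).first else {
        print("no video track found")
        return
    }
    print("naturalSize:", track.naturalSize)
    print("preferredTransform:", track.preferredTransform)
}
```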
|
# ? Mar 1, 2019 17:55 |
|
When I did a photo app a while back, I remember there being something about an orientation flag for the photo. Maybe video has the same, and it’s getting stripped out when saving to a file instead of the camera roll?
|
# ? Mar 1, 2019 19:15 |
|
Dog on Fire posted:If I recall correctly, in Objective-C, methods must not begin with 'new', like [foo newBar]. But can they begin with 'news', like [foo newsBar], does anyone know? Are you thinking of ARC method families?
|
# ? Mar 1, 2019 19:51 |
|
Doc Block posted:When I did a photo app a while back, I remember there being something about an orientation flag for the photo. Maybe video has the same, and it’s getting stripped out when saving to a file instead of the camera roll? Yeah, I was thinking the same thing, and I tried looking for it using the mdls command and exiftool to check for extra EXIF information. To rotate a video, I apply a transform. I save my video using PHPhotoLibrary, so maybe it adds something extra about orientation? Can I share a video similar to what I did in my post above, except have PHPhotoLibrary make the asset? I tried doing something like that, but I don't know if it's possible.
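One hedged way to take the metadata out of the equation is to re-export the recording with the track's preferredTransform baked into the pixels before sharing, so targets that ignore the transform still see an upright video. This is only a sketch; fixedURL and the completion shape are made up, and a real version would want error reporting:

```swift
import AVFoundation

// Re-encode the asset through a video composition that applies the track's
// preferredTransform, writing an "already upright" MP4 to fixedURL.
func exportUprightCopy(of fileURL: URL, to fixedURL: URL,
                       completion: @escaping (Bool) -> Void) {
    let asset = AVAsset(url: fileURL)
    guard let track = asset.tracks(withMediaType: .video).first,
          let export = AVAssetExportSession(asset: asset,
                                            presetName: AVAssetExportPresetHighestQuality)
    else {
        completion(false)
        return
    }

    let instruction = AVMutableVideoCompositionInstruction()
    instruction.timeRange = CMTimeRange(start: .zero, duration: asset.duration)
    let layer = AVMutableVideoCompositionLayerInstruction(assetTrack: track)
    layer.setTransform(track.preferredTransform, at: .zero)
    instruction.layerInstructions = [layer]

    let composition = AVMutableVideoComposition()
    composition.instructions = [instruction]
    composition.frameDuration = CMTime(value: 1, timescale: 30)
    // The transformed size can come out negative; take absolute values.
    let size = track.naturalSize.applying(track.preferredTransform)
    composition.renderSize = CGSize(width: abs(size.width), height: abs(size.height))

    export.videoComposition = composition
    export.outputURL = fixedURL
    export.outputFileType = .mp4
    export.exportAsynchronously {
        completion(export.status == .completed)
    }
}
```

The re-encode isn't free on a large video, but it only has to happen once, right before the share sheet is presented.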
|
# ? Mar 1, 2019 19:51 |
|
Dog on Fire posted:If I recall correctly, in Objective-C, methods must not begin with 'new', like [foo newBar]. But can they begin with 'news', like [foo newsBar], does anyone know? The convention is based on "words", where changes in capitalization trigger a word boundary. So no, news and newsBar are not considered to be in the new family of selectors.
|
# ? Mar 1, 2019 19:52 |
|
And even if it were, you could put an objc_method_family attribute of "none" on it to undo the automatic retain behavior.
|
# ? Mar 2, 2019 06:40 |
|
So you're saying "must" to mean "strongly discouraged by convention", not "produces a compile error", right?
|
# ? Mar 2, 2019 20:15 |
|
brap posted:So you're saying "must" to mean "strongly discouraged by convention", not "produces a compile error", right? No, it actually has semantic meaning. "new" methods return an object and pass ownership of it to the caller. It started as a convention, but ARC turned it into a hard rule: an ARC caller will release the object returned by a "new" method, sooner or later, and you drat better take that into account when planning the lifetime of the object.
|
# ? Mar 2, 2019 21:47 |
|
brap posted:So you're saying "must" to mean "strongly discouraged by convention", not "produces a compile error", right? It doesn't cause a compile error, but it's not just discouraged by convention, it actually affects how the object is reference counted. If you're a responsible developer you'll try to respect the method family, but if you *really* want to you can override the behavior.
|
# ? Mar 2, 2019 23:33 |
|
Getting method families wrong results in runtime crashes and/or memory corruption rather than compile errors. It's a lot of fun.
|
# ? Mar 3, 2019 00:44 |
|
I work on an old app from maybe 2010 that was originally Obj-C and has since been maintained as a mix of Swift and Obj-C. We use RestKit for our REST API. Pretty sure that's old and busted now. Is the new Swift 4.2 Codable stuff good enough for most use cases now for REST object mapping?
|
# ? Mar 3, 2019 21:29 |
|
TheReverend posted:I work on an old app from maybe 2010 that was originally Obj-C and has since been maintained as a mix of Swift and Obj-C. Yes.
|
# ? Mar 3, 2019 23:42 |
|
Codable is really good. Also best of luck extricating yourself from RestKit. Definitely more of a framework than a library in the "who calls who" sense.
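For anyone making the same jump, the core of Codable-based mapping is tiny; the User shape and the JSON here are invented for illustration:

```swift
import Foundation

// Declaring the model Codable is all the "mapping config" most endpoints need.
struct User: Codable {
    let id: Int
    let name: String
    let email: String?

    enum CodingKeys: String, CodingKey {
        case id, name
        case email = "email_address"   // rename mismatched API fields here
    }
}

let json = "{\"id\": 7, \"name\": \"TheReverend\", \"email_address\": \"rev@example.com\"}"
let user = try! JSONDecoder().decode(User.self, from: Data(json.utf8))
// user.name == "TheReverend"
```

Optional properties absorb missing fields, and JSONDecoder's keyDecodingStrategy can handle wholesale snake_case conversion without spelling out CodingKeys at all.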
|
# ? Mar 4, 2019 03:33 |