Bognar
Aug 4, 2011

I am the queen of France
Hot Rope Guy

Knyteguy posted:

Anyone have experience with http://jsil.org/? How is it?

It looks pretty neat: http://jsil.org/try/

Seems a little buggy to me after spending a few minutes with it on the try page. Getting anything off of the DateTime object seems to just halt execution. It could be really awesome, though... if they get the bugs worked out. The runtime doesn't seem too large so that's good - only about 500-600kb of unminified JS - but that's likely to grow as they support more stuff. I wouldn't use it anywhere near a production site, but it may be good to keep an eye on.


Calidus
Oct 31, 2011

Stand back I'm going to try science!
I got into a stupid discussion on MVC today. The specific topic was whether using LINQ lambdas on a list to get a scalar value should be done in a view or in the model.

Inverness
Feb 4, 2009

Fully configurable personal assistant.
I have a question about the new Roslyn compiler.

I basically want to do what PostSharp does and inject code at compile time (before, during, or after) by examining attributes. I'd prefer to do this in a way that doesn't require a visual studio extension. What are my options here?

Edit: Oh wait, nevermind:

quote:

Can I rewrite source code within the compiler pipeline?

Roslyn does not provide a plug-in architecture throughout the compiler pipeline so that at each stage you can affect syntax parsed, semantic analysis, optimization algorithms, code emission, etc. However, you can use a pre-build rule to analyze and generate different code that MSBuild then feeds to csc.exe or vbc.exe. You can use Roslyn to parse code and semantically analyze it, and then rewrite the trees, change references, etc. Then compile the result as a new compilation.
Wow, so much for the whole open compiler idea. The whole reason I was interested in Roslyn in the first place is because it would let me explore the kinds of things PostSharp can do without depending on PostSharp itself for casual use.

Hopefully someone will make a fork of Roslyn that can actually do this.

Inverness fucked around with this message at 03:27 on Feb 13, 2015

raminasi
Jan 25, 2005

a last drink with no ice
Is it poor form to use a ReaderWriterLockSlim if the two operations aren't reading and writing? In fact, the operations are basically the opposite - there's an input dictionary for a long-running operation that can be futzed with at will by whomever (it'll be a ConcurrentDictionary, don't worry) up until the long-running operation needs to kick off, at which point I need to freeze the input. I feel like this scenario must have come up somewhere before but I don't know a name for it.
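A minimal sketch of the inverted usage, where the "read" lock is held by the concurrent mutators and the single "writer" is whoever performs the freeze (type and member names here are made up for illustration):

```csharp
using System.Collections.Concurrent;
using System.Threading;

// Sketch only: "readers" are the concurrent mutators, the lone "writer"
// is whoever freezes the input. All names are illustrative.
class FreezableInputs
{
    private readonly ConcurrentDictionary<string, int> _inputs =
        new ConcurrentDictionary<string, int>();
    private readonly ReaderWriterLockSlim _gate = new ReaderWriterLockSlim();
    private volatile bool _frozen;

    public bool TryAdd(string key, int value)
    {
        _gate.EnterReadLock();          // many mutators may hold this at once
        try
        {
            if (_frozen) return false;  // too late, the long-running op started
            _inputs[key] = value;
            return true;
        }
        finally { _gate.ExitReadLock(); }
    }

    public void Freeze()
    {
        _gate.EnterWriteLock();         // waits for in-flight adds to drain
        try { _frozen = true; }
        finally { _gate.ExitWriteLock(); }
    }
}
```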

RICHUNCLEPENNYBAGS
Dec 21, 2010

GrumpyDoctor posted:

Is it poor form to use a ReaderWriterLockSlim if the two operations aren't reading and writing? In fact, the operations are basically the opposite - there's an input dictionary for a long-running operation that can be futzed with at will by whomever (it'll be a ConcurrentDictionary, don't worry) up until the long-running operation needs to kick off, at which point I need to freeze the input. I feel like this scenario must have come up somewhere before but I don't know a name for it.

Why not use a SemaphoreSlim?

raminasi
Jan 25, 2005

a last drink with no ice

RICHUNCLEPENNYBAGS posted:

Why not use a SemaphoreSlim?

I don't see how that accomplishes anything I want. It can't distinguish between "modifying the dictionary" and "reading the dictionary".

ljw1004
Jan 18, 2005

rum

GrumpyDoctor posted:

Is it poor form to use a ReaderWriterLockSlim if the two operations aren't reading and writing? In fact, the operations are basically the opposite - there's an input dictionary for a long-running operation that can be futzed with at will by whomever (it'll be a ConcurrentDictionary, don't worry) up until the long-running operation needs to kick off, at which point I need to freeze the input. I feel like this scenario must have come up somewhere before but I don't know a name for it.

That sounds a bit like the "builder" pattern. I wonder if you can separate the two phases of life of that dictionary (1. preparatory and mutable; 2. immutable) into two separate types in your type system?

Gul Banana
Nov 28, 2003

also not supported in the new asp.net: T4 templates :(
I think those are actually *more* important to us than existing non-C# source code. oh well, it's just a matter of putting them in their own old-style project and referencing it via nuget.

raminasi
Jan 25, 2005

a last drink with no ice

ljw1004 posted:

That sounds a bit like the "builder" pattern. I wonder if you can separate the two phases of life of that dictionary (1. preparatory and mutable; 2. immutable) into two separate types in your type system?

Maybe I could, but I don't see how the type system alone can get me what I need, either.

Bognar
Aug 4, 2011

I am the queen of France
Hot Rope Guy

GrumpyDoctor posted:

Maybe I could, but I don't see how the type system alone can get me what I need, either.

Two types are backed by the same type of data. One type allows mutation and exposes methods to perform said mutation; the other type exposes the data and no mutation methods. The mutable type also exposes a method to create the immutable type from its underlying data.

Although, on second thought, why not just create a copy of the dictionary before handing it to the long running process?
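A sketch of that two-type split (all names are illustrative, not from the thread):

```csharp
using System.Collections.Generic;

// Sketch of the two-type idea: both wrap the same kind of data, but only
// the builder exposes mutation. All names here are illustrative.
class SceneInputBuilder
{
    private readonly Dictionary<string, object> _data =
        new Dictionary<string, object>();

    public void Add(string key, object value) { _data[key] = value; }

    // Snapshot the data into the immutable type; stop using the builder after.
    public FrozenSceneInput Freeze()
    {
        return new FrozenSceneInput(new Dictionary<string, object>(_data));
    }
}

class FrozenSceneInput
{
    private readonly Dictionary<string, object> _data;
    internal FrozenSceneInput(Dictionary<string, object> data) { _data = data; }
    public object Get(string key) { return _data[key]; }
    public int Count { get { return _data.Count; } }
}
```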

raminasi
Jan 25, 2005

a last drink with no ice

Bognar posted:

Two types are backed by the same type of data. One type allows mutation and exposes methods to perform said mutation; the other type exposes the data and no mutation methods. The mutable type also exposes a method to create the immutable type from its underlying data.

Although, on second thought, why not just create a copy of the dictionary before handing it to the long running process?

Ok, I guess I should provide the entire problem description so I'm not X/Ying this. It's kind of gnarly to explain, though. I was originally trying to ask about only one piece of it, but I guess I should put it all on the table.

I've got a collection of scene objects. Three things need to happen:
1) Each scene object is preprocessed into a format that is written to disk. The computation cost is not trivial, but not insane; I want to parallelize this.
2) The disk versions of the scene objects are collected into a unified scene data structure S. This happens via separate program, which is why the inputs need to be represented on-disk.
3) For each scene object X in some subset of the original set of scene objects, run another (relatively) expensive computation that takes as input X and S. This also happens via a separate program.

My original question was about the movement between steps 1 and 2. What I was trying to do was set up a way to do this whereby neither the original object set nor the subset in step 3 needed to be known beforehand; clients could add scene objects willy-nilly until step 2 kicked off, at which point additional attempts to add or update scene objects would either block or fail. (In practice, I wouldn't ever expect to get there - the idea would be that a request to begin step 2 would block until any remaining scene object additions were complete.) In describing it, I'm realizing that I could just drop this particular goal (and have clients pass in the object set as a single batch) to simplify everything, so I guess that's what I'll go with. I think the original reason I was reluctant to do that was that parallelization of step 1 would necessarily be hidden from client code, but I guess that's not the end of the world.

Inverness
Feb 4, 2009

Fully configurable personal assistant.
Well, that was surprisingly easy. I built my own version of the compiler with hooking functionality, then set CscToolPath and CscToolExe to make my project use it. No dependency on Visual Studio 2015 or anything that isn't out right now.
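For anyone wanting to do the same, the hookup is just a couple of MSBuild properties in the project file (the tool path below is a placeholder):

```xml
<!-- In the .csproj; the tool path below is a placeholder -->
<PropertyGroup>
  <CscToolPath>C:\tools\HookedRoslyn</CscToolPath>
  <CscToolExe>csc.exe</CscToolExe>
</PropertyGroup>
```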

Inverness fucked around with this message at 16:39 on Feb 13, 2015

gariig
Dec 31, 2004
Beaten into submission by my fiance
Pillbug

GrumpyDoctor posted:

Ok, I guess I should provide the entire problem description so I'm not X/Ying this. It's kind of gnarly to explain, though. I was originally trying to ask about only one piece of it, but I guess I should put it all on the table.

I've got a collection of scene objects. Three things need to happen:
1) Each scene object is preprocessed into a format that is written to disk. The computation cost is not trivial, but not insane; I want to parallelize this.
2) The disk versions of the scene objects are collected into a unified scene data structure S. This happens via separate program, which is why the inputs need to be represented on-disk.
3) For each scene object X in some subset of the original set of scene objects, run another (relatively) expensive computation that takes as input X and S. This also happens via a separate program.

My original question was about the movement between steps 1 and 2. What I was trying to do was set up a way to do this whereby neither the original object set nor the subset in step 3 needed to be known beforehand; clients could add scene objects willy-nilly until step 2 kicked off, at which point additional attempts to add or update scene objects would either block or fail. (In practice, I wouldn't ever expect to get there - the idea would be that a request to begin step 2 would block until any remaining scene object additions were complete.) In describing it, I'm realizing that I could just drop this particular goal (and have clients pass in the object set as a single batch) to simplify everything, so I guess that's what I'll go with. I think the original reason I was reluctant to do that was that parallelization of step 1 would necessarily be hidden from client code, but I guess that's not the end of the world.

I don't understand where the ConcurrentDictionary comes into play. I would do some sort of producer-consumer model (basically, use queues), where one part of the pipeline puts data in and the next part does work on it before pushing it to the part after that. How to implement this depends on how much parallelization you need: do you need cores on one machine, or 50 machines? You could look into MSMQ or another message queue system for coordination across process boundaries. You can use a disk as transactional storage, but it's definitely a tough problem; you have to make sure files are written completely or you'll get locking issues. I would look into something else to coordinate between steps 2 and 3 (could be queues again handling the transaction, i.e. write to file, then insert work into a queue)
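A single-machine sketch of that producer-consumer shape using BlockingCollection&lt;T&gt; (SceneObject and Preprocess are placeholders, not from the thread):

```csharp
using System;
using System.Collections.Concurrent;
using System.Linq;
using System.Threading.Tasks;

// Sketch of the queue-based pipeline: producers add scene objects, a pool
// of consumers preprocesses them. SceneObject/Preprocess are placeholders.
static async Task RunStepOneAsync(BlockingCollection<SceneObject> queue)
{
    var workers = Enumerable.Range(0, Environment.ProcessorCount)
        .Select(_ => Task.Run(() =>
        {
            // GetConsumingEnumerable blocks until items arrive and ends
            // once CompleteAdding() has been called and the queue drains.
            foreach (var obj in queue.GetConsumingEnumerable())
                Preprocess(obj);        // step 1: write the on-disk form
        }));
    await Task.WhenAll(workers);        // queue drained => safe to run step 2
}
// Producers elsewhere: queue.Add(obj); ... then queue.CompleteAdding();
```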

raminasi
Jan 25, 2005

a last drink with no ice

gariig posted:

I don't understand where the ConcurrentDictionary comes into play. I would do some sort of producer-consumer model (basically, use queues), where one part of the pipeline puts data in and the next part does work on it before pushing it to the part after that. How to implement this depends on how much parallelization you need: do you need cores on one machine, or 50 machines? You could look into MSMQ or another message queue system for coordination across process boundaries. You can use a disk as transactional storage, but it's definitely a tough problem; you have to make sure files are written completely or you'll get locking issues. I would look into something else to coordinate between steps 2 and 3 (could be queues again handling the transaction, i.e. write to file, then insert work into a queue)

I just need cores on an individual machine. I don't see how queues help here, though; it's not really a "pipeline". It's linear, sure, but step 2 is neither processing nor releasing individual items at a time. It does them all at once.

Mr Shiny Pants
Nov 12, 2012

GrumpyDoctor posted:

Ok, I guess I should provide the entire problem description so I'm not X/Ying this. It's kind of gnarly to explain, though. I was originally trying to ask about only one piece of it, but I guess I should put it all on the table.

I've got a collection of scene objects. Three things need to happen:
1) Each scene object is preprocessed into a format that is written to disk. The computation cost is not trivial, but not insane; I want to parallelize this.
2) The disk versions of the scene objects are collected into a unified scene data structure S. This happens via separate program, which is why the inputs need to be represented on-disk.
3) For each scene object X in some subset of the original set of scene objects, run another (relatively) expensive computation that takes as input X and S. This also happens via a separate program.

My original question was about the movement between steps 1 and 2. What I was trying to do was set up a way to do this whereby neither the original object set nor the subset in step 3 needed to be known beforehand; clients could add scene objects willy-nilly until step 2 kicked off, at which point additional attempts to add or update scene objects would either block or fail. (In practice, I wouldn't ever expect to get there - the idea would be that a request to begin step 2 would block until any remaining scene object additions were complete.) In describing it, I'm realizing that I could just drop this particular goal (and have clients pass in the object set as a single batch) to simplify everything, so I guess that's what I'll go with. I think the original reason I was reluctant to do that was that parallelization of step 1 would necessarily be hidden from client code, but I guess that's not the end of the world.

Let the consumer copy the finished scene objects to a separate directory and let it consume them from there? This way the writers write to a separate directory and can keep writing without screwing with the import in S.

You can manage the writers that are finished within your own program and have the readers just copy the files that are actually finished.

Makes sense?

raminasi
Jan 25, 2005

a last drink with no ice

Mr Shiny Pants posted:

Let the consumer copy the finished scene objects to a separate directory and let it consume them from there? This way the writers write to a separate directory and can keep writing without screwing with the import in S.

You can manage the writers that are finished within your own program and have the readers just copy the files that are actually finished.

Makes sense?

If I do this, then the scene objects that are in the process of being generated or written when aggregation kicks off will be stale when it's done. They then can't be used in step 3.

I think I've overlooked the actual simplest option - to give up on programmatically ensuring that step 2, once invoked, waits until step 1 is actually complete before beginning its aggregation activities. I can just say "don't build the scene until you're sure all the objects have been written." The only client I'm programming for is future me.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug
http://blogs.msdn.com/b/visualstudioalm/archive/2015/02/12/build-futures.aspx

Blog on the VSO build stuff that's coming... I was talking about it a few days ago, I guess they're finally ready to start publicizing it a bit.

raminasi
Jan 25, 2005

a last drink with no ice
Here is an async question unrelated to my previous questions: How do I redirect both standard input and standard output for a launched process? It seems like it should be something like this:
C# code:
var startInfo = new ProcessStartInfo("boners.exe")
{
    UseShellExecute = false,
    RedirectStandardInput = true,
    RedirectStandardOutput = true
};
var process = new Process() { StartInfo = startInfo };
process.Start();
process.StandardInput.WriteAsync("input string");
var output = await process.StandardOutput.ReadToEndAsync();
process.WaitForExit();
Never awaiting the Task returned by WriteAsync seems wrong (and the compiler is whining at me), but won't awaiting it before reading to the end of the output stream cause a deadlock if the child process fills up its output stream? Do I grab the Task and await it afterwards or something?

Gul Banana
Nov 28, 2003

if you want it 'detached', observe it on your event loop/in main()
otherwise yes, save the task then await it after the other.
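in C# that "save it, await it after" shape comes out roughly as (a sketch; ordering matters, the read is started before the write so the child's stdout is being drained):

```csharp
// Sketch: start draining stdout first, then the write can't deadlock
// on a full output pipe; await the write before closing stdin.
var outputTask = process.StandardOutput.ReadToEndAsync();
await process.StandardInput.WriteAsync("input string");
process.StandardInput.Close();     // EOF lets the child finish its output
var output = await outputTask;     // completes once the child closes stdout
process.WaitForExit();
```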

raminasi
Jan 25, 2005

a last drink with no ice
Ok. Is that just about getting exceptions properly re-thrown?

ljw1004
Jan 18, 2005

rum

GrumpyDoctor posted:

Here is an async question unrelated to my previous questions: How do I redirect both standard input and standard output for a launched process?

Here's how I do it...
code:
            Dim outputTask = tidy.StandardOutput.ReadToEndAsync()
            Dim errorTask = tidy.StandardError.ReadToEndAsync()
            Await tidy.StandardInput.WriteAsync(html)
            tidy.StandardInput.Close()
            Dim op = Await outputTask
            Dim err = Await errorTask
My full code looks like this...
code:
        Using tidy As New System.Diagnostics.Process
            Dim cmd = "tidy.exe"
            Dim args = "-asxml -numeric -quiet --force-output true"
            tidy.StartInfo.FileName = cmd
            tidy.StartInfo.Arguments = args
            tidy.StartInfo.UseShellExecute = False
            tidy.StartInfo.RedirectStandardInput = True
            tidy.StartInfo.RedirectStandardOutput = True
            tidy.StartInfo.RedirectStandardError = True
            tidy.Start()
            '
            Dim outputTask = tidy.StandardOutput.ReadToEndAsync
            Dim errorTask = tidy.StandardError.ReadToEndAsync()
            Await tidy.StandardInput.WriteAsync(html)
            tidy.StandardInput.Close()
            Dim op = Await outputTask
            Dim err = Await errorTask

            Await Task.Run(Sub() tidy.WaitForExit(5000))
            If Not tidy.HasExited Then
                tidy.Kill()
                Await Task.Run(Sub() tidy.WaitForExit(2000))
                Return Nothing
            End If

            Return op
    End Using

Malcolm XML
Aug 8, 2009

I always knew it would end like this.

Ithaqua posted:

http://blogs.msdn.com/b/visualstudioalm/archive/2015/02/12/build-futures.aspx

Blog on the VSO build stuff that's coming... I was talking about it a few days ago, I guess they're finally ready to start publicizing it a bit.

loving finally. being ordered to move to tfs sans a decent ci system was horrible

Iverron
May 13, 2012

I feel like I'm taking crazy pills, but has anyone else run into major frustrations with MVC 5 / Identity 2.0+ and IoC Containers?

The MVC 5 template for Individual User Account authentication ("Internet") comes out of the box with some pseudo-DI for its Account Management constructs (AccountController, ApplicationUserManager, IUserStore, etc.) using Owin. This trips up SimpleInjector and the like something fierce.

There's some discussion out there regarding this problem:
https://simpleinjector.codeplex.com/discussions/564822
http://tech.trailmax.info/2014/09/aspnet-identity-and-ioc-container-registration/
https://simpleinjector.codeplex.com/discussions/578859

...which all seems like kind of a mess just to get DI working with what's basically the default MVC 5 project template.

The cleanest solution right now seems to be to back out of DI altogether as I'm not deeply entrenched in any kind of Repository Pattern.

Iverron fucked around with this message at 04:18 on Feb 14, 2015

bpower
Feb 19, 2011

Iverron posted:

I feel like I'm taking crazy pills, but has anyone else run into major frustrations with MVC 5 / Identity 2.0+ and IoC Containers?

The MVC 5 template for Individual User Account authentication ("Internet") comes out of the box with some pseudo-DI for its Account Management constructs (AccountController, ApplicationUserManager, IUserStore, etc.) using Owin. This trips up SimpleInjector and the like something fierce.

There's some discussion out there regarding this problem:
https://simpleinjector.codeplex.com/discussions/564822
http://tech.trailmax.info/2014/09/aspnet-identity-and-ioc-container-registration/
https://simpleinjector.codeplex.com/discussions/578859

...which all seems like kind of a mess just to get DI working with what's basically the default MVC 5 project template.

The cleanest solution right now seems to be to back out of DI altogether as I'm not deeply entrenched in any kind of Repository Pattern.

The setup is a little ugly, but once done you can forget about it. I use StructureMap following the advice given in this Pluralsight video: http://www.pluralsight.com/courses/build-application-framework-aspdotnet-mvc-5
I highly recommend watching that course, I learned a poo poo load from it. Not just DI stuff.

code:
 public class MvcRegistry : Registry
    {
        public MvcRegistry()
        {
            For<BundleCollection>().Use(BundleTable.Bundles);
            For<RouteCollection>().Use(RouteTable.Routes);
            For<IIdentity>().Use(() => HttpContext.Current.User.Identity);
            For<HttpSessionStateBase>()
                .Use(() => new HttpSessionStateWrapper(HttpContext.Current.Session));
            For<HttpContextBase>()
                .Use(() => new HttpContextWrapper(HttpContext.Current));
            For<HttpServerUtilityBase>()
                .Use(() => new HttpServerUtilityWrapper(HttpContext.Current.Server));

            For<IAuthenticationManager>().Use(() => HttpContext.Current.GetOwinContext().Authentication);

            For<DbContext>().Use(() => new ApplicationDbContext());

            For<IUserStore<ApplicationUser>>()
                    .Use<UserStore<ApplicationUser>>();

        }

    }
I have a few Registry classes that I then use in App_start

code:

protected void Application_Start()
        {
            GlobalConfiguration.Configure(WebApiConfig.Register);
            AreaRegistration.RegisterAllAreas();
            FilterConfig.RegisterGlobalFilters(GlobalFilters.Filters);
            RouteConfig.RegisterRoutes(RouteTable.Routes);
            BundleConfig.RegisterBundles(BundleTable.Bundles);
            DependencyResolver.SetResolver(new StructureMapDependencyResolver(() => Container ?? ObjectFactory.Container));
            ObjectFactory.Configure(cfg =>
             {
                 cfg.AddRegistry(new StandardRegistry());
                 cfg.AddRegistry(new ControllerRegistry());
                 cfg.AddRegistry(new MvcRegistry());
                 cfg.AddRegistry(new TaskRegistry());
                 cfg.AddRegistry(new IfcRegistry());
                 cfg.AddRegistry(new ActionFilterRegistry(
                     () => Container ?? ObjectFactory.Container));
                 cfg.AddRegistry(new ModelMetadataRegistry());
             });

            Debug.WriteLine(container.WhatDoIHave());
            
        }
I never had any issues. I definitely would not abandon DI. The benefits are huge.

Inverness
Feb 4, 2009

Fully configurable personal assistant.
I stumbled upon this really awesome utility: Roslyn Quoter

You give it source code and it gives you back the syntax factory invocations you'd need to recreate it. This alone probably shaved an hour off of what I was doing. The syntax tree stuff isn't well documented, it seems.

After making my crappy compiler hook for Roslyn, I went ahead and created a syntax rewriter for a method boundary aspect. It looks for the appropriate attribute then just replaces the method body with the new stuff.

Since this is operating at the syntax tree level the aspect type and the implementation type both need to be available in the code:
code:
    [MethodBoundaryAspect(typeof(TestAspect))]
    public static void AspectTest()
    {
        int e = 5;
        int x = 3;
    }
Which gets rewritten into:
code:
    [MethodBoundaryAspect(typeof(TestAspect))]
    public static void AspectTest()
#line hidden
    {
        var <>z__aspect = new TestAspect();
        bool <>z__fail = false;
        <>z__aspect.OnEntry(null);
        try
#line 46
    {
        int e = 5;
        int x = 3;
    }
#line hidden
        catch (global::System.Exception)
        {
            <>z__fail = true;
            <>z__aspect.OnException(null);
            throw;
        }
        finally
        {
            if (!<>z__fail)
                <>z__aspect.OnSuccess(null);
            <>z__aspect.OnExit(null);
        }
    } 
#line 50
I nearly panicked when I found the debugger was not working on the file because the source code didn't match. I had to disable the option to require source files to match, then figure out how to use line directives and where to insert the trivia so the original method body would have the same indentation and everything. It worked and I'm now quite pleased.

Optimizations would still need to be done, such as the aspect instance being cached statically, and actually writing the part that delivers argument info to the aspect. I made sure my compiler hook could add new syntax trees so I figure I'll just generate a new class with a static field for each type of aspect.
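For anyone following along, the skeleton of such a rewriter looks something like this (the attribute name matches the example above; the actual try/catch/finally wrapping and #line trivia are elided):

```csharp
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;
using System.Linq;

// Skeleton only: finds methods carrying the aspect attribute and swaps in
// a rewritten body. The real wrapping (try/catch/finally, #line trivia)
// is elided here.
class AspectRewriter : CSharpSyntaxRewriter
{
    public override SyntaxNode VisitMethodDeclaration(MethodDeclarationSyntax node)
    {
        bool hasAspect = node.AttributeLists
            .SelectMany(list => list.Attributes)
            .Any(attr => attr.Name.ToString().Contains("MethodBoundaryAspect"));

        if (!hasAspect)
            return base.VisitMethodDeclaration(node);

        BlockSyntax wrapped = node.Body;  // placeholder for the real rewrite
        return node.WithBody(wrapped);
    }
}
```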

Inverness fucked around with this message at 16:41 on Feb 14, 2015

Iverron
May 13, 2012

bpower posted:

The setup is a little ugly, but once done you can forget about it. I use StructureMap following the advice given in this Pluralsight video: http://www.pluralsight.com/courses/build-application-framework-aspdotnet-mvc-5
I highly recommend watching that course, I learned a poo poo load from it. Not just DI stuff.

I have a few Registry classes that I then use in App_start

I never had any issues. I definitely would not abandon DI. The benefits are huge.

Appreciate it. This is definitely one of the cleanest looking solutions I've run across to date, most make a complete mess of the project template's code.

Really the only thing I'm using DI for (at this point in time) is injecting "MyContext" into each controller that has need of the context on a per web request basis. If a particular project creeps well beyond the scope of CRUD, I'd probably start looking at something like CQRS instead.

RICHUNCLEPENNYBAGS
Dec 21, 2010

Iverron posted:

Appreciate it. This is definitely one of the cleanest looking solutions I've run across to date, most make a complete mess of the project template's code.

Really the only thing I'm using DI for (at this point in time) is injecting "MyContext" into each controller that has need of the context on a per web request basis. If a particular project creeps well beyond the scope of CRUD, I'd probably start looking at something like CQRS instead.

But that's the thing about DI... you can just write all your code and not worry about how all the dependencies are going to get wired up, or not worry about it till later. That's even bigger than being able to swap out components, in a way.

Iverron
May 13, 2012

RICHUNCLEPENNYBAGS posted:

But that's the thing about DI... you can just write all your code and not worry about how all the dependencies are going to get wired up, or not worry about it till later. That's even bigger than being able to swap out components, in a way.

And I'd prefer to have it available, it's just a little bit mind boggling how the default MVC 5 template breaks most IoC containers out of the box.

raminasi
Jan 25, 2005

a last drink with no ice

ljw1004 posted:

Here's how I do it...
code:
            Dim outputTask = tidy.StandardOutput.ReadToEndAsync()
            Dim errorTask = tidy.StandardError.ReadToEndAsync()
            Await tidy.StandardInput.WriteAsync(html)
            tidy.StandardInput.Close()
            Dim op = Await outputTask
            Dim err = Await errorTask

This is super helpful, but it's not working. The call to StandardInput.WriteAsync is throwing an IOException ("The pipe has been ended") that is surprisingly resistant to Google.

ljw1004
Jan 18, 2005

rum

GrumpyDoctor posted:

This is super helpful, but it's not working. The call to StandardInput.WriteAsync is throwing an IOException ("The pipe has been ended") that is surprisingly resistant to Google.

Could you paste your ENTIRE code from the moment you construct the Process object until the moment you dispose of it? Also tell us what process you're launching, so that if possible we can discover its behavior wrt input and output.

ljw1004
Jan 18, 2005

rum

Inverness posted:

After making my crappy compiler hook for Roslyn, I went ahead and created a syntax rewriter for a method boundary aspect. It looks for the appropriate attribute then just replaces the method body with the new stuff.

(1) Of all the language features I've implemented in the VB/C# compilers, none of them were done by syntax rewriting. They were all done by bound-tree rewriting. I don't even know how syntax rewriting could work without messing up the debugger and everything else! You'll probably also have to disable Edit &amp; Continue (i.e. make all changes to this method count as rude edits).

(2) We were discussing a different way of accomplishing the same end. Imagine if we add a keyword to VB/C# called "supercedes", so the user would write a normal class
code:
// Program.cs
[Notifies]
class C {
   public int i {get; set;}
   public void f() {...}
}
and then a code-generator would spit out an additional class
code:
// Program.g.cs
supercedes class C : INotifyPropertyChanged {
   private int _i;
   public int i {get {return _i;} set {i=value; Notify();}
}
where the meaning is: "If the program declares both a class and a "supercedes" of it, then the supercedes version can replace & add members, can add inheritance + implements clauses, and can add attributes to existing members." It would be a bit like "partial" on steroids.

The end-to-end vision is that you'd write your code as you did, but then write a "Custom Tool" (also known as "Single File Generator") which reads the attribute and spits out a generated file that replaces the method body with your entry code, followed by the original method body, followed by your exit code. You'd still need to do the #line directive for debugging. But it would be a small, clean, general-purpose change to the compiler that could be used in many ways.


I made an early prototype of this and it worked okay. But we're still mulling over the right way to do it. And chatting with Gael Fraiteur of PostSharp. And looking at Java annotations. One of the irritating things in VB/C# is that you can only put attributes outside methods, but a load of the rewrites you'd like require them inside methods. And probably we'd want compile-time-only attributes, ones that allow lambdas as arguments. That way you could write
code:
class C {
   [Validate(x => 0<x && x<100)] public int percent {get; set;}
}

raminasi
Jan 25, 2005

a last drink with no ice

ljw1004 posted:

Could you paste your ENTIRE code from the moment you construct the Process object until the moment you dispose of it. Also tell us what is the process that you're launching so if possible we can discover its behavior wrt input and output.

I've discovered the root problem; the invoked process is discovering an error in its command-line inputs (before standard input comes into play) and terminating early. (That's what the standard error redirection is about.) I'm going to try to track down what those errors are, but I'm also curious about how to best handle their presence.

C# code:
var startInfo = new ProcessStartInfo(rtrace, args)
{
    UseShellExecute = false,
    CreateNoWindow = true,
    RedirectStandardInput = true,
    RedirectStandardOutput = true,
    RedirectStandardError = true
};
using (var p = new Process() { StartInfo = startInfo })
{
    p.ErrorDataReceived += (s, e) => onErrorLine(e.Data);
    p.Start();
    p.BeginErrorReadLine();

    // http://forums.somethingawful.com/showthread.php?threadid=3644791&pagenumber=52#post441538970
    var outputTask = p.StandardOutput.ReadToEndAsync();
    var pts = CreatePtsString(objectMesh);
    await p.StandardInput.WriteAsync(pts); // exception thrown here
    p.StandardInput.Close();
    var output = await outputTask;
    p.WaitForExit();

    return
        output
        .Split('\n')
        .Select(line => Double.Parse(line.Split('\t').First()) / 100.0)
        .Average();
}
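
One way to cope with the child dying before it reads stdin (a sketch only, not tested against rtrace; it reworks the middle of the block above) is to tolerate the broken-pipe IOException on the write and then let the exit code and captured stderr explain the failure:

```csharp
// Sketch: if the process rejected its command line and exited early,
// WriteAsync fails with an IOException ("pipe has been ended").
// Swallow it and report the exit code / stderr instead.
try
{
    await p.StandardInput.WriteAsync(pts);
    p.StandardInput.Close();
}
catch (System.IO.IOException)
{
    // Child already exited; fall through and surface the real error below.
}
var output = await outputTask;
p.WaitForExit();
if (p.ExitCode != 0)
    throw new InvalidOperationException(
        "rtrace failed with exit code " + p.ExitCode + "; see stderr output");
```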

Mr Shiny Pants
Nov 12, 2012
This has stumped me for a while:

Does anybody know how in F# I can start parallel async tasks and select one that returns a certain value?

I want to download something in parallel from a couple of webservers and need the one that returns 200. The others may not respond, which is why I'd like the requests to run in parallel.

I really have no clue; I've looked at joinads and observables but have no idea how to glue it all together.

raminasi
Jan 25, 2005

a last drink with no ice
:downs:

raminasi fucked around with this message at 22:15 on Feb 16, 2015

Inverness
Feb 4, 2009

Fully configurable personal assistant.

ljw1004 posted:

(1) Of all the language features I've implemented in the VB/C# compilers, none of them were done by syntax rewriting. They were all done by bound-tree rewriting. I don't even know how syntax-rewriting could even work without messing up the debugger and everything else! You'll probably also have to disable Edit&Continue (i.e. make all changes to this method count as rude edits).
The bound tree code is all internal, which means I can't use it. The only thing I did to the compiler was insert an extension point after syntax trees are parsed and before the compilation object is created. My intent was to modify the compiler as little as possible. I don't want a custom compiler, just a compiler that supports hooking.

With my syntax rewriting, I used line directives to make the debugger align properly with the original source file. I wouldn't know how that applies to edit & continue since I didn't try.

What I've done so far with syntax rewriting is here. The process is initiated from this code which is loaded by the extension functionality I added.

Inverness fucked around with this message at 17:41 on Feb 17, 2015

Newf
Feb 14, 2006
I appreciate hacky sack on a much deeper level than you.
I'm playing with Visual Studio for html/js editing now with a tiny toy MVC app and just lost half an hour to a button whose 'onclich' attribute was set to some function. I think I was tricked by the autocompletion and intellisense into thinking I didn't have to pay as much attention to the little things as I wrote. Is there a way to get VS to yell at me when these mistakes happen? It isn't even underlined!

EssOEss
Oct 23, 2006
128-bit approved
When spawning new processes and fiddling with their input/output streams, I always create a new thread for each stream and handle any synchronization manually. Perhaps wasteful in some sense, but it has been rock solid so far, without any pesky little issues caused by assuming an order of data consumption/generation.
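
A minimal sketch of that thread-per-stream pattern (the tool name is invented; the point is that each redirected pipe has its own dedicated reader, so neither pipe can fill up and deadlock the child while you wait on the other):

```csharp
using System.Diagnostics;
using System.Threading;

var psi = new ProcessStartInfo("sometool.exe")
{
    UseShellExecute = false,
    RedirectStandardOutput = true,
    RedirectStandardError = true
};
using (var p = Process.Start(psi))
{
    string stdout = null, stderr = null;
    // One thread per stream: each drains its pipe independently.
    var outThread = new Thread(() => stdout = p.StandardOutput.ReadToEnd());
    var errThread = new Thread(() => stderr = p.StandardError.ReadToEnd());
    outThread.Start();
    errThread.Start();
    outThread.Join();
    errThread.Join();
    p.WaitForExit();
}
```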

Destroyenator
Dec 27, 2004

Don't ask me lady, I live in beer

Mr Shiny Pants posted:

This has stumped me for a while:

Does anybody know how in F# I can start parallel async tasks and select one that returns a certain value?

I want to download something in parallel from a couple of webservers and need the one that returns 200. The others may not respond, which is why I'd like the requests to run in parallel.

I really have no clue; I've looked at joinads and observables but have no idea how to glue it all together.
I feel like there should be a more idiomatic way of doing it, but it works.
code:
open System
open System.Threading.Tasks
open System.Net
open Microsoft.FSharp.Control.WebExtensions

let getWinner urls =
    let result = new TaskCompletionSource<Uri * string>()
    let fetch url = async { let uri, client = new Uri(url), new WebClient()
                            try let! response = client.AsyncDownloadString(uri)
                                result.TrySetResult (uri, response) |> ignore
                            with _ -> () }

    async { do! List.map fetch urls |> Async.Parallel |> Async.Ignore
            result.TrySetException (Exception "all failed") |> ignore } |> Async.Start

    Async.AwaitTask result.Task |> Async.RunSynchronously

let urls = ["http://broken"
           ;"http://google.com"
           ;"http://dnsjkadnksja.dsnakldsaoni.net"
           ;"http://dsadsdsa"]
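
For comparison, the same "first success wins" race can be sketched in C# with Task.WhenAny in a loop (a hedged sketch, assuming HttpClient from System.Net.Http; the method name is illustrative). It explicitly checks for the 200 the question asked about, and keeps waiting on the remaining servers when the first task to finish turns out to have failed:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

static async Task<string> GetWinnerAsync(IEnumerable<string> urls)
{
    var client = new HttpClient();
    var pending = urls.Select(u => client.GetAsync(u)).ToList();
    while (pending.Count > 0)
    {
        // Wait for whichever request finishes next, success or failure.
        var done = await Task.WhenAny(pending);
        pending.Remove(done);
        try
        {
            var response = await done;
            if (response.StatusCode == HttpStatusCode.OK)
                return await response.Content.ReadAsStringAsync();
        }
        catch (HttpRequestException)
        {
            // This server didn't respond; let the others race on.
        }
    }
    throw new InvalidOperationException("all failed");
}
```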

TheEffect
Aug 12, 2013
I was just wondering if the following was possible with VB.Net-

I want to make an application that automatically names a group chat via Lync. Currently the way we do it (because we don't use Lync's persistent chat feature) is to create a Word document with the title of the Lync chat we want, open it up, click "Share", and then share by IM. Then you add people, and when you start the conversation the title of the document is the title of the conference.

I know PS can obtain quite a bit of Lync data and do all sorts of things with it, but I need this to be usable by employees who have limited rights, so changing the execution policy to allow unsigned scripts wouldn't be possible for them, thus PS is out of the question... but can this be accomplished with VB.Net?


chippy
Aug 16, 2006

OK I DON'T GET IT
I'm afraid I don't know the answer to your specific question but this looks to be a good starting point:

https://msdn.microsoft.com/en-us/library/office/hh243703(v=office.14).aspx
https://msdn.microsoft.com/en-us/library/office/hh243697(v=office.14).aspx

edit: Looks like you would want to use the ConversationManager to get the Conversation that interests you, and then you can set the title via its Properties dictionary, one key of which is 'Subject', which I'm guessing is probably the title you want.

https://msdn.microsoft.com/en-us/li...office.14).aspx

chippy fucked around with this message at 17:59 on Feb 17, 2015
