
Thread: Fiery Crashes and Disconnections from large API jobs


    Here's a synopsis of the Fiery disconnect issues we've been regularly experiencing.


    WORKFLOW

    We run multiple cart sites for various clients that purchase customized business cards online. When a customer purchases a business card, we generate a front and back PDF of the customized card using some in-house software. In addition to the card fronts and backs, we also generate two "info cards" that contain order information, a barcode for shipping, and other workflow-based information. These four PDFs are sent to a web-based utility that we've built in-house to track workflow, shipping, and other information. This in-house web-based utility uses the Fiery API to submit the PDFs to virtual printers set up for our x1000.
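
    For context, the submission side of that utility boils down to something like the sketch below. This is not our production code; the endpoint paths, parameter names, and login fields are assumptions standing in for whatever your Fiery API version actually exposes, so check them against the Fiery API documentation before trusting any of it.

    import requests

    FIERY = "https://fiery.example.local"        # placeholder hostname for our Fiery
    API_KEY = "your-fiery-api-key"               # Fiery API key issued for the integration

    def login():
        """Authenticate once and reuse the session cookie for the whole batch."""
        s = requests.Session()
        r = s.post(
            f"{FIERY}/live/api/v5/login",        # assumed login endpoint; check your API version
            json={"username": "operator", "password": "secret", "apikey": API_KEY},
            verify=False,                        # our Fiery presents a self-signed certificate
        )
        r.raise_for_status()
        return s

    def submit_pdf(session, pdf_path, printer="BizCardHold"):
        """Upload one imposed PDF to a virtual printer (endpoint and field names are assumptions)."""
        with open(pdf_path, "rb") as fh:
            r = session.post(
                f"{FIERY}/live/api/v5/jobs",     # assumed import endpoint
                files={"file": (pdf_path, fh, "application/pdf")},
                data={"printer": printer},       # assumed way of targeting a virtual printer
                verify=False,
            )
        r.raise_for_status()
        return r.json()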

    Because raster times can take 15 minutes or longer (because of our specific impositions), we have the virtual printers set up to "process and hold". We send the PDFs in batches late at night to give them time to rip. We usually process anywhere between 50 and 200 orders a day this way. The imposed file sizes range from 2MB to 180MB per job. We then "Print and Delete" each job.


    FIERY DISCONNECTIONS

    Generally speaking, this workflow gets the job done. Everything imposes the way we want it to. However, around 5% of the jobs processed this way crash out the Fiery. For those jobs, the issue occurs the moment an operator selects "Print and Delete" on a ripped job. The Fiery disconnects from our x1000. We get a screen in Command WorkStation (CWS) telling us that the x1000 has disconnected. We cannot reconnect the Fiery to the x1000 using CWS; we can't pull up anything at all in CWS. We have to power cycle the Fiery, and often the x1000, in order to reconnect. This happens multiple times every day, usually between 3 and 8 times. As an added difficulty, because we "Print and Delete" jobs, any other jobs that were in the queue when the disconnect occurs do not return to the print or hold queue. They are simply deleted. Our current workaround is to queue up only around 5 jobs at a time, screenshot the selected jobs, and then print and delete. If it crashes out CWS, we use the screenshot to resubmit the jobs that were deleted. Printing from the queue ends up feeling like you're cranking some sadistic jack-in-the-box, just waiting for it to inevitably crash.
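
    To make that screenshot step a little less manual, something like the sketch below could snapshot the held job list through the API right before an operator hits "Print and Delete", so a crash costs a resubmission from a JSON file rather than a hunt through screenshots. It reuses the authenticated session from the earlier sketch, and the /jobs listing endpoint, the response shape, and the "held" state attribute are all assumptions to verify against your Fiery API version.

    import json
    import time

    FIERY = "https://fiery.example.local"   # placeholder hostname for our Fiery

    def snapshot_held_jobs(session):
        """Write the current held-job list to a timestamped JSON file before printing."""
        r = session.get(f"{FIERY}/live/api/v5/jobs", verify=False)    # assumed listing endpoint
        r.raise_for_status()
        jobs = r.json()                                               # response shape varies by API version
        held = [j for j in jobs if j.get("state") == "held"]          # assumed job-state attribute
        out_path = time.strftime("held-jobs-%Y%m%d-%H%M%S.json")
        with open(out_path, "w") as fh:
            json.dump(held, fh, indent=2)
        return out_path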

    I've tried to find a pattern in the jobs that disconnect the Fiery. They range in size. They can come from any virtual printer we have set up. If we take that same job, remove the raster, and re-rip it, the job will not crash the Fiery. Likewise, if we download the PDFs manually, drag them into the "hold" queue, and then process the file with our preferences and impositions, it will print without crashing. I've tried to pinpoint where the failure occurs, but I'm afraid I only have guesses and nothing definite. I think the issues we are having stem from the way we use the Impose/Compose tool. Instead of a standard duplex print job with a front and a back that then has multiple copies printed, we have up to 84 pages that are each printed only once. We do this so that the first page that prints contains the order information, almost like a cover sheet or pick ticket; every other page contains the cards we print. I strongly suspect that the Fiery is not caching the duplicated pages, and is instead rasterizing every instance of the image that we've outlined in the imposition. This is what leads to our absurdly long raster times, and I think it may be the root of the Fiery crash issue.

    POSSIBLE CAUSES

    It may be as long a shot as they come, but my current guess is that these large impositions are getting corrupted in transmission between the Fiery API and CWS. I'm pretty much completely in the dark as to the inner voodoo workings of the API. I'm guessing the jobs are transferred using TCP/IP. I've been going through some pretty dry bathroom reading, trying to figure out where the hiccups might be occurring. I'm guessing the API uses the newer Standard Port Monitor (SPM) instead of the old LPRMON. From what I understand, here are some of the key differences:

    SPM deviates from the LPR standard in two ways. First, SPM does not conform to the RFC 1179 requirement that the source TCP port lie between port 721 and port 731. SPM uses ports from the general, unreserved pool of ports (ports 1024 and above). Second, the LPR standard states that print jobs must include information about the size of the job the port monitor sends. Sending a print job with job size information requires that the port monitor spool the job twice: once to determine the size, and once to send the job to the spooler. Spooling the job only once improves printing performance, so SPM sends the job to the spooler without determining the actual job size, and claims the job is a default size, regardless of the job's actual size.
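
    To make that byte-counting difference concrete, this is roughly what an RFC 1179 (LPR) transfer looks like at the socket level. The receive-data-file command announces the exact byte count up front, which is the step SPM skips by claiming a default size. The host, user, and queue names below are placeholders, and this is only an illustration of the protocol, not a replacement for the port monitor.

    import os
    import socket

    def lpr_send(host, queue, pdf_path, job_num=1):
        """Send one file to an LPD queue the RFC 1179 way, byte count and all."""
        size = os.path.getsize(pdf_path)                  # LPR announces the real size up front
        short_host = socket.gethostname()[:31]
        dfname = f"dfA{job_num:03d}{short_host}"          # data file name per RFC 1179
        cfname = f"cfA{job_num:03d}{short_host}"          # control file name per RFC 1179
        control = f"H{short_host}\nPoperator\nl{dfname}\nN{os.path.basename(pdf_path)}\n".encode()

        # RFC 1179 also wants the *source* port to be 721-731 (the other rule SPM ignores);
        # binding to those ports needs elevated rights, so this sketch skips it.
        with socket.create_connection((host, 515)) as s:
            def cmd(data):
                """Send a command or payload terminator and wait for the 1-byte acknowledgement."""
                s.sendall(data)
                if s.recv(1) != b"\x00":
                    raise RuntimeError("LPD refused the command")

            cmd(b"\x02" + queue.encode() + b"\n")                      # receive-a-printer-job
            cmd(b"\x02" + f"{len(control)} {cfname}\n".encode())       # control file, with its size
            cmd(control + b"\x00")
            cmd(b"\x03" + f"{size} {dfname}\n".encode())               # data file, with the exact byte count
            with open(pdf_path, "rb") as fh:
                while chunk := fh.read(65536):
                    s.sendall(chunk)
            cmd(b"\x00")                                               # terminating zero byte, then the ack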

    I don't know if there is a way to reconfigure the API to use the older LPR instead of SPM. I don't know for sure if that would fix the issue. My guess is that the file size is too large for the SPM when it sends the job to the spooler, and during that transaction, the file data gets corrupted. That corrupt file crashes out the Fiery when you go to print it. That's just my guess.
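
    One cheap way to test that corruption theory from our side would be to log a hash and byte count of every imposed PDF at submission time, so when a job takes the Fiery down we can at least confirm that the file we handed to the API was intact and the size we expected. The sketch below isn't Fiery-specific at all; it's just the logging.

    import csv
    import hashlib
    import os
    import time

    def log_submission(pdf_path, log_path="submission-log.csv"):
        """Append the file's size and SHA-256 to a CSV log at submission time."""
        sha = hashlib.sha256()
        with open(pdf_path, "rb") as fh:
            for chunk in iter(lambda: fh.read(1 << 20), b""):
                sha.update(chunk)
        new_log = not os.path.exists(log_path)
        with open(log_path, "a", newline="") as fh:
            writer = csv.writer(fh)
            if new_log:
                writer.writerow(["timestamp", "file", "bytes", "sha256"])
            writer.writerow([time.strftime("%Y-%m-%d %H:%M:%S"), pdf_path,
                             os.path.getsize(pdf_path), sha.hexdigest()])
        return sha.hexdigest()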


    OTHER AVENUES

    Changing the way we impose these files would likely take care of the crashing issue, but we want to use our impositions if there is any possible way to accommodate them. Those impositions let us run a very efficient workflow; having the info cards divide multiple orders in our finishing equipment makes a huge difference in the time it takes to sort and pack. We've looked heavily into FreeFlow Core, and it would also likely solve our Fiery crash woes. However, we almost exclusively print work from our in-house designers, so there is practically no preflight to speak of, and many of the strengths of FreeFlow Core are entirely redundant in our workflow. The Impose portion of FreeFlow Core is the only element we would use. We are hesitant to shell out the cash for new software when we have a workflow that *almost* works perfectly.

    Do you think these crashes are Fiery-related? We've been talking with level 2 Fiery tech support on and off for about a year. They seem to think the issue is API-related, but they do not offer support on the API. Is there a way to turn off byte counting on the SPM, or to switch back to the Unix LPR? Any suggested avenues of approach for narrowing down exactly what is causing the disconnections?


    UPDATE:

    As of June 2018, the issue is still ongoing. We have not been able to pinpoint the criteria that crash out the Fiery. We still see the Fiery crash between 3 and 8 times a day.
